Recent mobile and automated audiometry technologies have democratized hearing healthcare and enabled non-experts to deliver hearing tests. However, many of these users are not trained to interpret audiograms. In this work, we outline the development of a data-driven audiogram classification system designed to describe audiograms concisely. Specifically, we present how the training dataset was assembled and how the classification system was built using supervised learning techniques. We show that three practicing audiologists had high intra- and inter-rater agreement on audiogram classification tasks covering configuration, symmetry, and severity. The proposed system achieves performance comparable to the state of the art while being significantly more flexible. Altogether, this work lays a solid foundation for future efforts to apply machine learning to audiogram interpretation in audiology.
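The abstract does not specify the model or feature representation used; the sketch below is one hedged illustration of the supervised-learning setup it describes, training a classifier on pure-tone thresholds and reporting agreement with held-out labels in the spirit of the rater-agreement analysis. The random forest, the frequency set, and the severity cut-points are illustrative assumptions, not the authors' method, and the data here are random placeholders rather than audiologist-labelled audiograms.

```python
# Hypothetical sketch: supervised classification of audiograms into severity
# categories, assuming each audiogram is a vector of pure-tone thresholds
# (dB HL) at standard frequencies for both ears. Model choice, features, and
# severity cut-points are assumptions made for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

FREQS_HZ = [250, 500, 1000, 2000, 4000, 8000]  # standard audiometric frequencies

rng = np.random.default_rng(0)

# Placeholder data standing in for labelled audiograms: each row holds
# left-ear followed by right-ear thresholds; labels are severity classes
# derived from an assumed set of dB HL cut-points.
X = rng.uniform(-10, 110, size=(500, 2 * len(FREQS_HZ)))
y = np.digitize(X[:, : len(FREQS_HZ)].mean(axis=1), bins=[25, 40, 55, 70, 90])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Agreement between predictions and held-out labels, analogous in spirit to
# the intra- and inter-rater agreement reported for the audiologists.
print("Cohen's kappa:", cohen_kappa_score(y_test, clf.predict(X_test)))
```

Analogous classifiers could be trained for the configuration and symmetry labels mentioned in the abstract, one per classification task.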
               