
Computational models of speech perception by cochlear implant users



Cochlear implant (CI) users have access to fewer acoustic cues than normal-hearing listeners, resulting in less-than-perfect identification of phonemes (vowels and consonants), even in quiet. This makes it possible to develop models of phoneme identification based on CI users’ ability to discriminate along a small set of linguistically relevant continua. Vowel and consonant confusions made by CI users provide a very rich platform for testing such models. The preliminary implementation of these models used a single perceptual dimension and was closely related to the model of intensity resolution developed jointly by Nat Durlach and Lou Braida. Extensions of this model to multiple dimensions, incorporating aspects of Lou’s novel work on “crossmodal integration,” have successfully explained patterns of vowel and consonant confusions; perception of “conflicting-cue” vowels; changes in vowel identification as a function of different intensity mapping curves and frequency-to-electrode maps; and adaptation (or lack thereof) …
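The single-dimension approach described above can be illustrated with a toy signal-detection sketch: each phoneme category occupies a mean position along one perceptual continuum, identification is corrupted by Gaussian internal noise, and responses are assigned by decision boundaries midway between adjacent category means. This is only a minimal illustration of that class of model, not the authors' actual implementation; the function name, the midpoint-boundary rule, and the example parameter values are all assumptions.

```python
import math

def confusion_matrix(means, sigma):
    """Predict a phoneme confusion matrix from category positions on a
    single perceptual dimension (hypothetical sketch).

    means : category means along the continuum, sorted ascending
    sigma : standard deviation of Gaussian internal noise
    Returns a list of rows; row i gives P(response j | stimulus i).
    """
    def Phi(x):
        # standard normal cumulative distribution function
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    n = len(means)
    # decision boundaries placed midway between adjacent category means
    bounds = [(means[i] + means[i + 1]) / 2.0 for i in range(n - 1)]
    matrix = []
    for mu in means:
        row, lower = [], -math.inf
        for upper in bounds + [math.inf]:
            # probability that the noisy percept falls in this response region
            row.append(Phi((upper - mu) / sigma) - Phi((lower - mu) / sigma))
            lower = upper
        matrix.append(row)
    return matrix
```

With three hypothetical vowel categories at positions 0, 1, and 2, raising `sigma` (i.e., coarser discrimination along the continuum, as for a CI user) increases the predicted off-diagonal confusions while each row still sums to one.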

Keywords: speech models; cochlear implant; computational models; speech perception; implant users

Journal Title: Journal of the Acoustical Society of America
Year Published: 2017


