
Dorman et al. (1998) conducted a study to determine the number of channels needed to reach maximum performance on tests of speech understanding. They took sentences from the H.I.N.T., processed them through a cochlear implant (CI) simulation with 6, 8, 12, 16, and 20 channels, and presented them to normal-hearing (NH) listeners in quiet, at +2 dB SNR, and at -2 dB SNR. The results were plotted in a graph (Fig. 1) that initially inspired our research on this subject. The percentage of words correctly repeated from the H.I.N.T. sentences was plotted as a function of the number of channels (6, 8, 12, 16, and 20). The dark circles represent the quiet condition, the squares the +2 dB SNR condition, and the diamonds the -2 dB SNR condition.

Fig. 1. Percent of
words correct as a function of the number of channels for H.I.N.T. in quiet, +2
dB, and -2 dB S/N.
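To make the simulation procedure more concrete, here is a minimal sketch of a noise-band (channel) vocoder of the kind used in such CI simulations: the speech is split into contiguous frequency bands, the envelope of each band is extracted, and each envelope modulates band-limited noise. The channel count, band edges, filter orders, and envelope cutoff below are illustrative assumptions, not the exact parameters used by Dorman et al.

```python
# Minimal noise-band vocoder sketch (illustrative parameters only).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocoder(speech, fs, n_channels=8, lo=100.0, hi=6000.0, env_cutoff=160.0):
    """Process `speech` through an n-channel noise-band CI simulation."""
    # Log-spaced band edges between lo and hi Hz (one common convention).
    edges = np.geomspace(lo, hi, n_channels + 1)
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    carrier = np.random.randn(len(speech))            # broadband noise carrier
    out = np.zeros_like(speech)
    for k in range(n_channels):
        band_sos = butter(4, [edges[k], edges[k + 1]], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)           # analysis filter
        env = sosfiltfilt(env_sos, np.abs(band))       # envelope: rectify + low-pass
        env = np.clip(env, 0.0, None)
        out += sosfiltfilt(band_sos, carrier) * env    # modulate band-limited noise
    # Match overall RMS to the input so conditions remain comparable.
    out *= np.sqrt(np.mean(speech**2) / (np.mean(out**2) + 1e-12))
    return out
```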


Maximum performance for the +2 dB SNR condition was reached with 12 channels, and for the -2 dB SNR condition with 20 channels. In quiet, maximum performance was reached with only 5 channels. The study found that more channels are needed in noise than in quiet to achieve better speech recognition, and that as the signal-to-noise ratio worsens, more channels are required. Finally, the study demonstrated that while increasing the number of channels from 8 to 12 significantly improved speech recognition, there was no significant improvement from 12 to 16 channels. So, although adding channels does improve speech recognition, there may be a limit to the improvement that additional channels can provide.

Shannon et al.'s (2004) research focused on the number of channels in CIs needed to reach adequate speech recognition in difficult listening situations. The study was a meta-analysis of the research available at the time on speech and music recognition across various listening situations. It referred to Plomp's Speech Reception Threshold (SRT) model, in which the difficulty of the listening environment is expressed as a single numerical quantity and related to the number of spectral channels needed for sufficient speech recognition. Results were plotted on graphs with percent correct on the y-axis, comparing vocoded speech recognition in quiet to vocoded speech recognition in difficult listening situations as a function of the number of channels. Overall, the meta-analysis concluded that sufficient recognition of simple sentences in quiet was achieved with only 3-4 channels, whereas in more difficult listening situations sufficient speech recognition required upwards of 30 channels. In short, the more difficult the listening environment, the more spectral channels were needed for speech recognition to be adequate.

Deniz Başkent's (2006) research focused on speech recognition as a function of the number of spectral channels in NH listeners compared with listeners who have a sensorineural hearing loss (SNHL). Başkent tested hearing-impaired (HI) and NH listeners using vocoded consonants and vowels presented in background noise at various levels and with different numbers of spectral channels. The results were plotted as the percentage of items repeated correctly as a function of the number of channels in the different listening situations. HI and NH listeners showed similar patterns in both noise and quiet: more spectral channels were required for better speech recognition. The results also showed that as the number of channels increased, HI listeners' performance increased as well, but only to an extent. HI listeners' speech recognition saturated at 8 channels, whereas NH listeners' performance continued to improve up to 12 channels for vowels and 10-16 channels for consonants. Başkent concluded that HI subjects may not benefit as much from additional channels, since their scores saturated at 8 channels, similar to results from other studies that measured this in CI users.

Friesen et al.'s (2001) research examined speech recognition as a function of the number of spectral channels in five NH listeners and nineteen CI users. The study used vowels, consonants, CNC words, and H.I.N.T. sentences as stimuli to assess speech recognition as a function of the number of spectral channels (or electrodes) in quiet and in noise at SNRs of 15, 10, 5, and 0 dB, using the SPEAK, CIS, and SAS speech-processing strategies. The results were plotted as the percent-correct score for each stimulus as a function of the number of channels at each SNR. They found that CI users needed roughly a 5-10 dB more favorable S/N ratio to reach performance similar to that of NH listeners at 20 channels. Their results also showed that speech recognition improved markedly as the number of electrodes increased, but only up to about 7-10 electrodes; no further improvement was measured beyond that, up to 20 electrodes. This study emphasized that CI users benefit from additional electrodes, but only up to about 7-10. This limit may be due in part to electrode interactions and to the fact that CI users are not yet able to fully use the limited spectral information provided to them. Overall, this study shows that increasing the number of channels/electrodes aids CI users' speech recognition, but again only to an extent: rather than the steady growth with channel number seen in NH listeners, performance plateaus at around 7-10 electrodes for actual CI users.
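For reference, the SNR conditions reported above (e.g., Friesen et al.'s 15, 10, 5, and 0 dB) are typically created by scaling the noise masker relative to the speech. The sketch below shows that calculation under an RMS-based definition of SNR; the function name and the definition are assumptions for illustration, not details taken from the studies.

```python
# Mix speech and noise at a target SNR (RMS-based definition; illustrative sketch).
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise ratio equals `snr_db`, then add it."""
    rms_s = np.sqrt(np.mean(speech**2))
    rms_n = np.sqrt(np.mean(noise**2))
    # SNR(dB) = 20*log10(rms_speech / rms_noise)  =>  required noise RMS:
    target_rms_n = rms_s / (10.0 ** (snr_db / 20.0))
    return speech + noise * (target_rms_n / rms_n)

# Example: sweep the SNR conditions used by Friesen et al.
# for snr in (15, 10, 5, 0):
#     mixture = mix_at_snr(speech, noise, snr)
```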

Fu et al.'s (2005) research examined the roles of spectral resolution and spectral smearing in the noise susceptibility of CI users. Their work suggests that CI users' greater susceptibility to noise may be due to their poor spectral resolution and the large amount of spectral smearing caused by channel interaction. Using both NH listeners and CI users, the study compared the two groups' sentence recognition when sentences were presented in square-wave-modulated, speech-shaped noise. The NH listeners heard sentences processed through a CI simulation similar to those used in the studies described above; spectral resolution was varied by using 4, 8, or 16 channels, and spectral smearing was varied by using carrier filter slopes of -24 or -6 dB per octave. The results were described in terms of release from masking. With unprocessed sentences, NH listeners showed a large release from masking, whereas CI users showed none, and as the number of channels (i.e., spectral resolution) decreased, the NH listeners' release from masking diminished. The study concluded that CI users performed best when 8-16 channels of spectrally smeared noise-band speech were used. As fewer channels were used, less spectral information and fewer spectro-temporal cues were available to the listener; as a result, subjects found it harder to understand speech and could potentially benefit from the additional spectral information provided by more channels.
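One way to realize the two carrier-filter slopes in software is to vary the order of the synthesis (carrier) band-pass filters, since a Butterworth band-pass skirt rolls off at roughly 6 dB per octave per order. The sketch below follows that idea; the filter family and the order-to-slope mapping are assumptions for illustration, not necessarily the filters Fu et al. actually used.

```python
# Sketch: approximating carrier-filter slopes with Butterworth orders
# (roughly 6 dB/octave per order, so order 4 ~ -24 dB/oct and order 1 ~ -6 dB/oct).
from scipy.signal import butter

def carrier_filters(edges, fs, slope_db_per_oct):
    """Return one band-pass SOS filter per channel with the requested rolloff."""
    order = max(1, round(abs(slope_db_per_oct) / 6))   # steeper slope -> higher order
    return [butter(order, [edges[k], edges[k + 1]], btype="band", fs=fs, output="sos")
            for k in range(len(edges) - 1)]

# Shallow (-6 dB/oct) carriers overlap heavily between channels, smearing the
# spectrum; steep (-24 dB/oct) carriers keep the channels more independent.
```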
