The Rehabilitation Process
When a person begins to think about cochlear implantation for themselves or their child, it is natural to focus on the assessment and surgery phases of the process. However, it is important to remember that these are followed by an ongoing process of rehabilitation (or "habilitation" for those who have had no access to sound before).
This takes the form of two main streams: programming the device, so that the user can hear sounds through it, and a rehabilitation process, teaching the user to "make sense of" the sounds they are hearing.
Programming is also referred to as "tuning" or "mapping". (This section assumes basic knowledge of what a cochlear implant is.)
All implant users’ hearing nerves will respond slightly differently to the electrical signals generated by the implant electrodes. Therefore “one size does not fit all”. Consider two people with identical implants: for one person an electrical signal level of (e.g.) 300 units may represent a quiet sound. For the other person this same level could be very loud, or even uncomfortably loud. It all depends on their nerve’s response to the signals.
Each implant user therefore needs a personalised set of information describing their nerve's response. This information is termed a "Map" (or Programme) and is held in the speech processor's memory. The Map "tells" the processor that, for this person, (e.g.) 300 units sounds quiet and (e.g.) 500 sounds loud. When the processor registers a quiet sound, a level of around 300 units is sent to the nerve; for a loud sound, around 500. In this way the level (or loudness) of the sounds the processor receives is mapped to the electrical levels that should be sent to the implant.
However, sound has another important characteristic: its frequency, or pitch. In cochlear implants this is represented primarily by using channels. Nearly all modern implants have a number of channels corresponding to the individual electrodes in the array. The speech processor splits the sound signal into these channels according to the sound's pitch: lower-pitched sounds are sent to electrodes deeper within the cochlea, and higher-pitched sounds to more shallowly placed electrodes. This mimics the way the fully functioning inner ear responds, and helps to transmit information about the pitch of the sound to the implant user.
Not only do the quiet and loud levels differ between people, they also differ to some degree between channels. As above, 300 units on one channel could be quiet but on another loud, even for the same person, so information is required for each channel individually. So far, levels have been described informally in terms of quiet and loud.
We can now introduce some formal definitions. Each channel has three characteristics: Threshold, Comfortable Level, and Dynamic Range. The Threshold is the lowest electrical level that causes a detectable sound sensation for the user; the quietest sounds the speech processor responds to are mapped to stimulation at this level on this channel. The Comfortable Level is the highest electrical level that causes a sound sensation that is loud but still comfortable for the user (no one wants to use a device that causes uncomfortable stimulation); the loudest sounds the processor responds to are mapped to stimulation at this level on this channel. The Dynamic Range is simply the difference between the two.
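On a single channel, the Map can be pictured as stretching the processor's input range between that channel's Threshold and Comfortable Level. The sketch below assumes a simple linear mapping and reuses the 300- and 500-unit figures from earlier; real processors apply more sophisticated (compressive) mappings, so this is only a conceptual illustration.

```python
# Illustrative sketch: map a sound level (0.0 = quietest the processor
# responds to, 1.0 = loudest) onto one channel's electrical range.
# The Threshold/Comfortable Level values are invented for the example.

def map_level(sound_level, threshold, comfort):
    """Linearly map a normalised sound level onto the channel's
    dynamic range between Threshold and Comfortable Level."""
    sound_level = min(max(sound_level, 0.0), 1.0)  # clamp to valid range
    return threshold + sound_level * (comfort - threshold)

threshold, comfort = 300, 500        # hypothetical units, as in the text
dynamic_range = comfort - threshold  # here, 200 units

print(map_level(0.0, threshold, comfort))  # quietest sound -> 300 (Threshold)
print(map_level(1.0, threshold, comfort))  # loudest sound  -> 500 (Comfortable Level)
print(map_level(0.5, threshold, comfort))  # mid-level      -> 400
```

Note that the clamp at the top of the function reflects the point made about comfort: no sound, however loud, is mapped above the Comfortable Level.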
The actual numbers, or the size of the dynamic range, are not too important. It is more important that the levels produce the right degree of loudness for the user. In this way, sounds picked up by the processor are mapped to the correct loudness levels, giving the user the maximum information from the sound signal.
The speech processor samples incoming sounds many times a second and sends corresponding electrical signals to the implant. Consider a sound that, at this instant, has a very high level at low pitches, a medium level at mid pitches, and a medium-to-high level at high pitches, with relative quiet in between. With a map like the one above, this produces a matching pattern of electrical levels across the channels: high on the deep (low-pitch) channels, lower in between, and medium to medium-high on the shallower (high-pitch) channels.
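Putting the two ideas together, that instantaneous pattern can be sketched numerically. Every number here (the per-channel Thresholds and Comfortable Levels, and the sound level in each band) is invented purely for illustration.

```python
# Illustrative only: per-channel maps (Threshold, Comfortable Level),
# ordered from low pitch (deep electrodes) to high pitch (shallow
# electrodes). All values are invented for the example.

channel_maps = [(280, 480), (300, 500), (310, 520), (290, 490),
                (305, 510), (295, 505), (300, 515), (285, 495)]

# Normalised sound level per band at this instant: high at low pitches,
# medium at mid pitches, quiet in between, medium-high at high pitches.
spectrum = [0.95, 0.9, 0.5, 0.1, 0.15, 0.5, 0.8, 0.85]

# Map each band's level into that channel's dynamic range (linear sketch).
electrical = [t + s * (c - t) for s, (t, c) in zip(spectrum, channel_maps)]

for ch, level in enumerate(electrical, start=1):
    print(f"channel {ch}: {level:.0f} units")
```

The loud low-pitch band ends up near its channel's Comfortable Level, while the quiet mid band sits just above its channel's Threshold, which is exactly the "pattern" a real map produces moment by moment.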
A further aspect of programming is which speech processing strategy is employed. This governs factors such as the rate at which information is sent to the channels and whether the channels are stimulated together or separately, as well as a number of other technical issues.
Determining these levels
Who does the task?
In most centres, programming is carried out by audiology professionals: audiologists, audiology technicians, or audiological scientists. However, other professionals with appropriate experience and training may be involved.
How are the levels found?
In most cases, behavioural methods are used: the CI user responds in some way to signals sent to their implant. Usually the speech processor is interfaced to a computer running specialised software, and an automated check of the implant's function (telemetry) is carried out. The user then wears their transmitting coil as normal. However, they will not hear external sounds at this point; instead, the computer generates electrical signals that are sent to the implant under the control of the audiologist.
Threshold: When programming adults, the audiologist first presents the signals at a level the user should be able to detect easily, so that they become familiar with the task. The user responds whenever they hear a sound, perhaps by pressing a button or simply by telling the audiologist. The audiologist then adjusts the level until the lowest level at which the signal is detected is found. The process is repeated for the different channels.
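This adjustment procedure resembles a simple "staircase": start at a clearly audible level, step down after each detection, step up after each miss, and stop after a few changes of direction. The sketch below simulates the user's responses against a known "true" threshold; the starting level, step size, and stopping rule are invented for illustration and do not describe any particular fitting software.

```python
# Illustrative staircase sketch: the user's responses are simulated by
# comparing the presented level with a known "true" threshold. All
# numbers (start level, step size, stopping rule) are invented.

def find_threshold(true_threshold, start_level=400, step=20, reversals_needed=4):
    """Step down while the (simulated) user detects the signal and up
    after a miss; estimate the threshold from the reversal levels."""
    level, direction, reversals = start_level, -1, 0
    reversal_levels = []
    while reversals < reversals_needed:
        detected = level >= true_threshold   # simulated user response
        new_direction = -1 if detected else +1
        if new_direction != direction:       # the run changed direction
            reversals += 1
            reversal_levels.append(level)
            direction = new_direction
        level += direction * step
    # Estimate the threshold as the mean of the reversal levels
    return sum(reversal_levels) / len(reversal_levels)

print(find_threshold(true_threshold=310))  # -> 310.0
```

In practice the audiologist does this by judgement rather than by a fixed rule, but the idea of bracketing the lowest detectable level from above and below is the same.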
With older children, a similar process can be employed. Sometimes, instead of pressing a button, a game activity is substituted, such as placing a peg in a board. It can take quite a long time for the child to get used to this activity, so patience is definitely required.
For younger children an alternative is visual reinforcement testing. The audiologist sends a signal thought to be audible and at the same time a toy in a box is brightly lit. The toy is pointed out to the child. This is repeated until the child pairs the signal with the toy. Now the sounds are presented again, but the toy is not lit until the child turns to look for the toy. In this way the audiologist can determine whether the child has detected the signal. An assistant distracts the child during this with another toy to ensure the child isn’t simply guessing.
In some cases none of these techniques are possible. Then the audiologist will often just observe the child as different levels are presented. With experience the audiologist can determine to a good extent which signal levels are detected, and therefore create a map. As the child becomes more accustomed to the sound signals and the testing routine they can move on to the more advanced testing described above.
Comfortable Level: With adults, the audiologist increases the level and asks the user to indicate when it becomes loud but still comfortable. Alternatively, the user can rate the sounds on a scale, e.g. quiet / medium / loud but comfortable / too loud, or similar.
Again, with older children a similar technique can be employed. With younger children, the audiologist can increase the signal level and observe the child. Indications that the sound is becoming loud include blinking, stopping play activity, or looking to parents for reassurance. The audiologist can interpret these and thereby find the comfortable levels.
In the very early days following implantation a cautious approach is taken, and the audiologist may set comfortable levels at low electrical levels. This allows time to become accustomed to the new stimulation and avoids over-stimulation. This is especially important with younger children, who need things to proceed at a pace suitable for them.
There are some alternatives to behavioural measures. Sometimes comfortable levels can be estimated by finding at what level a middle ear reflex occurs (Electrical Stapedial Reflex Thresholds – ESRT). Nerve and brain responses to stimulation can be recorded from small disc electrodes attached to the head, allowing estimation of levels (Electrical Auditory Brainstem Response – EABR). However these methods have certain technical limitations so behavioural methods are favoured in most cases.
Time frame for programming
Usually quite a few appointments are necessary in the first few months post-implantation, as a cautious approach is required. In addition, some levels can change as the user adapts to the new stimulation, particularly the comfortable levels. As time goes on, the appointments become less frequent. It is common, though not always required, for even well-established users to attend yearly review appointments. Once all levels have stabilised and the audiologist is sure they are correct, the map is said to be optimal. There is no set time by which this has to be reached; again, it depends on the individual.
Rehabilitation
The aim of rehabilitation is to make the sound through the implant meaningful. Even for those recently deafened, the sound through the implant is likely (at least at first) to differ from what they remember. For those with no memory of sound (for example, those born profoundly deaf), the rehabilitation process may take longer.
Who carries out rehabilitation?
This is usually a team consisting of specialist speech and language therapists and specialist teachers of the deaf. Other professionals with appropriate skills will commonly be involved.
Where is it carried out?
Some of the appointments will be carried out at the implant centre. However most CI centres employ an outreach programme involving home and/or school visits as necessary.
In addition, the specialist speech therapists and teachers of the deaf will arrange to visit and advise the local supporting professionals.
What is it?
The rehabilitation programme is a structured set of exercises designed to help the CI user make sense of the sound signal. It might begin with exercises in simply detecting sounds. Some users report that when they first begin to use the implant, many things sound the same; one goal of the process is to help the user differentiate between sounds, and then between words in speech.
The CI user’s speech and language skills are taken into account during the rehabilitation programme. A specialist speech and language therapist may advise on promoting speech and language development in children who have been without access to sound before their implant.
The overall rehabilitation programme is tailored to the needs of the individual adult or child. As with programming it is usual that more visits/appointments will be required in the earlier stages of the process.