Well, it’s been a long time between posts! One of my New Year’s resolutions was to try to be more consistent with posting here, as it’s a great link between the volunteers who contribute their time and effort to doing research with us, and the final results.
So, here are a few ‘final results’ from the data we’ve collected over the last year or so. If you’ve been a research participant here at the Bionics Institute over the last two years, the results of your listening and button presses will be reflected in one of the little dots on a graph in one of the papers below! At the moment we’re looking for more volunteers to participate in our research – if you’re interested, get in touch by emailing firstname.lastname@example.org.
Overall, our research so far has focussed on the ability to hear multiple ‘streams’ of sound. We believe that, in addition to accurately hearing pitch, loudness, and other low-level auditory cues, the enjoyment of music is largely affected by the ability to hear the different streams in music. These could take the form of different instruments playing in an ensemble, or even of different lines of melody played by the same instrument. In normal hearing, the ability to hear these different streams is based on the acoustic differences between them (loudness, pitch, instrument timbre etc.). The brain can then use these cues, provided by the ears and auditory system, to disentangle the different streams of sound. When using a cochlear implant or hearing aid, however, the basic perception of these cues is changed, and so therefore is the perception of different ‘streams.’
Our research so far has been aimed at understanding how different hearing devices affect these auditory ‘streaming cues.’ Our first published study found that visual cues also affected the ability to hear separate streams of sound. When a visual cue was provided, most participants found it easier to hear two mixed-up streams of sound. In addition, we found that no special training was needed to interpret the visual cues (although musically-trained people were better at the tasks overall). This research was published in ‘PLoS ONE’, an international general science journal, and can be downloaded free of charge at this address (look for the ‘PDF’ link on the right hand side):
Our next paper also focussed on stream segregation, and found that visual cues were able to make the streaming task easier for people using cochlear implants as well. In future, it may thus be possible to devise ways to make music more enjoyable using visual cues, either in a live setting or with pre-recorded music and pre-made visual aids. Listening to music live, where visual cues can be provided by the musicians themselves, may also be more enjoyable than listening to the radio or CDs at home. This research was also published in PLoS ONE, and can be downloaded here:
We have also been analysing further data from the same experiment, comparing the different types of acoustic cues against each other to determine which have the greatest effect, especially for people using cochlear implants and hearing aids. That data is not published yet, but we have just received notification that the paper has been accepted by a journal called ‘Music Perception’, so it should be available soon.
We are also just about to publish the results from a large concert that was held last February at the Arts Centre Melbourne (more info here: http://musicalbionics.wordpress.com/interiordesign/). At that event, six new works were composed especially for CI users, and survey data was collected after each of the six pieces. So far, the results are very promising – there were big differences in appreciation between the pieces, and between the survey items, but on most of the survey items there was no difference between the responses of the CI users and the normally-hearing listeners in the audience. So although some pieces were liked more than others, normally-hearing listeners and CI users had similar responses!
In other projects, Dr Tom Francart, a visiting scientist from Belgium, is developing a system specifically designed for people who use a cochlear implant combined with a hearing aid in the other ear, to improve sound quality and the ability to localise sound sources. The system also improves the loudness relations between different sounds, making soft sounds more audible and loud sounds more comfortable. Tom is working with Cochlear to implement the results of his research in real devices. Other examples of results from research at the Institute and its partners are the SmartSound Beam function, which reduces background noise, and ADRO.
Mohammad Marefaand, a new PhD student in our team, is also working on improving music perception for people using a hearing aid and a cochlear implant together. His approach is to discover the essential elements of music that hearing devices must transmit for listeners to best enjoy music. Watch this space!
PS, here is our new sound treatment in the booth! This is where some of the testing will be held – looks cool, hey!