
Study demonstrates the importance of attention in oral comprehension

 20.05.2005


Researchers in the Cognitive Neuroscience Research Group, part of the UB and based at the PCB, have published an article in which they report that the audiovisual integration the brain performs in order to understand spoken language is not automatic but requires attention on the part of the listener. The study, led by the researcher Salvador Soto-Faraco, has been published in this month's issue of Current Biology.

Why is it difficult to understand a person in a noisy bar when we cannot read his or her lips well? Why does it bother a bilingual person to watch a simultaneous translation between two languages that he or she knows? And why, on the other hand, is it easy to ignore lip movements in a film whose language is unknown to us?

During verbal communication, the acoustic signal is accompanied by visual correlates, for example the lip movements that correspond to the sounds being produced. This gestural-visual code, available to the listener in face-to-face conversation, plays an essential role in speech perception: it is critical for understanding spoken language in situations with intense background noise and for people who are hard of hearing. “Many of us have experienced this difficulty when holding a telephone conversation in a language that is not familiar to us, such as English, during which we cannot see the other person’s lip movements”, commented Dr. Soto-Faraco.

Until now, it had been assumed that the integration of the acoustic signal with this associated visual information was automatic, regardless of whether the listener is engaged in another activity and is not paying much attention to the speaker’s lips. A classic demonstration of the supposedly automatic nature of audiovisual speech integration is the McGurk effect: when you hear a sound (for example /ba/) while seeing the lip movements associated with a different sound (for example /ga/), the subjective impression is of having heard a sound that falls between the one actually heard and the one read from the speaker’s lips (such as /da/).

This study measured the degree to which listeners are subject to the McGurk effect under high attentional load, that is to say, when the listener is concentrating on a difficult additional task, either auditory or visual, and in a control condition with no additional task. Contrary to what would be expected if audiovisual integration were automatic, the McGurk effect decreased considerably in the dual-task condition, almost disappearing. On the basis of these results, the authors propose that the audiovisual integration of speech is not completely automatic but instead depends on attention.

These conclusions may contribute to our understanding of the bases of speech perception in everyday situations, particularly those in which lip reading is important (in noisy places, for people with hearing difficulties, etc.). According to Dr. Soto-Faraco, “the increasing popularity of audiovisual communication systems is due, in part, to the increase in the quality and amount of information they convey, since these systems provide access to additional codes such as gestures and the linguistic information that can be read from the lips. With respect to the use of audiovisual technology to improve communication, it must be borne in mind that the brain has a limited capacity to combine information from the auditory and visual channels”.

Furthermore, the finding that this integration is not automatic will help to explain why lip movements can be overlooked in certain situations, such as a foreign film dubbed into Catalan or Spanish, and why in other situations they are harder to ignore, such as when a film is dubbed from one language an individual knows into another that he or she is also familiar with.