Language Comprehension in the Brain

Spoken language triggers a burst of activity in a brain region just above the ear, where cells recognize speech sounds instantly and without any apparent effort. Neurosurgeon Dr. Eddie Chang and his team at the University of California, San Francisco, study what makes that recognition possible.

The team recruited eight patients undergoing a form of brain surgery that requires them to remain conscious throughout. During the operation, the surgeons temporarily implanted a new type of probe in an area of the brain’s outer layer, the cortex, that is critical for speech perception. The researchers then had the patients listen to dozens of recordings containing all the speech sounds of American English.

While this was going on, the probe, roughly the size and shape of an eyelash, monitored nearly 700 individual brain cells. What the recordings revealed, Chang says, is that this patch of cortex is highly organized.

“There is, in fact, a map where specific spots along that cortex are tuned to different speech sounds, like the different parts of consonants and vowels,” Chang explains. Some cells listen for ah sounds, while others pick up oh, buh, or cuh sounds. The researchers had previously mapped these cells across the surface of the cortex, but the new probe gave them a three-dimensional view that included cells beneath the surface layer. They expected the deeper cells to respond to the same sounds as the cells above them, but that’s not what they found.

“There’s a tremendous amount of diversity [in these cells],” Chang comments. Just below the cells that respond to the ah sound, there may be cells listening for buh or cuh. In other words, cells only microns apart are processing different aspects of what we hear. The researchers theorize that this organization may help the brain process the many sounds of speech quickly and efficiently.

David Poeppel, a neuroscientist at New York University, says the study, published in the journal Nature, addresses basic questions about how the human brain processes language: What parts are required for language processing, and how do those parts work together to make speaking and understanding so effortless?

Poeppel notes that the study looked at only one brain area in just a few patients, but it demonstrated the potential of a technology (the eyelash-size probe) that can monitor hundreds of individual neurons at once, rather than just a few, and observe how they are connected.

The researchers believe their results add to the evidence that the human brain is wired to recognize individual speech sounds instead of entire words or sentences, confirming a theory that dates back almost 100 years.

While the subjects all spoke American English, Poeppel notes that the brain’s organization of speech sounds is abstract and holds true across languages. What disparate languages, from English and Norwegian to Urdu and the Bantu languages, have in common is a set of speech sounds (phonemes) that the brain can transform into meaningful words and sentences.

What we at AceReader would like to see is data showing a similarly high level of organization for the sounds we “hear” in our heads while reading, and how that information could be applied to help students learn to read more effectively.

Citation:

[1] Hamilton, Jon. (December 22, 2023). “Scientists are using new technology to study the cells behind language comprehension.” NPR. Retrieved from https://www.wvtf.org/2023-12-22/scientists-are-using-new-technology-to-study-the-cells-behind-language-comprehension.

Author: AceReader Blogger

The AceReader blogging team is made up of specialists in a number of different areas: literacy, general education, content development, and educational software. For questions about posts, please submit them in the form below. For suggestions about blog topics, please email them to blogger@acereader.com.
