Science and space · 03-19-2025 at 16:44

Language models and the human brain process language in similar ways: AI has helped scientists study the brain


Oleksandr Fedotkin

Author of news and articles


Researchers from the Hebrew University of Jerusalem have used AI language models to compare how artificial intelligence and the human brain process language.

The similarities between the mechanisms AI models and the human brain use to process language could aid the development of devices that help people communicate.

By examining how the Whisper model transforms audio recordings of real conversations into text, the researchers were able to model brain activity during everyday conversation more accurately than with traditional AI language models, which encode specific features of language structure, such as the phonetic sounds that make up words and individual parts of speech.

Whisper, by contrast, was trained on audio recordings paired with text transcriptions of conversations. Using the statistics learned from this matching, the model can produce text from audio recordings it has never heard before. It therefore relies only on learned statistics, not on features of language structure encoded in its initial settings. Nevertheless, the results showed that Whisper still represented features of language structure after training.
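To make concrete the difference between learned statistical representations and hand-coded linguistic features, here is a minimal sketch of extracting Whisper's continuous audio embeddings with the Hugging Face transformers library. The model size, the silent placeholder audio, and the choice of this particular library are assumptions for illustration, not details taken from the study.

```python
# Hypothetical illustration: pull continuous embeddings out of a pretrained
# Whisper encoder. These vectors are learned purely from audio/text pairs;
# no phonemes or parts of speech are hand-coded anywhere in the model.
import numpy as np
import torch
from transformers import WhisperProcessor, WhisperModel

processor = WhisperProcessor.from_pretrained("openai/whisper-base")
model = WhisperModel.from_pretrained("openai/whisper-base")

# Placeholder input: one second of silence sampled at 16 kHz.
audio = np.zeros(16000, dtype=np.float32)

inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    embeddings = model.encoder(inputs.input_features).last_hidden_state

print(embeddings.shape)  # (1, 1500, 512) for whisper-base
```

Representations of this kind, rather than explicit phonetic or grammatical labels, are what can then be compared with recorded brain activity.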

The study also sheds light on how large AI language models work, but the researchers were more interested in what such models reveal about speech-related processes in the human brain.

«It’s really about how we think about cognition. We should think about cognition in terms of this type of model», explains Ariel Goldstein, Associate Professor at the Hebrew University and author of the study.

The study involved 4 volunteers with epilepsy who had already undergone surgery to implant electrodes to monitor brain function. With their consent, the researchers recorded the conversations of these patients during their stay in the hospital. In total, more than 100 hours of audio conversations were recorded.

Each participant was implanted with 104 to 255 electrodes to track brain activity. As Ariel Goldstein notes, most such studies are conducted under controlled laboratory conditions, but he and his colleagues wanted to investigate brain activity in real life.

The scientist emphasized that there is an ongoing debate about whether distinct parts of the brain are activated during speech production and speech recognition, or whether the brain responds more collectively. Some researchers suggest that one part of the brain is responsible for recognizing phonetic sounds, another for interpreting word meanings, and a third for processing the movements and facial expressions that accompany speech.

Goldstein believes that different areas of the brain work in concert, distributing different tasks among themselves. For example, areas known to be involved in sound processing, such as the superior temporal gyrus, showed greater activity when processing auditory information, and areas involved in higher-level thinking, such as the inferior frontal gyrus, were more active when understanding the meaning of speech.

The researchers also recorded that different areas of the brain were activated sequentially. In particular, the area responsible for perceiving words was activated before the area responsible for interpreting them. However, the researchers also clearly saw brain regions activating in response to tasks that scientists thought they were not designed to process.
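The article does not describe the analysis pipeline, but studies of this kind commonly relate model embeddings to electrode recordings with a linear encoding model. The sketch below is a hypothetical illustration of that idea using ridge regression on random placeholder data; the array shapes, variable names, and the choice of scikit-learn are assumptions, not details from the study.

```python
# Hypothetical illustration: predict each electrode's activity from Whisper
# embeddings with a linear encoding model, then measure how well the
# predictions match the recordings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

n_windows, embed_dim, n_electrodes = 5000, 512, 150
rng = np.random.default_rng(0)
X = rng.normal(size=(n_windows, embed_dim))     # Whisper embeddings per time window (placeholder)
Y = rng.normal(size=(n_windows, n_electrodes))  # recorded activity per electrode (placeholder)

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, Y_train)
Y_pred = model.predict(X_test)

# Correlate predicted and measured activity per electrode: stronger prediction
# from acoustic embeddings at some electrodes and from meaning-related
# embeddings at others would mirror the region-specific pattern described above.
corr = [np.corrcoef(Y_test[:, i], Y_pred[:, i])[0, 1] for i in range(n_electrodes)]
print(f"median prediction correlation: {np.median(corr):.3f}")
```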

What surprised researchers about the AI language model

The researchers used 80% of the recorded conversations and their accompanying transcriptions to train the Whisper language model, which then had to produce text for the remaining 20% of the audio recordings on its own.
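The article does not detail the evaluation, but the held-out-20% idea can be illustrated with a short sketch: split the recordings, transcribe the unseen portion with a pretrained Whisper model, and score the output against the human transcripts using word error rate. The silent placeholder clips, dummy transcripts, the openai-whisper and jiwer packages, and the use of a pretrained rather than retrained model are all assumptions for illustration.

```python
# Hypothetical illustration of the 80/20 held-out evaluation described above.
import numpy as np
import whisper      # openai-whisper package
from jiwer import wer  # word error rate metric

# Placeholder "dataset": ten one-second clips of silence with dummy transcripts.
# In the study, the clips were real hospital conversations with human transcriptions.
clips = [np.zeros(16000, dtype=np.float32) for _ in range(10)]
references = ["placeholder transcript"] * 10

split = int(0.8 * len(clips))
held_out_audio = clips[split:]      # the 20% of recordings never seen in training
held_out_text = references[split:]

model = whisper.load_model("base")
hypotheses = [model.transcribe(clip)["text"] for clip in held_out_audio]

# Score the model's transcriptions of the held-out recordings against the
# human transcripts.
print(f"word error rate on held-out 20%: {wer(held_out_text, hypotheses):.2%}")
```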

The study was published in the journal Nature.

Source: LiveScience


