
Posted November 10, 2018 07:51:38

In a new paper published in the journal Nature, researchers at the University of California, Berkeley, and the University at Buffalo describe the brain’s “auditory-motor coordination network,” which is essential for making sense of the sounds and images that other humans create.
The team says the coordination network is crucial not only for understanding other people’s speech, but also for the way the brain interprets sounds produced by other people.
For example, the sound of someone running past on the street must be interpreted, and it is important to understand how that sound is perceived by a brain that has not been trained to understand other people’s speech.
The researchers say the coordination system is essential for making sense not only of other people, but also of other things the brain cannot otherwise understand.
“We are able to use the coordination networks of human beings to learn how to understand what other people are saying, even if they don’t speak a language,” said Dr. Robert E. Zwiers, a research scientist at the UC Berkeley Brain Imaging Research Center.
The research team also developed a software tool that lets researchers create custom artificial language-learning programs that automatically learn the “voice” of others, using the coordination systems of human subjects.
The tool, called Speech-Motor Cognition and Automation (SMAC), is an open-source program that can learn from audio recordings of other humans.
The researchers said that their software can also be used for speech recognition.
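The article gives no detail on how the software learns a speaker’s “voice” from recordings. Purely as an illustration of the general idea, a toy speaker-identification sketch might compute a coarse spectral fingerprint from each recording and match new audio to the nearest enrolled speaker. Every name and signal below is invented for this sketch and is not the actual tool:

```python
# Hypothetical sketch (not the actual SMAC software): learn a simple
# spectral "voice fingerprint" per speaker, then identify new audio
# by nearest-centroid comparison of fingerprints.
import numpy as np

def fingerprint(waveform, n_bands=16):
    """Average magnitude spectrum folded into coarse frequency bands."""
    spectrum = np.abs(np.fft.rfft(waveform))
    bands = np.array_split(spectrum, n_bands)
    fp = np.array([b.mean() for b in bands])
    return fp / (fp.sum() + 1e-12)  # normalize away overall loudness

def make_voice(f0, sr=8000, dur=1.0, seed=0):
    """Toy stand-in for a recording: harmonics of a base pitch plus noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(sr * dur)) / sr
    wave = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in (1, 2, 3))
    return wave + 0.05 * rng.standard_normal(t.size)

# "Enroll" two speakers with different base pitches.
centroids = {"low_voice": fingerprint(make_voice(110)),
             "high_voice": fingerprint(make_voice(220))}

def identify(waveform):
    """Return the enrolled speaker whose fingerprint is closest."""
    fp = fingerprint(waveform)
    return min(centroids, key=lambda name: np.linalg.norm(fp - centroids[name]))

print(identify(make_voice(115, seed=1)))  # close to the 110 Hz speaker
print(identify(make_voice(230, seed=2)))  # close to the 220 Hz speaker
```

Real systems use far richer features than banded spectra, but the enroll-then-match structure is a common baseline for this kind of task.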
The software is a collaborative effort between UC Berkeley and the Center for Language Sciences at the NYU Langone Medical Center.
The program uses functional neuroimaging, analyzing functional MRI (fMRI) scans to measure a person’s brain activity while they speak. fMRI is the same whole-brain imaging technique used clinically, for example to examine brain activity during a seizure.
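As a rough illustration of the kind of analysis fMRI enables (not the study’s actual pipeline, and with simulated rather than real data), one can test whether a voxel’s activity tracks a speech task by correlating its time series with a task regressor:

```python
# Illustrative sketch of an fMRI-style analysis (simulated data, not the
# study's pipeline): find voxels whose activity correlates with speaking.
import numpy as np

rng = np.random.default_rng(42)
n_scans = 200

# Block design: alternating 10-scan blocks of rest (0) and speaking (1).
speech_regressor = np.tile(np.r_[np.zeros(10), np.ones(10)], n_scans // 20)

# Simulated voxel time series: one speech-driven voxel, one pure noise.
speech_voxel = 2.0 * speech_regressor + rng.standard_normal(n_scans)
noise_voxel = rng.standard_normal(n_scans)

def task_correlation(voxel, regressor):
    """Pearson correlation between a voxel's signal and the task design."""
    return np.corrcoef(voxel, regressor)[0, 1]

r_speech = task_correlation(speech_voxel, speech_regressor)
r_noise = task_correlation(noise_voxel, speech_regressor)
print(f"speech voxel r = {r_speech:.2f}, noise voxel r = {r_noise:.2f}")
```

The speech-driven voxel correlates strongly with the task while the noise voxel does not; real analyses apply this logic (via general linear models) across tens of thousands of voxels at once.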
The scientists say that the software can be easily adapted to learn a new language, or to help people understand other human speech.
Dr. Zwiers said the system can also use information from smartphones’ voice-recognition capabilities to learn the sounds of other speakers.
“In the next generation of the voice recognition software, we will be able to take the information from that to generate a language that people can learn,” he said.
“For example, we can learn to understand another’s speech and how to make it sound the way they want it to sound.
We’ll then be able to use that information to train speech-recognition software to understand a different language, like Spanish or Mandarin.”
The researchers said they hope to build a system that can translate a person’s spoken words into another language, which could be used by other people to communicate with each other.
The scientists said they will also use the system to learn about the meaning of people’s faces and expressions, and to develop artificial speech systems that understand speech from other people.
“The goal of this research is to build an artificial language that is designed to be intelligible to other human language users, and also to be highly effective in learning to use language,” Dr. Zwiers said.