Single-Neuron Recordings Show How the Brain Plans Speech
Key Findings
- Little is known about which neurons and neural processes are used to plan and produce words during speech
- In this study, ultrahigh-density microelectrode recordings, tracking of natural speech, and real-time intraoperative neurophysiology were used to study human language at single-neuron scale in five patients undergoing planned neurosurgical procedures
- The activity of many neurons closely mirrored how word sounds were produced, reflecting how individual planned phonemes were generated through specific articulators
- Rather than simply representing phonemes independently of their order or structure, many neurons encoded the phonetic composition of upcoming words, reliably predicting how their phonemes would be arranged and segmented into distinct syllables
- Prefrontal recordings could be useful for building synthetic speech prostheses and enhancing brain–machine interfaces for restoring language function
A core component of human language is the succession of processes involved in planning the arrangement and structure of phonemes (single speech sounds) within individual words. These processes are thought to recruit prefrontal regions of the broader language network, which connect with downstream areas involved in the motor articulation of speech.
In a groundbreaking study, researchers at Massachusetts General Hospital have determined how neurons represent the fundamental elements used to construct spoken words, from phonemes to more complex assemblies such as syllables. What's more, they could reliably predict which combinations of consonants and vowels individuals would produce before the words were spoken.
The results, published in Nature, are expected to lead to new options for treating speech and language disorders. The authors are Arjun R. Khanna, MD, William Muñoz, MD, PhD, and Ziv M. Williams, MD, of the Mass General Department of Neurosurgery; Young Joon Kim, a student at Harvard Medical School; Sydney Cash, MD, PhD, co-director of the Mass General Center for Neurotechnology and Neurorecovery; and colleagues.
Methods
The study participants were five patients being prepared for deep brain stimulation. Their neurosurgical plans called for the placement of clinical microelectrodes, which also provided the unique opportunity to briefly perform Neuropixels recordings while the patients were awake. Adaptation of the Neuropixels system for single-neuron recordings was pioneered at Mass General, as previously described.
The region to be traversed included the left (language-dominant) prefrontal cortex, specifically a part of the posterior middle frontal gyrus known to be involved in word planning and sentence construction. This presented the opportunity to study the action potential dynamics of neurons during natural speech in real time.
The participants viewed a scene and were asked to describe it in a specific order and format. For example, the scene might be highlighted in a way that required the participants to produce the sentence "The mouse was being chased by the cat" or "The cat was chasing the mouse." Importantly, the participants did not receive phonetic cues (for example, they did not hear and then repeat the word "cat"). Collectively, they produced 4,263 words.
Three of the participants performed a "perception control" by listening to words spoken to them. One of them also performed a "playback control" by listening to a recording of their own voice.
Results
The key findings were:
- Neurons studied in the prefrontal cortex represented the specific order and structure of phonetic sequences before they were uttered and reflected their segmentation into distinct syllables
- The neurons accurately predicted the phonetic, syllabic, and morphological components of upcoming words in a consistent temporal order (a schematic decoding sketch follows this list)
- The activity of the neurons was broadly organized along the cortical column, with patterns that transitioned from speech planning to speech production
- The neurons reliably tracked the detailed composition of consonant and vowel sounds during listening and distinguished processes specifically related to speaking from those related to listening
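To make the decoding findings concrete, here is a minimal, simulated sketch of what "predicting the phonemes of an upcoming word from single-neuron activity" can look like in practice. Everything below, including the neuron count, trial structure, firing-rate model, and classifier, is an illustrative assumption, not the authors' published pipeline.

```python
# Hypothetical sketch: decode an upcoming phoneme from spike counts.
# All data are simulated; neuron counts, rates, and the classifier are
# illustrative assumptions, not the study's actual analysis pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_neurons = 50               # recorded units (assumed)
n_trials = 80                # trials per phoneme class (assumed)
phonemes = ["k", "ae", "t"]  # e.g., the planned phonemes of "cat"

# Simulate trial-by-neuron spike counts from a pre-speech planning window.
# Each phoneme class gets its own mean firing-rate profile, mimicking
# phoneme-selective neurons.
X_parts, y_parts = [], []
for label, _ in enumerate(phonemes):
    rates = rng.gamma(shape=2.0, scale=2.0, size=n_neurons)
    counts = rng.poisson(lam=rates, size=(n_trials, n_neurons))
    X_parts.append(counts)
    y_parts.append(np.full(n_trials, label))
X = np.vstack(X_parts).astype(float)
y = np.concatenate(y_parts)

# Cross-validated decoding: accuracy above chance means the population
# carries information about the upcoming phoneme before it is spoken.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"Decoding accuracy: {accuracy:.2f} (chance = {1/len(phonemes):.2f})")
```

In analyses of this kind, spike counts would come from a window before speech onset, and decoding accuracy is typically compared against chance using cross-validation or permutation tests.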
Looking Ahead
It seems possible that prefrontal recordings could be useful for building synthetic speech prostheses and enhancing brain–machine interfaces for restoring language function. Future work will need to evaluate additional brain regions and test more complex processes, such as word finding, prosody (rhythm, intonation, stress, and related attributes), and the arrangement of words in sentences.