Recent advances in brain-computer interfaces have made it possible to extract speech from neural signals in humans more accurately, but language is just one of the tools we use to communicate. “When my young nephew asks for ice cream before dinner and I say ‘no,’ the meaning is entirely dictated by whether the word is punctuated with a smirk or a stern frown,” says Geena Ianni, a neuroscientist at the University of Pennsylvania. That’s why she thinks future neural prostheses for patients affected by stroke or paralysis will decode facial gestures from brain signals in the same way they decode speech.
To lay a foundation for these future facial gesture decoders, Ianni and her colleagues designed an experiment to find out how neural circuitry responsible for making faces really works. “Although in recent years neuroscience got a good handle on how the brain perceives facial expressions, we know relatively little about how they are generated,” Ianni says. And it turned out that a surprisingly large part of what neuroscientists assumed about facial gestures was wrong.
The natural way
For a long time, neuroscientists thought facial gestures in primates stemmed from a neat division of labor in the brain. “Case reports of patients with brain lesions suggested some brain regions were responsible for certain types of emotional expressions while other regions were responsible for volitional movements like speech,” Ianni explains. For speech, researchers have built a clearer picture by tracing the underlying movements down to the level of individual neurons, but no one had done the same for facial expressions. To fill this gap, Ianni and her team designed a study using macaques, social primates that share most of their complex facial musculature with humans.
The team started by putting the macaques in an fMRI scanner to monitor their brain activity while a high-resolution camera recorded their faces. The researchers then exposed their participants to various stimuli: videos of other macaques making faces at them, interactive avatars, or other live macaques. “This elicited socially meaningful facial expressions that are part of the subjects’ natural repertoire,” Ianni says.
Based on the video analysis, the scientists identified three facial gestures to focus on: the lipsmack, which macaques use to signal receptivity or submission; the threat face, which they make when they want to challenge or chase off an adversary; and chewing, a non-social, volitional movement. Then, using the fMRI scans, the team located key brain areas involved in triggering these gestures. And once this was done, Ianni and her colleagues went deeper, quite literally.
Under the hood
“We targeted these brain areas with sub-millimeter precision for implantation of micro-electrode arrays,” Ianni explains. This allowed her team, for the first time, to simultaneously record the activity of many neurons spread across the areas where the brain generates facial gestures. The electrodes went into the primary motor cortex, the ventral premotor cortex, the primary somatosensory cortex, and the cingulate motor cortex. Once the arrays were in place, the team again exposed the macaques to the same set of social stimuli, looking for neural signatures of the three selected facial gestures. And that’s when things took a surprising turn.
The researchers expected to see a clear division of responsibilities, one where the cingulate cortex governs social signals while the motor cortex specializes in movements like chewing. Instead, they found that every single region was involved in every type of gesture. Whether the macaques were threatening a rival or simply enjoying a snack, all four brain areas were firing in a coordinated symphony.
This led Ianni’s team to ask how the brain distinguishes between social gestures and chewing, since the answer apparently wasn’t about where the brain processes the information. Instead, it came down to different neural codes: different ways that neurons represent and transmit information in the brain over time.
The hierarchy of timing
By analyzing neural population dynamics, the team identified a temporal hierarchy across the macaque cortex. The cingulate cortex used a static neural code. “The static means the firing pattern of neurons is persistent across both multiple repetitions of the same facial gesture and across time,” Ianni explains. The cingulate neurons maintained this firing pattern for up to 0.8 seconds after the gesture. “A single decoder which learns this pattern could be used at any timepoint or during any trial to read out the facial expression,” Ianni says.
The firing rates in the motor and somatosensory cortices, on the other hand, looked radically different, suggesting a dynamic neural code defined by rapidly changing firing-rate relationships among the neural populations.
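One way to picture the difference between these two codes is cross-temporal decoding: train a simple classifier on the population’s firing rates at one timepoint, then test it at another. The sketch below is purely illustrative and uses simulated firing rates rather than anything from the study; under a static code, a decoder trained early in the trial keeps working later, while under a dynamic code its accuracy would fall back toward chance.

```python
# Illustrative sketch of cross-temporal decoding on simulated data.
# None of this is the study's actual analysis or dataset; array sizes,
# labels, and the "static pattern" are made up to show the idea.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_neurons, n_timepoints = 200, 50, 20

# Simulated firing rates (trials x neurons x timepoints) and a gesture
# label per trial (e.g., 0 = lipsmack, 1 = threat face).
labels = rng.integers(0, 2, size=n_trials)
rates = rng.normal(size=(n_trials, n_neurons, n_timepoints))

# Impose a "static" code: the same neurons separate the two gestures
# at every timepoint throughout the trial.
static_pattern = rng.normal(size=n_neurons)
rates += labels[:, None, None] * static_pattern[None, :, None]

def cross_temporal_accuracy(rates, labels, train_t, test_t):
    """Train a decoder at one timepoint and test it at another."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(rates[:100, :, train_t], labels[:100])           # first half: train
    return clf.score(rates[100:, :, test_t], labels[100:])   # second half: test

# With a static code, a decoder trained at t=2 still reads out the gesture
# at t=15. With a dynamic code (rapidly changing firing-rate relationships),
# this off-diagonal accuracy would drop toward chance (0.5).
print(cross_temporal_accuracy(rates, labels, train_t=2, test_t=15))
```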
The team thinks this means that the cingulate cortex manages the social purpose and context of the facial gesture, which is relatively stable. This may be where the brain integrates sensory cues—like the look of another monkey’s expression—with its own internal state to produce the right facial gesture for the occasion.
The dynamic code areas implement this expression by driving individual muscles. “They’re driving the kinematics like ‘move this lip a millimeter to the left, now a millimeter to the right,’” Ianni explains. These constant, minuscule muscle movements are necessary because facial expressions are usually dynamic: macaques’ eyelids, lips, cheeks, and ears are constantly twitching and shifting position even if the end result looks still from a distance.
The journey begins
But that’s just the beginning of a long journey.
“There have been such fabulous, impressive advances in the field of assistive communication devices even since I started this project,” Ianni says, pointing to the neural prosthesis developed by Maitreyee Wairagkar’s team, which Ars covered in 2025, as one example. Building a similar neural prosthesis for decoding facial gestures is unlikely to happen overnight, though. We’re still a long way from truly restoring the ability to communicate through facial gestures to patients who have lost it.
Ianni’s study is the first to record the neurons that produce facial gestures using multi-electrode arrays, which means we’ve only just begun to build up the basic science on the neural mechanisms behind making faces. Neural speech decoding was at a similar point in the late 1990s, and we still can’t make it reliable enough for routine clinical products.
Still, Ianni remains hopeful. “I hope our work goes towards enabling the field, even the tiniest bit, towards more naturalistic and rich communication designs that will improve lives of patients after brain injury,” she says.
Originally published at Ars Technica