A brain implant powered by artificial intelligence restores speech to a woman who has been paralyzed for 18 years.

An AI-driven brain-computer interface developed at UC Berkeley and UCSF restores close-to-real-time speech for a patient with paralysis.

In a groundbreaking development, researchers have successfully restored the speech of stroke survivors using an AI-powered brain-computer interface (BCI). The technology, which involves implanting a neural prosthesis that records brain signals from the speech motor cortex, has given a voice to individuals who cannot physically speak due to paralysis or locked-in syndrome [1][5].

Initially, the technology had a delay of about 8 seconds between the brain signal and the resulting speech output. However, through advancements reported in early 2025, this delay has been significantly reduced to just 1 second, achieving near real-time speech conversion [1][2]. This improvement is due to better neural signal acquisition, faster and more accurate AI decoding algorithms, and optimized communication pipelines between the implanted device and the speech synthesizer [1].

One of the most inspiring stories of this technological leap is that of Ann Johnson, a former high school teacher and coach, who became paralyzed after a brainstem stroke at the age of 30. The BCI has not only restored Ann's speech but has also allowed her to communicate at a conversational speed of about 160 words per minute, significantly faster than the 14 words per minute with her previous eye-tracking system [2].

The goal of the research team is to make neuroprostheses "plug-and-play," turning them from experimental systems into standard clinical tools. They envision digital "clones" that replicate not just a user's voice but also their conversational style and visual cues [3].

Ann Johnson participated in a clinical trial aimed at restoring speech for people with severe paralysis, led by researchers at the University of California, Berkeley, and UC San Francisco [3]. When Ann attempts to speak, the implant detects neural activity and sends the signals to a connected computer. The AI decoder then translates these neural signals into text, speech, or facial animation on a digital avatar [3].
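The pipeline described above — implant signals streamed to a computer, then decoded by an AI model — can be sketched in a few lines. Everything here is hypothetical: the window sizes, feature rate, and the stub decoder are stand-ins for the trained neural network, whose details are not public.

```python
import numpy as np

# Hypothetical parameters: the real system decodes in short windows;
# the exact sizes and feature rates have not been published.
SAMPLE_RATE_HZ = 200      # neural feature rate (assumed)
WINDOW_S = 0.5            # decode window length (assumed)

def decode_window(features: np.ndarray) -> str:
    """Stand-in for the trained AI decoder. The actual system maps
    cortical features to words or speech sounds with a neural network;
    this stub just labels each window for illustration."""
    return f"<word:{features.mean():+.2f}>"

def stream_decode(feature_stream: np.ndarray) -> list[str]:
    """Chunk a continuous feature stream into fixed windows and decode
    each one as it arrives, rather than waiting for a full utterance."""
    hop = int(SAMPLE_RATE_HZ * WINDOW_S)
    return [decode_window(feature_stream[i:i + hop])
            for i in range(0, len(feature_stream) - hop + 1, hop)]

# Example: 2 seconds of simulated neural features -> 4 decoded tokens.
stream = np.random.default_rng(0).standard_normal(2 * SAMPLE_RATE_HZ)
print(len(stream_decode(stream)))  # 4
```

The decoded tokens could then feed a text display, a speech synthesizer, or the avatar's facial animation, as the article describes.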

Ann selected an avatar to match her appearance, which can mimic facial expressions such as smiling or frowning [4]. The breakthrough could help people who lose the ability to speak due to stroke, ALS, or injury reclaim faster, more natural communication [5].

Future improvements could include wireless implants and photorealistic avatars for more natural interactions [3]. Researchers are also working on decoding "inner speech," or silent thinking of words, with up to 74% accuracy [3][4]. This would expand communication options for those with severe paralysis who cannot physically attempt to speak, allowing more intuitive and effortless interaction in future BCI developments.

The system uses a streaming architecture, allowing near-real-time translation with just a one-second delay [1]. Crucially, the system only works when the participant intentionally tries to speak, preserving user agency and privacy [1]. Researchers even recreated Ann Johnson's voice from a recording of her 2004 wedding speech [2].

The device relies on a neuroprosthesis implanted over the brain's speech production area; it records signals from the speech motor cortex and bypasses damaged neural pathways to produce audible words [1]. Ann Johnson, who hopes one day to work as a counselor in a rehabilitation center, using a neuroprosthesis to talk with clients, is a testament to the potential of this technology [2].

This breakthrough marks a significant step forward in the field of neuroprosthetics, offering hope for those who have lost their ability to communicate due to neurological conditions.

  1. The advancement in technology has not only reduced the delay in speech output for stroke survivors using AI-powered brain-computer interfaces (BCIs) but also enabled faster communication, such as Ann Johnson's conversational speed of about 160 words per minute.
  2. Scientists are working on decoding "inner speech," or silent thinking of words, with up to 74% accuracy, which could provide more intuitive and effortless interaction in future BCI developments, especially for those with severe paralysis.
  3. Researchers aim to make neuroprostheses "plug-and-play," turning them into standard clinical tools, and envision digital "clones" that replicate not just a user's voice but also their conversational style and visual cues.
