Aphasia will soon be a thing of the past

POSTED ON Feb 18, 2025

Research funded by the National Institute on Deafness and Other Communication Disorders, which is part of the National Institutes of Health, along with the Whitehall Foundation, the Alfred P. Sloan Foundation, and the Burroughs Wellcome Fund, is bringing attention to language disorders caused by damage to the brain areas that are responsible for language processing.

According to the University of Texas at Austin, the two researchers are collaborating with aphasia specialists at UT’s Dell Medical School and Moody College of Communication to test whether their advanced brain decoder is effective for individuals with this language disorder.

The UT Austin researchers have showcased an AI-based tool capable of translating a person’s thoughts into continuous text without requiring that individual to comprehend spoken words.

Remarkably, training this tool on a person’s unique brain activity patterns takes about an hour.

This is a significant improvement over the team’s earlier work, which involved a brain decoder that required extensive training—many hours of a participant listening to audio stories.

In this latest study, the researchers developed a method that allows for much quicker adaptation of their brain decoder for new users.

This advancement indicates that, with further improvements, brain-computer interfaces might enhance communication for people with aphasia.

Jerry Tang is a postdoctoral researcher at UT in Alex Huth’s lab and the first author of a paper describing this work in Current Biology.

In their previous work, the team had trained a system, including a transformer model similar to ChatGPT’s, to convert a person’s brain activity into continuous text.

This semantic decoder can produce text whether a person is listening to an audio story, imagining telling a story, or watching a silent video that tells a story.
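The core idea behind decoders of this kind is an encoding model: a learned mapping from text features to predicted brain activity. Decoding then runs in reverse, keeping whichever candidate wording best predicts the measured fMRI response. The sketch below is a toy illustration of that scoring step, not the team's actual model; all names, shapes, and the random data are invented for demonstration.

```python
import numpy as np

# Toy sketch: a linear encoding model maps text features to predicted
# voxel activity; decoding scores candidates by how well their
# predicted activity matches the measured response. Shapes are illustrative.
rng = np.random.default_rng(0)
n_feat, n_vox = 8, 20
W_enc = rng.standard_normal((n_feat, n_vox))  # stand-in for learned encoding weights

def predict_activity(text_features):
    """Predicted brain response for one candidate's text features."""
    return text_features @ W_enc

def score_candidates(candidates, measured):
    """Correlation between predicted and measured activity, per candidate."""
    return np.array([
        np.corrcoef(predict_activity(feats), measured)[0, 1]
        for feats in candidates
    ])

# Simulate a measurement generated by the first candidate's features plus noise.
true_feats = rng.standard_normal(n_feat)
measured = predict_activity(true_feats) + 0.1 * rng.standard_normal(n_vox)
candidates = [true_feats] + [rng.standard_normal(n_feat) for _ in range(4)]
best = int(np.argmax(score_candidates(candidates, measured)))
print(best)  # the candidate whose features generated the measurement wins
```

In the published system, a language model proposes the candidate continuations and the encoding model adjudicates among them; this sketch only shows the adjudication.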

To train this brain decoder, participants previously had to lie motionless in an fMRI scanner for about 16 hours while listening to podcasts.

This process is impractical for most individuals and potentially impossible for those who struggle to comprehend spoken language.

The original brain decoder also works only for the individuals on whom it was explicitly trained.

With this latest advancement, the team has devised an approach to adapt the existing brain decoder—previously trained by the intensive method—to new individuals with just one hour of training in an fMRI scanner while watching short, silent videos, such as Pixar shorts.

They developed a converter algorithm that maps the brain activity of a new participant onto that of someone whose brain activity was previously used to train the decoder, achieving similar decoding results in significantly less time.
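One simple way to realize such a converter is functional alignment: both participants watch the same stimulus, and a regularized linear regression learns to map the new participant's voxel responses into the reference participant's voxel space. The sketch below, using ridge regression on synthetic data, is an assumption about the general approach, not the paper's exact algorithm; all shapes and names are hypothetical.

```python
import numpy as np

# Hypothetical setup: T timepoints of a shared stimulus (e.g. a silent
# short film), recorded from a new participant (V_new voxels) and a
# reference participant (V_ref voxels) the decoder was trained on.
rng = np.random.default_rng(0)
T, V_new, V_ref = 200, 50, 60
X_new = rng.standard_normal((T, V_new))   # new participant's responses
X_ref = rng.standard_normal((T, V_ref))   # reference participant's responses

def fit_converter(X_new, X_ref, alpha=1.0):
    """Ridge regression: closed-form weights mapping new-participant
    voxels into the reference participant's voxel space."""
    V = X_new.shape[1]
    return np.linalg.solve(X_new.T @ X_new + alpha * np.eye(V),
                           X_new.T @ X_ref)

W = fit_converter(X_new, X_ref)

# At decode time, project the new participant's activity into the
# reference space, then run the already-trained decoder unchanged.
X_projected = X_new @ W
print(X_projected.shape)  # (200, 60)
```

Because only the lightweight converter is fit per person, an hour of scanning can suffice, while the expensively trained decoder itself is reused as-is.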

Huth remarked that this work reveals profound insights into how our brains function: our thoughts transcend language.
