Scientists at Northeastern University have invented a wearable sensor that warns caregivers one minute before an autistic child has an aggressive meltdown. Although the sensor is still in its early stages, the technology could one day fill an important gap for parents of autistic kids. Because people with autism often cannot express how they feel with words or facial expressions, caregivers may be left to guess at their charges’ mounting tensions. The tragic result is that many parents of autistic children dread leaving home with their unpredictable kids in tow—or avoid it altogether.
“We found that if we use the last three minutes of physiological data, we could predict whether that person is going to behave aggressively in the subsequent one minute with 84 percent average accuracy,” said Matthew Goodwin, the behavioral scientist at Northeastern University who conceived of the sensor, in a press release.
Goodwin and colleagues designed the sensor after observing how a small sample of 20 autistic children experienced changes in their heart rates, temperatures, and movements in the moments leading up to seemingly unpredictable outbursts. They then constructed a library of telltale physiological signs that appear before an autistic child becomes aggressive. “We had 87 hours of observations using that method with 20 inpatient youth with autism, capturing 548 time-stamped aggressive episodes with accompanying biosensor data,” Goodwin says.
Beyond providing an early warning system for parents, Goodwin suspects the technology may help healthcare providers develop strategies for autistic people to implement on their own before an outburst. But these are all long-term goals. For now, Goodwin says, the algorithm behind the wearable sensor still needs to learn the quirks of more autistic children. To that end, the Department of Defense recently awarded Goodwin three years of funding to develop his sensor.
Goodwin likens his product, in its current form, to the first iterations of Siri, which had to learn the nuances of each user’s speech before its language recognition software kicked in properly. “Back in the day, you had to read known passages so it could learn how you pronounce certain words,” he says. “We’re kind of in the same boat here. As we get more data from more people over longer periods of time, we should have a larger dataset that will work with any new person coming in.”