Speak from the Heart: It Can Show Your Heart Health

A new study has found that doctors may be able to identify the risk of a heart attack in heart disease patients by recording their speech. A computer algorithm listening to voice recordings was able to spot the heart disease patients at the highest risk of severe complications. Tiny changes in a person’s voice, such as its frequency, tone and pitch, are undetectable to the human ear but can signal the state of their heart health.

The researchers working on the project couldn’t explain why heart health affects the voice. It could be changes in blood pressure or heart rate acting on the nervous system. Whatever the cause, people the algorithm marked as high risk were more than two-and-a-half times more likely to suffer a heart attack. On top of that, people with high scores were three times more likely to have fatty build-up in their arteries when tested.

The algorithm looks at more than 80 aspects of people’s voices. In the study, 108 people who doctors suspected had coronary artery disease made 90-second recordings of their voices on their phones. They read from a script and then spoke from the heart about positive and negative experiences in their own lives. The algorithm marked people as either high or low risk, and the team then tracked the participants for two years. More than 58 percent of the high scorers went to the hospital for chest pains or heart attacks, compared with more than 30 percent of the low scorers.
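The study’s actual feature set and model are not public, but the general idea of extracting acoustic features from a recording can be illustrated with a toy sketch. The example below, a minimal stand-in and not the study’s method, estimates one such feature, the fundamental frequency (pitch), from a synthetic tone using autocorrelation; the `estimate_pitch` function and the 150 Hz test signal are invented for illustration.

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=50.0, fmax=400.0):
    """Estimate the fundamental frequency (Hz) via autocorrelation.

    A toy stand-in for one of the 80+ acoustic features a voice-analysis
    system might compute; the study's real feature set is not public.
    """
    sig = signal - signal.mean()
    # Autocorrelation at non-negative lags.
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    # Only search lags that correspond to a plausible vocal pitch range.
    lo = int(sample_rate / fmax)
    hi = int(sample_rate / fmin)
    lag = lo + np.argmax(corr[lo:hi])
    return sample_rate / lag

# Synthetic stand-in for a voice recording: a pure 150 Hz tone.
sr = 16000
t = np.arange(0, 1.0, 1 / sr)
voice = np.sin(2 * np.pi * 150 * t)

print(round(estimate_pitch(voice, sr)))  # close to 150 Hz
```

Real speech is far messier than a pure tone, which is one reason a system like the one in the study combines many features rather than relying on any single measurement.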

The study’s co-author, Dr. Jaskanwal Sara, a research fellow at Mayo Clinic, said, “We’re not suggesting that voice analysis technology would replace doctors or replace existing methods of healthcare delivery, but we think there’s a huge opportunity for voice technology to act as an adjunct to existing strategies. Providing a voice sample is very intuitive and even enjoyable for patients, and it could become a scalable means for us to enhance patient management.”

It could be a valuable tool precisely because doctors cannot learn to hear these changes; they are imperceptible to a human listener. “We can’t hear these particular features ourselves,” Dr. Sara said. “This technology is using machine learning to quantify something that isn’t easily quantifiable for us using our human brains and our human ears.”

Because the study had only 108 participants, Dr. Sara wants to perform larger studies with more diverse groups of people to see if the algorithm is accurate. The people in the group were all around the same age, 60, and from the same location, so they mostly had similar accents. A large group of people from different backgrounds would challenge the algorithm and show whether it can work for a broad range of people.

Banner image: Elliot Sloman via Unsplash
