The invisible warning signs that predict your future health

January 17, 2019

It was a sunny day outside, with a hint of spring in the air. I followed Angela, whose name has been changed to protect her identity, down the corridor towards my consulting room in Melbourne. She’d been my patient for several years, but that morning I noticed her shuffling her feet a little as she walked. Her facial expression seemed a bit flat and I noticed she had a mild tremor.

I referred her to a neurologist, and within a week she had started treatment for Parkinson’s disease. But I kicked myself for not picking up on her symptoms sooner.

Sadly, this is a common situation for patients all over the world: they are only diagnosed once they begin to show noticeable signs of illness, the body’s warning signal to doctors that something is wrong. If disease could be spotted earlier, patients might have a chance of receiving early treatment, perhaps even halting their condition before it takes hold. New technology is beginning to offer some hope.

With the help of artificial intelligence, patients and doctors could be alerted to potential changes in their health months, or even years, before symptoms appear.

Futurist Ross Dawson, founder of the Future Exploration Network, predicts a shift from the current model of remedial “sick-care” to a new healthcare ecosystem, focused more on prevention and the tracking of potential health problems before they have a chance to develop.

“Shifting societal attitudes, with increased expectations to live full and healthy lives, are driving these changes,” he says. “This decade, the explosion of new technology and algorithms has given rise to deep learning in artificial intelligence, which has become vastly more effective at pattern recognition than humans.”

By tracking our heart rate, breathing, movement and even chemicals in our breath, AI could detect potential health problems at an individual level long before obvious symptoms appear. This could help doctors to intervene, or allow patients to change their lifestyle to stave off or prevent illness.

Perhaps most excitingly, these systems can discern patterns that are invisible to the human eye, revealing surprising aspects of how our bodies betray our future health.

Windows to your health

Dawson highlights studies in which AI, by constantly monitoring a patient’s pulse, is better able to anticipate who is likely to suffer a heart attack. One study even pulled out variables that cardiologists had not thought of as having predictive value – a home visit from the GP requested by the patient, for example.
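As a rough illustration of how an algorithm can surface unexpected predictors, the sketch below fits a standard gradient-boosting model to a handful of invented patient-record features and then reads off which ones carry predictive weight. Everything here, from the feature names to the data, is made up for illustration and is not the model used in the studies Dawson cites.

```python
# Illustrative only: fit a model on synthetic "patient record" features
# and inspect which ones the algorithm finds predictive.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
feature_names = ["resting_pulse", "pulse_variability", "age",
                 "home_visits_requested"]   # hypothetical features
X = rng.normal(size=(500, len(feature_names)))
# Synthetic outcome: risk driven by two of the features.
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(size=500)) > 1

model = GradientBoostingClassifier().fit(X, y)
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

On real records, an unexpectedly useful signal, such as patient-requested home visits, would surface in exactly this kind of importance ranking.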

A recent study by researchers at Google showed that AI algorithms could also be used to predict if someone might suffer a heart attack by looking into their eyes. They trained an AI on retina scans from 284,335 patients. By looking for patterns in the crisscross of blood vessels, the machine learned to spot the tell-tale signs of cardiovascular disease.
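The article does not detail Google’s architecture, but the general recipe, fine-tuning a convolutional network on labelled retina images, can be sketched in a few lines. The model choice, label and training settings below are assumptions made for illustration.

```python
# Minimal sketch: fine-tune a CNN to predict cardiovascular risk from
# retinal images. Model, labels and settings are illustrative.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)   # one risk logit per scan

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, had_event):
    """images: (N, 3, H, W) retina scans; had_event: (N, 1) float 0/1."""
    optimizer.zero_grad()
    loss = criterion(model(images), had_event)
    loss.backward()
    optimizer.step()
    return loss.item()
```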

Daily movements

If Dina Katabi, a professor at the Massachusetts Institute of Technology, has her way, delays in the diagnosis of genetic disorders and debilitating conditions such as Parkinson’s disease, depression, emphysema, heart problems and dementia will be a thing of the past.

She has designed a device that transmits low-power wireless signals through a house. These electromagnetic waves reflect off a patient’s body. Every time we move, we change the electromagnetic field around us. Katabi’s device senses these minute reflections and tracks them, using machine learning to follow a patient’s movements through walls.  

Katabi describes the wireless signals as “amazing beasts” that go beyond our natural senses. Deploying a device in a patient’s home allows their sleep patterns, mobility and gait to be continuously monitored. It can pick up on their breathing rates – even with multiple people in a room – and detect if someone has a fall. It can monitor their heartbeats and even provide information about their emotional state.

“We don’t see them, but they can complement our current knowledge in almost magical ways,” she says. “Our new device is able to traverse walls and extract vital information which can augment our limited ability to perceive change.”
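The signal processing behind one of those capabilities, breathing rate, can be shown with a toy example: chest movement modulates the reflected signal periodically, so the dominant frequency in a plausible band gives the rate. The signal below is synthetic; a real device works from far noisier radio reflections.

```python
# Toy example: recover a breathing rate from a periodic signal, as an
# RF sensing device might. The "reflection" here is synthetic.
import numpy as np

fs = 20.0                                    # samples per second
t = np.arange(0, 60, 1 / fs)                 # one minute of data
chest = np.sin(2 * np.pi * 0.25 * t)         # 0.25 Hz = 15 breaths/min
signal = chest + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
band = (freqs > 0.1) & (freqs < 0.7)         # plausible breathing band
rate = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"Estimated breathing rate: {rate:.1f} breaths per minute")
```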

This ability to look for changes in the daily behaviour of patients can provide early clues of something being wrong, perhaps before they even know it themselves.

Many of us already use a myriad of gadgets to self-monitor everything from our calorie intake to the number of steps we take each day. Artificial intelligence can play a vital role in helping to make sense of all this information.

This ability to predict changes in health could be particularly important as our population grows ever older – according to the United Nations, people aged over 60 will account for a fifth of the global population by 2050.

“More and more elderly people are living alone, burdened by chronic disease, which leads to enormous safety concerns,” says Katabi. She believes her device will allow medical professionals to intervene sooner and potentially ward off medical emergencies.

Face value

Artificial intelligence could also use the way we look to help us predict future disease. New research suggests it can pick up on subtle differences in our faces that might be the hallmarks of disease.

FDNA, a Boston-based startup, has developed an app called Face2Gene that uses what it calls “deep phenotyping” to identify possible genetic diseases from a patient’s facial features. It relies on an AI technique known as deep learning, teaching its algorithms to spot facial features and shapes that are typically found in rare genetic disorders such as Noonan syndrome.

The algorithm was trained by feeding it with more than 17,000 photographs of patients affected by one of 216 different genetic conditions. In some of these disorders the patients develop certain facial hallmarks of their condition, such as in Bain-type intellectual disability, where children have characteristic almond-shaped eyes and small chins. FDNA’s algorithm has learned to recognise these distinctive facial patterns that are often undetectable to human doctors.
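The article does not describe FDNA’s network, but the shape of such a classifier is standard: a convolutional network maps a face photograph to a probability for each of the 216 syndromes, from which a shortlist is read off. The architecture and names below are assumptions for illustration.

```python
# Sketch of a "deep phenotyping" classifier: rank 216 candidate
# syndromes for one face photograph. Architecture is an assumption.
import torch
import torch.nn as nn
from torchvision import models

NUM_SYNDROMES = 216

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_SYNDROMES)

def shortlist(face_image: torch.Tensor, k: int = 10):
    """Return the k most likely syndromes for one (3, H, W) image."""
    with torch.no_grad():
        probs = torch.softmax(model(face_image.unsqueeze(0)), dim=1)
    return torch.topk(probs.squeeze(0), k)   # (probabilities, indices)
```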

Tests of Face2Gene’s system show that it shortlisted the correct syndrome 91% of the time, outperforming human doctors in spotting patients with conditions such as Angelman syndrome and Cornelia de Lange syndrome.

Early diagnosis of rare genetic syndromes like these means that medical treatments can be delivered more promptly – while sparing families the diagnostic odyssey that identifying these conditions often involves.

With rare diseases affecting an estimated 10% of the world’s population, AI tools such as these are likely to change the face of medicine.

Read More:  Latest Klonopin Shipping News

Inside your brain

Not all illnesses are obvious from the outside, however. Doctors and surgeons have long relied on X-rays and scans to help them diagnose the reason for their patients’ symptoms. But what if it were possible to use these scans to spot a disease before it starts to cause problems?

Ben Franc is no ordinary radiologist. The professor of clinical radiology at Stanford University is on a quest to unlock the secrets buried inside the millions of whole-body PET scans performed routinely in oncology departments every year. Doctors examine these scans to determine where cancerous tumours lie, but never analyse them for other, unrelated risks to their patients’ health. Extracting more from these images could arm doctors with extra information about a patient’s disease, or even reveal a previously undiagnosed condition.

In a pilot project, Franc is working with a team to study whether changes in brain metabolism that show up in these PET scans can be used to predict if someone might develop Alzheimer’s disease, a condition that affects 10% of people over the age of 65.

Using AI, they have developed algorithms that are capable of spotting these subtle changes in metabolism, in this case the uptake of glucose in certain areas of the brain, which are thought to occur early on in the development of Alzheimer’s disease. In tests on sets of images from 40 patients it had never seen before, the algorithm was able to detect the disease on average six years before human doctors finally diagnosed them with Alzheimer’s.

It raises the prospect of being able to spot this devastating condition years before the symptoms that lead to diagnosis begin to appear. 
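The article does not describe the team’s network, but PET scans are three-dimensional volumes, so one natural sketch is a small 3D convolutional network that maps a scan to a probability of future Alzheimer’s. Everything below is an illustrative stand-in, not the study’s actual model.

```python
# Illustrative 3D CNN mapping a PET volume to a single probability.
# Not the study's architecture, which the article does not specify.
import torch
import torch.nn as nn

class PetNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, 1)

    def forward(self, volume):                # volume: (N, 1, D, H, W)
        x = self.features(volume).flatten(1)
        return torch.sigmoid(self.classifier(x))

model = PetNet()
scan = torch.randn(1, 1, 32, 64, 64)          # one synthetic PET volume
print(model(scan))                            # probability-like output
```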

“Computers can find associations that would take humans a lifetime,” says Franc. “AI gives us the opportunity to harness the expertise of exposure to millions of cases. This can lead to early diagnosis and hopefully, more timely and effective treatment for patients.”

And it is not just Alzheimer’s disease. His group also recently published a paper showing that the enormous sets of raw data from MRI and PET scans can be combined to predict a patient’s subtype of breast cancer, as well as their chances of recurrence-free survival. This growing field, known as radiomics, uses the raw data from scans to identify features that cannot be spotted with the naked eye. There are more than 5,000 independent imaging features that can be used, and AI offers a powerful new way of analysing them all.

“Using machine learning, we were able to identify subsets of these features that may be used to make these predictions,” says Franc. He is also hoping to find ways of using AI outside the hospital to predict health problems. He envisages smart toilets, for example, that look for changes in a person’s urine or faeces in order to predict disease.
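The radiomics workflow Franc describes, reducing each scan region to numeric features and letting a model find predictive subsets, can be sketched simply. The handful of intensity statistics and the synthetic data below are stand-ins for the thousands of engineered features real pipelines use.

```python
# Simplified radiomics sketch: intensity statistics over a region of
# interest, fed to a classifier. Data and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def first_order_features(roi: np.ndarray) -> np.ndarray:
    """A few intensity statistics over a 3D region of interest."""
    return np.array([roi.mean(), roi.std(), roi.min(), roi.max(),
                     np.percentile(roi, 10), np.percentile(roi, 90)])

rng = np.random.default_rng(0)
X = np.stack([first_order_features(rng.normal(size=(32, 32, 16)))
              for _ in range(100)])           # one ROI per "patient"
y = rng.integers(0, 2, size=100)              # e.g. two cancer subtypes

clf = RandomForestClassifier(n_estimators=200).fit(X, y)
```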

How you speak

While scans and images can give clues about our physical health, our mental health remains somewhat harder to diagnose. But mental health conditions are on the rise, currently affecting around 25% of the global population and reaching epidemic proportions in some countries. As a leading cause of disability, this places enormous strain on society.

Machine learning is offering new ways of detecting mental health conditions early by tuning into tell-tale signs hidden in a person’s choice of words, tone of voice and other nuances of language. Ellie, a digital avatar developed by the University of Southern California’s Institute for Creative Technologies, is a virtual therapist who can analyse more than 60 points on a patient’s face to determine if they might be depressed, anxious or suffering from PTSD. How long a person pauses before answering a question, their posture or how much they nod their head all provide Ellie with further clues about the patient’s mental state during the “consultation”.
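The article gives only a high-level picture of Ellie, but the underlying pattern, turning behavioural cues into numeric features and scoring them, is simple to show. The two features and the weights below are invented purely for illustration; a real system fuses dozens of facial and vocal signals.

```python
# Toy screening sketch: two invented features scored with a logistic
# function. Not Ellie's model; purely illustrative.
import numpy as np

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def features(transcript: str, pause_seconds: float) -> np.ndarray:
    words = transcript.lower().split()
    rate = sum(w in FIRST_PERSON for w in words) / max(len(words), 1)
    return np.array([1.0, rate, pause_seconds])  # bias, word use, pause

weights = np.array([-2.0, 4.0, 0.5])             # invented weights

def risk_score(transcript: str, pause_seconds: float) -> float:
    z = features(transcript, pause_seconds) @ weights
    return float(1 / (1 + np.exp(-z)))           # score between 0 and 1

print(risk_score("I just feel like my days blur together", 2.5))
```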

This way of using machine learning is expected to bring major advances to psychiatric outcomes by “improving prediction, diagnosis, and treatment of mental illness”, says Nicole Martinez-Martin of the Stanford Center for Biomedical Ethics, writing with her colleagues in a recent article in the AMA Journal of Ethics.

Advances in AI have also produced emotionally intelligent bots that are able to have natural conversations with humans, a technology that is giving far more people access to treatments currently limited by the availability of human therapists. Wysa, for example, is a bot designed by therapists and AI researchers to help build people’s mental resilience by using evidence-based talking techniques such as cognitive behavioural therapy. The idea is for the bot to ask probing questions that help people untangle how they are feeling after a difficult day.
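At its simplest, that reflective-questioning style can be caricatured in a few lines. Real systems like Wysa are built on far more sophisticated language understanding than keyword rules, so treat this only as a sketch of the interaction pattern.

```python
# A deliberately tiny caricature of reflective questioning.
PROMPTS = {
    "work": "What about work felt hardest today?",
    "sleep": "How has your sleep been this week?",
}
DEFAULT = "That sounds difficult. What is on your mind right now?"

def reply(message: str) -> str:
    """Return a probing question matched to the user's message."""
    lowered = message.lower()
    for keyword, question in PROMPTS.items():
        if keyword in lowered:
            return question
    return DEFAULT

print(reply("I had a rough day at work"))
```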

Tough decisions

When all these biometric measurements are combined with genetic profiling, the result could be predictions of individual risk that supersede today’s sweeping medical guidelines. In the world of precision medicine, AI could make the routine annual check-up at the doctor an anachronism.

But how much trust are we willing to put in an algorithm when it comes to our lives? A recent article in the AMA Journal of Ethics poses a scenario in which machine learning is used to make decisions by predicting a patient’s end-of-life choices. The authors point out that “an algorithm will not lose sleep if it predicts with a high degree of confidence that a person would wish for a life-support machine to be turned off”.

The question is, do we want something that doesn’t worry about the decisions it makes to be making such important calls?

We might prefer the bedside manner of a human doctor over that of a machine, but in the near future an AI doctor might be able to pick up on issues long before its organic counterpart. By being perfectly tailored to our individual personalities, behaviours and emotions, it could give us an early warning that just might save our lives.

So, while we might not expect a computer to feel, we may want it to understand what and how we are feeling. 
