AI will fuel online self-diagnoses, doctors warn

Online patient self-diagnosis is about to become an even thornier problem for doctors, with the market set to be flooded by AI-driven medical apps. That appears to be the verdict from healthcare experts.

In fairness, when used responsibly by qualified professionals, AI already appears to be showing great promise in healthcare, as evidenced by a smartphone app for diagnosing ear infections that was successfully trialed in Pittsburgh in the US.

But what about less scrupulous providers, who may simply see an AI ‘healthcare’ app as an easy way to make some quick money out of scared and credulous patients?

Anxious to learn more, we asked healthcare workers for their opinions – after all, internet-fueled self-diagnosis among the public, which makes it harder for physicians to do their jobs, is already a well-documented problem.

Sure enough, many healthcare professionals already seem genuinely concerned about the impact of AI on inaccurate self-diagnosis.

Dr Elena Salagean of Holistic Allergy in the UK thinks AI will only make matters worse – even if the apps themselves are designed in good faith.

“AI-based self-diagnosis apps will make consultations more tricky in the future when they become more mainstream,” she told me. “One of the main problems we have at the moment as well is that patients will Google their symptoms for self-diagnosis and will act on the advice found online.”

Such advice, she says, could point a patient toward a particular treatment, a trip to A&E, or an appointment with their physician – whether or not they actually need any of the above.

“I believe easy access to AI self-diagnosis apps will only exacerbate this problem in the future because the apps still cannot address the underlying issue,” she says.

The underlying issue is that, without further medical investigation, observation, or examination, a large language model (LLM) cannot give fully accurate diagnoses in every case.

“Regardless of how well-tuned the AI is, its use will be restricted by the limitation in the number of variables that can be entered,” said Dr Salagean. “A lot of people may not have access to a blood-pressure machine, thermometer, or pulse oximeter, which can be vital in establishing a diagnosis when interpreted in the context of symptoms.”

This means app users are much more likely to feed an AI-powered medical app incomplete information and misinterpret its output, leading them to misread their symptoms and draw false conclusions about their health.

“The result here could either be false reassurance or an increased sense of urgency to consult with a doctor in order to address the rare but serious differential diagnoses suggested by the app,” said Dr Salagean. “The former could put their wellbeing at risk, while the latter would increase strain on the healthcare services.”

The trend already appears to be established – if study findings cited by Dr Colin Banas of DrFirst, a technology-focused healthcare consultancy in Maryland in the US, are anything to go by.

These indicate that in the past year – during which LLMs such as ChatGPT have become increasingly popular – just over four in ten patients tried to diagnose themselves using online resources, and more than a quarter of those attempts involved AI tools.

Health conditions that Dr Banas observed patients trying to self-diagnose included headaches (29%), abdominal pain (20%), STDs (10%), cancer (9%), and other sexual issues (16%).

Those findings came despite more than a third of patients saying their doctors had specifically told them not to use AI for healthcare purposes, including diagnoses and prescriptions.

On the other hand, Dr Thomas Pontinen, a double board-certified physician and co-founder of Midwest Anesthesia and Pain Specialists, points out that self-diagnosis is nothing new and even predates the internet.

“Self-diagnosis has pretty much always been a variable, even back before technology and the internet was as pervasive as it is now,” he told me. “All seasoned doctors are bound to have anecdotes of patients who go in and tell them about their preconceived analyses of what they’re experiencing, and it has always been a challenge to deal with these things – because you can’t just invalidate what a patient believes without properly educating them.”

But the rise of search engines such as Google has amplified this problem, simply by making information of all kinds much more widely available.

“They could simply type in their symptoms, go down a rabbit hole, and arrive at their own conclusions about their condition before even seeing a doctor,” he said. “It’s good for people to be more educated about their health, but doctors are trained to comprehensively evaluate all of a patient’s individual circumstances and make certain decisions based on protocol and their best judgment. This is what makes personalized treatment so valuable – everyone is different and will likely need slightly different care.”

AI-powered LLMs like ChatGPT will only push the trend further into dangerous waters, simply by being more convincing than a regular Google search result.

“As far as the ordinary patient is concerned, ChatGPT and its peers are the de-facto AI solutions,” said Dr Pontinen. “This presents a real concern because even though ChatGPT itself would tell you that it is not qualified to make any diagnoses, it can still help people self-diagnose if they ask it to. It delivers information in a more personalized and streamlined way than Google, but that information is still not meeting the holistic medical needs of an individual.”

This will put the onus on doctors to improve their people skills when dealing with patients, whom they will have to take greater pains to educate than before. Gone, it seems, are the days of the public simply accepting the maxim ‘doctor knows best’ at face value.

“I don’t think there is a definitive solution to this yet, but I always go back to my point earlier about patient education because there really isn’t any other alternative to it,” said Dr Pontinen. “It may be an exhausting and ultimately futile endeavor in some cases, but doctors have to take the time to explain everything to their patients in a compassionate manner so that they’re empowered with the right information without feeling gaslighted or invalidated.”

Dr Pontinen even thinks the advent of AI medical apps could lead to physicians cultivating better, more human relationships with their patients.

“Best-case scenario, AI will motivate physicians to spend more time connecting with their patients and cultivating trusted relationships,” he said. “This way, individuals feel they can rely on their doctors rather than feeling like they need to turn to an AI alternative.”