Your i-DOC will see you now …

A few weeks ago, I was “forced” by circumstances to get a new laptop, and while setting up this new contraption, I was bombarded with options to integrate Artificial Intelligence (AI) to help “personalise” my browsing history, my news feeds and my music choices.

This experience, together with the recently recurring medical news pop-ups regarding AI in medicine, got me thinking … and voilà … here is the result of my “conversation” with an ether-sphere chat-bot (this is my disclaimer here!) about how AI can potentially impact and transform Clinical Medicine.

So, starting with the positives:

  1. Enhanced Diagnostic Accuracy: AI algorithms, particularly those based on machine learning, can analyze vast amounts of data quickly and accurately. In medicine, this capability aids in early detection and diagnosis of conditions like diabetes, cardiovascular diseases, and cancer. AI-powered tools can interpret medical images, lab results, and patient histories with high precision, often identifying patterns that might be missed by human practitioners.

Fair enough, especially when dealing with radiological imaging … but will AI be able to pick up on the nuances of a patient’s body language (if consultations are not face-to-face)? And how will it be able to combine the physical examination of the patient with the other data to make an accurate final diagnosis from the myriad of differential options thrown up by the combination of signs, symptoms and physical findings each individual presents with?

  2. Personalized Treatment Plans: By considering genetic information, lifestyle factors, and existing medical conditions, AI can recommend the most effective interventions and tailored treatment plans regarding medications and lifestyle changes. This personalized approach can improve patient outcomes and enhance the efficiency of treatment protocols.

This is assuming AI has access to genetic information on the patient … and how was that collected, and who consented to this? And how do you train AI to deal with those “difficult” and “poorly-cooperative” patients? Would an i-DOC even know how to subtly incorporate some reverse psychology to finally convince a patient to start an absolutely necessary treatment that they had totally unfounded fears about? Would AI know how to ask the same question in at least three different ways to finally get to the true answer? Hmmm, I wonder …

  3. Administrative Efficiency: AI can streamline administrative tasks, reducing the burden on healthcare providers through automated documentation, appointment scheduling, and billing processes. This allows clinicians to spend more time on direct patient care and less on paperwork, improving the overall quality of healthcare delivery, although it may cause job displacement, particularly in administrative roles.

OK, on that point we are both on the same page. Bring it on!

  4. Telemedicine and Remote Monitoring: AI-powered telemedicine platforms facilitate remote consultations, which is particularly beneficial in rural or underserved areas. AI can also enable continuous monitoring of patients through wearable devices, alerting physicians to any critical changes in a patient’s condition in real time. This proactive approach helps in managing chronic diseases more effectively.

Again, this is assuming an underprivileged community has access to electricity, internet, wi-fi, etc., etc. … has AI ever been to a rural community in Africa or India? And what about all the off-griders? This is a totally flawed assumption and highlights the narrow-mindedness of such enterprises. Diagnosing is one thing, but if you have no-one to physically administer the treatment or medication, what use is that then?

And the negative points?

  1. Ethics and Data Privacy Concerns: The vast amounts of sensitive information needed to train AI models are vulnerable to privacy and security breaches. Moreover, the use of patient data without explicit consent, and the potential for AI to make decisions that impact patient care without human oversight, pose significant ethical dilemmas.

That’s a big one, but the safety and security of electronic records were already an issue before the advent of AI. So, nothing new here! However, the ethics of consent opens a huge can of worms, as it has a direct impact on liability, culpability and responsibility should something go wrong and litigation follow … who takes the blame? AI? The programmer? The AI service provider? Or the doctor who implemented the AI plan?

  2. Diagnostic Inaccuracies and Bias: If the data used to train AI systems are biased or incomplete, AI can produce flawed results. For instance, if an AI system is trained predominantly on data from a specific demographic, its diagnostic accuracy might be compromised when applied to a broader, more diverse population, thus exacerbating health disparities. This can lead to misdiagnoses and inappropriate treatment plans, potentially jeopardizing patient safety. Moreover, AI systems can struggle with rare conditions due to a lack of sufficient training data, further increasing the risk of incorrect diagnoses.

These issues are, for me, still the most compelling reasons for my reluctance to really embrace AI in medicine. No one is infallible and we all make diagnostic mistakes; however, how quickly can AI recover from going down the wrong diagnostic rabbit hole and realise that a new and drastic turn in clinical approach is needed? We still need that sixth sense, that inspired moment of clarity, that virtuosity that AI may never attain.

  3. Overreliance and Reduced Human Touch: Family medicine thrives on the physician-patient relationship, which includes empathy, communication, and personalized care. Excessive dependence on AI tools might undermine these critical aspects of the human touch, potentially affecting patient satisfaction and outcomes, as well as leading to a deterioration in clinical skills and judgment. Deferring to an AI diagnosis may mean missing important nuances that a human might catch.

This is why it’s called “the human touch”. Humans do not thrive without physical interaction, contact and touch … this is a documented fact … affecting our physical and mental development. This patient-doctor relationship is crucial and sacrosanct, and I believe it will take many generations before patients will trust a virtual reality doc rather than an in-the-flesh one.

  4. Infrastructure Development and Training Needs: Healthcare professionals need to be trained to work alongside AI systems, requiring significant, resource-intensive investment in education and infrastructure development.

Kind of says it all, doesn’t it?

So overall, AI, pretty much like any new invention, is as much of a double-edged sword in medicine as it is in other spheres of our lives. Let us not go blindly into that brave new world …

By Dr Jo

features@portugalresident.com
Dr Joanna Karamon is a General Practitioner with over 20 years’ experience. She is Clinical Director of Luzdoc International Medical Services Network.
