We’re long past 2001, the setting for the movie 2001: A Space Odyssey, a disturbing tale in which a highly advanced onboard spaceship computer has a mental breakdown that proves fatal to most of the crew. To both our advantage and our peril, we continue to fetishize technology despite these fictionalized risks. Some would even say that, much like the HAL 9000 computer in that story, our computers and other forms of technology have come to control us rather than the other way around.
The coining of new phrases such as “tech neck” and “text neck” points to just how addicted some people are to their phones and other handheld devices. (New York Presbyterian describes tech neck as “the act of stressing muscles while using phones, tablets, and computers, resulting in neck and shoulder pain, stiffness, and soreness.” The Cleveland Clinic is blunter, defining text neck as “a repetitive strain injury that’s becoming more common as more people hunch over smartphones.”) Chiropractors and ophthalmologists may be the professionals whose waiting rooms are filling up fastest thanks to overuse of these devices, but other physicians, allied health professionals, administrators, and patients are all affected by a related force: artificial intelligence (AI). Where is AI in healthcare now, and where might it be headed?
There’s no doubt that advancements in technology and AI have their place in patient care, right down to the operating room. They have also improved diagnostic tools and play a role in rehabilitation and physical therapy; robotic “pets” have even been used to improve mental health and reduce the use of psychoactive medications in elderly patients with dementia. However, AI and technology in healthcare are double-edged swords and may bring unintended consequences.
The authors of a paper on the future of AI in healthcare said it well:
“Scarcely a week goes by without a research lab claiming that it has developed an approach to using AI or big data to diagnose and treat a disease with equal or greater accuracy than human clinicians. Many of these findings are based on radiological image analysis, though some involve other types of images such as retinal scanning or genomic-based precision medicine. Since these types of findings are based on statistically-based machine learning models, they are ushering in an era of evidence- and probability-based medicine, which is generally regarded as positive but brings with it many challenges in medical ethics and patient/clinician relationships.”
An interesting example of this is an algorithm based on criteria from the American Heart Association and the American College of Cardiology, which purports to calculate an individual’s 10-year risk of heart disease or stroke. Notably, LDL-cholesterol, which many physicians still consider a key risk factor for cardiovascular disease (CVD), does not appear anywhere in the assessment criteria. While the role of LDL-C in CVD is currently a hotly debated topic, the larger point stands: algorithms should not replace a physician’s clinical experience or a patient’s goals and values.
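To make the point concrete, here is a rough sketch, in Python, of the general shape such a risk calculator takes: a weighted sum of log-transformed inputs fed into a baseline survival term. The coefficients, mean sum, and baseline survival below are placeholders invented for illustration, not the published values, and the real model also stratifies by sex and race. What matters for the argument is the input list: total and HDL cholesterol appear, but LDL-C does not.

```python
import math

# Sketch of the general form of a 10-year CVD risk equation:
#   risk = 1 - S10 ** exp(weighted_sum - population_mean_sum)
# All numbers below are PLACEHOLDERS for illustration, not published values.
PLACEHOLDER_COEFS = {
    "ln_age": 12.0,
    "ln_total_chol": 11.0,
    "ln_hdl": -7.0,
    "ln_sbp": 2.0,
    "smoker": 0.6,
    "diabetes": 0.7,
}
PLACEHOLDER_MEAN_SUM = 88.0  # placeholder population mean of the weighted sum
PLACEHOLDER_S10 = 0.95       # placeholder 10-year baseline survival

def ten_year_risk(age, total_chol, hdl, sbp, smoker, diabetes):
    """Return an illustrative 10-year risk as a fraction between 0 and 1.

    Note the inputs: total cholesterol and HDL-C are used, but LDL-C
    is nowhere in the calculation.
    """
    weighted_sum = (
        PLACEHOLDER_COEFS["ln_age"] * math.log(age)
        + PLACEHOLDER_COEFS["ln_total_chol"] * math.log(total_chol)
        + PLACEHOLDER_COEFS["ln_hdl"] * math.log(hdl)
        + PLACEHOLDER_COEFS["ln_sbp"] * math.log(sbp)
        + PLACEHOLDER_COEFS["smoker"] * smoker
        + PLACEHOLDER_COEFS["diabetes"] * diabetes
    )
    return 1.0 - PLACEHOLDER_S10 ** math.exp(weighted_sum - PLACEHOLDER_MEAN_SUM)
```

A patient could plug in identical lipid panels with wildly different LDL-C values and get the same number back, which is exactly why such a score should inform, not replace, clinical judgment.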
Some physicians are already using AI to help improve patient compliance, and patients are using it for their own self-education, such as by gathering data from continuous glucose monitors (CGMs) and blood ketone meters. These kinds of tools, not to mention home blood pressure cuffs, heart rate monitors, and other biosensors, can help “nudge patient behaviour in a more anticipatory way based on real-world evidence.” Not all patients would be amenable to wearing a CGM or pricking a finger to measure blood ketones, but those who are willing may be able to take a much more active, and effective, role in improving their own health outcomes via changes to diet, physical activity, stress management, and more.
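As a hypothetical illustration of the “nudge” idea, consider a few lines of Python that summarize a day of CGM readings into a time-in-range percentage and a short coaching message. The 70–180 mg/dL window is a commonly cited consensus target range; the goal percentage and the message wording here are invented for this sketch.

```python
# Hypothetical sketch of a CGM "nudge": summarize readings into time in range
# and return a short coaching message. Thresholds and wording are illustrative.
TARGET_LOW, TARGET_HIGH = 70, 180  # mg/dL, a commonly cited consensus range

def time_in_range(readings):
    """Percentage of CGM readings falling inside the target range."""
    in_range = sum(1 for r in readings if TARGET_LOW <= r <= TARGET_HIGH)
    return 100.0 * in_range / len(readings)

def nudge(readings, goal_pct=70.0):
    """Return a coaching message based on time in range (goal is a placeholder)."""
    tir = time_in_range(readings)
    if tir >= goal_pct:
        return f"Time in range {tir:.0f}% - on track."
    return f"Time in range {tir:.0f}% - below your {goal_pct:.0f}% goal."
```

For example, a day of readings of 90, 110, 200, and 150 mg/dL works out to 75% time in range, and the patient sees an encouraging message rather than a raw stream of numbers. The real value of such tools lies in exactly this kind of anticipatory, behavior-level feedback.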
Apart from the clinical results it can help deliver, the use of AI in healthcare opens up a minefield of ethical issues. As we see daily with various social media platforms, developments in technology may outpace both administrators’ capacity to govern these tools and users’ willingness to operate within accepted norms of decorum.
There may be a parallel to this in pharmaceutical drug approval: drugs that performed well in terms of safety and efficacy in small, limited-duration trials are sometimes taken off the market when more widespread, longer-term use reveals unintended and dangerous effects. New technology should be adopted cautiously and judiciously, with measures in place to protect the privacy of both patients and providers and to verify that the technology is delivering on its promises. As we explored in a recent article, the widening availability of genetic sequencing may contribute to personalized dietary and medical advice, but having these mountains of data doesn’t ensure that we yet know what to do with all of it. In the meantime, patients may become unduly alarmed by findings in their DNA that may or may not contribute to elevated risk of illness, depending on context. (For example, the ApoE4 variant is the strongest known genetic risk factor for Alzheimer’s disease, but carrying it does not cause Alzheimer’s.) Moreover, we must ensure that we are using the technology, rather than the technology using us.
Medicine is as much an art as it is a science. Despite how attached some of us are to our phones, tablets, and other gadgets, it’s unlikely that computers will be capable of forming the emotional bonds that underlie the doctor-patient relationship. When we think of bedside manner, what likely comes to mind is respectful eye contact, a warm handshake, and maybe even a pleasant smile. Robots may do an eerily good job of imitating these behaviors, but they will probably never convincingly replicate the empathy and compassion that human health professionals deliver.