“Mark my words. AI [artificial intelligence] is more dangerous than nuclear weapons.” — Elon Musk
The other day, as soon as I walked into work, already my usual 10 minutes late, I was rushing to my office at the clinic when I saw one of our medical assistants in the hallway. She is known for being calm, so it seemed odd that she was visibly distressed. The reason: she had forgotten her laptop at home and couldn’t find a loaner. I tried to save her day by giving her my laptop to work on. This would mean that I had a computer to work on in the administrative area, but I would have to see patients all day without having a computer in the room with them.
The medical assistant resisted, but I insisted. We went our separate ways to continue our days.
I hadn’t seen patients like that for some time. By “like that,” I mean without a computer in the room. It shouldn’t be a big deal, but I realized that the computer has become an entity that is always part of our patient encounters. If you don’t have a stethoscope, you can get by, but you can’t get by without a computer.
It is as if a third “being” is present in the room – the first two being the doctor and the patient. We interact briefly with patients, but are constantly working on the computer, from viewing their records, labs and scans to placing orders and scheduling instructions.
And now that there is the “secure instant chat” function, the computer talks back to us all the time. Other providers are constantly texting us, and somehow the computer keeps demanding our attention, stealing it away from the patient in the room. It alerts us to drug interactions. It reminds us to change our passwords, prompts us to order tests, and prevents us from closing patient records if certain rules are broken.
That day, as I didn’t have this third “being” in the room, there was only the patient and me. The encounter felt incomplete. I kept wondering if the patient felt it too, because patients usually feel reassured when they see their medical records on the computer: “I don’t remember what medications I take or what surgery I had 10 years ago. It’s all in the computer!”
Patients actually have a relationship with the computer, because now they go through their own records and try to make sense of things. Sometimes they do a good job, and other times they get it wrong. They love the computer for it, but if the doctor spends too much time staring at the computer instead of the patient, they begin to feel disrespected, as a jealous lover would.
That day, my mind was telling me that since I would have to document everything on the computer later and use it to place orders, I should hurry up and finish each visit if I was to stay on schedule. Then I reminded myself, “You saw patients just a few years ago when there were no computers in the room – relax!”
A feeling of calm came over me. I forgot about having to respond to constant messages, place orders, and start documenting the visit. I was able to spend more time with my patients, talking about their lives and sharing my own stories. I felt as if a crowd had dissipated. The air in the room revived the intimate doctor-patient relationship that had existed for centuries.
We talked about the number of cows my patient had on her farm. We talked about how many of them they end up eating and how many they give away. We talked about how, 20 years ago, one of my patients saw her daughter hang herself in the closet. Her wound is still so fresh that she truly believes whoever says time is the best healer is full of it.
We talked about how one of my patients had once had a robust sex life, but felt his prostate cancer treatment had taken away the man he used to be. We talked about how my patient’s nephew was found dead of a drug overdose. She was sad for him but relieved that her own children hadn’t turned out that way. We talked about things that the third “being” in the room – the computer, that is – does not usually allow us to talk about, because we are too busy with the computer to be fully present with the patient.
We also talked about how I thought medicine would be practiced in a few hundred years. A patient would walk into a “booth.” Their symptoms would be heard the way Siri hears us, their clinical signs photographed and interpreted. They would be scanned from skull to toes, with all of their internal organs anatomically examined. A drop of blood taken by a painless finger prick would yield all kinds of laboratory tests, and the computer would produce the most appropriate diagnostic and treatment options. It could even inject the most precise dose of highly effective drugs against the disease directly into the veins.
Patients’ genetic profiles would be analyzed instantly, and mutations would be identified and corrected quickly. Complex surgical procedures would be carried out meticulously by ambidextrous robots. Humans would trust these “booths” more than their own clinical judgment. Just as, if you asked me to calculate 89573 × 74823, I would trust a calculator more than my own arithmetic.
When Elon Musk warns us about the dangers of artificial intelligence, he is not referring to medicine specifically, but one can certainly consider his statement in the context of the future of our profession. Will there come a day when this computer and this booth become more intelligent than the clinical judgment of the doctor?
“Never!” we say. A computer must be programmed by a human to produce results. A computer can never replace the complex clinical judgment of a human. Well, I would say that if you told a human from 500 years ago that I would be flying from New York to Kuala Lumpur, and that I could do it in one night, he would undoubtedly laugh and ridicule you for wasting his time.
If computers started diagnosing us more accurately than we do, we would be happy to accept it. But what if they started making decisions for us? If that actually happens, how will computers decide, say, when it is time to stop dialysis and move to comfort care? How will they decide how much pain is too much, when to give narcotics, and when to withhold them out of concern for addiction?
How will they connect personally with patients, share stories and discuss hobbies? How will they go to the funeral and shed tears with the patients and their families when nothing else can be done? How will these computers learn to bring comfort and consolation to these patients? And even if they do, will patients accept it as they accept it from us human doctors?
What if computers turned against us? What if they started selecting which pregnancies to carry and which to terminate? What if they started dictating patients’ advance directives? What if they assigned a monetary value to the number of years lived? What if they limited the number of children we can have? What if they told me that my child’s life is not worth living because of his disability?
What if they told me that my grandmother is occupying a hospital bed needed for a younger patient and will be denied further life-prolonging treatments? What if they told me it’s OK to clone humans and select the “best”? What if they dictated abortion and gender identity choices to patients?
Some of you might say, isn’t that what humans already do to humans? Well, yes, you are right. But will we accept it if someone other than humans – in this case, artificial intelligence – imposes these restrictions on us?
Farhan S. Imran, MD, is a hematology-oncology physician.
This post appeared on KevinMD.