A woman has gone viral after saying her doctor used artificial intelligence to read her ECG and misinformed her that she had suffered a heart attack.
Meg Bitchell (@megitchell) said she went in for a routine checkup before losing her current insurance. But she said she left with a shocking diagnosis.
According to her, the AI read her ECG and flagged her as someone who had suffered a heart attack and was in “really bad health.” Her doctor referred her to a cardiologist for more tests.
“For a month, I passively thought I would die,” Bitchell said.
When she finally saw a cardiologist, she was told she was “fine.” Bitchell said the specialist explained that her primary care doctor had signed off on an AI reading without even looking at her chart.
“They have AI reading ECGs now,” said Bitchell, clearly disappointed.
“I hate it, I hate it,” she wrote in the caption of her clip. Since Friday, the video has attracted more than 44,800 views.
The problem with letting AI call the shots
Bitchell’s story isn’t exceptional. AI ECG readings can produce incorrect results for a variety of reasons, including poor data quality, improper signal capture, or limitations in how the algorithm was trained. These systems are meant to assist doctors rather than replace them, which only works if a person double-checks the result.
One big problem is bias. If the AI was trained primarily on one demographic group, such as white men, it can misread ECGs from people who don’t fit that profile. Models can also struggle when the data they encounter in the real world doesn’t resemble the data they were trained on. And unlike tools built specifically for cardiology, general-purpose models such as ChatGPT aren’t trained on enough annotated ECGs to make reliable diagnoses.
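To make the training-mismatch point concrete, here is a minimal, hypothetical sketch of one common safeguard: flagging an incoming ECG whose summary measurements fall far outside the range the model was trained on, so a human reviews it instead of trusting the automated read. The numbers and feature names below are invented for illustration and are not from any real product or clinic.

```python
import numpy as np

# Hypothetical summary features for ECGs the model was trained on
# (heart rate in bpm, QRS duration in ms, QT interval in ms). Toy values only.
rng = np.random.default_rng(0)
training_features = rng.normal(
    loc=[72.0, 95.0, 400.0],
    scale=[10.0, 10.0, 25.0],
    size=(5000, 3),
)

train_mean = training_features.mean(axis=0)
train_std = training_features.std(axis=0)

def out_of_distribution(features, z_threshold=3.0):
    """Return True if any feature sits far outside the training range."""
    z_scores = np.abs((features - train_mean) / train_std)
    return bool(np.any(z_scores > z_threshold))

# A new patient whose measurements don't resemble the training data.
new_patient = np.array([150.0, 160.0, 520.0])

if out_of_distribution(new_patient):
    print("Reading flagged: outside training distribution, send to a clinician.")
else:
    print("Reading within familiar range; still requires clinician sign-off.")
```

The point of a check like this isn’t to make the model smarter; it’s to admit when the model is outside its comfort zone and hand the decision back to a person.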
Even when the technology is sound, other factors can trip it up. A patient shifting, an electrode placed slightly out of position, or electrical interference from another device can create noise that mimics a heart problem. The AI can misinterpret those artifacts as something serious. And because many models act as a “black box,” even doctors can’t always see how the system reached its conclusion.
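Here is a small, hypothetical sketch of how that kind of artifact can fool an automated reading. It builds a synthetic ECG-like signal (not real patient data), adds baseline wander from movement plus 50 Hz powerline interference, and shows that a naive peak-counting heart-rate estimate gets inflated by the noise until the signal is filtered.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks

# Toy "ECG": 10 seconds at 250 Hz with one R-wave-like pulse per second (60 bpm).
fs = 250
t = np.arange(0, 10, 1 / fs)
beat_times = np.arange(0.5, 10, 1.0)
clean = sum(np.exp(-((t - bt) ** 2) / (2 * 0.015**2)) for bt in beat_times)

# Artifacts: slow baseline wander (patient movement) + 50 Hz powerline interference.
noisy = clean + 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.35 * np.sin(2 * np.pi * 50 * t)

def naive_heart_rate(signal):
    """Count peaks above a fixed threshold -- the kind of shortcut noise breaks."""
    peaks, _ = find_peaks(signal, height=0.6, distance=int(0.25 * fs))
    return len(peaks) * 6  # beats counted over 10 seconds -> beats per minute

# Remove wander and powerline noise with a 0.5-35 Hz band-pass filter.
sos = butter(4, [0.5, 35], btype="band", fs=fs, output="sos")
filtered = sosfiltfilt(sos, noisy)

print("HR from clean signal:   ", naive_heart_rate(clean))
print("HR from noisy signal:   ", naive_heart_rate(noisy))     # inflated by artifacts
print("HR from filtered signal:", naive_heart_rate(filtered))  # back near 60 bpm
```

Real ECG software is far more sophisticated than this, but the failure mode is the same: garbage in, confident-looking garbage out, which is why someone still has to look at the trace.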
The bigger risk may be overconfidence. If a doctor signs off on whatever the computer says without looking at the chart, the error slips through, which is exactly what Bitchell says happened to her. That’s why experts recommend using AI as a second set of eyes, with a cardiologist reviewing the results before the patient is sent for further testing.
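That “second set of eyes” idea can be expressed as a simple workflow rule. Below is a hypothetical sketch (the class, fields, and wording are invented for illustration, not any real clinic’s system) in which an abnormal AI finding cannot be released to the patient until a clinician has actually reviewed it.

```python
from dataclasses import dataclass

@dataclass
class EcgReading:
    patient_id: str
    ai_finding: str          # e.g. "possible prior myocardial infarction"
    ai_confidence: float     # 0.0 - 1.0
    clinician_reviewed: bool = False
    clinician_agrees: bool = False

def finalize_report(reading: EcgReading) -> str:
    """Treat the AI output as a draft; nothing abnormal goes out unreviewed."""
    if reading.ai_finding != "normal" and not reading.clinician_reviewed:
        return "HOLD: abnormal AI reading requires clinician review before release."
    if reading.clinician_reviewed and not reading.clinician_agrees:
        return "RELEASE: clinician overrode the AI finding; report the human read."
    return f"RELEASE: {reading.ai_finding} (AI confidence {reading.ai_confidence:.0%})"

# The scenario from the video: the AI flags a heart attack, but no one has looked yet.
flagged = EcgReading("patient-001", "possible prior myocardial infarction", 0.62)
print(finalize_report(flagged))   # held until a human signs off
```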
Viewers question why doctors would use AI
Commenters were shocked that a healthcare provider would rely on AI to read a patient’s charts. Many wondered whether it was even legal.
“That has to be medical malpractice,” one user wrote. “If it isn’t, it’s only because the law hasn’t caught up with the technology.”
“We should be able to opt out of having our medical information fed to AI,” another said.
“Algorithms read ECGs really badly,” a third commenter added. “That’s why we still teach people to read them.”
Others pointed out that AI is already widely used in healthcare, a prospect many find frightening.
“I had an AI read the questionnaire for my autism screening,” one woman shared. “It said I’m an alcoholic, even though I only have one drink every three months.”
“I just had an older doctor tell me … that he’s using ChatGPT to help him make treatment plans,” another said. “A lot of companies are popping up and telling doctors it’s fine to use their AI tools to diagnose now.”
Some encouraged Bitchell to take legal action over what happened.
“Take them to court for malpractice,” one person advised.
“I’m a doctor, and this is very problematic,” another wrote. “You need to go to court.”
“Malpractice suit, anyone?” a third viewer asked.
The Daily Dot reached out to Bitchell via TikTok for more information.