Elon Musk asked people to upload their medical data to X so his AI company could learn to interpret MRIs and CT scans

In Elon Musk’s world, AI is the new MD. The X owner encourages users to upload their medical test results, such as CT scans and bone scans, to the platform, and has said this data will be used to train Grok, X’s AI chatbot, to interpret such results effectively.

Earlier this month, Elon Musk reposted an X video of himself talking about uploading medical data to Grok, saying, “Try it!”

“You can upload X-ray or MRI images to Grok and it will give you a medical diagnosis,” Musk said in the video, which was uploaded in June. “I’ve seen cases where it’s actually better than what the doctors tell you.”

In 2024, Musk said medical images uploaded to Grok would be used to train the bot.

“This is still early stages, but it’s already pretty accurate and will get extremely good,” Musk wrote on X. “Let us know where Grok is getting it right or needs work.”

Musk also claimed in a reply that Grok had saved a man in Norway by diagnosing a problem his doctors had missed. The X owner has been willing to upload his own medical information to the bot.

“I recently did an MRI and sent it to Grok,” Musk said in an episode of the Moonshots with Peter Diamandis podcast released on Tuesday. “None of the doctors or Grok found anything.”

Musk did not reveal in the podcast why he got an MRI. xAI, which owns X, said in a statement to Fortune: “Legacy Media Lies.”

Grok faces some competition in the AI health space. This week, OpenAI released ChatGPT Health, a feature within the chatbot that allows users to securely connect medical records and wellness apps like MyFitnessPal and Apple Health. The company said it will not train models on personal medical information.

AI chatbots have become a ubiquitous source of medical information. OpenAI reported this week that 40 million people seek health information from its model, 55 percent of whom use it to look up or better understand symptoms.

Dr. Grok will see us now

So far, Grok’s ability to detect medical anomalies has been mixed. Some users claim the AI has successfully analyzed blood test results and identified breast cancer. But it has also misinterpreted other information, according to doctors who responded to some of Musk’s posts about Grok’s ability to interpret medical data. In one case, Grok mistook a “textbook case” of tuberculosis for a herniated disc or spinal stenosis. In another, the bot mistook a mammogram of a benign breast cyst for an image of testicles.

A May 2025 study found that while all AI models have limitations in processing and predicting medical outcomes, Grok was the most effective at determining the presence of pathologies across 35,711 brain MRI images, compared with Google’s Gemini and OpenAI’s ChatGPT-4o.

“We know they have the technical ability,” Dr. Laura Heacock, associate professor at New York University’s Langone Department of Radiology, wrote on X. “Whether or not they want to put in the time, data and [graphics processing units] to include medical imaging is up to them. For now, non-generative AI methods continue to outperform in medical imaging.”

The problems with Dr. Grok

Musk’s lofty goal of training his AI to make medical diagnoses is also a risky one, experts said. While AI has increasingly been used to make complicated science more accessible and to build assistive technologies, training Grok on data from a social media platform raises concerns about both the bot’s accuracy and users’ privacy.

Ryan Tarzy, CEO of health technology company Avandra Imaging, said in an interview with Fast Company that asking users to submit data directly, rather than sourcing it from secure databases of de-identified patient data, is Musk’s way of trying to speed up Grok’s development. The information also comes from the limited sample of people willing to upload their images and tests, meaning the AI isn’t drawing on sources representative of the broader, more diverse medical landscape.

Medical information shared on social media is not subject to the Health Insurance Portability and Accountability Act (HIPAA), the federal law that protects patients’ private information from being shared without their consent. This means there is less control over where information goes after a user chooses to share it.

“This approach has a lot of risks, including accidentally sharing patient identities,” Tarzy said. “Personal health information is ‘burned into’ too many images, such as CT scans, and would inevitably be released under this plan.”

The dangers Grok may pose to privacy are not fully known because X may have privacy protections not known to the public, according to Matthew McCoy, an assistant professor of medical ethics and health policy at the University of Pennsylvania. He said users share medical information at their own risk.

“As an individual user, would I feel comfortable contributing health data?” he previously told the New York Times. “Absolutely not.”

A version of this story was originally published on Fortune.com on November 20, 2024.

More on AI and health:

  • OpenAI launches ChatGPT Health in a push to become a hub for personal health data

  • OpenAI suggests ChatGPT play doctor as millions of Americans face rising insurance costs: “In the US, ChatGPT has become an important ally”

  • As Utah gives AI the power to prescribe some drugs, doctors warn of risks for patients

This story was originally featured on Fortune.com
