Avoid Uploading Medical Images to AI Chatbots

On October 15, Elon Musk’s AI chatbot Grok on X drew scrutiny as users uploaded medical scans like X-rays for AI medical advice, raising privacy uncertainties and fears over misdiagnoses from overreliance on AI tools.

The use of AI for medical advice has grown as people seek fast answers to complex medical questions. Grok's ability to interpret medical scans is a notable step toward AI assisting doctors and patients.

Experts highlight these tools' limitations, especially in their current form. Beyond privacy, there are risks of misdiagnosis and of overreliance on algorithms that lack nuanced medical judgment. This calls for careful oversight and public awareness as the role of AI medical advice grows.

The Rising Use of AI Medical Advice

Since October, X owner Elon Musk has urged users to upload medical imagery to Grok, saying it would help train the AI to read scans with greater precision and generate medical information that is more accessible and easier for users to understand. Though Musk himself describes the technology as "still early stage," he believes it is bound to get much better with time.

Uploading sensitive medical information to AI platforms creates privacy risks, particularly as that information feeds AI training datasets. Unlike healthcare providers, which face at least some restrictions under US privacy laws like the Health Insurance Portability and Accountability Act (HIPAA), consumer apps like Grok are not legally obligated to safeguard your medical information. That leaves users vulnerable to having sensitive information shared with third-party vendors or used to train future AI medical advice models without explicit consent.

Generative AI models often improve accuracy using uploaded data, but how this data is stored, shared, or repurposed is unclear. That opacity compounds the risk of AI producing inaccurate medical information based on the data it has been fed. Grok's privacy policy says some user information may be shared with "related" companies, though it's unclear who those entities are or how the data will be used.

Security advocates have cautioned against uploading private medical records, warning that such AI medical advice data could inadvertently end up in AI training datasets accessible to healthcare providers, employers, or government agencies. Instances of private records being found in public datasets underscore this risk.

Once information goes online, it tends to stay online, so one can never be fully sure of privacy. "People should think twice before they trust AI with sensitive medical information," experts say.

Think Before You Share

While AI medical advice tools are exciting, users must weigh the benefits against privacy risks. For now, consulting a qualified healthcare professional to interpret medical imagery remains the safest choice. Anything less means risking personal data through consumer-grade AI tools, with possible long-term consequences for privacy and security.

As Musk puts it, Grok may eventually get "incredibly good," but for the time being, it's smart to be guarded about what you share.


Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Medtech section to stay informed and updated with our daily articles.