TL;DR

OpenAI this week introduced ChatGPT Health, an invitation-only feature that lets users upload medical records and Apple Health data to get personalized health and wellness guidance. The company says the tool is not for diagnosis or treatment, and it points to safeguards including encryption and a default exclusion of Health conversations from model training.

What happened

OpenAI launched ChatGPT Health as a restricted feature intended to help people understand and manage lower-stakes health and wellness questions. Available by invitation and subject to a waitlist, the feature appears in the ChatGPT web sidebar and lets users import files, photos and Apple Health data, and connect certain apps to supply context for responses. OpenAI says the system can summarize lab results, suggest questions to ask clinicians and offer nutrition and exercise suggestions, while stopping short of delivering diagnoses or treatment plans. Access is currently limited: users in the EEA, Switzerland and the UK are excluded, and medical-record integrations are U.S.-only. OpenAI emphasizes encryption and isolation for Health conversations and says Health data is not used to train its core models by default. The company also faces legal and ethical scrutiny, including multiple lawsuits alleging harms and academic concerns about AI medical advice.

Why it matters

  • Personal health records are highly sensitive; centralizing them in a consumer AI product raises privacy and security stakes.
  • If accurate, automated summaries and question prompts could change how patients prepare for clinical appointments, shifting some pre-visit work to AI.
  • Claims that Health conversations are excluded from training hinge on company policy and technical controls that may be subject to legal or operational exceptions.
  • Documented cases and academic critiques suggest nonclinical users relying on AI for medical interpretation can face delayed or harmful care decisions.

Key facts

  • ChatGPT Health debuted as an invitation-only feature with a waitlist.
  • OpenAI reports that users prompt ChatGPT about health and wellness more than 230 million times per week.
  • Users can upload medical records, photos and Apple Health data; the feature can also connect to third-party apps in the U.S.
  • OpenAI says Health conversations are encrypted at rest and in transit and use additional isolation; the company claims these chats are not used to train foundation models by default.
  • OpenAI retains private encryption keys, meaning it can decrypt data if necessary.
  • A federal judge recently ordered OpenAI to hand over a 20 million–conversation sample in a copyright case, illustrating legal exposure of ChatGPT logs.
  • There are at least nine pending lawsuits alleging mental health harms connected to ChatGPT interactions.
  • OpenAI published a study called “AI as a Healthcare Ally” positioning AI as supportive to strained U.S. healthcare systems.
  • Academic examples include a 2024 case study where reliance on ChatGPT's advice contributed to a delayed diagnosis of a transient ischemic attack.

What to watch next

  • Whether regulators in the U.S. or other jurisdictions impose rules or limits on AI apps handling medical records.
  • Whether medical-record integrations and overall availability expand beyond the U.S. and the invitation-only stage; the source does not confirm any expansion plans.
  • Potential legal demands for Health conversations or data in future litigation or government requests, given past court orders involving ChatGPT logs.
  • Whether OpenAI changes its data-use policies or begins using Health data for model training; the source does not confirm any such change.

Quick glossary

  • Large language model (LLM): A type of AI that generates text by predicting sequences of words based on patterns learned from large datasets (see the toy sketch after this list).
  • Encryption at rest and in transit: Security practices that protect stored data and data being transmitted over networks to prevent unauthorized access (see the second sketch after this list).
  • Apple Health: A consumer platform that aggregates personal health and fitness data from an iPhone and connected devices, which apps can access with user permission.
  • Transient ischemic attack (TIA): A brief episode of neurological dysfunction caused by temporary interruption of blood flow to the brain, often considered a warning sign for stroke.
  • Model training (data use): The process of updating an AI system's underlying parameters using example data; data used for training can influence future outputs.
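
To make the LLM entry concrete, here is a toy sketch of next-word prediction in Python. Production LLMs use neural networks over subword tokens and vastly more data, not word bigrams; the counts below are invented purely for illustration.

    # Toy next-word predictor. Real LLMs use neural networks over subword
    # tokens; these bigram counts are invented for illustration only.
    BIGRAM_COUNTS = {
        "blood": {"pressure": 8, "test": 3, "flow": 1},
        "lab": {"results": 9, "work": 2},
    }

    def next_word_distribution(word):
        """Turn raw follow-counts into a probability distribution."""
        counts = BIGRAM_COUNTS.get(word, {})
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()} if total else {}

    print(next_word_distribution("blood"))
    # -> {'pressure': 0.666..., 'test': 0.25, 'flow': 0.083...}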
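
For the encryption entry, here is a minimal sketch of symmetric encryption at rest using the open-source Python cryptography package (its Fernet recipe). It is illustrative only and implies nothing about OpenAI's actual architecture; it simply demonstrates the key-custody point from the key facts above: whoever holds the key can decrypt.

    # pip install cryptography
    from cryptography.fernet import Fernet

    # The operator generates and keeps the symmetric key; anyone holding
    # it can decrypt, which is why provider key custody matters.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record = b"LDL cholesterol: 128 mg/dL"      # made-up sample value
    ciphertext = cipher.encrypt(record)          # what sits on disk "at rest"
    assert cipher.decrypt(ciphertext) == record  # possible for any key holder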

Reader FAQ

Is ChatGPT Health available worldwide?
No, not at present. The source states users in the EEA, Switzerland and the UK are currently ineligible, and medical-record integrations are U.S.-only.

Will OpenAI use my Health conversations to train its models?
OpenAI says Health conversations are not used to train its foundation models by default.

Can ChatGPT Health diagnose or treat medical conditions?
No. OpenAI says the feature is intended to support understanding and self-management and is not intended for diagnosis or treatment.

Are my uploaded health records encrypted and private?
OpenAI reports encryption and additional isolation for Health data. However, the company retains the private encryption keys, so it can decrypt the data, and it says it may share limited information with partners under confidentiality obligations.

Sources

  • Thomas Claburn, "ChatGPT Health wants your sensitive medical records so it can play doctor: It's for less consequential health-related matters, where being wrong won't kill customers," AI + ML, Thu 8 Jan 2026.