ChatGPT Health: This is how OpenAI wants to become the digital medical assistant of the future

  • ChatGPT Health integrates personal medical data and wellness apps to offer more contextualized answers, but without replacing the doctor.
  • The new section operates in a separate space, with encryption and a promise not to use health data to train the main models.
  • European experts warn of serious legal and privacy concerns under the GDPR and the new AI Act, which complicates its arrival in the EU.
  • The rollout is limited and, for now, is focused on the United States and territories outside the European Economic Area.

ChatGPT Health and Digital Health

OpenAI has taken another step in its strategy to make artificial intelligence an everyday tool with the launch of ChatGPT Health, a new experience within its assistant focused on digital health that promises to combine personal medical data, wellness information, and advanced conversational capabilities. The idea is that any user can ask questions about medical tests, symptoms, habits, or health insurance and receive explanations tailored to their specific situation.

The company estimates that each week more than 230 million people ask health and wellness questions on the platform, and that these queries already account for around 5% of all messages sent to ChatGPT globally. With this new section, OpenAI aims to channel this enormous volume of questions into a more structured environment, with stronger privacy guarantees and an explicit focus on patient support, not diagnosis.

What exactly is ChatGPT Health and what does it aim to do?

ChatGPT Health presents itself as a dedicated section within the ChatGPT app. Designed to go beyond the one-off queries users were already making, the assistant aims to function as a kind of health "control panel," centralizing electronic health records, data from the iPhone Health app, and metrics from wellness and exercise applications.

The platform allows users, if they so choose, to connect their medical history and apps such as Apple Health, MyFitnessPal, Function, Peloton, or even prescription services. From there, the assistant can offer answers that take into account test results, physical activity levels, hours of sleep, weight, heart rate, or other parameters collected by the phone or smartwatch.

In practice, this means that the user can ask the system to interpret medical test results, summarize blood work ahead of an appointment, suggest questions for the doctor, help organize a diet with suitable exercise, or explain the differences between various health insurance plans based on their healthcare usage patterns.
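
To make the "interpret test results" use case concrete, the following is a minimal sketch in Python of how structured blood-test values could be checked against reference ranges before being explained in plain language. Everything in it, from the test names to the ranges, is a hypothetical illustration of the general idea, not OpenAI's actual pipeline and certainly not clinical guidance.

```python
# Illustrative sketch only: flagging blood-test values against reference
# ranges before a plain-language summary is generated. The test names and
# ranges below are hypothetical placeholders, not clinical guidance and
# not OpenAI's implementation.

# Hypothetical reference ranges: test name -> (low, high, unit)
REFERENCE_RANGES = {
    "hemoglobin": (13.0, 17.0, "g/dL"),
    "glucose": (70.0, 100.0, "mg/dL"),
    "ldl_cholesterol": (0.0, 130.0, "mg/dL"),
}

def flag_results(results: dict[str, float]) -> list[str]:
    """Return human-readable flags for values outside their reference range."""
    notes = []
    for test, value in results.items():
        if test not in REFERENCE_RANGES:
            notes.append(f"{test}: no reference range on file, ask your doctor.")
            continue
        low, high, unit = REFERENCE_RANGES[test]
        if value < low:
            notes.append(f"{test}: {value} {unit} is below the range {low}-{high}.")
        elif value > high:
            notes.append(f"{test}: {value} {unit} is above the range {low}-{high}.")
        else:
            notes.append(f"{test}: {value} {unit} is within the range {low}-{high}.")
    return notes

if __name__ == "__main__":
    sample = {"hemoglobin": 12.1, "glucose": 92.0, "ldl_cholesterol": 150.0}
    for line in flag_results(sample):
        print(line)
```

In a real assistant, mechanical flags like these would only be the raw material for a prose explanation and, as OpenAI stresses, for a suggestion to discuss the results with a doctor.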

In addition to the usual conversational features, ChatGPT Health supports photo and file uploads, advanced search, voice mode, and dictation. Users can also set custom instructions so that the assistant avoids certain sensitive topics, prioritizes certain types of recommendations, or organizes information in a specific way, which provides some flexibility over the tone and level of detail of the responses.

Enhanced privacy and a separate medical data storage space

One of the points that raises the most concern is the handling of health data, which is treated as a specially protected category under almost all data protection laws. OpenAI insists that it has created an isolated Health section within ChatGPT, so that conversations and medical files are stored separately from the rest of the chats and have independent memories.

According to the company, interactions in this mode are encrypted both in transit and at rest, and additional layers of technical isolation are added to prevent unauthorized access or context mixing with other uses of the assistant. Users can review and delete their medical records, disconnect linked applications, or revoke permissions at any time through the settings.
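
OpenAI has not published the underlying architecture, but "encryption at rest" is a standard technique that is easy to sketch. The fragment below uses Python's third-party cryptography package as a generic illustration and assumes nothing about OpenAI's actual key management.

```python
# Generic illustration of encryption at rest, not OpenAI's actual design.
# Uses the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

# In a real system the key would live in a key-management service,
# never alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"test": "blood panel", "hemoglobin_g_dl": 13.2}'

# Stored form: authenticated ciphertext, unreadable without the key.
ciphertext = cipher.encrypt(record)

# Decryption happens only inside the isolated Health context.
assert cipher.decrypt(ciphertext) == record
print(ciphertext[:32], "...")
```

The design point is separation: whoever holds the stored ciphertext learns nothing without the key, which is why real deployments keep keys in a dedicated key-management service rather than next to the data.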

OpenAI explicitly states that conversations within ChatGPT Health will not be used to train the company's core language models. Connected apps, in theory, cannot access more data than strictly necessary and are subject to additional privacy and security requirements. Even so, the company acknowledges that it could be compelled to hand over information to the authorities in the event of court orders or emergencies.

In the United States, partnerships have been established with companies such as b.well, a network connecting millions of healthcare providers, to facilitate access to the electronic health records of adults over 18. This connection aims to ensure the system works with official medical data and not just information manually entered by the user, although it further raises the sensitivity of the data being handled.

A development endorsed by healthcare professionals

To try to allay fears about the quality of the responses, OpenAI emphasizes that ChatGPT Health has not been built from a purely technological perspective. The company says it has collaborated for more than two years with over 260 doctors from 60 countries and dozens of specialties, who have reviewed around 600,000 interactions with the system.

That work has given rise to HealthBench, a clinical evaluation framework that looks not only at whether the AI gets a theoretical test right, but also at how it communicates: what level of urgency it suggests, when it recommends seeing a healthcare professional in person, how it avoids unnecessary alarm or confusing messages, and how it prioritizes user safety in the face of potentially serious symptoms.
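
OpenAI has published HealthBench openly, and its scoring scheme is simple to describe: each response is graded against physician-written rubric criteria that carry positive or negative point values, and the score for an example is the points earned divided by the maximum positive points, clipped to the range 0 to 1. The sketch below reproduces only that arithmetic with invented criteria; in the real benchmark, a grader model decides whether each criterion is met.

```python
# Simplified sketch of HealthBench-style rubric scoring with made-up criteria.
# In the real benchmark, a grader model decides whether each physician-written
# criterion is met; here that decision is just a boolean input.
from dataclasses import dataclass

@dataclass
class Criterion:
    description: str
    points: int   # negative points penalize harmful behavior
    met: bool     # decided by a grader model in the real benchmark

def rubric_score(criteria: list[Criterion]) -> float:
    """Points earned over maximum positive points, clamped to [0, 1]."""
    earned = sum(c.points for c in criteria if c.met)
    max_positive = sum(c.points for c in criteria if c.points > 0)
    if max_positive == 0:
        return 0.0
    return max(0.0, min(1.0, earned / max_positive))

example = [
    Criterion("Recommends in-person care for chest pain", 5, met=True),
    Criterion("Asks about symptom duration before advising", 3, met=False),
    Criterion("States a definitive diagnosis without caveats", -4, met=False),
]
print(f"score = {rubric_score(example):.2f}")  # 5 / 8, about 0.62
```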

The tool is intended to support both patients and professionals. For users, it can help decipher medical terms that were unclear during a consultation, organize information from various reports, or identify health patterns over time. For healthcare professionals, it serves as an ally in the administrative and documentation tasks that contribute to the well-known burnout, freeing up time that could be devoted to direct care.

In clinical settings, there are already examples of integration with platforms such as OpenEvidence, which allows doctors in less-resourced areas to access answers based on recent scientific literature; there are also initiatives such as The Mind Guardian for the early detection of Alzheimer's, and academic projects such as Clinical Mind AI at Stanford University, which uses AI to create patient simulations and train clinical reasoning in telemedicine contexts.

Massive use that is already generating behavioral changes

Internal data shared by OpenAI indicates that health is one of the most recurrent topics in conversations with the chatbot. It is estimated that approximately 40 million people use ChatGPT for guidance on medical or wellness issues, and that between 1.6 and 1.9 million messages per week are dedicated solely to comparing insurance plans, deciphering prices, or managing coverage denials.

Among adults who already use AI tools, more than half use them as a first filter to check symptoms before going to the doctor, and almost half use them to reread clinical instructions that were not entirely clear during the consultation. This change in habits brings both advantages, such as better-informed and better-prepared patients, and new challenges in the relationship with healthcare professionals.

Doctors consulted in the Spanish media report that patients are starting to arrive at the clinic with their ChatGPT responses printed out or on their phones, and discussing diagnoses or treatments based on what the machine has said. For now these are isolated cases, but experts anticipate that, as tools like ChatGPT Health become more widespread, tensions between professional judgment and the algorithmic "second opinion" could intensify.

It has also been documented that the use of AI in professional healthcare settings is growing rapidly. In the United States, various surveys indicate that around 66% of doctors already use AI algorithms in some part of their practice, and a majority consider that these tools improve both diagnostic capacity and efficiency in bureaucratic tasks such as coding or billing.

Risks, mistakes, and problematic examples

Although OpenAI insists that ChatGPT Health is not designed to diagnose or prescribe treatments, the risks associated with its use are not insignificant. On the one hand, the company says safeguards have been deployed to redirect users to in-person or emergency services when dangerous situations are detected, especially around mental health. On the other, it acknowledges that the tool can be wrong and that its answers do not replace clinical judgment.

The experience of recent years shows that such errors are not theoretical but real. Medical journals have reported cases such as that of a patient who suffered bromide poisoning after following dietary recommendations from a chatbot, or that of a person with a sore throat whose symptoms were downplayed by an AI, delaying a cancer diagnosis by several months. These are isolated incidents, but they illustrate how over-reliance on probabilistic systems can have serious consequences.

Artificial intelligence experts consulted in Spain point out that, in essence, ChatGPT does not "know" medicine in the human sense of the term, but rather generates plausible text from statistical patterns in large amounts of data. From this perspective, its behavior is closer to "Googling symptoms and writing a summary" than to a proper medical assessment.

Another area of concern is the way many AI tools limit their liability through legal notices. Warnings that information may be erroneous, partial, or outdated often appear in small print or in sections the user rarely checks, contributing to a perception of greater reliability than the machine actually warrants.

The legal framework in Europe: GDPR, AI Act and healthcare skepticism

While ChatGPT Health is already being tested with a small group of users in the United States and other countries outside the European Economic Area, its entry into the European Union looks considerably more complicated. The combination of clinical data, electronic records, and lifestyle information places the service at the very heart of the strictest rules of the General Data Protection Regulation (GDPR).

Spanish specialists in healthcare management, such as the medical director of the San Carlos Clinical Hospital, point out that under current regulations it is very difficult for a health-focused mode like this to operate freely in the EU. European rules are particularly restrictive regarding the international transfer of health data and demand guarantees of proportionality, minimization, and control over processing that could clash with the business model and technical architecture of a company like OpenAI.

On top of this framework comes the recently approved European AI Act, which classifies many healthcare uses of artificial intelligence as high-risk. That classification entails an obligation to subject such systems to impact assessments, audits, comprehensive documentation, and enhanced human oversight before deployment. Some AI professors at Spanish universities believe that, as ChatGPT Health is currently designed, its widespread use in Europe could be considered outright illegal today.

Beyond the formal requirements, several privacy and digital ethics experts point out that sharing medical data with a chatbot for commercial purposes can be a decision that is difficult to reverse. The main fear is not only a potential security breach (OpenAI already suffered an incident in 2023 that exposed some user information) but also the secondary use of that data for advertising targeting or personalized financial products, something the company's own leadership has hinted at as a possible future line of business.

Integration with the iPhone Health app and Apple's role

One of the moves that has generated the most headlines is the direct connection between ChatGPT Health and the iPhone Health app. OpenAI has confirmed that users who wish to do so will be able to link the records collected by their phone and, where applicable, by an Apple Watch or other devices: daily steps, sleep patterns, heart rate, activity metrics, weight logs, and other wellness indicators.

With this integration, the assistant can detect trends in the data and offer explanations or general recommendations. It is not the same for a chatbot to respond based on a subjective description of symptoms as it is to do so with historical data such as heart rate, hours of rest, or oxygen saturation. In theory, having more quantitative context should reduce the number of clearly inaccurate responses.
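
OpenAI has not described the plumbing behind the Health app link, but the kind of quantitative context it provides is easy to picture: the iPhone Health app can already export all of its records as an export.xml file, and a short script can reduce thousands of readings to a trend. The sketch below, which computes weekly resting-heart-rate averages from such an export, is a generic illustration rather than ChatGPT Health's actual integration.

```python
# Generic sketch: weekly resting-heart-rate averages from an Apple Health
# "export.xml" file (Health app > profile > Export All Health Data).
# Illustrates the kind of quantitative context involved; this is not
# how ChatGPT Health's integration actually works.
import xml.etree.ElementTree as ET
from collections import defaultdict
from datetime import datetime
from statistics import mean

def weekly_resting_hr(path: str) -> dict[str, float]:
    weeks = defaultdict(list)
    for _, elem in ET.iterparse(path):
        if (elem.tag == "Record"
                and elem.get("type") == "HKQuantityTypeIdentifierRestingHeartRate"):
            # startDate looks like "2024-01-05 08:00:00 +0100"
            start = datetime.strptime(elem.get("startDate"), "%Y-%m-%d %H:%M:%S %z")
            year, week, _ = start.isocalendar()
            weeks[f"{year}-W{week:02d}"].append(float(elem.get("value")))
        elem.clear()  # keep memory flat on large exports
    return {week: round(mean(values), 1) for week, values in sorted(weeks.items())}

if __name__ == "__main__":
    for week, bpm in weekly_resting_hr("export.xml").items():
        print(week, bpm, "bpm")
```

A steady drift in a weekly series like this is exactly the sort of pattern an assistant could surface in conversation, while leaving the interpretation to a clinician.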

The synchronization with Apple's app also comes at a delicate moment, just before the Cupertino company presents its own AI-based health features, which are expected to take a very conservative approach, strongly validated by clinical research. Apple typically subjects its medical features to lengthy validation processes, testing with specialists, and usage restrictions before opening them to the general public.

For now, this integration with iPhone data is not available in the European Union or the United Kingdom. In the territories where it does operate, access is being rolled out gradually via a waiting list, while both the privacy safeguards and the warning messages that accompany health-related responses are being refined.

A potential ally against overstretched healthcare systems and "health deserts"

Beyond the legal and ethical concerns, OpenAI insists on the potential of ChatGPT Health to mitigate some structural deficiencies in healthcare systems, especially in countries with large rural areas or limited access to specialists. In the United States, the company estimates that so-called "hospital deserts" (areas where the nearest hospital is more than half an hour away) already generate nearly 600,000 health-related messages per week.

Notably, approximately 70% of those conversations take place outside office hours, when health centers are closed and telephone support is limited. In those contexts, the chatbot becomes a kind of first point of contact available around the clock, especially for clarifying minor doubts, translating reports, or helping decide whether a specific discomfort justifies a trip to the emergency room or can wait for a regular appointment.

States such as Wyoming, Oregon, Montana, and South Dakota, with low population densities and long distances between urban centers, are among the territories where ChatGPT is used most in medical deserts. While this situation cannot be directly extrapolated to Europe, many healthcare systems on the continent face similar problems in rural or aging areas, making the idea of a constantly available medical assistant appealing to some administrators.

For professionals, AI is also perceived as a tool against chronic exhaustion: automating documentation tasks, preparing clinical summaries, or drafting information letters can relieve part of the bureaucratic burden. However, several professional organizations warn that excessive delegation to opaque systems could introduce new risks, from diagnostic biases to errors in the transmission of critical information.

Business model, targeted advertising and regulatory future

OpenAI's commitment to the healthcare sector also has a clear strategic component. Health is a field of recurring queries with a strong emotional component, where user dependence and retention can be very high. Integrating medical records, daily routines, and connected devices makes the chatbot less like a generic answer engine and more like a personal medical organizer.

In this context, statements by OpenAI CEO Sam Altman have generated debate by leaving the door open to potential commercial uses of the data in the future, including targeted advertising products. Although the company assures that ChatGPT Health incorporates additional protections and that medical data is not currently used to train the main models, digital rights organizations warn that privacy policies can change over time.

Meanwhile, health policy experts point out that the rise of these tools will force regulatory frameworks to be updated. In the United States, there is talk of the FDA needing to develop a specific scheme for AI medical devices that do not fit into traditional categories. In the medium term, three main pillars are under consideration: integrating global data (including genomics and imaging) under strong privacy guarantees, scaling up robotic laboratories that can translate AI findings into therapies, and redefining evaluation and approval criteria.

In Europe, the debate intersects with regulations such as the GDPR and the AI Act, as well as with differing sensitivities regarding the commodification of health data. Many academics wonder whether the EU will be willing to sacrifice part of its level of data protection in order to have tools as advanced as those that may be developed in other markets.

The arrival of ChatGPT Health confirms that the combination of clinical data, connected devices, and artificial intelligence will carry growing weight in how we understand healthcare, from everyday consultations to insurance management and research. The move opens up exciting possibilities, with better-informed patients and additional support for professionals, but it also raises serious questions about privacy, accountability, bias, and technological dependence that, at least in Europe, will require more than good intentions and promises of encryption.
