
Q&A: Experts on GPT-5 and HIPAA compliance, part two

Dr. Doug Fridsma, former ONC chief science officer and current chief medical informatics officer at Health Universe, discusses GPT-5, including HIPAA compliance, FDA oversight and potential risks for patients uploading healthcare data.
By Jessica Hagen, Executive Editor
Dr. Doug Fridsma, chief medical informatics officer at Health Universe. Photo courtesy of Dr. Doug Fridsma.

Editor's note: MobiHealthNews spoke with two experts about whether GPT-5 should be regulated or comply with HIPAA, and the broader implications of AI in healthcare. This is part two of a two-part series.

This month, OpenAI CEO Sam Altman announced the launch of GPT-5, highlighting its role in healthcare and its potential to support individuals in managing their health journeys.

Dr. Doug Fridsma, chief medical informatics officer at Health Universe and former chief science officer for the Office of the National Coordinator for Health Information Technology (ONC) during the Obama administration, sat down with MobiHealthNews to discuss GPT-5 HIPAA compliance, potential FDA oversight and what patients should consider before uploading sensitive healthcare information into OpenAI's platform.

MobiHealthNews: Do you think GPT-5 should comply with HIPAA?

Doug Fridsma: I think there needs to be HIPAA compliance with any technology in which information from a patient is shared with a third party, out to ChatGPT or whatever. It's a fundamental sort of requirement, I think, in healthcare.

If I, as a physician, share information with ChatGPT, it needs to be HIPAA compliant. I need to have a Business Associate Agreement (BAA). They need to follow the rules of HIPAA because I'm taking someone else's information without their permission and sharing it with a third party in the process of care or care delivery.

MHN: What about FDA clearance? Should the FDA regulate GPT-5?

Fridsma: FDA clearance is a little bit different.

So, for example, HIPAA compliance governs physicians and healthcare providers; they are called covered entities. Covered entities hold information that patients have entrusted to them, and in that situation, they cannot share it, but it's all identifiable because you need that for treatment, payment, operations, all those sorts of things.

If you have a third-party biller, or if you have a third-party app like GPT or something like that, you cannot share that information in an identifiable way, except if you have a BAA, and what that does is it sort of extends the HIPAA umbrella to that third party, for example, GPT-5.

Images are difficult to de-identify. So, for example, if somebody has a unique metallic implant that sort of makes them stand out, you know, they are in a ZIP code or a rural community, but they have a heart valve and a shoulder replacement and there's only two of them in their entire county that have that, those things can be identifiable. So, it's really hard sometimes to be able to fully de-identify images.

There are specific rules with text. There are 18 identifiers that HIPAA says you have to remove if you are going to consider it de-identified. Images don't have that level of sophistication, if you will. When the HIPAA rules were written, primarily it was text-based information that was being shared, and now with multimedia and images and other things like that, it becomes much more important for physicians to be very careful about the kind of information they share outside their organizations without BAAs in place.
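For readers curious what "removing identifiers" looks like in practice, below is a toy Python sketch that scrubs a few obvious identifier types from a clinical note. It is an illustration only, with made-up patterns and example text; the full HIPAA Safe Harbor list covers 18 identifier categories and requires far more than simple pattern matching.

```python
import re

# Toy sketch only: HIPAA's Safe Harbor method lists 18 identifier types
# (names, small geographic subdivisions, dates, phone numbers, email
# addresses, medical record numbers, and so on). The patterns below are
# illustrative assumptions covering just a few of them, not a compliant scrubber.
PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "date":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "mrn":   re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def naive_deidentify(text: str) -> str:
    """Replace a handful of obvious identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

# Hypothetical example note, invented for demonstration.
note = "Pt called 602-555-0143 on 03/14/2024; MRN 884210; email jane.doe@example.com"
print(naive_deidentify(note))
# Pt called [PHONE REMOVED] on [DATE REMOVED]; [MRN REMOVED]; email [EMAIL REMOVED]
```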

FDA is a different kind of beast. The FDA has nothing to do with privacy. It has everything to do with whether a medical device operates safely for patients and does not cause any harm.

MHN: What about GPT-5 being considered as Software as a Medical Device (SaMD)?

Fridsma: There are a couple of things that are important. One is that the FDA uses the Software as a Medical Device framework, and in large part, it does that because a lot of medical devices have software embedded in them, and that software then becomes part of its purview around devices.

GPT is a little bit different, and I'm not quite sure exactly where this is all going to play out just yet. It's kind of uncharted territory, if you will.

The issue is that when the HITECH Act [Health Information Technology for Economic and Clinical Health Act] was passed in 2009, it led to widespread EHR adoption and things like that. 

There's a specific clause in that legislation that says the FDA does not have the ability or regulatory authority to regulate electronic health records, and so health records were carved out under HITECH during that adoption.

So, technology that exists within an EHR is not usually subject to the FDA. So, Epic is not FDA-approved, and Cerner is not FDA-approved; however, diagnostic imaging, as well as therapeutic imaging, like radiation therapy equipment, all of that does require FDA approval. 

Part of that all goes back to an incident involving a machine called the Therac-25; it's the first recorded incident of software killing patients.

MHN: How did it do that?

Fridsma: Well, it had both software and physical systems that would safeguard patients, and it was a piece of radiation therapy equipment, and so it delivered doses of radiation to people who had cancer.

And it had a couple of things: One is it had a mode in which it would deliver therapeutic levels of radiation, and [secondly] it had these gates that would come in, shield the radiation, essentially, and allow people to do setup and testing of the equipment.

So, you put the patient in there, you would make sure they were positioned right, you put the gates in place, and then you would test to make sure that all of your markers were correct and the patient was positioned correctly, and things like that.

Well, it takes time for those gates to come in, and the technologists got really good at sort of running through the menus – like you do this and that and you sort of configure things.

What happened is this software got ahead of the physical systems, and so the gates did not close, and they thought they were in test mode, but instead they blasted these patients with lethal doses of radiation because the software got ahead of itself.

It overwrote a very important part of the memory that recorded where the gate position was, and so the computer thought that the gates were closed, even though, physically, they were open. And that was a software bug; it's called a race condition. It's something that sometimes happens in software.

So, Therac-25 led, in large part, to the FDA being very rigorous in testing imaging and things that involve radiation, and ultimately to treating Software as a Medical Device, essentially, because that was the first recorded time in which software actually killed a patient.
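For a concrete picture of the kind of bug Fridsma describes, here is a minimal, hypothetical Python sketch of a race condition between a fast software check and slow hardware. The gate and beam names are invented for illustration; this is not the actual Therac-25 code, which ran on very different hardware.

```python
import threading
import time

# Hypothetical sketch of a race condition, loosely inspired by the failure
# mode described above. Illustrative only.

gate_closed = False          # shared state: does the software believe the gate is closed?

def move_gate():
    """Simulate slow physical hardware: the gate takes time to actually close."""
    global gate_closed
    time.sleep(0.5)          # the hardware is slow
    gate_closed = True       # only now is the gate really closed

def fire_beam():
    """Software path: an operator races ahead and triggers the beam."""
    # BUG: no synchronization; this check can run before move_gate() finishes.
    if gate_closed:
        print("Beam fired with gate closed (safe).")
    else:
        print("Beam fired with gate OPEN (unsafe): the software got ahead of the hardware.")

hardware = threading.Thread(target=move_gate)
hardware.start()
fire_beam()                  # runs immediately, before the gate has closed
hardware.join()

# A fix is to make the software wait for the hardware before acting, e.g. with
# a threading.Event that move_gate() sets and fire_beam() waits on.
```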

Now, one can also ask, as we start to develop these GPT types of things and we start to have more autonomous agents, whether those kinds of conditions could reproduce themselves, maybe not with the untoward effects of having radiation therapy deliver lethal doses, but you can imagine multiple agents working on a patient's case and conflicting with each other because they have not been sequenced or properly orchestrated.

So, if an agent makes a therapeutic decision based on a presumptive diagnosis, but the diagnosis agent has not completed yet, now what happens is you have got an incomplete diagnosis, you have started down that therapy track, and then when it goes back, the diagnosis gets updated, but it does not match the therapy, for example.

There could be instances, as GPT and other things like that become more powerful, where you might actually get into situations in which multiple processes all working at the same time could potentially conflict with one another and cause harm.
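The sequencing concern can also be sketched in code. The asyncio example below uses hypothetical "diagnosis" and "therapy" agents, invented for illustration, to show the difference between an orchestrated pipeline that waits for the upstream result and an unorchestrated one that acts on a presumptive answer.

```python
import asyncio

# Hypothetical sketch of the orchestration concern described above. The agent
# names and timings are invented; the point is only that a downstream step
# should await the upstream result rather than act on a draft.

async def diagnosis_agent() -> str:
    """Pretend diagnostic work that takes time to finish."""
    await asyncio.sleep(1.0)
    return "confirmed: condition B"

async def therapy_agent(diagnosis: str) -> str:
    """Chooses a therapy based on whatever diagnosis it is handed."""
    return f"therapy plan for ({diagnosis})"

async def orchestrated():
    # Proper sequencing: the therapy agent only runs once the diagnosis is final.
    diagnosis = await diagnosis_agent()
    return await therapy_agent(diagnosis)

async def unorchestrated():
    # Hazard: kick off the diagnosis, then start therapy from a presumptive guess.
    task = asyncio.create_task(diagnosis_agent())
    plan = await therapy_agent("presumptive: condition A")   # acts too early
    final_diagnosis = await task
    # The plan no longer matches the diagnosis it was supposed to follow.
    return final_diagnosis, plan

print(asyncio.run(orchestrated()))
print(asyncio.run(unorchestrated()))
```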

MHN: What do you think about Sam Altman advocating for GPT-5 to be used for healthcare?

Fridsma: I think that what Sam Altman probably is not sharing is what their terms of service are related to patients sharing their own personally identifiable information into a GPT window or an interface.

So, if you have a GPT account, they know your name, your phone number, where you live – lots of information that is not all that important; it helps with billing if you have the Pro plan or something like that. But if you upload information, and that information contains personally identifiable information, it could then be linked to the account information that they have.

The thing that regulates that is not HIPAA, though. The only thing that regulates that is the Federal Trade Commission (FTC), and what companies say they are going to do with your information, and whether or not they follow through with that.

So, if Sam Altman, in his terms of service – you know that 500-page document that you just scroll through and click "okay," because nobody reads it – if it says that they could use any information that you upload for training purposes or to learn more about you or to develop a profile and then share that or sell that to someone else, if those are in the terms of service and you as a patient click "okay," then Sam Altman and OpenAI are completely within their rights to be able to use your information however they want to use it.

And I think a lot of patients believe that if it is healthcare information, HIPAA follows that information, but it does not. HIPAA exists within healthcare systems, and as soon as the data leaves the healthcare system, for example, if I have a copy of my medical record and I upload it to GPT, it is no longer covered by HIPAA.

What patients have to understand is that there is a significant risk if they start to use systems like GPT, and if the terms of service say the company can do anything it wants with this information, then patients have given up the right to privacy around their own information, and that is a flaw in our set of privacy regulations.

We have a patchwork of different healthcare and privacy rules. I think as healthcare has become more technologically savvy, it is starting to leak into some of these other areas, which in the past have not been particularly problematic. 

But now, you have OpenAI and others advocating for the use of personally identifiable information to help people with their healthcare through chat, and it is not clear yet from OpenAI exactly what its terms of service are going to be.

Are they going to protect that information? Do patients have an option to say, "I do not want you to use this information for training purposes, or to use it in other ways"? Because I do not think patients right now have the ability to be that specific about how their information is used by these large organizations with large language models.