In a matter of months, ChatGPT has radically changed our country’s view of artificial intelligence—shattering old assumptions about AI’s limitations and opening the door to exciting new possibilities.
One aspect of our lives that will certainly be touched by this rapid acceleration of technology is US healthcare. But the extent to which technology will improve our nation’s health depends on whether regulators embrace the future or cling stubbornly to the past.
Why do our minds dwell in the past?
In the 1760s, Scottish inventor James Watt revolutionized the steam engine, a tremendous leap forward in engineering. But Watt knew that if he wanted to sell his invention, he would have to convince potential buyers of its unprecedented capabilities. In a stroke of marketing genius, he began telling people that his steam engine could replace 10 cart-pulling horses. People immediately grasped that a machine with 10 “horsepower” must be a worthy investment. Watt’s sales took off, and his centuries-old measure of power remains with us today.
Even today, people struggle to grasp the breakthrough potential of revolutionary inventions. When faced with a new and powerful technology, they retreat to what they know. Instead of embracing a fundamentally different mindset, they remain stuck in the past, making it difficult to realize the full potential of new opportunities.
Too often, this is exactly how US government agencies regulate health care. In medicine, the consequences of applying 20th-century assumptions to 21st-century innovations have proved dire.
Here are three ways regulators do harm by failing to keep up with the times:
1. Underestimating the ‘virtual visit’
Established in 1973 to combat drug abuse, the Drug Enforcement Administration (DEA) now faces an opioid epidemic that claims more than 100,000 lives a year.
One solution to this dire problem, according to public health advocates, is combining modern information technology with an effective form of addiction treatment.
Thanks to the Covid-19 public health emergency (PHE) declaration, telehealth use skyrocketed during the pandemic. Out of necessity, regulators loosened previous telemedicine restrictions, allowing more patients to access medical services remotely while enabling doctors to prescribe controlled substances, including buprenorphine, via video visit.
For people struggling with drug addiction, buprenorphine is a “Goldilocks” drug: potent enough to prevent withdrawal but not potent enough to cause severe respiratory depression, overdose, or death. National Institutes of Health (NIH) research has shown that buprenorphine improves retention in drug-treatment programs. It has helped thousands of people reclaim their lives.
But because the opioid produces mild euphoria, drug officials worry that it could be abused and that telemedicine prescriptions would make it easier for bad actors to divert buprenorphine onto the black market. Now that the PHE declaration has expired, the DEA has drafted plans to limit telehealth prescribing of buprenorphine.
The proposed regulations would allow doctors to prescribe a 30-day course of the medication via telehealth but would require an in-person appointment for any renewal. The agency believes this will “prevent online overprescribing of controlled drugs that can cause harm.”
The DEA’s assumption that an in-person visit is safer and less harmful than a virtual one is outdated and contradicted by clinical research. A recent NIH study, for example, found that buprenorphine overdose deaths did not increase disproportionately during the pandemic. Similarly, a Harvard study found that telemedicine is as effective as in-person care for treating opioid use disorder.
Of course, regulators must monitor how frequently controlled substances are prescribed and audit for fraudulent prescribing practices. They should also require that prescribing physicians receive appropriate training and document their efforts to educate patients about medical risks.
But these requirements should apply to all physicians, regardless of whether the patient is physically present. After all, abuse can happen just as easily in person as online.
The DEA’s mindset needs to move into the 21st century because our nation’s outdated approach to addiction treatment is not working. More than 100,000 deaths a year prove it.
2. Restricting a transformative new technology
Technologists predict that generative AI, like ChatGPT, will transform American life, drastically altering our economy and workforce. I am confident that it will also transform medicine, giving patients more (a) access to medical information and (b) control over their own health.
So far, the rate of progress in generative AI has been staggering. Just a few months ago, the original version of ChatGPT passed the US medical licensing exam, but just barely. A few weeks ago, Google’s Med-PaLM 2 scored an impressive 85% on the same test, placing it in the realm of specialist doctors.
With great technological power comes great fear, especially among US regulators. At the Health Datapalooza conference in February, Food and Drug Administration (FDA) Commissioner Robert M. Califf underscored his concerns, noting that ChatGPT and similar technologies could either help patients make sound health decisions or make that challenge harder.
Concerned comments have also come from the Federal Trade Commission, prompted in part by an open letter signed by tech leaders including Elon Musk and Steve Wozniak, who claim the new technology “poses profound risks to society and humanity.” In response, FTC Chair Lina Khan pledged to pay close attention to the growing AI industry.
Efforts to control generative AI will almost certainly happen, and probably soon. But agencies will struggle to accomplish this.
To date, US regulators have assessed hundreds of AI applications as medical devices or “digital therapeutics.” In 2022, for example, Apple received premarket clearance from the FDA for a new smartwatch feature that alerts users when their heart rhythm shows signs of atrial fibrillation (AFib). For each AI product the FDA scrutinizes, the agency evaluates the embedded algorithms for efficacy and safety, much as it would a new drug.
ChatGPT is different. It is not a medical device or a digital therapeutic programmed to solve a specific, measurable medical problem. And it has no fixed algorithm that regulators can evaluate for effectiveness and safety. The reality is that any GPT-4 user today can type in a question and receive detailed medical advice in seconds. ChatGPT is a broad facilitator of information, not a narrowly focused clinical tool. It therefore defies the types of regulatory analysis traditionally applied.
In this way, ChatGPT is similar to the telephone. Regulators can assess a smartphone’s safety, measuring how much electromagnetic radiation it emits or whether the device itself poses a fire risk. But they cannot regulate how safely people use it. Friends can and often do give each other terrible advice over the phone.
Therefore, short of banning ChatGPT outright, there is no way to stop individuals from using it to seek help with diagnoses, medication questions, or alternative treatments. And while the technology has been temporarily banned in Italy, a similar ban is unlikely in the US.
If the goal is to ensure safety, improve health, and save lives, government agencies should focus on educating Americans about this technology rather than trying to restrict its use.
3. Preventing doctors from helping more people
Doctors can apply for a medical license in any state, but the process is time-consuming and laborious. As a result, most physicians are licensed only in the state where they live, depriving patients in the other 49 states of access to their medical expertise.
The roots of this process date back more than 230 years. When the Bill of Rights was ratified in 1791, the practice of medicine varied greatly by geography, so states were given the right to license physicians through their own medical boards.
In 1910, the Flexner Report highlighted the widespread failures of medical education and recommended a standard curriculum for all doctors. This process of standardization culminated in 1992, when all US physicians were required to take and pass a set of national medical exams. And yet, 30 years later, fully trained and board-certified doctors are still required to apply for a medical license in each state in which they wish to practice. Without a second license, a Chicago doctor cannot provide care to a patient across the Indiana state border, even one who lives just a few miles away.
The PHE declaration allowed doctors to provide virtual care to patients in other states. However, with that policy expiring in May, physicians will once again face these excessive, centuries-old restrictions.
Given the advances in medicine, the availability of technology, and the growing shortage of skilled physicians, these rules are absurd and problematic. Heart attacks, strokes, and cancer know no geographical boundaries. And in an era of air travel, people can develop medical problems far from home. Regulators could safely implement a simple national licensing process, provided states recognize it and grant a medical license to any doctor without a history of professional misconduct.
But that is unlikely to happen, and the reason is financial. Licensing fees fund state medical boards. And state-based restrictions limit competition from out-of-state physicians, allowing local providers to raise prices.
To address the challenges of healthcare quality, access, and affordability, we must achieve economies of scale. The best way to do that is to let all US doctors join a single national care-delivery pool rather than maintaining 50 separate ones.
Doing so would allow for a national mental-health service, give people in underserved areas access to trained therapists, and help reduce the 46,000 suicides that occur in America each year.
Regulators need to catch up
Medicine is a complex profession in which errors kill people. That is why we need healthcare regulations: doctors and nurses must be well trained, and life-threatening drugs must not fall into the hands of people who would abuse them.
But when outdated thinking leads to drug overdose deaths, prevents patients from improving their own health, and limits access to the nation’s best medical experts, regulators need to recognize the harm they are doing.
Healthcare is changing as technology advances. Regulators need to catch up.