The drugmaker Pfizer recently announced that vaccinated people are likely to need a booster shot to be effectively protected against new variants of SARS-CoV-2, the virus that causes covid-19, and that the company would apply for Food and Drug Administration emergency use authorization for the shot. Top government health officials immediately and emphatically announced that the booster isn’t needed right now — and held firm to that position even after Pfizer’s top scientist made his case and shared preliminary data with them last week.
This has led to confusion. Should the nearly 60% of adult Americans who have been fully vaccinated seek out a booster or not? Is the protection that has allowed them to see loved ones and go out to dinner fading?
Ultimately, the question of whether a booster is needed is unlikely to determine the FDA’s decision. If recent history is predictive, booster shots will be here before long. That’s because of the outdated, 60-year-old basic standard the FDA uses to authorize medicines for sale: Is a new drug “safe and effective”?
The FDA, using that standard, will very likely have to authorize Pfizer’s booster for emergency use, as it did the company’s prior covid shot. The booster is likely to be safe — hundreds of millions have taken the earlier shots — and Pfizer reported that it dramatically increases a vaccinated person’s antibodies against SARS-CoV-2. From that perspective, it may also be considered very effective.
But does that kind of efficacy matter? Is a higher level of antibodies needed to protect vaccinated Americans? Though antibody levels may wane some over time, the current vaccines deliver perfectly good immunity so far.
What if a booster is safe and effective in one sense but simply not needed — at least for now?
Reliance on the simple “safe and effective” standard — which certainly sounds reasonable — is a relic of a time when there were far fewer and simpler medicines available to treat diseases and before pharmaceutical manufacturing became one of the world’s biggest businesses.
The FDA’s landmark 1938 legislation focused primarily on safety after more than 100 Americans died from a raspberry-flavored liquid form of an early antibiotic whose solvent, diethylene glycol, is also used as antifreeze. The 1962 Kefauver-Harris Amendments to the Federal Food, Drug and Cosmetic Act set out more specific requirements for drug approval: Companies must scientifically prove a drug’s effectiveness through “adequate and well-controlled studies.”
In today’s pharmaceutical universe, a simple “safe and effective” determination is not always an adequate bar, and it can be manipulated to sell drugs of questionable value. There’s also big money involved: Pfizer is already projecting $26 billion in covid revenue this year.
The United States’ continued use of this standard to let drugs into the market has led to the approval of expensive, not necessarily very effective drugs. In 2014, for example, the FDA approved a toenail fungus drug that can cost up to $1,500 a month and that studies showed cured fewer than 10% of patients after a year of treatment. That’s more effective than doing nothing but less effective and more costly than a number of other treatments for this bothersome malady.
It has also led to a plethora of high-priced drugs for diseases like cancer, multiple sclerosis and Type 2 diabetes that are all more effective than a placebo but have rarely been tested against one another to determine which works best.
In today’s complex world, clarification is needed to determine just what kind of effectiveness the FDA should demand. And should that be the job of the FDA alone?
For example, should drugmakers prove a drug is significantly more effective than products already on the market? Or demonstrate cost-effectiveness — the health value of a product relative to its price — a metric used by Britain’s health system? And in which cases is effectiveness against a surrogate marker — like an antibody level — a good enough stand-in for whether a drug will have a significant impact on a patient’s health?
In most industrialized countries, broad access to the national market is a two-step process, said Aaron Kesselheim, a professor of medicine at Harvard Medical School who studies drug development, marketing and law and recently served on an FDA advisory committee. The first part certifies that a drug is sufficiently safe and effective. That is immediately followed by an independent health technology assessment to see where it fits in the treatment armamentarium, including, in some countries, whether it is useful enough to be sold at all at the stated price. But there’s no such automatic process in the U.S.
When Pfizer applies for authorization, the FDA may well clear a booster for the U.S. market. The Centers for Disease Control and Prevention, likely with advice from National Institutes of Health experts, will then have to decide whether to recommend it and for whom. This judgment call usually determines whether insurers will cover it. Pfizer is likely to profit handsomely from a government authorization, and the company will gain some revenue even if only the worried well, who can pay out of pocket, decide to get the shot.
To make any recommendation on a booster, government experts say they need more data. They could, for example, as Dr. Anthony Fauci has suggested, eventually green-light the additional vaccine shot only for a small group of patients at high risk for a deadly infection, such as the very old or transplant recipients who take immunosuppressant drugs, as some other countries have done.
But until the United States refines the FDA’s “safe and effective” standard or adds a second layer of vetting, when new products hit the market and manufacturers promote them, Americans will be left to decipher whose version of effective and necessary matters to them.