“What were they thinking?”
It was the first thought to cross the mind of computational biologist Steven Salzberg after reading about a recent, controversial Boston University study that combined strains of the virus that causes COVID-19, producing a form of omicron, the dominant SARS-CoV-2 variant currently circulating in the U.S., that is significantly more deadly in mouse test subjects.
The study, which caused waves in the media for its creation of a potential “superbug,” also renewed an ongoing debate among scientists about the value of gain-of-function research—studies that artificially enhance a microorganism’s genome to give it advantageous attributes, such as greater transmissibility or virulence. The study’s authors and Boston University argue that it does not qualify as gain-of-function research, and the U.S. National Institutes of Health is reviewing the study documentation to determine whether that is the case.
An expert in genomics, Salzberg has studied the genomes of viruses including influenza and SARS-CoV-2, and has written about gain-of-function research in his regular Forbes column since 2014. He says it is clear that the BU study does qualify as gain-of-function research, and, as such, carries tremendous risks. The Hub reached out to Salzberg, the Bloomberg Distinguished Professor of biomedical engineering, computer science, and biostatistics, for his take on the issue and what should be done to curtail the creation of superbugs in the future.
You’ve spoken out against gain-of-function research for many years. What initially made you get involved?
Over the years there have been a number of highly pathogenic flu viruses, sometimes called bird flu or avian influenza, circulating in birds, and some years ago several flu virologists announced, rather boldly, that they were doing experiments to modify the avian flu to make it transmissible in people. It’s not normally transmissible in people, but several pandemics over the past century have been shown to be caused by a flu jumping from animals to humans. The SARS-CoV-2 virus, for example, may have jumped from an animal host to humans—we’re not entirely sure what animal host, but it might have been bats.
But I think only a minority of the virology community thought that this gain-of-function research on avian influenza was a good idea. These scientists in the flu world were saying, “We’re going to take a flu that is infectious in birds and give it a new ability to infect mammals and maybe people.” That’s clearly gaining a function, and that was the intent of the experiments.
In response to concerns back in 2014 about this avian influenza gain-of-function research, the National Institutes of Health actually put a pause on it. They didn’t shut it down, but they agreed not to fund it “while we look at this more closely.” Three years later, in late 2017, they lifted this pause very quietly. They kind of waited until the publicity died down, in my opinion. And so the NIH restarted this kind of research, but they were pretty opaque about their process for reviewing and approving it. There were a lot of scientists, including Tom Inglesby at Hopkins, who were questioning the decision, asking, “Well, where’s the committee doing this evaluation? And where are their hearings, and where’s their report?” I wrote blogs about it at Forbes at the time.
And the fundamental disagreement among scientists, especially those in virology, is that some think these experiments are somehow really valuable, and I don’t agree at all. I think the experiments are almost without any value at all.
Walk us through what happened in the BU study. How does it differ from the kinds of viral evolution that takes place naturally?
Evolution occurs through a process of mutation and selection. Mutation is random—or, it’s random when it occurs in nature, anyway. And the vast majority of random things that happen during viral mutation don’t benefit the virus. But now and then, by chance, something will happen that makes the new mutated organism a little bit more capable of its functions, and then natural selection allows that organism to multiply and replicate. That’s what we’ve seen happen with these waves of new strains in the COVID-19 pandemic: They occur randomly, and the new strain only survives if it’s able to outcompete the previous one.
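To make that contrast concrete, here is a minimal toy simulation—not anything from the study, and with made-up parameters (genome length, mutation rate, population size, and a stand-in fitness measure)—showing how random mutation plus non-random selection lets slightly fitter variants take over a population:

```python
import random

# Toy illustration only: genomes are bit strings, "fitness" is simply the
# number of 1s, and every parameter below is an arbitrary, made-up value.
GENOME_LEN = 20
POP_SIZE = 100
MUTATION_RATE = 0.01   # per-site chance of flipping each generation
GENERATIONS = 50

def mutate(genome):
    # Mutation is random: each site flips independently with small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def fitness(genome):
    # Stand-in for "a little bit more capable": more 1s means fitter.
    return sum(genome)

population = [[0] * GENOME_LEN for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population = [mutate(g) for g in population]
    # Selection is not random: fitter variants are more likely to reproduce.
    population = random.choices(population,
                                weights=[fitness(g) + 1 for g in population],
                                k=POP_SIZE)

print("mean fitness after selection:", sum(map(fitness, population)) / POP_SIZE)
```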
The BU study took part of one virus, omicron, and part of another virus—what they were calling the Washington State virus, but think of it as the original strain that came to the U.S. from Wuhan, China, in early 2020. They took a small part of the genome of the omicron virus—the gene for the spike protein, which lets the virus break into host cells—and combined it with the rest of the genome from the Washington State virus, which they called the backbone. And they said, “Let’s see what happens!” And that’s where people like me say, “What the heck are you thinking?”
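Conceptually—and only conceptually; the gene names and strings below are placeholders, not actual SARS-CoV-2 sequences or the construction method used in the BU study—the recombination he describes amounts to swapping one segment of a genome into another:

```python
# Purely schematic sketch: dictionaries of placeholder gene names stand in
# for real genomes; none of these strings are real sequences.
washington_backbone = {
    "ORF1ab": "ORF1ab_sequence_WA",   # most of the genome ("the backbone")
    "spike":  "spike_sequence_WA",    # the gene that lets the virus enter cells
    "N":      "N_sequence_WA",
}
omicron = {
    "ORF1ab": "ORF1ab_sequence_omicron",
    "spike":  "spike_sequence_omicron",
    "N":      "N_sequence_omicron",
}

# The chimera described in the interview: the omicron spike gene placed onto
# the rest (the "backbone") of the original Washington State genome.
chimera = dict(washington_backbone)
chimera["spike"] = omicron["spike"]

print(chimera)
```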
But why embark on this kind of research, when the end result is a potential superbug?
Proponents claim it can help us prepare for the next pandemic. That it’ll help us design better vaccines, or help us better understand the pathogenicity of the virus. But these are all just hand-waving arguments. The sort of artificial viruses created in these experiments are not going to look like the viruses that appear in nature. No one’s going to design a vaccine against something that’s been created in a lab. So how is it supposed to help us with vaccine design or pandemic preparation? You’re making some artificial construct that would never have existed otherwise, and no company, no private entity would ever use that as a basis for a vaccine or would ever invest in it. The benefits are, at best, very narrow bits of science that we might understand about virus strains that aren’t natural and won’t really ever occur.
These scientists at Boston University, and the university itself, are arguing that it’s not gain-of-function research because the new recombinant strain was less virulent in mice than the original Wuhan strain, which is true from what we know. But the new strain is in fact much more deadly in mice than one of its source viruses, the omicron strain, and it might very well be more deadly in people, though we’re hopefully not going to find out. Arguing on that basis that this isn’t gain-of-function research is just really tortured reasoning.
And the currently circulating strain is omicron, while the Wuhan strain is extinct—it’s not circulating in the population. Therefore the omicron spike is never going to end up on the backbone of the Wuhan strain—this combination would never occur in nature.
What are the risks of gain-of-function research?
The risks are that one of these viruses will be genuinely dangerous, very pathogenic in humans. And that it will be leaked. Sure, the scientists conducting this research will point to the secure facilities they work in, but there have been documented cases of pathogens getting out of these secure labs now and then by accident. They’re invisible, they’re microscopic, and accidents happen. Even if you’re careful, it’s possible that a virus will infect someone or get onto the clothing of someone who’s in the lab, and then they’ll leave and take it with them. And that’s not just hypothetical—that’s happened. It’s been documented. There is a non-zero risk of a lab leak.
The people who raised concerns about the 2014 avian flu gain-of-function research, who included Marc Lipsitch, a Harvard scientist who led a group that became the Cambridge Working Group, and Tom Inglesby, were saying that this kind of research is a bad idea and that the risks are incredibly high. They wrote a couple of papers that attempted to quantify the risk. Let’s say, based on the data, these lab leaks happen only once every several years, and let’s say that the likelihood a leak sparks a large epidemic is really tiny. But multiply those small probabilities by the number of people who could become infected with a highly pathogenic virus and die, and the expected harm, they found, is really enormous. So why do it? Just don’t do it.
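As a back-of-the-envelope illustration of that argument—the numbers below are hypothetical placeholders, not figures from the Lipsitch and Inglesby papers—multiplying small probabilities by an enormous potential death toll can still yield a large expected harm:

```python
# Hypothetical, illustrative numbers only -- not values from any published
# risk assessment.
p_leak_per_lab_year = 0.002        # chance a given lab leaks in a given year
labs = 10                          # labs doing this kind of work
years = 10                         # duration of the research program
p_pandemic_given_leak = 0.05       # chance a leak seeds a large outbreak
deaths_if_pandemic = 10_000_000    # potential death toll of such an outbreak

# Probability of at least one leak across all labs and years.
p_any_leak = 1 - (1 - p_leak_per_lab_year) ** (labs * years)
expected_deaths = p_any_leak * p_pandemic_given_leak * deaths_if_pandemic

print(f"probability of at least one leak: {p_any_leak:.1%}")   # ~18%
print(f"expected deaths: {expected_deaths:,.0f}")               # ~90,000
```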
And on top of that, even if you grant that there’s some scientific benefit to the questions these scientists are trying to address—which I don’t—there are other ways to address those questions. So I think they should stop. They should find something else to do. I don’t think that they’re evil. I don’t think they’re mad scientists. I just think they’re genuinely misguided.
One reason I’m speaking out is I want the public to realize that there are many, many scientists who don’t think this is a good idea. I think we should call it out and say it’s a bad idea. And I’ve been doing so for years and I’m not the only one. Scientists argue and fight over things all the time, and it’s important that we’re open about it. It’s important that the flaws are exposed to the public.
What do you think should happen in the U.S. in the face of the BU study?
First, I should emphasize this is not simply a U.S. problem. It’s not that I want to shut this down only in the U.S.—it shouldn’t be done anywhere. And in fact, we need international agreements and the U.S. needs to be part of those.
But at least in the U.S., we should immediately shut down all this research, not just pause the funding. We’ve done that with different kinds of research in the past. The U.S. did it with stem cell research 20 years ago. But beyond that, the U.S. should also strive very hard to coordinate internationally with the World Health Organization or other international bodies to get an international agreement making it clear this kind of work is very dangerous and carries pretty minimal benefits. And if there are requests for exceptions, they should be scrutinized very critically. And the first question should be, “Can you achieve the same scientific ends without creating a novel pathogen?” And if you can, well then that’s what you should do.