
Gain of Function Research: The Second Symposium, Day 2


Session 6: Informing Policy Design: Insights from the Science of Safety and Public Consultation
Fineberg: We are very privileged today to have another opportunity to work together to raise ideas, to
raise suggestions. I want to remind everyone, in your comments and ideas, to try to be as specific as
possible. The more specific we can be in reference to the NSABB draft, the more useful it is likely to be
to the policymakers ultimately. During the course of this day our aim is to dig deeper into the key issues
that were developed in a preliminary way yesterday. And we're very privileged to begin with a group who
will reveal some of the insights that they have gleaned from the science of safety and the science of
public communication, two areas that abut directly on the consideration of gain of function research of
concern. Without any further ado, let me turn the program over to our moderator, Dr. Baruch Fischhoff.
Fischhoff: Thank you, Harvey. Thank you all for being here. One of the things that was clear as we
organized these workshops was that the human element is central to this process. There are individuals
who maintain or violate safety. There are organizations that reward or discourage safety. There are
institutions that do or do not take full advantage of the technology. There are industries that do or do not
learn from their experience. There are regulatory and other mechanisms that encourage people to work on the technology, or create burdens that make it
difficult to pursue careers in research in this area, and perhaps drive the research to places that are less
safe. And there are adversaries who may want to take advantage of the inventions here, the information
risk, as described.
So, at our first workshop, we had two of the present speakers, Monica Schoch-Spana, to talk about the
communications, the two-way communications that are needed with the public in order to inform the
industry about the constraints under which it has a license to operate and to keep the public apprised of
what the industry is doing, and Gavin Huntley-Fenner, who spoke about human factor science that
underlies engineering for safety. In following the deliberations since then, and particularly the NSABB
meetings where there was a great deal of attention to the issue of safety culture, we realized that there was
another element of science which we needed, which is the science of organizational behavior that
determines whether or not organizations function in a safe way.
So a challenge for those of us in the social, behavioral, and decision sciences is that each of us is a social,
behavioral, and decision scientist. In order to get through life, we all have intuitions about how to get along
with other people and how to make decisions and, as a result, it's often tempting to let our intuitions guide
us when the time comes to decide how to instruct people or how to learn from experience. And, as a
result, we thought it was important to provide access to this community, in which the social, behavioral, and
decision sciences have typically not played a central role, to provide access to that science so that the
work can be done in a way that minimizes the risks and realizes the potential of the technology. So each
of our speakers will speak for about 15 minutes and then we'll have open discussion. So, Monica -- oh,
Ruthanne is first. And you have the biographies in the material.
Ruthanne Huising, McGill University: Thank you. So, today, I'm going to summarize a few of the ways in
which organizational and cultural factors influence how researchers interpret and respond to issues of
safety and security in their laboratories. I'm a sociologist and I study regulation in scientific laboratories. I
was part of a research team led by Susan Silbey at MIT in which we spent over ten years studying the
implementation of an environment, health, and safety management system at a major research university
in the Northeastern United States. So we have detailed longitudinal observations of laboratory practices in
chemistry, biology, and engineering labs, and thinking about how researchers understand and interpret
and interact with the law and with requirements related to safety.



Since 2002, I've been observing Canadian regulators as they design, through extensive consultations not only with the public but with scientists and also with the organizations in which they work, new
biosafety and security regulations that have just gone into force. I'm going to take you to the heart of the
matter: the laboratories in which researchers are handling materials and making decisions that have
implications for their health, for public health, and also for national and international security.
Sociologists and anthropologists have long observed that such decision-making is mediated by social
organization, that is human action and decisions are mediated, they're influenced, they're shaped, they are
constrained by the ways in which we organize ourselves, be it families, be it communities, professions,
and in this case laboratories and organizations.
Our decision-making is also mediated or influenced by the culture in those social environments. So the
values or principles that emerge in those realms, the underlying assumptions about what it means to be a
member of those realms. So what we believe and what we observe in organizations, in scientific laboratories,
is that people act in part based on the meaning or the significance of the act in the given context.
So I'm going to start with the issue of culture. I'm going to go back to the summer of 2014, when the Committee on Energy and Commerce held a hearing to understand the well-known anthrax and smallpox incidents.
And over the course of two hours and 40 minutes, the term "culture" was used 33 times. So, every five
minutes, someone, be it an executive, a politician, a scientist, a consultant, raised the issue of culture.
Later that month, the National Research Council released an excellent report entitled "Safe Science:
Promoting a Culture of Safety in Academic Chemical Research." It's a 100-page report well worth the
read that made recommendations about how safety cultures have operated successfully in organizations
and how they might be grafted onto academic laboratories. In NSABB meetings in September of 2015,
Professor Drew Endy described the need for a culture of citizenship in science. And, as you see in
recommendation four from the board, the need to foster a culture of responsibility.
So safety culture, which was pioneered in the aviation and energy industries and then later moved into the
chemical industries, has been credited with improving safety in these organizations. There's now a
concerted effort to think about how we might bring these into scientific laboratories. The idea of bringing
in a culture of safety, a culture of responsibility is that culture can self-consciously be engineered and it
can be a means of going beyond compliance. So integrating concerns for health, safety, and security into
everyday practices, work, and interactions so that those concerns influence how we make decisions in
relation to how we use materials, which materials we use, and what we do with the waste products of the
materials. It is also understood very clearly as a managerial tool for influencing compliance.
What is culture? This is something that's often missing from the conversation. Culture is a term often used to describe a residual: whatever we cannot explain with bureaucratic procedures, rules, regulations, and practices. What's missing, or what can account for the gap, is said to be a lax culture or a poor safety culture, and the idea is that if we fix the culture, we can actually deal with the problems. Culture is a system. It's a
system of norms and values that directs, delineates, and guides people in a culture about the appropriate
way to think, talk, act, and feel. So a culture is made up of norms -- norms are expectations about
behavior -- what defines successful role performance in a given setting. And they have a very specific
"ought" or "must" quality. Values provide rationales for these normative requirements; okay?
Culture is not an individual level characteristic. It is something that we can observe in a setting. We can
observe it both by what people state formally, so what are the stated values, priorities, and expected
behaviors in an organization, in a profession, in a laboratory. We can also observe it when we enter those
spaces. We can see how are people dressed, what are the symbols, what are the rituals, the routines, the
language used, and the stories that people tell?



So, just to make this a little more concrete, when I walk into a lab or when I study laboratories, I look for
indications of the safety culture. All labs, all universities already have a safety culture. Trying to audit or
understand what that culture is, is as important as trying to change it. And so when I go into labs, I can
observe very quickly are researchers wearing coats, are they wearing glasses? When you talk to graduate
students to understand why they don't, what are their rationales for not wearing laboratory coats? Do PIs
actually participate in inspections? What does it mean to graduate students, to administrators, to
undergraduate students when they see PIs doing this work, and what does it mean when they are absent
from these kinds of processes? What does it signify about the role, the place, the value of safety in
science? Or, for example, does the PI start lab meetings by talking about safety? Do they talk about recent
breaches, potential accidents, near misses, or is safety the butt of the joke? Is it a priority in the
organization?
So, one of the recommendations that the National Research Council made in the Safe Science report is
that safety performance should be part of the promotion decisions in universities, it should inform tenure
decisions, it should inform pay increases. Do organizations actually do this? If they were to do this, it would signal very strongly what role, what place safety has in science, in the
organization, and, for example, in the tenure process.
So those are aspects that you can observe about a culture of a lab or a culture of an organization. You're
going to get the idea today that cultures are always heterogeneous, they're multiple. There's not one culture in an organization; there are multiple. Culture is transmitted through socialization processes. So, beginning
through graduate training or even earlier, researchers are observing and learning how successful members
of their field think, act, and talk. So how does a competent member of the community I belong to behave?
Is there attention to safety, security, and risk? Is that part of being a respected researcher? These are
signals that we send to our students as we train them and as they go out into the world. So the culture is
taught.
Now, how do we change culture? So if you think about how do cultures evolve and how do we change
them, this is an issue when you think about recommendations for how we create or
promote a culture of responsibility, a culture of safety. There are really two ideas about this, and I'm
going to tell you about the two ideal types that are kind of at the ends of the spectrum. The first is that
culture can be self-consciously engineered, that it can be designed, and that organizations have been able
to do this. So if you look at British Petroleum, if you look at Dow Chemical, both of these companies
have very strong safety cultures. These have been top-down centralized endeavors. They are slow. They
are incredibly expensive. They require ongoing maintenance. There are entire departments of people who
are responsible for safety culture. So safety cultures can be changed, reengineered, and redesigned when
we are operating in a very bureaucratic environment.
The other way to think about how cultures develop but also how they change is the emergent model. This
model is more suited to scientific endeavor because it acknowledges that each discipline or sub-discipline
already has a culture. There is already a culture of safety. It's old. It's established in most scientific
disciplines, and that these disciplines are not closely or centrally managed or top-down managed. They
emerge and they grow and they change over time.
And so in scientific cultures and laboratory cultures there really are -- many people would argue, and I do argue -- limited means of top-down centralized culture change. And that culture is
going to have to emerge from within scientific disciplines and sub-disciplines. And we know from
studying culture over long periods of time that this often happens because of a shock or a change in the
environment which requires that the culture adjust itself and change. It's slow, but it's very effective
because it's self-reproducing; right? If you think about how culture is transmitted, once it begins to
change, it's a process of socializing each generation differently.



I want to overlay culture on the other important factor that influences and shapes decision-making and
behavior related to safety and security, and that is the social organization or the organizational structure of
most research institutions. Research institutions, universities and hospitals, are professional bureaucracies, and what this means is that you have an organization that has two very different sides.
On one side you have the administration. This is often where EH&S is located, the people who are
responsible for ensuring safety in laboratories. That side of the organization has a very strong bureaucratic
logic. There tends to be unity of command, which means each person reports up to another person. That
person has formal authority over their direct reports. They have the ability to hire, fire. There tends to be
centralized decision-making. And there are very stable and defined roles, relationships, and rules. This is
a very stable part of the organization, and one that we see more broadly out there in the economy. And
this side of the organization keeps the place running. This is where you find finance. This is where you
find accounting, HR, and EH&S.
On the other side of the organization, we have the academic side, and it operates according to a
completely different logic. It operates according to the professional logics of each discipline. At least in
theory, we operate according to collegial governance. We have a more democratic way of organizing. PIs
have an enormous amount of autonomy in how they organize and run their laboratories. Decision-making
is highly decentralized, often operates according to verbal agreements. Trust is a very important
component in how things get done in these laboratories. And these laboratories are often in flux. They
work on soft money. They are waiting for research grants. So the resources, the membership, the activities
are changing continuously.
So one thing to realize, if we think about changing culture, is that it's possible, and there are central means, to change culture on the administrative side, in part because the administration
actually knows who works for them. It becomes a little more difficult when we go to the academic side of
the organization where we don't have even a central registry of who's working in every laboratory,
because we have visitors, we have post-docs, we have graduate students, we have undergraduate students,
we have lab managers who come and go as money arrives and leaves.
In addition to this, these laboratories function according to relations of dependency. We have graduate
students in these laboratories that are often, as I'll tell you in a minute, given responsibility for safety and
security. However, while they have this responsibility, they are still highly dependent on their PI for the
start of their career. They require the approval of their PI to graduate, for example, they require letters of
recommendation from these people, and they require, frankly, ongoing support from these people,
advocacy from these people throughout their career. So, a really interesting dynamic when you look at
these two sides of the organization running very differently. It's also important to note that in most
organizations the administrative side has less say, that the academics tend to run the organization, and in
interactions between academics and administration you can frequently see these power imbalances.
So this has implications for how we think about safety and security, responsibility, who is responsible and
how do we ensure responsibility for these requirements, who has authority in these organizational
structures to enforce, and who is actually doing the facilitating of compliance. It also has implications
when we talk about recommendations for culture change, which culture are we talking about and what are
going to be the sources of change for those cultures.
I want to tell you a little bit about the findings we have about compliance in laboratories and how
compliance is achieved in laboratories. And I want to make it very clear that our research has not been in
BL3 or BL4 laboratories. This is a large gap in the research. It is related to the difficulty of obtaining
access for people like me. We need agreements from several levels. And so this is something I'm
pursuing, but this kind of access is difficult to get and takes time.



So let me tell you what we found across laboratories that are not BL3 or BL4 laboratories. And here are
some of the highlights of the findings. First is that researchers experience compliance requests as
intrusions and impediments to their work. This should not be surprising when I show you how these two
sides of the organization work. So bureaucratic ideas about filling out paperwork, about completing
forms, about doing things according to a schedule don't fit with the logic that operates on the other
side of the organization. Researchers -- PIs tend to communicate safety as peripheral to research work,
and it is often delegated to graduate students in the laboratories. We see that, and there are some really
nice studies that show, how researchers are very willing to incorporate safety features into their practices
when they align with efforts -- the scientific efforts -- to control physical matter.
We also know from some great work that Susan Silbey is doing that when we look across one institution
for ten years and we analyze the inspection results and the [indiscernible] incident results, we know that most violations are, first of all, very minor. So these are safety violations related to biosafety
materials, radioisotopes, chemicals, every sort of hazard you can imagine in laboratories. Most of the
violations are minor. They are housekeeping. So this means that things are mislabeled. This means that
caps are not on bottles. This means that the lab needs to be tidied. This means that signs have not been
updated.
We also know that a small number of laboratories account for the majority of the violations. Okay, so we
see people smiling. We know who those people are in each one of our institutions. It's not a surprise to
learn that violations go up when professors get tenure. It's also not a surprise to know that when you have
a full-time lab manager, you have fewer violations. Okay, so that last point is about resources.
Here's what else we know. We know from our research about the incredible importance of the techno-legal experts in each one of these organizations. So these are the people who often work on the
administrative side of the organization, in Environmental Health and Safety (EH&S). These are the
people who pick up the slack to make sure that all the compliance happens. So we know that biosafety officers, health physicists, and industrial hygienists often buffer researchers from things that are important for compliance when PIs either don't have the resources to do them, don't have time to do them, or don't see them as part of their job, such as recordkeeping, inspection reports, overseeing corrections, and maintaining compliance. So these people are incredibly important in managing the gap between current
practices and regulations or other requirements.
We know that people who are very good at these jobs are able to negotiate increased compliance by
working in the laboratories. When biosafety officers adopt the logic of PIs, that means that they work on
principles of trust, familiarity, they develop relations, they demonstrate to scientists that they understand
what it means to work in a laboratory, and they do this by actually being in laboratories frequently,
getting to know the labs, doing work in the laboratories for scientists. Often it's not glamorous work, but
scientists appreciate it.
So they are able to do a few things. One, they're able to anticipate problems. These people are able to, for
example, understand when PIs are running low on money. When they know PIs are running low on
money, they know they have to check more often. They know that they may have to offer funds to help
them get rid of waste or to help them organize some compliance issues. When PIs are in the tenure
process and they don't make it, they know these are the labs. So they know this information. They know
these are the labs they have to attend to.
When they know a lab is close to a discovery, wrapping up, getting to the end of a research project, they
know they have to go more frequently. They know people are tired. They know people are excited. And



these are the people that are able to identify emerging dangers. They also know when laboratories are
changing the research focus, so they're using new materials, new protocols, that they may need different
containment levels or different practices. So these people are extremely important to maintaining
compliance in laboratories.
They often draw -- and this I'm learning from my research in Canada -- on requirements and regulations
to increase their authority in laboratories. So, giving Biosafety Officers (BSOs) regulations and
requirements allows them to exercise more authority in relation to PIs. We also know that these people
are chronically underfunded. We usually don't have enough of them in our research institutions.
So there are two implications from these findings. The first is that if we're going to think about culture
change in relation to the kinds of projects that we're talking about in this panel but also more broadly in
science, the research suggests that the change has to come from within the scientific discipline or subdiscipline, and that in order to create safer, more secure laboratories, both within institutional boundaries
but also outside of institutional boundaries, we need to create -- and this needs to come from within
disciplines -- fundamental changes to what it means to be a competent researcher, that to be a competent
researcher it means also having a sense of responsibility for safety and security and that this is part of
being a competent member of the community, paying attention to these issues.
What is great about this kind of approach to culture change is that, while it's a slow process, it's very effective because
it's self-reproducing. It is also global. So the profession is global. What it means to be an immunologist is
a rather global definition. If what it means to be a good immunologist includes paying attention to issues of safety, health, and security, that can spread globally, beyond both institutional boundaries and national boundaries.
Second, the research shows that we need increased support or we need to support at least the BSO by
creating a community of practice or by enhancing the existing communities of practice and the existing
profession. So we need to understand what they know and build on what they know about practically how
is safety and security produced on an ongoing basis. So, in collaboration with BSOs and PIs, we need to
develop studies that address the compliance gaps that they observe in laboratories, and we need to find
ways to share these findings among the practitioners and to develop and share tools and models that can
help these professionals continuously produce compliance in laboratories.
I wanted to tell you very quickly about the responsibility movement, which is happening in some areas of
science. The extent and the effect of these efforts are unclear. These are researchers within different areas
who are acknowledging their obligation to society, and they're diverting resources from focal science to
studying the safety of processes, materials, and products, and we see this in green chemistry, we see it in
nanotechnology, in synthetic biology we see an increasing focus on this, and also, surprisingly maybe for
some people, is the way in which the do-it-yourself community is actively thinking about safety and
security through codes of conduct. But also we've started to study some of these communities in their
daily regular meetings, on their websites, in their interactions. Safety and security are actually in the front
of their minds.
And then, lastly, this is an innovative program that has -- it's related to chemical safety, but it's a
collaboration between a university and Dow Chemicals in which they're taking graduate students over to
Dow and teaching them how safety cultures work there, and giving them funds and resources to try to
bring those safety cultures to the university. So there are efforts to think about cultural change, both
through the responsibility movement and through introducing programs like this. Here's the code of
conduct from the DIY biology group in the [indiscernible] in Europe.
So I guess, in conclusion, a few things that we know from the research, and that I would recommend the members of the board consider, are that culture change will be most



successful if it comes from within the scientific professions. It's likely to create long-term changes and
global changes. And that we need to focus on the role of BSOs in producing the safety and security in the
laboratories. And, as other people have said already yesterday, we require more research on the daily
decisions and practices in laboratories. In particular, we are missing research on the level three and the
level four laboratories. Thank you.
Fischhoff: Thank you.
Huntley-Fenner: So, good morning. While we're setting up the slides, I'll just introduce myself. I'm Gavin
Huntley-Fenner. I'm going to talk today a little bit about human factors. And I'm going to focus on the
problem of attaining the risk reduction objectives outlined in the NSABB draft. And I think we need to incorporate human factors in order to attain these
objectives. I'm going to move fairly quickly through some of my slides, and I'm hopeful that we'll have
discussion that will illuminate some of those earlier points.
So, just to outline, I'm going to talk about human factors, the role of human factors data in biosafety and
biosecurity. I'll talk about the opportunity. Think of this as the size of the prize, if you will. And then I'll
try to point to the future, because I do think that there are some exciting options for us once we start to
collect the right kind of data.
So what is human factors? Simply put, it's the study of human reliability. And we draw on human
capabilities and limitations in order to understand how to design environments, do forensic analysis once
adverse events happen. When we do those types of analyses, forensic analyses, for example, we find that
human factors, or human error, if you will, are a contributing cause in the vast majority of incidents. And,
clearly, this is a priority, I think, as indicated in recommendation number four of the NSABB report, which Ruthanne pointed to: this idea of a culture of safety rests, I think, on the
idea that we're going to be able to control error. However, we are challenged by a lack of good data on
human reliability and human error, and Rocco Casagrande talked about that yesterday.
Over time and across industries and in terms of lab safety -- the picture is similar -- we're seeing that
human error is an increasing proportion of the injuries or the causes that lead to adverse events. Some of
the contributing factors when you look at the individual person level include things like increased fatigue
due to hours worked. We just heard about long hours and the impact that might have on safety behavior
and safety culture.
But there are lots of other things that can contribute to the emergence of error. And all of these combine
to, A, reduce the application of skill, that is, something you've learned how to do and have to do over and over again; you may make more errors when you're tired, for example. And, B, reduce cognitive function, that is, making decisions about risk and analyzing a situation for safety or the proper approach to the problem. WHO calls out human error as a primary lab biosecurity concern. And I'll
just mention briefly that if you can control human reliability or reduce error, you accrue benefits in both
biosecurity and biosafety.
And, as I mentioned, we know from the Gryphon report that this is -- there's a dearth of data here. So one
of the points I'd like to make is I'd like us to think about the opportunity to collect data. What does that
look like? Well, right now, if we look at the data we have and we see few adverse events, we may be
tempted to think that that means that we're doing a good job. But, in fact, most of the important markers
for safety are latent. And this is an image that you've seen before, this idea of latency, of unexpressed
potential is something that will cut across, I think, a lot of our talks today.



So, for example, in this particular metaphor, you've got accidents, which we observe and which result in a specific injury, let's say; incidents, which may be things that we record; and then a lot of other things that we may or may not record but that add to potential safety risk. And we know for sure that the bottom of that iceberg exists. The Government Accountability Office did a study in 2009 looking at latent errors that had been reported, and it substantiated that story.
Early last year, I wrote an op-ed on this in which I argued that human error is inevitable. But what does
that mean? The inevitability of human error does not mean that gain of function studies of concern should
not be done. In fact, it means the opposite. It suggests that we must understand the contributing causes of error, and assume that there will be some error, if we're going to continue the research effectively. And I think human factors will provide tools for designing and implementing systems in which there are fewer errors and/or the impact of errors is mitigated if and when they do occur.
Let's talk about the size of the prize. We saw in the Gryphon Scientific report that there is potentially a significant upside, from a risk reduction perspective, to reducing human error. Whether we're talking about workers who are working with seasonal influenza or coronavirus, or whether we're looking at hand contamination issues or just work errors in general, there's a significant potential risk reduction benefit.
Some simple approaches; let's talk about some quick wins. What are the things we can do now, with the information or the tools that we have right now, in order to reduce errors? A simple checklist has been shown to be an effective tool in reducing errors: forcing people to stop, take stock of what they're doing in the moment, and just make sure that they're following standard operating procedures is, in and of itself, an important tool. But there are also other things that we can do to really understand errors that fall into the category of quick wins.
NASA, in 2010, faced the same problem that our biosafety and biosecurity labs face today: there was a dearth of data on human error that was specific to NASA operations. They found themselves pulling data from lots of different sources in order to understand reliability in a NASA context. And they essentially put a huge effort behind doing a deep dive on the data that they did have, and came up with reliability data that now helps guide some of their decision-making around risk, and training to reduce risk.
In Europe, this is an example from Belgium looking at laboratory-acquired infections. A very, very easy thing -- well, "easy" -- what I put in the category "Quick Wins": a survey mailed or delivered by Internet to labs across Belgium, designed to get a snapshot understanding of the status of quality and human reliability in labs. So, here, you're looking at these green, blue, and red bars. It may be hard to read from your position, but the green says there's strict compliance with these measures or practices; in blue, in general, this measure is respected well; in red, this measure is often not respected or put into practice. And we're looking at wearing a lab coat in the first column, wearing gloves in the second, changing protective clothing, carrying a respiratory mask, carrying safety goggles or a face protector. Clearly, you get a snapshot view of the variability in quality or reliability across different labs.
But understanding the status quo, measuring existing error rates or reliability, is one thing. Controlling it is another. And I do think that we need to put some effort into thinking about what it is we do with the data once we've got it in. How do we actually measure the variables that will allow us to control and reduce the likelihood of error?

So this is a paper that falls into that category, published just this year, 2016, looking at factors associated with BL2 research workers' handwashing behavior, and looking specifically at how you actually increase handwashing behavior. In this particular paper, they recommend multifaceted interventions in order to control the reliability of handwashing.
Apart from looking at the specific behavior of individual workers, we also need to consider the context. And we heard from Ruthanne about the broader context of culture: safety culture within the lab, the culture of the administration and of the institution that might contribute to safety. It turns out that you also need to measure culture in order to understand safety -- for example, cutting corners, mainly due to organizational factors. This report from the U.K. gathered focus groups from CL3 labs in the U.K. And they found that time pressure, workload, staffing levels, training, and supervision were things that contributed to the detriment of reliability.
Another aspect of culture, which was, again, touched on by Ruthanne: this is a chart showing the growth in environmental health and safety regulations on university campuses in the 20th century, heading into the early 21st century. So this gives you a sense of what your environmental health and safety personnel are responsible for, and why they're "overworked," and why additional resources might be needed in order to effectively control reliability.
So I'm going to head now into conclusions, looking forward to where I think we ought to focus. From the perspective of data, if you've got apples-to-apples comparisons, you are really going to be in a much stronger position to look at safety quite broadly and, in fact, share information across institutions. So, for example, if there were rigorous national reporting standards that go beyond the most significant adverse events, that would allow us to measure the latent factors that may contribute to significant adverse events, or perhaps to the safety culture at various institutions.
Collecting near misses, of course, is quite important; think of the bulk of the iceberg, which is constituted by things like near misses. The adverse events that we're thinking about today are quite rare, so being able to compare and gather data across institutions would, I think, be helpful and allow generalization.
Also, looking forward, I think it's important to sort of think about the general context, beyond the safety
questions that we're dealing with today. We live in an environment in which sophisticated analytics and
algorithms are used to understand not just technology but how human beings interact with technology, or
organizations or human behavior in the context of complex organizations. I think a really good example is
that of geographic policing, where we collect data on -- depending on the size of the police department -- hundreds of thousands or millions of interactions between citizens and the police.
We may also collect information on neighborhood histories. We look at the weather. We look at prison
release statistics, and the police department is able to look across a city and say, "Okay, you know what, I
think we're going to have hot spots in neighborhoods X, Y, and Z today. So I think we're going to deploy
resources there." This is a well-established approach that's effective in reducing crime.
We're also looking at the emergence of using these sorts of data to predict when individual police officers might find themselves in an adverse event with extreme outcomes, and deploying training or intervention resources appropriately. I can imagine a time when, if we have good, standardized data across a large number of institutions, we might be able to start to identify hot spots. What are the kinds of work where training would be most effective right now, given the limited resources we have available? I think we ought to be thinking in terms of what the study of error might allow us to do ten to 20 years from now. I think this is a

distinct possibility. Now, being able to identify potential adverse events before they occur would not only help to reduce risk but also potentially help to increase productivity.
I want to put forward the example of the Google Car as a conceptual model to think about. At the very beginning of the talk I had a slide on human factors that pointed to the fact that human errors are contributing causes in about 95% of motor vehicle accidents. And Google was faced with that problem when they set out to design a car. So how do you design a car to operate safely and effectively in an environment where human error is an important and increasing proportion of contributing causes?
Well, one of the things they did is not just automate the vehicle; they also collected lots of data on human
behavior, decision making, millions and millions and millions of data points. And obviously this is an
ongoing process. And they are able to use those data in real time, combined with informatics about the
vehicle in real time, in order to reduce the risk of an accident. Six years, two million miles, 17 minor
collisions so far. These data are as of, I think, December of 2015.
So, clearly, I want to give you a sense of the opportunity. I can imagine a future where labs are increasingly automated, where human error is a significant contributing factor, and where we need to think about how to collect data so that we minimize the impact of human error. And I think we ought to be thinking about that future as we go about studying human factors, studying human reliability, and designing a system to collect data that will allow us to reduce the potential for human error. Thank you.
Monica Schoch-Spana, UPMC Center for Health Security: Good morning. I was asked to share evidence
with you regarding design and implementation of effective public deliberation. So, to that end, I want to
outline some very fundamental design questions with regard to public deliberation; talk about how one
operationalizes standard elements of public deliberation, including inclusivity, information, and value-based reasoning; and then come back to the issue of potential recommendations, very concrete ones, on
how one could continue to apply public deliberation within the specific context. Where I can, I'm going to
tie it back to the specifics of the current discussion.
So let's talk about very fundamental design questions with regard to public deliberation. There have been
a number of calls for public involvement. And usually, when people think of the public, what comes to mind is the ordinary person; oftentimes this is conflated with the non-expert lay public. It's actually more
useful from a design perspective to think in terms of three fuzzy overlapping categories. One, which is I
think what most of us have in mind when we think of a pure public, is that ordinary person who's
disinterested and representative of the larger polity; the affected public, which would be persons or groups whose lives are potentially altered or influenced by the policy decision; and then partisan publics,
which would be representatives of groups that have vested interests in the issue or a level of technical
expertise in the policy matter.
So how does that play out? If you look at some of the ways in which the discourse has been constructed in the current conversation -- if we look at the risk-benefit assessment, where there are the probabilities of an accident turning into infection of a laboratory worker, which could turn into a local outbreak, which could turn into a global pandemic -- you have a stream of potentially affected publics in that conceptualization. Really rich analyses of an ethical nature are coming out of a variety of writers, including Dr. Selgelid, who was commissioned by NSABB to provide a detailed ethical analysis. We have sort of that pure public in the argument being presented that, if these hard ethical questions are to be best managed, decisions have to be reflective of the risk-taking strategies and the values of the people. And then there's Nick Evans' underscoring of an affected public being humanity.

Partisan publics have emerged in the conversation with regard to gain of function research of concern: the initial conversation involving microbiologists and flu researchers in particular, then the desire to broaden the tent and bring in a variety of professions, science writ large, and different disciplines such as medicine and public health.
Another big design issue, once you decide who this public or set of publics is, is what is it that you want them to do? There are a lot of legitimate, though very distinct, aims. I'm just going to focus on two: one is innovation, the other is democratic accountability. You can bring people together in the sense that what you want are those rich, unpredictable insights that come from crowdsourcing a problem, right? Or you might be interested in democratic accountability, where you want to back up a policy decision with the argument that there has been broad representation demonstrated in coming to that conclusion. The desires that I've picked up from the discourse on gain of function research of concern are an interest in diversity, balance, civility, accountability, and also coming back to this issue of people's consent.
Another big design issue: once you've picked your public or publics and you have a purpose in mind, what process enables them to fulfill that goal or purpose? I think, at this point, it's helpful to lay out a very simplistic but very helpful continuum of public participation in the policy-making process. At one end, you have communication, which is basically for basic matters of transparency: these are the issues we're addressing, here's some background on the issues, this is what we're doing about it. That's pushing out one-way information. Then there's consultation of the public, which is to elicit opinions, views, and perspectives, and have those as an input into the larger policy-making process. And then a third option is a more intense collaborative, deliberative approach, where there's an exchange of ideas and a shared responsibility for making and executing policy decisions.
So, just concretely, how have these different points on the continuum played out in the GOF discussion? Well, in terms of communication, we've got the press releases, such as the NIH funding moratorium statement, and the websites put out by NSABB and NAS, and so on. We have consultation, that elicitation of input, vis-à-vis formal mechanisms for public comment on draft NSABB recommendations. And then, in terms of collaboration, the deliberative option has been fairly minimal with regard to affected and pure publics, so to speak. The collaborative, deliberative approach has actually been more in the realm of partisan publics.
So the last design question is: okay, you've got your public, your purpose, and the process you'd like to use; what is the issue? There have been proposals, but there are very particular kinds of questions that are well-suited to a public deliberative process. There are conflicting values about the public good -- you know, the Apple-FBI standoff about security versus privacy in the phone issue. Controversial and divisive topics where outcomes are perceived to produce winners and losers -- this is oftentimes resource allocation, right, whose research gets funded and whose does not. Hybrid topics where technical and normative aspects are interwoven; and then low-trust contexts, in which the government and/or scientific community can earn, retain, or lose the public's trust.
So, from what I discern, there are a few questions that actually fit those kinds of profiles. Under conflicting values, I've seen it phrased in terms of "despite potential contributions to public health, should studies that produce a pathogen of pandemic potential be performed at all?" An example of a hybrid topic or a low-trust issue is: what added steps, if any, should trusted institutions, such as the U.S. government and research entities, take to strengthen biosafety and biosecurity regimes, and also strengthen public confidence in them? Two very different questions, so the question you pick really matters.

So let's move away from the big design questions to more practical, operational issues. There is no single methodology for public deliberation. Very concretely, a review of about 20 years' worth of public deliberation in the realm of public health and health policy covered about 62 events in the field, and there were ten distinct methods being used, including citizens' juries.
Why is there such diversity? Well, to be honest, there's no settled account of what constitutes public deliberation and its best practice. I mean, every issue has its nuances, its particular partisan publics, and so on. And then, of course, there are real-world limits of money, logistics, and politics. There is, however, consensus around the minimum standards for public deliberation, and they focus on inclusivity and diversity, right? You want to compensate for the disproportionate participation by privileged classes of people, and you're trying to uncover that broad range of views, perspectives, and conflicting interests. You want to make sure that there's factual, balanced information given to participants so that they can really grapple with the issue at hand. And you want to provide that opportunity for reflection and discussion: a free discussion of a wide spectrum of viewpoints, testing moral claims, and the potential for people to change their minds based on the arguments and rationales presented.
In the interest of time, I don't want to go into the nitty-gritty of how these standards are applied, but just to pull out a few questions. Under the inclusivity and diversity standard in action, you have to ask yourself whose interests are represented in the deliberative exercise that you have constructed. Who gets to represent those interests? How are those people chosen -- randomly, purposively, first-come, first-served, or is it open to all? Each of those has very concrete implications, as well as philosophical ones. And also, what concrete evidence can the organizers demonstrate to make claims to inclusivity and diversity? Then there's information provision. We talked earlier about the selection of an issue, but how do you actually take a complex issue, like gain of function research of concern, and produce background information that speaks to the facts of the matter, so to speak, as well as the different positions that are in play? Are organizers going to lay out a finite set of policy options for people to consider, or are the deliberants going to be flexible and free to develop their own set of policy options? And then, of course, how does one manage the injection of the technical information?
Value-based reasoning: how is it that you build a process, from a social dynamics point of view, that's non-coerced, that allows respectful reflection on preferences, values, and concepts? Are you giving people enough time and space to learn the issue, acclimate to the process, and then deliberate? And then how is that input translated, interpreted, and packaged for use by the policymaker? These are all very concrete elements of public deliberation.
It's important to also know that, while there is no single methodology, there are markers of quality deliberations that are out there and being used in the field. From a procedural point of view: how good is your information? Is it comprehensive? Is it balanced? Is it accessible? How good are your deliberations from a social dynamics point of view? You can measure that in post-deliberative surveys. Did you change your mind after what you heard? Did you feel like your voice was heard? Do you feel that all the voices that you needed to hear were at the table? You can survey that in terms of quality.
Then, of course, there are issues of impact, right? Did it influence policy? Did it influence the level of confidence a policymaker has in his or her sense of having made a more informed decision because of the public deliberative input? And then, from a participant point of view, what is their level of confidence in the legitimacy of the policy decision?

So let's tie this back. I want to close now with some very concrete, specific things that I think should be considered at this point as they relate to public deliberation. This is my assessment of where we are in the process. We have the formal institutions of NSABB and NAS as the priority venues for public input. We've got meetings in which there's been a large representation of very necessary partisan publics, due to the complexity of this issue. There is a strong desire to protect the public well-being, and that motivates all opinions in this conversation. But I would argue that the voices of a pure and/or affected public have been largely absent. So the unresolved issue is: will there, or should there, be more sophisticated, resource-intensive deliberative sessions outside the present circle of vested parties?
So, three concrete suggestions to be considered. First: should there be a formal evaluation of the current deliberative process that's been enabled by the formal structures of NAS and NSABB? How would the deliberants themselves, mostly partisan publics, rate the process in terms of inclusivity, information provision, and support for value-based reasoning? I think the benefits of this formal evaluation include strengthening the evidence base with which people can judge the legitimacy of the policy-making process to date. And it certainly provides very useful data if there should be plans for any additional deliberative activity.
Another concrete way to go, which considers more the issue of affected publics and deliberation that involves them, is potentially holding deliberative exercises in communities now hosting high-containment facilities. Yes, we're talking about a low-probability, high-consequence type of risk, but we're also talking about a very concrete, place-based activity. For labs and institutions, there are host communities where, should there be an issue -- either an infected worker or a local outbreak -- there are response institutions in place. So I think that one can bring that type of conversation to a set of affected publics. You could ask them: how should we strengthen our protection regimes, and how do we strengthen public confidence in them? Or you can ask: should these types of experiments be done in the first place? You've got to make a choice, though, about what question you're asking.
And to conclude with the pure/affected publics issue: it is arguably beneficial to engage a cross-section of the American public in a deliberative exercise, say deliberative polling or a citizens' jury, about specific questions. It could be of the resource allocation nature: should our limited research dollars go here, or our limited flu preparedness dollars go here or there? Should these experiments of concern be done at all? If there were activity in one or more of these three concrete suggestions, certainly here in the U.S. -- and I have not touched on the international domain at all, okay, it's a very complicated issue -- certainly if there were concrete processes engaged within the U.S. setting, it would provide a point of comparison for other countries and could be part of a harmonization endeavor. So, with that, I'll conclude. And thank you very much.
Fischhoff: Thank you. So, a couple of points. I encourage people to come to the mics. As Harvey has said, this is a workshop. We're not providing recommendations, but anybody can offer recommendations, and they will go into our report. And we encourage our panelists to offer recommendations based on their own professional experience and research. I would say, for those of you who are not familiar with the social, behavioral, and decision sciences, this will give you some idea of the breadth and depth of the research that's out there, if one wanted to put the human aspect of this enterprise on a scientific foundation, and it shows you the mix of methods that we all use. There's theory. There are multiple methods of observation: direct observation, laboratory and field experiments, traditional statistical analysis, and data mining.
One of the things that's somewhat atypical of the social, behavioral, and decision sciences, and may be one of the reasons why some people are reluctant to go there, is that the basic disciplines often tend to operate in stovepipes and have sort of sole proprietorship, with somebody really tied into their story.
What's different in the talks that you heard here and people like ourselves, who are drawn to these

complex problems, is that the commitment to the problem enables us to talk to one another and use one
another's methods. So if you've been discouraged in the past, you have people like this to help you in the
future.
So you've had communication; it's time for consultation, and maybe it will lead to collaboration. And let me particularly encourage -- let me see if I got this right -- encourage diverse views. So if your voice hasn't been heard: only if it's said verbally, if it's said out loud, will it make its way into the report. So if you've just been exchanging things with your friends, please come to the mic. Please.
Kavita Berger, Gryphon Scientific: I have two questions for the speakers. The first question is that several speakers and commentators have suggested a comprehensive reporting system for human errors, but this raises a lot of questions about how to ensure that this can be done in a non-punitive way. So what are your suggestions for promoting non-punitive approaches by the political and regulatory system, as well as by the public, to encourage this comprehensive reporting?
And the second question is a little bit different. I'm not sure if you have touched on it or whether it's even appropriate for this particular panel, but it's a question about human errors for security and not safety. Quite a bit of work has been done on behavioral threat assessments, but there's quite a bit that institutions can do very far upstream. So what are your suggestions for addressing the potential underlying factors that might contribute to someone stealing an agent or animals, or to vandalism, violence, deliberate misuse, etc.? Thank you.
Huntley-Fenner: Let's see. Regarding the first question: I was having a conversation earlier today, and I think one thing you might do is change the heading on the paperwork so that theft and loss aren't the categories under which the reporting is made, right? So there are some very simple things you can do. But I also think that there has to be some kind of give and take. If you're just sending data out there that gets recorded by some bureaucrat who will, at some unknown time in the future, come whack you over the head with it, that's a problem. But if you're helping to construct a model that you can use to better manage your safety, and you know you're going to get some interesting analyses and analytics back that will help, that may be another thing you can do as well.
Regarding the second question, I'm not sure of the answer to that question. It's a great question. It's
something that needs to be studied.
Fischhoff: You want to add something?
Huising: Just quickly to add: the Canadian government has just implemented a national system for reporting, and it picks up exactly what you're saying. There are numerous issues around privacy and around being liable, and all of that had to be sorted out so that people understood it was a non-punitive system that also interacted well with workplace safety systems, legal systems, and so on. But certainly one of the promises is that the data, the findings, would be fed back to improve safety programs.
Fischhoff: Thank you. Please.
Susan Wolf: I'm a member of NSABB but, of course, speak only for myself right now. I have two specific questions where I am going to ask your advice directly for this report. The first question has to do with data collection, data standards, and the erection of data collection systems, and I'm inviting your advice on what we should say about that. It comes up not just in recommendation four, as you saw, but throughout, including recommendation two, where the current draft talks about the adequacy for some

pathogens of current oversight systems if appropriately effectuated -- and I'm not quoting directly, but
that's the gist of recommendation two. That "if" suggests the necessity of an evaluative process.
I'm also thinking about some analogies that already exist, rough analogies like the RAC's creation of the
Genetic Modification Clinical Research Information System for gene transfer research, the creation of
data collection transparency systems. So this is, you know, NSABB formulating recommendations to the
federal government, not speaking to academics, though, of course, there are multiple audiences for this.
So your thoughts about what NSABB might recommend be created. That's question one.
Question two, Monica, goes to what you said. So you were talking about public deliberation, but there's
another face of that that you didn't have an opportunity to touch on -- and, of course, any of the other
panelists as well -- which is whether there's a need for a Federal Advisory Committee Act entity, a FACA
committee, in the loop of federal consideration of GOFROC. So this is a question that Sam mentioned
yesterday in setting up the five questions on which we particularly wanted advice. Ken echoed it in his
remarks.
So, if you think about the flowchart, which perhaps you saw and recollect, which is a draft of how this process might work at the institutional and federal levels, there was a loop out to the right if potential GOFROC was identified. And the question is: who's doing that loop? Is it just internal to the federal government, which, of course, is an option, or is there, in addition -- and, again, you can perhaps hear the echo of gene transfer research and FDA plus RAC -- a FACA committee as well? So if you might be able to reflect on both of those questions, I think it would be enormously helpful.
Fischhoff: We'll start with Monica.
Schoch-Spana: Sure. Given the complexity of the issue and the number of institutions and individuals whose practice matters in biosafety and biosecurity, you have to have a multilayered approach, I think, not just to safety and security from a technical perspective but also from a transparency and public engagement perspective. With regard to a FACA-like entity, that would take care of one level, but I think there need to be parallel injections of a wide diversity of perspectives and opinions on what constitutes a safe, secure, and credible system with the public -- for instance, Dr. Frothingham's Duke system.
Duke has created a system, right, with its IBC and IRE, which involves a diversity of perspectives, including members of different publics, both partisan and affected ones. Same thing with St. Jude's. So I think what you have, as far as potential recommendations, is that one has to ensure that the system has public engagement and transparency at all levels. A FACA-like committee would be, I think, a beneficial thing, but it's not the only thing. One has to think of shared governance across all levels.
Wolf: And on data?
Schoch-Spana: And data.
Huntley-Fenner: Yeah, okay. Well, I'll give maybe part of the answer to what to say about it. I think it's important to frame what we know and what we don't know. So we've said that we may continue this work. Our expectation is that we're going to be able to do it to a high degree of reliability. But our ability to continue to assess and ensure reliability is limited by a lack of data on the primary contributing factor, which would be human error, human reliability. We just don't have enough insight into that. Therefore, in order for us to really attain what we're promising, we have to simultaneously put in place a

structure in order to collect data and processes by which we can actually control and potentially reduce
errors when they occur.
Fischhoff: Let me -- under the time constraint, let me do the -- let me ask each person to give your
question -- try to keep it to within a minute, and then we'll have about a minute for each of our panelists to
respond. Given the standard that Harvey set, I will not try to synthesize your questions. So, please.
Maggie Kosal from Georgia Tech, Faculty in the Sam Nunn School of International Affairs: And I'm
going to ask a specific question, but it's specific because I'm responding to the request for specificity. At
the same time, I'm concerned that it may come off as a little bit more pointed. So, if it does, in wanting to
fulfill our non-adversarial culture, blame the pointedness on specificity. So this observation that incidents
go up with tenure, first of all, I'd love to see the data on that, and the causal mechanism, you know, why
does that happen, and does it happen because of additional administrative burden, which would be my
first hypothesis?
And then how does that intersect with the call for increased public involvement? Is one more
administrative burden on faculty supposed to be educating an illiterate public? And, again, I'm being
pointed because we're asked to be specific. You know, is this another unfunded mandate being put on
faculty? You know, so I could go on, but I want to highlight a couple of key pieces there, that we're
talking about a much broader public that sometimes is adversarial to science and technology for reasons
that have nothing to do with the science and technology. And culture has a whole bunch of pieces that it
might be economics, and I loved how you articulated the economics, institutional drivers. And we haven't
even talked about sexism and racism that affects cultural behaviors. So, thank you.
Fischhoff: We did see two icebergs, so. Recognition of --
Kosal: Penguins are represented.
Fischhoff: Recognition of the hidden problems. Thank you. I'd like to have each person speak, and then
we'll give our panelists a chance to --
Silja Vöneky, German Ethics Council, University of Freiburg, and Harvard Law School Fellow at the
moment. I have two short questions. I liked your proposal, Ruthanne, that the culture change has to come
from within the scientific discipline because it's a very soft and non-interference approach. On the other
hand, I think the incentive for scientists is to produce results, is to do science, to write articles, and
therefore I think how can the change come from within science if there are no outside incentives for
safety and security? This would be my first question. The second question would be, here in the U.S.
there's a lot of talk about nudging, and my question would be whether there can be some sort of nudging
the scientists to do the science in a safe and secure way?
Fischhoff: Okay, thank you. Over here.
I'm Andy Kilianski, Edgewood Chemical Biological Center. I'm an NRC Fellow with the U.S. Army.
This is just maybe a suggestion. So, talking about the human factors research: we don't have a lot of
data about accidents and human errors within the laboratory setting. I think the non-fault self-reporting in
high containment labs is a problematic thing to sort of enforce and enact, and compliance with that, I
think, would also be challenging. So I think this is an opportunity for NSABB, with their
recommendations, to sort of touch the broader life science community and perhaps suggest that we need
to fund some studies looking at cohorts of grad students in BSL1 and 2 spaces, and evaluate how their
errors affect their work and integrate that with their performance on a daily basis.

Fischhoff: Okay, thank you. Adam.
Adam Finkel from University of Pennsylvania. You know, sometimes when things seem to be glaringly
missing, it's because they're too obvious. But I think in the human factors, the first two presentations,
there are some things missing that I think it's important and telling that they're missing. So, one of them I
mentioned briefly yesterday is inherent safety, designing out the consequences of human error. And,
again, my vague sense is that that's not come to the laboratory setting the way it has come to the health
care patient care setting.
Second is I didn't hear any mention of part of the culture being the availability of a confidential channel
for reporting of incidents. The whistleblower situation in the country is bad, and actually getting worse.
The laws are getting slightly tighter, but the culture within companies and government is, I think, getting
worse. And then a previous speaker mentioned enforcement just a little bit. There was really no
discussion of the external norms that might be enforced in order to grow the culture of safety. Cultures
don't spring up by themselves. They need some kind of external validation.
So, you know, there's a literature on governance. I would really just briefly suggest that traditional
command and control regulation probably is not a good instrument for this, but there are a couple of
models, one being third-party auditors -- EPA and OSHA have talked about that in the chemical process
world. It hasn't gotten very far because of skepticism about the independence of the third parties. And I
encourage you to look up a term of art called "enforceable partnerships," which would be some model of
kind of like a consent decree without a precipitating event needing a consent decree, but an industry group
comes to government and says, "This is a code of practice that we think is reasonable, it's flexible, we will
follow it and we will sort of cede over to government the ability to peer out, make sure that we're abiding
by our own code of conduct." I think, in this case, with OSHA being so overmatched, there would need to
be some kind of funding mechanism as well to provide some troops on the ground to enforce that.
Fischhoff: Okay, thank you. And so two more quick comments and then a quick follow up here.
Dave Drew, Woodrow Wilson Center: I'm just a concerned citizen, overly curious. My comments are
specifically directed about public deliberation. Everything I've learned about GOF has been at this
conference and the few hours I researched it before, so I feel perversely qualified to talk about public
engagement in that regard. It struck me that public engagement could be seen as similar to upstream
engagement in that it is an effort by scientists to persuade the public around to the opinion that is the
consensus of the experts. So I just want to ask, is upstream engagement and public deliberation similar or
the same thing, and should scientists perhaps leave the public engagement on science issues and
developing understanding on science issues to science journalism and science journals?
Fischhoff: Thank you very much. And our last comment or question.
Megan Palmer of the Stanford University Center for International Security and Cooperation: First, I just
wanted to thank this panel. It's actually very refreshing and illuminating to see a specific focus on these
topics at the start of the day. I wanted to ask, in light of the focus to have specific recommendations, to
what extent can you recommend specific strategic interventions to allowing sustained scholarship on the
social and behavioral dimensions of research, given that there have been long-standing tensions in
integrating these types of observations, this type of work within scientific practice, and within the life
sciences in particular.
What, in your experience, have been essential elements to kick starting and sustaining that process, be it
just the resources allocated, partnerships with key individuals in the field, designing the databases in the
first place? Is there anything in particular? And is there anything unique, either for the benefit or the

detriment of gain of function research and the way it's been framed as a way to have an example of how
this type of research and partnership might be operationalized in the future?
Fischhoff: Let me thank you all for these excellent questions. I wish we had a lot of time. So, maybe just
one final comment from each of you, starting with Ruthanne.
Huising: So the issue of culture change is a very sensitive one in this particular context because we are
dealing with elites, we are dealing with highly educated elites who expect to have autonomy. They are not
-- let me say, there is not a lot of openness to ideas that come from elsewhere. So I really do
fundamentally believe that the ideas about the importance of safety and security in science are going to
have to come from some of the best researchers in each discipline. We need the leaders in these
disciplines to model -- we use this word, to model -- the importance of these values and normative
expectations in research. We need the journals to expect it. We need conferences to highlight it. It's going
to have to be pushed from within. The idea of some sort of partnership with outsiders or something that
looks like a consent decree, which is what we observed at a major research university, was not a
convincing argument for elite scientists to change the way they behaved.
I do think the idea of nudge is incredibly interesting and important, and I think it identifies a way that we
can collaborate. So, for example, I am currently designing nudge experiments that will be -- field
experiments within research laboratories to understand what kind of nudges can actually change culture.
But you can imagine the collaboration in which we modify the experiments to understand some of the
human factors research. So this is a very fruitful discussion.
Fischhoff: Gavin, a last comment?
Huntley-Fenner: Yeah, I just wanted to say briefly, I mean, I think one of my main points is that the
dearth of error data doesn't mean that what we need to know is unknowable. And so I take that to heart.
Someone asked a question about the safety hierarchy, and I wanted to sort of just briefly mention there's
kind of an interesting paradox that goes along with that. If you design out human error, you get the
Google Car, and you reduce error rates and accidents to near zero, but all the error you have left is human
error at that point. When you put a steering wheel in the Google Car, things go wrong. So, something to
think about. You'll always need to focus on the human error piece, regardless of the design trajectory.
Fischhoff: Monica.
Schoch-Spana: Tying some of Ruthanne and Gavin's comments with yesterday, some of the things raised
by Larry Kerr, I think the issue of best practices, particularly by those researchers and their host entities
and their relationship with the larger community in which they're situated, that there are best practices that
I think have yet to be captured, synthesized and put forth as best practice guidance around biosecurity,
biosafety, and biocredibility, if you want to call it that.
On the issue of is this just scientists bringing people over to the other side of the argument -- I think that
was the bottom line of that first question -- what's important about public deliberation is that in areas of
ethical uncertainty with technical complexity, where there's a sort of total interwovenness of the
technical and the normative issues, there has to be a shared dialogue. It isn't about bringing someone
over to the scientist's point of view, even if there was just one point of view. It really is that iterative
exchange of information to come at, hopefully, some type of mutually agreeable common ground. And
even if it's not your side of the argument, you can live with whatever the final decision is. And it's a little
bit different than just pure persuasion.
Fischhoff: Let me thank the panel and thank the audience. And we'll be back here at 11:00.

Session 7: Best Practices to Inform National Policy Design and Implementation: Perspectives of
Key Stakeholders in the Biomedical and Public Health Communities
Phil Dormitzer: So I think we're ready to start. I seem to be mic'd as well. So welcome to the next session.
This one is on Best Practices to Inform National Policy Design and Implementation: Perspective of Key
Stakeholders in the Biomedical and Public Health Communities. So this process, I guess, started a couple
years ago now as a debating society, and things were pretty heated. And it's become much more
deliberative.
The next step is actually policy that will be made, although not by this group. However, I think what this
group says and discusses will have a very large impact, particularly the NSABB recommendations will
play a very large role in influencing what that policy is. So this is, I guess, a time to be quite concrete
about what you think would be important to inform that policy, what those policies might be. We're not
here to make policy, but, on the other hand, we've brought together key stakeholders so that that policy
can be informed by the views of key stakeholders of communities.
And I won't give everybody's bios here, because they are in the program, but I think when you read those
bios you'll see that these people are extremely well-qualified to comment from their individual
perspectives. Ethan Settembre has an industry perspective, he is part of a company that has been heavily
involved in pandemic response. Michael Callahan is a physician who organizes the treatment of very high
impact infectious diseases in the field and is also very deeply involved in biosecurity matters. Jonathan
Moreno is a bioethicist and not a biological scientist, but certainly has a lot to say about these issues. And
Robert Fisher is with the FDA, and also deals with medical countermeasures, emerging diseases, and
counterterrorism, and can provide the perspective of a regulator on some of the options for influencing
what goes on in laboratories and elsewhere.
So I guess we'll go in the order of the program. So, Michael, would you like to start? We'll have about ten
minutes per speaker, a brief panel discussion, and then open it up to comments from the audience.
Michael Callahan: Thank you, Philip. I think we're off to a good start because my slides are on a USB
drive which dropped into a boiling hot cup of coffee in Brazil 48 hours ago. And then the second thing
that happened is that all of the case studies that I'll be sharing with you today needed to be signed off for
host country concurrence or institutional concurrence.
The data that we're presenting to you today come from a number of international partners. And I think
what our hope was from our discussion, when our group met, was to expand our thinking here on the NSABB
policy activities so we don't have to do it twice, or 168 times, once for each country, but so that the efforts
that are done here can be minimally revised in a way that meets the cultural and scientific mores of where
the burden of gain of function activities is really happening.
Many of you will recognize the truth, if you work internationally and in the industry sector in particular,
that the world is flat for bioinnovation. As we know, using our Chinese
colleagues as an example, they have sequenced more viral pathogens in four months than we have in the
entire history of Western Europe and the United States. So the pace and the rapidity of bioevolution and
the increased focus on sovereign health security are driving these unexpected types of gain of function
activities, which are largely beneficent or market-driven. I think we need to keep this in our perspective.
So I'm an ex-Fed, and so there's always a lawyer disclaimer slide, and now, within industry responsibility,
a Sarbanes-Oxley board of directors slide. But I think the humility that I have with you about gain of

function and how much we have to do is driven by my experiences with DARPA and its international
investments in highly dangerous pathogen management to understand what next generation biological
threats look like, and to get there early, get there clinically, get there from a laboratory capacity and
understand it.
But so, too, does Massachusetts General Hospital play a role in this, because I compete for federally
funded research. I'm a principal investigator for DTRA, the Department of State, USAID, and NIH, and
run programs internationally for the Center for Global Health. And so I understand what it's like for the
academics to be competitive. And for United Therapeutics to make antivirals that will outcompete other
antivirals, our drugs need to have flair. So we have a dengue drug in phase two that has been acquired by
another company. So academic, government, and industry experience has made me actually uneasy, more
uneasy than I was when I worked directly in gain of function many years ago.
So this slide we're going to zip right through. It's just to draw your attention to the motivations
internationally that spur gain of function research. And it's important to note that they are intentional and
largely focused on improving the quality of medical products. It should be stated early and quite clearly
that the majority of medical countermeasure development is shifting from West to East, and that many of
those Eastern anti-infectives, most notably the anti-bacterials, followed by the antivirals, are never
intended for Western markets. So we don't actually see those drugs.
And I mentioned this yesterday in the open mic session, but we need to avail ourselves of and understand
these drugs because we're going to receive those resistant viruses downrange. So we do need to do what I
believe it was Andy Weber, the former Assistant Secretary for Biodefense at the Pentagon, stated, "We
need 'Goint,' we don't need 'Humint,' we don't need 'Geneint,' we don't need 'Masint,' we need to get there,
cooperate, collaborate, we need to go." So I think that we need to think about that, because you cannot
phone this stuff in from Washington, D.C.
Our experiences overseas and our concerns are driven by staffing Asian venture capital. If I go there as a
U.S. government representative and I'm in Asian institutions, I see some pretty good stuff. I see not
alarming stuff. I go to venture in Singapore or Taipei, and I see really amazing stuff. So I'm going to show
that to you because it will, I think, underscore the importance of penetrating all the different stakeholders
that are assisting gain of function activities. And the last point is not just the intentional up top -- I think
that's been discussed -- but the unintentional.
We need to understand that by not organizing our interventions in a way that's meaningful, culturally
acceptable, and meets host country concurrence for sovereign health security of foreign nations, we might
have unintended events that lead to problems. And I'll share a couple of those problems with you. One of
the big ones, by the way, is that our vaccines are being constructed in a way that guarantees immune
escape, immune evasion, and detector defeat, and there are examples of this that are largely restricted to
the poultry sector of Asian vaccines.
All right, so I am not qualified to talk about the range of U.S. policy for the NSABB study, but I would
just ask that we reflect on the excellent comments yesterday for the non-aligned nations. They are our
customers for products as well, for public health security. And I think we need to think about those 112
countries. And I'm thankful for the comments that were brought up because they're the guys that told the
Defense Advanced Research Projects Agency that they will never bring a vaccine into
Indonesia unless it's halal. So we make halal vaccines. DARPA makes halal vaccines.
So, if you don't understand this at the get-go, you are going to promote incentives for domestic vaccine
manufacture with unintended consequences such as the use of low-cost, whole virus, inactivated vaccines.
That is where you grow up the threat agent, the same one we're spending billions of dollars under

cooperative threat reduction activities to keep locked away. They are being propagated quite aggressively
and then killed at the end and being turned into ways to protect economic investments in livestock.
We're going to spend a couple minutes on this slide because it captures a number of publicly disclosed
events, which you can read further about, and we also have the references two slides from now. The first
two cases, in the upper left, are the Bacillus cereus G9241 and -42 series and, of course, Ebola Makona.
And these two cases are examples of the distribution of wild type agents that have been promoted by a
lack of cogent and thoughtful policies by resource-rich biotechnology groups.
In the case of Bacillus cereus, most of the pulmonary anthrax cases that I've seen -- not cutaneous, the
pulmonary anthrax cases I've managed, which number 32 -- have belonged to Bacillus cereus, an agent not on the
Category A list. And, importantly, 42% of those patients died, and they died of anthrax, of anthrax
toxemia from Bacillus cereus.
So if you were one of the southern countries that has been denied access to the primers to identify Bacillus
cereus G series -- anthrax-like Bacillus, if you will -- you'll understand why they might propagate the strains
throughout the clinical microbiology laboratories so that they can look at the beta-hemolytic pattern that is
unique to this anthrax-like Bacillus cereus and is not found in anthrax. It allows them to meet their requirements
for host country reporting to the Ministry of Health of six Central American nations. So this is a problem
inflicted by not sharing reagents to support nucleic acid diagnostic capability in middle-income,
affluent, and medically sophisticated countries.
To the center of the slide is Makona, Ebola Makona of 2014. Reflect back: if you collected all of
the U.S. Ebola clinicians who had experience with Ebola and Marburg patients, they
numbered 103 in 2013. Now, happily, there are thousands of them because they responded to the outbreak.
So, too, did the chaos of that treatment environment and our inability to do prompt referral lead to clinical
diagnosis and the distribution of clinical materials to eight countries, including Chad, Niger, Mali, and
Kaduna State -- which is the medical center of the Islamic North in Nigeria; it is their Harvard, Hopkins,
and Stanford. So the distribution of those biologic samples, which have been kept cold, has just
undermined $1.3 billion of cooperative threat reduction activities.
My point of these first two examples talks about the need to unify U.S. policy between the events that are
happening in this room with all the other investments of the U.S. and Europe to understand that we need
to make these systems work together, to provide viable solutions, to reduce the wild-type distribution, and
that's before we get to bioinnovation.
When we think of gain of function, and if you go back to where it was first written, it was talked about in
Appendix three of the Biological Weapons Convention. Several of you were with us in those rooms in the
early 2000s, and I think you'll agree that nobody knows gain of function better than a former Soviet
biological weapons scientist. They worked on it for offensive uses. Now our cooperative threat reduction
activities are working to drive them to turn their swords into plowshares, and their advantage in entering
the Western markets and the markets of the FSU are to be better than we are, and to test those medical
countermeasures against the most robust and hardened threat agents, particularly the filovirus cases.
So an example for us: in 2002, back then, we could run around the viral hemorrhagic weapons labs at Vector.
So we could do that then. We could work quite closely with our partners then. And right now, due to
policy issues and politics, we're unable to engage them in really meaningful discussions like this one here.
The best case I have for you comes from Bioventure, and it is located at the three o'clock position here.
I'm sorry the laser pointer is a little bit meek. And that's a product that we found out about through
bioindustry and venture capital in an unnamed city in Southeast Asia. It is not China. It is not Taiwan. It

is not Japan. But it's one of the most affluent. And they were fielding market entry, radically destabilizing
technology advancements that would take over large markets.
And the example that they used, driven by McKinsey consultants and a bunch of Wharton
School MBA graduates, was that in order to go after the very lucrative cosmetic industry, the Botox
industry, you needed to go after the number one thing that the patients hated, which was repetitive shots.
And you do this by altering the botulinum toxin to make it last longer. So what did not show up in the
PowerPoint slide is the SNAP-25 cleavage site of the alpha subunit of the botulinum toxin, which is located
about halfway down the stem. And by hiding it from serine proteases in your body, it basically produces a
3X increase in the physiologic half-life and binding.
Okay, so it was not nefarious. It was economically driven, practically, by business reasons, and it poses an
obviously huge gain of function example to us. This group of young scientists appreciated that they did not
need to make it more virulent. So they wisely came to the cultural understanding -- the one that Ruthanne
brought up from McGill earlier today, that this cultural issue can really help you here -- but they missed on
the other part, which is longevity.
Two fast ones just to end. We also need to understand the self-infliction of not providing viable
alternatives in the very rowdy avian poultry vaccine markets of Southeast Asia, where the vaccines are
created locally. New clades of low-path and high-path H5 come through, along with Newcastle disease,
avian botulism, and avian cholera, which decimate the backyard terrestrial protein herds that are
critically important to the maintenance of protein security in the Islamic Republic of Indonesia and in
Malaysia. We cannot deny them the opportunity to come up with reasonable low-cost vaccines that are
generated at pennies per dose rather than the international option, which costs seven dollars a dose per
bird. A chicken in Indonesia has a half-life of about six months, and no one's going to invest in that
foreign import, so they'll create the vaccines locally.
Rather magnificently, they are pretty good at recombinant technology through segmental reassortment. So
they take their H5s -- both the new one found in Aceh and the current one found in Bandung -- and they do
in vitro recombination and get reassortment, and they select for it. And only after they get a winner do
they go through any sequencing technology, which is perfunctory and not the purpose of the conversation.
My point here is that if you want to prevent a recombination to make, in this case, a vaccine that will last
longer through subsequent rounds of future avian diseases, we need to get in the game and provide
reasonable alternatives. The Department of State did that with the purchasing and co-licensing of the
H5N2 vaccine, which allows us to look at H5N2-vaccinated birds to prove to the Indonesian government
that they are efficacious if they develop H5N1 antibodies, which could only have been conferred if they
saw the wild-type pathogen and the chicken did not die. This vaccine strategy, called DIVA
(Differentiating Infected from Vaccinated Animals), which is internationally recognized by the FAO, allows us to
provide a reasonable technology that will prove metrics inside these regional economies.
The last point is just, you know, how many taxpayer dollars we wasted trying to take highly dangerous
pathogens and lock them away in freezers in Africa while not understanding that, if you want to
protect the livestock herds of the Nigerian North, all those vaccines are grown anthrax that is turned into
dead product and injected into those sheep. And this is an example of sheep red blood cell agar
quantification of colony-forming units from just a couple of years ago.
Let me just end and bring this together with some policy alignment that we might act on, which might
crystallize what we're doing here in acknowledging that the West will scale perhaps a little bit more
reasonably to the emerging economies and to our collegial bioventure communities in ASEAN
(the Association of Southeast Asian Nations). The first thing is we have to get to bioventure. If you look at all
the cool stuff -- and that's a euphemism for things that scare you -- the cool stuff is funded by venture. It is

not typically Ministry of Health funded or regional NGO funded. It is funded by venture. There is a large
number of these. They've been produced and distributed to certain agencies in the U.S. government, but
they need to be put before the academic and industry experts. They need to read these things to
understand their implications.
Vaccines, we need to provide licensed, safe and effective, and inexpensive vaccines that don't require
cold chain. All vaccines made in Indonesia and Malaysia for the agricultural market are cold-intolerant.
So what they need to do -- this is one way in -- is through thermo-stabilized vaccine technologies that we
can bring to them. Why not just give them export vaccine design? In an example that is four years old
now, the government of Indonesia allowed the sequence -- not the virus -- to leave Indonesia, come to a
U.S. federal agency. Just the sequence for hemagglutinin was turned into subunit vaccine product, which
was GMP, went back to Indonesia and was tested on birds that were challenged by wild exposure. Okay?
So we live in an age where sequence-in-place is becoming the norm, where pathogen exportation is
definitely looked down upon by the 112 non-aligned nations. And so we're going to see fewer of these
virulent viruses being sent to our major facilities in Europe, the United States, and Canada. And so if we
understand that, we can roll with it. We can adapt and provide bioinformatics, big data analysis systems,
and distributed manufacturing bases in the synthetic and cloud space.
The last two points to end with. The first is U.S. government performers at USAID, DARPA Prophecy, and
the Center for Global Health protecting our international collaborators from R01-funded investigators who
seek to do nothing more than get a virus, go home, and write their big Nature paper. If we do not
start putting this into the source selection criteria for awarding those contracts, and into building capacity and
maintaining stable, not unstable, relationships with our international partners, then we're going to
perpetuate the problems that we're having right now with Ebola and the problems that we're having right now
with Zika.
Do you know how many strains of Zika we have in the United States? Nine. Do you know how many
Brazil has deep sequenced right now? Sixty-one. Do you know how many Panama has right now?
Thirty-seven. Why are they averse to sharing them with the United States as members of the United Nations?
The answer is they've been ripped off, okay? They've been ripped off and they will tell you these cases
and they will point out who the offenders are. So we need to think about that as part of the NSABB policy
for our federally-funded researchers.
And the last is just incentivizing our host nation compliance with the policies that are happening in this
room by doing what we failed to do for all our CTR years in Russia and China and Brazil and South
Africa and Nigeria, and that's provide a metric for these activities that they can measure, that has value at
home. It saves chickens. It saves the cattle and livestock. It reduces infant mortality. It protects mothers
that are pregnant. Okay. And so I'll just end with further reading. That's your reference slide. This is all no
longer for official use only. These are publicly available documents, mostly on the website. And I'll turn
your attention to "Sequence-In-Place: An Engagement Strategy for Maintaining Sovereign Health
Security," which is one of the premier documents that is missing from our dialogue here. I went a minute
over. I apologize. And I'll wait for your questions at the end of the session. Thank you so much.
Dormitzer: Thanks, Michael. And Robert, you're up.
Fisher: So, first, I'd like to say thank you for the invitation for FDA to come up and speak with you and to participate
in this panel. I like to think that FDA is somewhat uniquely positioned for this discussion in part because
we do research, we do fund research. Believe it or not, we actually have laboratories. We have capable
laboratories. We have talented scientists. But the real reason I think we're here is because of our main
mission which is to really ensure that the medical products and all the other things FDA regulates that are
made available to the American public are safe and effective. And we have a variety of regulations that
apply to how we evaluate these and how we make these available through our regulatory mechanisms.
This can range from the randomized clinical trial where you're evaluating it in patients to the use of
surrogate endpoints for accelerated approval. And if there's no other way to do it, if there's no ethical or
feasible way to use those other mechanisms, we naturally rely on animal efficacy data in some
circumstances.
Regardless of the regulatory pathway that's being used to push a medical product forward, the FDA needs
to rely on data. We need evidence. We can't determine if a product is going to be safe and effective
without the appropriate science being done to provide that evidence. So, with that said, if the gain of
function framework that's under discussion is really framed in terms of looking at agents of concern or,
you know, particular pathways of concern, the vast majority of things that come into the FDA aren't going
to be impacted by this.
Now, with that said, for that very small part that does -- the potentially high impact work, which would be
things like pandemic influenza, Coronaviruses, and those unknown threats -- I think I can provide a couple of
examples of circumstances under which there may be some impact on the FDA from the framework. And
I think actually Michael did a really great job of setting me up for this because some of these tie right into
what he was just saying. The first is for the production of vaccines, a lot of times the seeds are produced
from molecular clones; okay? And depending on how those molecular clones are generated, I don't think
it's hard to imagine that that could actually fall under the gain of function umbrella, which then has
implications in terms of how quickly that can happen.
Also, if you consider that -- and we've seen this before -- if you have a circulating strain of influenza,
pandemic influenza, and you take that and you try to grow it in eggs or you try to grow it in cell culture
and it doesn't want to grow, well now you're in a situation where you're going to have to adapt that virus
to grow in the cells or the eggs that are used in mass production techniques that will allow availability of
large amounts of vaccine that can then be used to protect the public from that threat.
Now, how do you get around these challenges; right? How do you keep things from getting caught up in
the framework and getting delayed, because in a public health emergency you don't want that? I mean,
we've seen that with influenza. We've seen that with Ebola. Well, at the October 2014 NSABB meeting
someone actually said that potential high threat gain of function research should not be pursued without
access to vaccines or drugs that can mitigate or ameliorate disease. I think another way of thinking about
this would be to consider that risks associated with high threat gain of function research that's deemed to
be critical to understanding a pathogen or deemed to be critical to making these countermeasures
available in a timely fashion could be mitigated using appropriately stringent safety controls.
So, with that, you know, another tool that the FDA can bring to bear when we're approving drugs,
vaccines, medical products is that we provide advice to our sponsors, either through frequent meetings or
through our published guidance. We also meet with our USG stakeholders to make sure everybody's on
the same page and cooperating in terms of the Public Health Emergency Countermeasures Enterprise. It's
always better to deal with these challenges early in the medical product development pathway than it is
late. We don't want something to come across the FDA's bow at the 11th hour. And having a vaccine
advisory committee meeting probably isn't where you want to start having these discussions. So I think
the FDA is looking forward to continuing to work with the relevant stakeholders and our USG partners to
make sure that NSABB supports a framework that will allow the flexibility so that when we are faced with
these public health challenges, we can respond appropriately.
Moreno: So when I was invited to be on this panel, I did what I almost always do when somebody from
the academies calls me to ask me to do something in this building, I said, "Sure." Then I realized that I
was going to be on a panel with people that actually knew what they were talking about, and that raised
some concerns. And then I saw that I was on a stakeholder panel, and I of course asked myself, "Well,
what kind of stakeholder am I? What's my stake?" I guess the obvious one is the bioethics professor, but
I've decided to be a different sort of stakeholder today, trying desperately to lower your expectations. I'm
a health care consumer, somebody who used to be called a patient, and therefore I'm very much at the
mercy of people, like most of the people in this room, who do know something about these topics. And I
tried to find a slide to succeed this one that I would have labeled "One of the Great Unwashed," but I
couldn't find a really polite one. But that's what I am.
I first heard about gain of function, I think, not long after the term was coined, which I believe was
around 2011 or 2012. And I realized that if I was going to talk about this in the future, I would have to
learn something. So I'm putting myself, today, in the role of somebody who's not a scientist -- I'm a
philosopher and historian -- and try to summarize some of the points that I think I have learned about in
the last few years. And I think I have seen a developing set of conclusions, rough conclusions -- I hate to
use the word "consensus" -- that the community I've observed, namely you, has largely reached, and
some outstanding questions. So let me try that.
Along the lines of lowering your expectations, there's this concept in the philosophy of science of
fiduciary knowledge, and basically what that means is we all rely on other people who really know
something. We rely on their judgment and their ability to work together and figure out, as one particular
presidential candidate has said, "If we all work together, it's going to be beautiful." So I'm relying on -- I
rely on you, just as we all rely on the architects and the engineers and the mathematicians who put this
building together, that this exquisite room will not collapse while we're sitting here talking politely with
each other. So I'm in this position, you know, I'm trusting that, for the most part, the experts will actually
get to the right place.
So let me summarize quickly what I see as some of the outstanding questions for you, that you have
raised, and that one hopes the report ultimately -- NSABB's, or whatever the academies come up with -- will help
us with. So gain of function, not a great term -- I think there's pretty good agreement about that. It doesn't
really hit the mark. It reminds me a little bit of the term "cloning." Now, "gain of function" hasn't gotten
on the cover of Time Magazine. George Lucas hasn't made a movie about gain of function yet, although
just wait. Nonetheless, like cloning, you know, nobody really liked it, and certainly the people who met in
this building ten years ago to try to sort these things out didn't like it particularly, but it got a certain
purchase in the public discourse. And gain of function has kind of done that, too, but it doesn't seem so
great. I'm really happy to hear that Botox -- thank you, Michael -- we got gain of function from Botox.
That will save me some money over the next few years. But it's not a great term. There are always gains
and losses of function. How can we think about natural mutations in this context, escape mutants that are
resistant to medication?
Since the term "gain of function" is fundamentally not the best term, it's very hard to arrange policies.
You know, the intuitions don't flow well. So you have these policy issues, lots of talk, of course, I've heard
over the years about safety records and lab accident risks, how much do we really know. My post-doc,
Nick Evans, has written about this. So how good is the reporting? Probably not so good. Between the
lines also, people worry about what biological safety level ought to be required, some sort of BSL3+ or
BSL4-. These seem to be outstanding questions that could be addressed.
Lots of disagreement about other acceptable alternatives for vaccine development. Are gain of function
methods always necessary? And, of course, there's the time lag between a pandemic like Zika or Ebola,
and do we need basic science as compared to ring vaccination and some of the things that we -- or
isolating affected communities? Nonetheless, there is some agreement. So, much regulation fails to hit the
mark. I think I mentioned that already. There was the Novartis example a few years ago. But everybody
agrees, I think, that some regulation is needed. Even I am subject to regulation. I do survey research once
in a while on informed consent in clinical trials, or I have done that, and every couple of years Penn
reminds me that I haven't taken the quiz to get me certified to be an investigator at Penn, and I get very
annoyed. And then I have this Pogo experience, you know, "I am my own enemy." But I generally take
that quiz, and sometimes I pass.
Biocontainment, not great. Can we do better? Not a perfect record. Everybody agrees that there's some
stuff that gets out that ought not to. And then there are the human problems; I'm really impressed
by the remarks this morning about the fact that so many of the errors are human errors. The same is true
in cybersecurity, by the way. Most of the problems we have in cybersecurity seem to come from human
beings, not from the systems themselves, which I think is interesting. People have said over the years that
really it's a pain in the butt but you can do risk mitigation. You have to think about it a little bit. If you
really want to do some work, there probably is a way to make it a little safer. You just have to think about
that. And sometimes there are alternatives. They may not be the best option. We may not be able to
answer all the questions in science we'd like to answer. That is not a new fact in the culture of science.
But maybe we can come up with alternatives that are nearly as good.
The looseness between genotype and phenotype, I think that's very interesting. I generally get
vaccinated, I guess, for flu every year, but this made me realize why I sometimes don't bother, because
I'm not so sure. Whether pre-pandemic strain selection is transformative was the big discussion a couple of years
ago in this room in new vaccine development. I think the consensus seems to be, well, it's not short-term,
maybe not even mid-term, but in the long-term we need to leave a space for something like anti-function
for long-term improvements in vaccine development.
Human beings are vulnerable right now. I've heard several people say -- give examples, for example,
about SARS-like Coronavirus. That's a problem right now. Maybe there are some areas in which we
should be moving faster and other areas in which we should be moving a little more cautiously. And then
animal model development is another example I've heard people talk about.
So I'm trying to say something constructive and relatively concrete. I was thinking about what would be a
new way to do real-time assessment in basic science labs. So what would be the triggering criteria?
NSABB has come up with these three criteria. I recognize that there is a debate about what the logical
connective is, is it an "and" or an "or"? It seems to me, speaking of public engagement, this is a fantastic
opportunity to do targeted public engagement. Would people like me, members of "the great unwashed,"
would they tolerate the condition that all of these criteria should be existing at the same time or would
they really prefer us to be more conservative about this and only one criterion at a time? I think that's a
great opportunity for some kind of public deliberation.
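The "and" versus "or" question about the triggering criteria can be made concrete with a small sketch; the three criterion values below are purely hypothetical, chosen only to show how the two connectives diverge:

```python
# The "logical connective" question for the NSABB triggering criteria:
# does review trigger only when ALL criteria hold ("and"), or when ANY
# single criterion holds ("or")? Hypothetical assessment of one proposal:
criteria = {"criterion_1": True, "criterion_2": False, "criterion_3": True}

triggers_under_and = all(criteria.values())  # permissive: all three must hold
triggers_under_or = any(criteria.values())   # conservative: any one suffices

print(triggers_under_and)  # False
print(triggers_under_or)   # True
```

Under the same assessment, the "or" reading flags the proposal for review while the "and" reading does not, which is exactly the policy difference a public deliberation would weigh.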
So what would be the precedents for the model? I'm building up to something I want to suggest. Those of
us who've done work in venture capital as consultants, or when I did a project with Gates a few years ago
on their global health work -- I was struck that they had adopted a milestone approach. So perhaps we could ask our
scientists who are doing a project, what's a milestone in this project that you think you, yourself, would
like to see a moment of reflection on whether you take the next step? What's an important -- what are the
decision points, as it were, for you to decide whether you go further in this project? Again, this is all
based on the notion of the triggering criteria that I had up before.
We see this in phased clinical trials that IRBs are supposed to oversee. Innovative surgery, generally it
doesn't happen -- not supposed to happen, in my view at least -- with many candidates at a time. You do
one patient at a time and reassess. You do two or three and reassess. And then, of course, DSMB, the
DSMB model is based on the notion that there are limits on continuing. So I'd ask -- in this new model, I
would ask the investigators to set up some milestones. I was very interested -- there was a brief discussion
a couple of years ago about this particular practice, the Biosecurity Taskforce at the University of
Wisconsin, multidisciplinary team, basically focusing on the higher risk, big magnitude labs, and
establishing and applying risk mitigation principles that are communicated -- discussed with the team.
So there's another model as well, I'm going to give a plug to our colleagues at Stanford, in bioethics, this
Benchside Ethics Consultation Service -- some of you know about it -- where a geneticist, ethicist, law
professor will actually be on call for the team in a lab, and they will go and talk to the PI and her or his
team, and they'll talk about what they're doing in the lab and whether this is a moment for further
reflection. That, along with the previous model in Wisconsin, has caused me
to think about what I call R-BATs.
Now, R-BATs are going to drive people bats. I didn't think about that at first when I made up the
acronym, but it does work pretty well and I think it's probably true. Nonetheless, this is aspirational.
Maybe for various reasons it can't be done. I heard a number of people over the years talk about the need
for some kind of dynamic, real-time, iterative process. This is for the lab. This is not going to be for the
application process, if there is a pandemic. Perhaps based on milestones that the team itself would
develop.
Maybe unannounced audits -- nobody likes that, but maybe yes, maybe no, depends on the situation.
There would be case specific risk benefit parameters. A lot of people have said in the last few years we
really need to do this case by case. This may have some educational function. It might be
voluntary. It might be mandatory, depending on what the funder wants. I thought that some idea like this,
to actually bring people into a lab, something like what goes on in Wisconsin, something like a hybrid of
what goes on at Stanford, might be something useful for the lab.
Now, of course, I don't have in mind that these are going to be dispositive. They're not DSMBs or IRBs.
The notion would be -- and I've done this when I was on staff of advisory committees, presidential
advisory committees, when you're working with agencies, you ask the agency team to tell you how have
you proved to us that you have considered these questions? What are you doing? What are the questions
that occur to you? What are you doing to address them? That's the spirit of it. It's a collegial, iterative
process. And this would be, more than paper compliance, a dynamic real-time process. The
bodies actually go in the lab and talk to people.
The problems -- pretty obvious -- more bureaucracy, you might say, although in my ideal world I think of
this as colleagues going in to talk to other colleagues. Obviously there are payment issues. The Stanford
group got some funding to get this off the ground, and I think they have some ongoing external support.
And it doesn't solve the rapid response problem for the emerging pandemic virus -- not intended to do
that. But it's intended to address some of the concerns that people have about the utter independence and
non-transparency of what's going on in a lab that's doing work that meets one of those criteria. So that's
the news from "the great unwashed," and I look forward to hearing more of the discussion. Thanks.
Settembre: First, I'd like to start off by thanking the organizers for inviting me to speak on this topic. I am
indeed the -- Ethan Settembre, I am the head of research for Seqirus. Seqirus is a flu vaccine
manufacturer, which is a combination (merger) of Novartis vaccines and bioCSL. And I point that out
obviously to say where I fit as a stakeholder, but also I'm really part of what is a large global system to
generate flu vaccines in a timely manner. And I'll describe the system as a whole, to put it in context, and
then, of course, the role that manufacturers play, and then some of the gaps that exist where
improvements can occur. And those sorts of improvements, I'll describe them in some degree of detail.
So this is just a sort of brief snapshot of the global influenza system. And, indeed, it's a system that
involves 141 national influenza centers from around the world, where samples are obtained, they're sent,
perhaps sequenced there, but generally sent to six WHO collaborating centers, which are also distributed
around the world, where antigenic characterization takes place, as well as sequencing, to determine
whether something's new, we've seen it before, and whether or not we have existing vaccines for it. And,
indeed, there are a number of other processes that happen in those centers and other related centers where
there's various events to make a vaccine virus strain or a candidate vaccine virus all before it really gets to
the manufacturing site.
And then the manufacturers, of course, generate the vaccines using a variety of methods, generally eggs
or cells, where the virus is grown, inactivated, and then perhaps further processed. And then, finally,
there's still a release that occurs, both within the manufacturing but through a series of essential regulatory
labs that also themselves have to make a series of reagents, which are used and distributed around the
world. And at the very end, of course, there's a vaccine, but that vaccine, in that case, would just be sitting
in a manufacturing site. There's a whole series of steps that then occurs where it needs to get to people
and actually get to be used, because, after all, the best vaccine is the one that's used.
So you can see this is quite a complicated system, with many players, both public and private players,
which leads to a variety of steps, propagation delays and other aspects just because of the nature of the
system itself. And so that is how the system is set up. And then one may ask, "Well what happened most
recently in a pandemic situation?" So, here, we have a prime example, relatively recently, of the 2009
H1N1 pandemic. And, actually, in this effort the global system worked quite quickly, considering the
virus itself was first recognized March 18th in Mexico, and by June 11th there was the
declaration of the pandemic. But before that there were, in late April, clinical virological assessments,
then vaccine strain preparation, testing, evaluation, and initial seed viruses. Some of the seed viruses
involved reassortment, a process by which the HA and NA -- generally, although it could be more
than that -- of the candidate virus of interest are reassorted onto a known low-pathogenicity
common backbone, generally PR8.
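The reassortment convention just described -- HA and NA from the virus of interest, the remaining six segments from the PR8 backbone, commonly called a "6:2" reassortant -- can be sketched as follows. The segment names are the standard influenza A genome segments; the function and strain labels are purely illustrative:

```python
# The eight genome segments of an influenza A virus
INFLUENZA_A_SEGMENTS = ["PB2", "PB1", "PA", "HA", "NP", "NA", "M", "NS"]

def make_reassortant(candidate, backbone, from_candidate=("HA", "NA")):
    """Compose a seed-virus gene constellation, taking the listed segments
    from the candidate virus and all remaining segments from the backbone."""
    return {
        seg: (candidate if seg in from_candidate else backbone)
        for seg in INFLUENZA_A_SEGMENTS
    }

seed = make_reassortant("candidate-virus", "PR8")
# HA and NA from the candidate, the six internal segments from PR8:
# the classic "6:2" reassortant used for vaccine seed viruses.
assert seed["HA"] == "candidate-virus" and seed["NA"] == "candidate-virus"
assert sum(v == "PR8" for v in seed.values()) == 6
```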
And, ultimately, those were transferred to manufacturers. And you can see, in each step, there are many
hands that are involved. There's places where handoffs occur and there's a lot of work that needs to be
done at a variety of groups in concert, which is very complicated. And you can see that -- although
clinical trials actually were running along -- it wasn't until September that the process of batch
release finally occurred, and then the shipment of the first doses out to where they
needed to get to people. And, of course, that was the ultimate goal. And this was quite quick in the overall
scheme of things, four to five months, particularly in the case of a pandemic.
But what does that look like? Even though the system, as a whole, worked quite well in moving forward,
getting a pandemic vaccine out, if you take a look here, what you'll see in black is an increase
based on the number of cases per week in the United States. And you can see that represents the peak, the
wave of the H1N1 pandemic sweeping across the United States. And what you'll see, that yellow bar that
begins to increase at around September very slightly and then really more meaningfully into October and
further, that's actually the number of doses that were delivered.
And so what this sort of represents, and I think it's pretty clear from where you're sitting and certainly
from where I'm standing, which is that we were in the situation of vaccinating the survivors of the first
wave, because actually no meaningful quantities were really available prior to the peak itself. And so
there's a real need, particularly in the case of a pandemic -- although this applies in seasonal
situations as well -- to speed up the production. And there's a variety of ways to do that. And, in fact, it
was because of this that, at Novartis vaccines in particular, we took an interest in determining whether we can
speed up this process. And the place where we noticed that great speed could be added would be at the
front end of the process. And, initially, the start for a manufacturer of the process of generating a vaccine is
waiting by the mailbox to receive the candidate vaccine virus from the collaborating centers or from the
center that you'll receive it from. That's one way to start.
An alternative is, with the availability of HA and NA sequences, to generate the vaccine virus itself on
appropriate backbone that actually could work well in your process. And so at Novartis vaccines we
worked with the J. Craig Venter Institute and Synthetic Genomics Vaccines, Inc. (SGVI) to come up with
a synthetic process for generating synthetic influenza viruses, designed so that they are actually on
backbones which would be attenuated but would, therefore, allow for speed, accuracy, and, ultimately,
high yield. In all cases they would be attenuated -- the backbone
itself is an attenuated backbone. So it's one of the ways that we consider that we're actually generating
vaccine viruses that can be used, and really address an immediate important medical need in a short
period of time, so something where it's really essential that we deliver ahead of that wave.
Now, this was all work that was done following the 2009 pandemic. We had a more recent opportunity,
and this was in particular for H7N9, where we could apply this actual methodology to determine how
quickly it can work in a situation that would be similar to what would be in a pandemic situation. And, in
fact, using that methodology, it was actually March 31st, Easter, where the viral coding sequences were
first posted online on GISAID by the China CDC. And this information sharing is really what can enable
the beginning of the process of being able to move very quickly into the process and create proper
countermeasures in a reasonable period of time to address the needs. And you can see that, working with
our colleagues, who were actually in California at the time -- we were based in Cambridge -- began synthesizing
the genes there. HA and NA gene synthesis started and, moving into April 2nd, gene assembly was
complete, and it was shipped, actually still in the mail, to us, where we generated the viruses themselves
and saw first rescue on the sixth of April.
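The dates just given can be tallied directly; the year, 2013, is an inference from the H7N9 outbreak and the Easter reference, as the speaker does not state it:

```python
from datetime import date

# Dates from the talk's H7N9 example (year 2013 inferred from context)
sequences_posted = date(2013, 3, 31)    # HA/NA sequences posted on GISAID
first_rescue = date(2013, 4, 6)         # first rescue of the synthetic seed virus
wild_type_received = date(2013, 4, 11)  # CDC received the wild-type virus

# The synthetic seed existed days before the conventional process could begin
print((first_rescue - sequences_posted).days)    # 6
print((wild_type_received - first_rescue).days)  # 5
```

So the synthetic route produced a rescued seed virus in six days, five days before the wild-type isolate that starts the conventional reassortment process had even arrived.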
So you can see that's a big time difference, especially considering that it wasn't until April 11th that the
CDC received the wild-type virus, which would be the start of the process for generating the conventional
vaccine reassortants. And so this is just an example of the time that one can cut off from the traditional
methods by which one can deliver vaccine so that it's used in an immediate need situation, not just for a
pandemic but actually more broadly in seasonal situations as well. So I would argue, from the
perspective of the way in which this work is done, that this is one of those activities that would not be
considered gain of function of concern, particularly as it relates to the concerns of getting it to people in
time for use prior to a pandemic wave. So I leave you with that. And thank you very much. Happy to take
questions along with the panel.
Dormitzer: Okay, thank you, Ethan and the rest of the panelists. I want to make sure we have plenty of
time for public discussion. So I think we won't go through all these questions, but I think maybe three of
the questions quickly. And we don't need to go down the whole line. This is going to be more of a quicker
meeting. If you feel inspired by a question and have something to say, that will be fine. The first one is
there is some consensus, there seems to be, that not all gain of function research is gain of function
research of concern, and not all of it requires new policy. This does not seem to be the case where you
know it when you see it, because, clearly, different people can look at the same experiment and some
would say it's of concern and others not. Are there any thoughts from the panel on what does constitute an
experiment of concern in a way that's somewhat formalizable, that you can actually define, in some way,
what the proper scope is?
Callahan: Okay. I was going to draw out that pause. So much power there for a minute. Ethan and I had a
little quip to the side here. He might not agree with this because he didn't have a chance to respond, but
we noted at the beginning that, you know, the actual trigger mechanisms for what is gain of
function are actually one of the major motivations to pursue gain of function, and that's that there is no
medical therapeutic. And so that's the criterion here, but it's exactly the driving force that there's a new
unmet need market. So there's a twist when we try to explain this internationally, that we don't want you
doing gain of function research because you don't have a medical therapeutic. And, as the Russians said to
us, "That's the point." So, from a pragmatic standpoint, we shouldn't have to be like them to understand
the optics externally. We should critically review it for its complete end-to-end reasonableness.
So, as for what I consider gain of function, we're going to be meeting again and again because new
technologies are changing what qualifies as gain of function. Remember that the U.S. perceives that it's the
country that invented artificial amino acid encoding. But the only groups that have turned that into
products are three Asian countries, okay? So we're way behind. And so consider gain of function in the synthetic
biology space, if we don't even have the gene code that allows you to organize artificial
amino acids to confer novel folding patterns in proteins that can do novel things that were never allowed
in nature before. So this is referred to as molecular hysteresis for the virologists in the room. And there's a
big concern and we barely have that technology in the United States. That is an industry proprietary
technology to produce winning and enduring highly resistant proteins. So I slipped a little bit away from
answering the question by telling you we still won't know what it is next year because there will be a new
technology that we will need to worry about that we need to contend with. So our definition needs to be
general, adaptive, and culturally appropriate for foreign scientific communities. Could you agree
with any of that?
Settembre: Yeah, no, I would agree with some of that. But, you know, I think actually it came up in more
of the discussion yesterday, somewhat around risks and benefits and the difficulty in defining really what
the benefit is. There's difficulty, obviously, in the risk as well, but the difficulty is in defining the benefit.
So I think that really it has to be a bit of that discussion around, and perhaps that relates to we need to do
this work so that we can generate the countermeasures. And so it's a little bit of back and forth. So, in
any case, I sort of link those two together, the risk and the benefit, which is very difficult to determine.
Fisher: Yeah, I think we really need to look at it from what's the consequence, right, because if you're
looking at the consequence, that can be informed by what's the consequence of doing the experiment and
dealing with that result versus the consequence of not doing it and not having a countermeasure when
something happens.
Moreno: I was thinking, you know, if I'm a member of Congress, I would really want to be sure that you
know how to keep this stuff in the lab until you come up with something that will fix it. So the
containment issue, I would say as a member of the public, we've seen this before in recombinant DNA,
that was a big thing in the '70s, to keep it in the lab, and I think that's still true from the point of view of
the general public.
Fisher: I remember when I was -- I am showing my age -- when I was a kid, I had a copy of "World
Book." Remember, it was, like, natural encyclopedias. And in one of those editions, the "Science Year"
actually was discussing recombinant DNA research. And, you know, what they were talking about for
recombinant DNA research looks a lot like what we have for our BSL3 and 4 labs now.
Dormitzer: Maybe this is related to the next question. Given that there is still definitional uncertainty over what is gain of function research of concern, among the tools that are available to influence what goes on in the U.S. or outside the U.S. -- you know, U.S. government-funded or non-U.S. government-funded -- what do you recommend for tools? And the R-BAT idea is, I think, very interesting, but are there other tools that you would want to introduce, or some that might not be appropriate, given this situation?
Moreno: I'm only good for one tool a day, so, you know, tomorrow. It's on everybody else.
Callahan: So many of the tools are so encumbered by proprietary intellectual property that they're
intentionally not being made available. And I use the reagents that are used in our own industrial
processes as well as those of our collaborators overseas. They have competitive advantage from having an
unfair advantage by having the critical reagent access to a strain to make a hemagglutinin that confers
immunity across many clades of H5. And so we have to change the incentive plan to produce more
players, more stakeholders, and therefore more solutions. So I think we should, again, listen very
carefully to industry, foreign industry, working with foreign markets. And I hasten to add that a growing
number of these innovations are for therapeutics that are never intended for USDA and FDA approval. So
they just aren't -- the market is elsewhere.
Fisher: So, you know, kind of taking off my FDA hat a little bit and speaking as a scientist who's dealt
with select agent regulations, who's dealt with DURC, who's dealt with some of these things, you know,
from my own perspective, one of the things I would like to see -- and I believe you touched on this,
Jonathan -- was to bring all these frameworks together so that we're not talking about gain of function in a
vacuum away from DURC, away from all these other things. And I believe yesterday, one of our European
colleagues mentioned how the EC approaches this, and it seems they have a more global unified approach
to looking at the big picture in terms of this is your research, what's the impact?
Dormitzer: That brings us to the last question before opening up the mics, and that is I think we're
reasonably clear on the intended consequence of the new policies, and that is to protect the public from
the unintended consequences of gain of function research when done for beneficent or economic
purposes, or the intentional consequences if done for nefarious purposes. But as new policy is being
considered, in addition to considering the intended consequences, it's important to consider the
unintended consequence of new policy. And are there any unintended consequences that you might
foresee or particularly be concerned about that you would want those making the policies to be aware of?
Settembre: Well, certainly for me, from the global system perspective, in influenza, many of the processes
that are commonly performed really, depending on how one reads it, could be considered as gain of
function, when really they shouldn't be, because they don't really fit the categories. And, in fact, if you
weren't able to do the traditional re-assorting that's done, generally we would not be able to have enough
vaccines for those who need it in a timely manner. So, for me, that would be one of those unintended
consequences.
Callahan: I'm fine with that.
Fisher: I was going to say sometimes I think it's better to pay the piper now versus later. For example, if
you're talking about a gain of function experiment that could inform rational vaccine design, if you do
that now, that may preclude the necessity to do those gain of function adaptation experiments later. So, in
terms of the unintended consequences, I think anything that results in uncertainty or results in knowledge
gaps, those need to be carefully considered.
I think maybe with that we'll open up the mics for our questions from the audience.
Hi. Wendy Hall, Homeland Security: I had a question, Mike. I appreciated your comments yesterday
reminding us of China and the innovation there, and that the values they hold might not be equivalent with the values of the universal scientific community, or with the U.S. And I'm looking at being a recipient of the
NSABB and other recommendations that come in from the public to shape policy. In that discussion,
what is the best way to make sure we're getting the full picture of risk and benefit, pros and cons, and
importance for governance to include the private sector dimension and venture capital dimension?
Because, as you know, the USG is tapping into the NSABB for input, mostly from the academic
community. I don't know if we have an equivalent tool in our policy tool kit to get stakeholder input
looking at private sector equities. Any ideas?
Callahan: Great. Hi, Wendy. So nice to see you. I think we heard a little bit about the instruments being
used to manage gain of function throughout ASEAN, and George Gao made the important point
yesterday that they're actually policing their own systems with nationwide recommendations in the KCUs
so as to prevent viral twinning studies, to prevent segmental re-assortment. So that was nice, because it
talked about enforcement. We haven't been talking about enforcement here really.
But in terms of the ways to more thoroughly engage them from your current position, for example: you have to understand all the stakeholders and their motivations. Our successes internationally in reducing gain of function studies where these accidents occurred came because those studies were driven by other issues. For example,
for the DoD program, DARPA programs for the Blue Angel, which was the pandemic H1 rapid vaccine
system, that vaccine needed to be made halal or we were not going to transition it to an Asian-based
manufacturer. So that was an early engagement in having the Islamic clerics tell us all of this and tell us
what's halal and not halal, making sure the expression system in nicotine -- it's Nicotiana, by the way -- you know, was very important. But then we got it exactly right. And because we got it right, we shut down a major investment through ASEAN in halal vaccine manufacturing. We didn't know we did that.
We undercut them and we were earlier because of the speed of this platform. So, accidental, but we'll take
it. We'll take it as a win.
I was quite emphatic about the role of Asian venture capital and FSU venture capital. It is unbelievable
what they are talking about in those venture meetings. And I think a lot of it is likely overstated, a lot of it
is overly optimistic. Those companies will die, just like our own, you know, in round one and round two,
in the equity cram downs. Those forces are at work. But if we can provide some economic reasonableness
so that the responsibility of the for-profit group and the public health motivated investor includes
prevention of gain of function using metrics they can use at home, then I think we have a better chance of
doing that.
In Russia, as soon as we talked about this version of gain of function, the conversation was over. D.I.
Ivanovsky Research Institute of Virology, Kirov, Institute of Highly Pure Preparations are no longer
participating in gain of function. It's because of this language. It's just like our biodefense language, you
know, and trying to justify so much of our infrastructure for the biodefense enterprise. But that's -- we're
straining too much for a complicated discussion.
So, keeping the language where we keep them involved in the dialogue and giving a little bit, I think we
have to be resilient in our assertions of what is gain of function and accept the fact that these developing
countries need to protect their own animal stocks and their people and need to comply with their own
sovereign health security, and they can only do that with cheap, inexpensive, effective vaccines, which
involve gain of function-level principles such as massive propagation, high virulence, and conferring
resistance across multiple strains.
Dormitzer: So, if I understand -- I want to make sure I understood what you're saying. When this comes
up, they don't want to talk about it. It's not that they don't want to do the gain of function work, they don't
want to discuss it anymore?
Callahan: This is an international regulatory tax on the ability of countries to protect their animal food
populations, followed by human.
Dormitzer: Got it.
Jerry Epstein, DHS: This panel has really focused on the benefits of gain of function, but I think it's important to look at what the NSABB workgroup has come to now. We have a lot of confusion about
what the term means. There was a lot of discussion in the past about experiments that probably never
should have been captured. I'd like to have this panel crystallize on what I think we're going to be moving
ahead with, which seems to be -- if the NSABB recommendations take it -- a more restrictive definition,
and whether or not you think there should be a third criterion. The third criterion does not appear to be
resistance to countermeasures. It was reframed as a different third criterion. So I'd like to start with that
narrower set.
But I also want to talk about some other gain of function benefits that have been talked about in the
discussion previously that I don't think have been mentioned here. This is sort of the vaccine production
process, most of which, as I see, is gain of function of the old type and not of the type we're now focusing
on. So I think, largely, the discussion of the panel isn't intersecting much with where it looks like the
workgroup is going.
But we've also heard about the importance of having surveillance and having the ability to predict
mutations that might confer pandemic potential, and therefore attributing surveillance systems, and
therefore attributing control systems. And I haven't heard much discussion about those at all, the relative
importance of that type of gain of function experiment that would -- the important or unique ability of that
type of research to identify mutations, and the ability of the global surveillance system to act on those
mutations, and the ability of control mechanisms such as poultry elimination other than vaccine
production. And I'd like to know if any of you can talk about those sorts of benefits of that restrictive part
of gain of function that appears that the working group is focusing on?
Callahan: I was going to give it to flu, but okay. Thank you, Jerry. This has to do with just our
experiences in uncommunicative countries. The requirement for them to participate in surveillance they
do at their own peril. These Ministry of Health, public health officers, and physician officers in foreign
nations are rewarded for their own ability to produce control. This led to the fictitious reduction in Ebola
rates in two African countries in the middle of the epidemic at a time when those rates were going up.
They were being rated weekly on how many cases they had and whether or not their programs were
viable. So we also know that the Program for Monitoring Emerging Diseases (ProMED) is rife with
industrial sabotage and false reporting of outbreaks that actually aren't occurring. So anybody can put an
avatar in ProMED and say a chicken farm down the road has had an outbreak, and therefore take over that
commodity market for a short period of time.
So, even our surveillance -- oh, and, of course, recent events, a very prominent biosurveillance group is
now in harm's way because of events involving their reporting mechanisms. So there's -- you know, these
are casting huge prejudices and concerns about U.S. promulgated surveillance, that it's been used
allegedly for tools of economic advantage of American and other Western companies. And the second
part is that under the International Health Regulations, the host country has their ability to control its own
disease reporting, just as the U.S. does. And a great example to show the inequity in the reporting is the
demands that we make for MERS reporting but our unwillingness to actually share strains for the
progenitor viruses with MERS endemic countries and their labs. A great example is to look at how many
flu strains the U.S. government exports compared to how many they import to look at that. We also have
the CDC and WHO reference labs, and do the seed stocks and all that, but this is a continual problem that
we have in justifying our demands for surveillance. So this has led to "Sequence-In-Place," which we've
talked about, the non-aligned nations, and also "don't test, don't tell."
The example with "don't test, don't tell," we know what a MERS patient looks like, absolutely. We've
seen hundreds of them. But only tens of them are reported, and the reason is there's no advantage to those
endemic countries with MERS to report them. They are prosperous. They are internally funded to develop
their own vaccine and monoclonal strategies, and to invite BARDA in to do a phase two study. So,
absolutely, they're on their way, but we cannot -- I can't use the word "trust." We have to understand that
we've produced the wrong incentive plan for faithful, transparent reporting in the surveillance space, and
that's why we have to be there. We have to breed collaborations with young investigators here and there,
Europe and Africa. And because those countries are more difficult to work with, those are the exact countries
we have got to work with. You know, Iran is a perfect one. We can get DoD money into Iran; what's it
take? French people! French investigators in Tehran are how we work, and we fund the French, and the
French fund Iran, and then we get a Leishmania vaccine, which is really important for Iran. So let's think
of these novel twists in the system that will allow us to create -- exaggerate our similarities, not our
differences. That's my riff on that.
Dormitzer: Next question, please.
Gavin Huntley-Fenner: I wanted to ask a question about how this panel today intersects with the panel
from yesterday where we talked about the potential for international collaboration on standards, et cetera.
It was interesting to hear one of the speakers talk about the difference between biosafety and biosecurity,
and the potential -- possibly a greater potential for collaboration over biosecurity. Although, today, I think
we're hearing something different. I was wondering if you could talk about the ramifications of your
perspective for the possibility of international collaboration.
Dormitzer: Okay. I think that may be up your alley again.
Callahan: Okay. I apologize. So, great question. And I was reflecting about great diplomatic catastrophes
in the Appendix three review with our excellent colleagues in the FSU where it took until day two in the
afternoon before they realized what biosecurity meant. Biosecurity, in this biosafety/biosecurity meeting,
to them meant the protection and immune hardening of the personnel working in their BL4s, which are
Russian BL3s. So, wait a minute, 48 hours into an international meeting involving the G8 and we don't
even have the term right. And, you know, they have a very different segmented evaluation of the elements
of biosecurity. They now know that Americans mean locks, fences, freezers, and classified research. And
they point to our NBACC and our IRF and top security clearance service and new scientists, and they're
like, "How can you tell us that you're being transparent?" It is a hard thing to answer. It is a very hard
question to answer.
So we operate in trying to demystify it. And, happily, much of the biodefense 2003-era efforts are now
focused on emerging infectious disease, and so the same responsiveness, magnitude of mass casualty
infections, and poisonings, the same timeline issues from the influenza portfolio are being brought to bear
against the Zika, the chikungunya, the dengue 2, and Asiatic flu, moving through the Americas, MERS,
Bundibugyo Ebola, followed now by the Makona strain. So we have a regular opportunity to get this right.
It's the beauty -- the tragic beauty of emerging infectious disease, but I don't think we're capitalizing on
that and getting the language out there in a way that they're really enjoying. They really have problems
with American exported language that is hurting their own ability to conduct research.
Dormitzer: Next question, please.
Greg Koblentz from George Mason University: Just to follow up on the question from Gavin and some of
the comments made by the current panelists. A lot of what we spent today is talking about biosafety or
biosecurity, or dual use research/GoF oversight. But all of these areas are interconnected and they
overlap. Is it time to break down some of the stovepipes between these categories and start thinking more
holistically about something like bio risk management, which would help you integrate the things you
have to do in common for all of these areas that would maybe get around some of the terminology issues
we've had, and would allow you to think about how to redesign the system in order to improve bio risk
management, bio risk reduction more broadly, as opposed to just tackling each of these problem areas
separately?
Fisher: I think that's a great idea. I think that touches on something Jonathan brought up, and I mentioned
as well in that uncertainty is always a challenge, right, and coming from the FDA, one of the worst things
you can do to a sponsor is introduce uncertainty. And I think, you know, in terms of the scientific
enterprise, again, if you have scientists that are on the fence about whether or not I should do this
research, if you have a bar that's set up here, it's like, "Well I'd like to do it, and I wouldn't mind filling out
all those forms and engaging with the committees and all that, but I'm not exactly certain how to do it, I'm
not sure if I'm going to get pushback." That's going to be a huge hurdle. So if the construct is to bring all
this together to reduce the uncertainty, I don't think I have an issue with that.
Dormitzer: Maybe we can wrap up with one sort of final question. I think, Jonathan, in your talk, you
talked about some areas of consensus, and, at least among this group of people on the stage right now,
there seems to be some consensus that there's a desire not to do things that make it difficult to respond to
emerging diseases, outbreaks, and produce medical countermeasures more than is necessary. Are there
other areas of consensus that the panelists see that -- because it is getting close to policymaking, so I think
it's good to not only focus on areas of disagreement but where do you see consensus in addition to what's
already been brought out?
Settembre: Well I guess I'll just say one of the points that I think I'm sort of hearing, and hearing also in
the questions, is actually about information sharing, you know, making sure that the right people have the
right information. And, in fact, that's really, as part of the global influenza system, information sharing is
really key for making good decisions about what to do, particularly in the case of what's the right
countermeasure. And so I would say that, sort of broadly, information sharing and assuring that actually
happens is probably key. And I just sort of hear that amongst the points that people are making.
Callahan: I agree. I agree absolutely. I think that one of the things that promoted information sharing was
the efforts of Flu Division, Nancy Cox in particular, who helped and made sure that hemagglutinin
sequences were available to all. So those would eventually get through the National Center for
Biotechnology Information (NCBI). She felt a duty to, if she was receiving a strain of consequence from
Indonesia or elsewhere, that CDC's job was a quick turnaround, and the CD, back then, would then be
Federal Expressed back to Jakarta. So we're on the Jakarta side of that, witnessing that, and that produced
a very close partnership with the CDC that was unlike any of the other partnerships with other federal
agencies that were pandering for, you know, collaborations. So, structural elements are key points.
And, you know, I talked about exaggerating similarities. I think that we're just -- the West, including our
European partners, are still -- we have poorly pixelated views of the things that are driving gain of
function. We think we know. We get committees together, but I don't see an Islamic cleric chemist in the
audience. We don't have -- you know, Persia has a really awesome homegrown manufacturing capability.
Vietnam makes all of its own vaccines for childhood preventive illnesses in Pasteur's legacy, the Pasteur
Institutes. And these are groups that have played different roles over time in helping us maintain our
connectivity for times of international health crisis. So, happily, if there's a silver lining to catastrophic
infectious disease, as you can be sure it's going to happen again, so each time we have one, minimal or
not, is an opportunity to get it right, get a little bit farther along, go for the junior faculty, not the senior
faculty, and grow your collaborations and try to make them last decades, not through one hour, one cycle.
Dormitzer: Well, thank you very much. We're wrapping up on time, even one or two minutes early, which
is terrific, particularly since lunch is the next event. So thank you very much, gentlemen.


Session 8: International Governance: Opportunities for Harmonizing GoF Research Policy and
Practice

Atlas: Welcome back to the last of the formal sessions. My name is Ron Atlas. I'm a member of the Steering
Committee, and this particular session is a follow-up to yesterday's session that Barry Bloom chaired on
the International or global aspects of the questions surrounding gain of function.
Yesterday, we really did hear the perspective from Europe primarily with a perhaps more global view
coming from the WHO. But it was very much a discussion of what has been a robust discussion within
Europe on gain of function.
In today's session, we extend our consideration into Asia and the Middle East and Australia. And given
that the three viruses that have been a central focus of discussion in the U.S. and focus of the pause of
funding -- being influenza and SARS and MERS -- it's very appropriate to gain the perspectives from
Asia and the Middle East since that's where those viruses are rapidly evolving naturally and where their
work is presumed to be helping in the eventual prevention of a pandemic caused by some of those strains.
Unlike the previous sessions, none of our speakers brought slides. We're just going to be talking heads-up
here; and we're going to start off with George Gao, China, and ask for his perspective on how, first of all,
China might harmonize with anything the U.S. does; how can there be cooperation; what are the
mechanisms for that; and also to press him a little bit on the perspective for SARS and influenza viral
work.
George?
Gao: First of all, I also would like to thank the organizers who wanted me to be here to share some of our
thoughts, maybe my thoughts, about gain-of-function research. So why don't I have any slides? Because
I found this slide is much better than mine, so I'd like to keep this one. It's the best; I keep it.
The first point I want to make and to discuss with you, I already made something yesterday. So we are
discussing something about the risks and the benefits -- what kinds of risks we would encounter, what
kind of benefits we can gain from all of this research. So I will give you an example. I mentioned it
yesterday a little bit, but I'll give you an example about H7N9 influenza, which we encountered in 2013.
When you think about the evolution or the mutations of any given viruses, like H7N9, because I'm a lab
guy, I did a lot of work on MERS, Ebola, and influenza for the whole (inaudible). When you try to spot
the mutations in the genome, you can imagine, based on whatever you obtained from previous research,
you thought, okay, this is the mutation, this is the amino acid that might be responsible for the mutation,
for example, for the H7N9 to switch from the avian receptor binding to the human receptor binding.
So then you say, okay, this time what we isolated are so many different kinds of viruses. So we see the original one, which binds only the avian receptor; at the later stage, we've got something better for the human receptors. But then in place of all this, H7N9, H101, H2 and also the H3 -- so you imagine, the virus will go that direction. If you have another mutation at that position of the gene, of the H gene, they might obtain that transmission ability to human beings. But now, four years on -- it never happened.
The point I want to make is do we, as human beings, have enough capacity to test all these hypotheses?
That means we are risking the test, but what kind of benefit or gain we can get from all those mutations.
You can imagine, you can do a lot of experiments; but you might miss one important one. That one, in the
later stage, that nature will select for this virus. So my conclusion would be maybe it's not necessary to
spend a lot of money to do all these experiments, even the loss of function.
So as scientists, we want to do something. So can we direct the scientists to do something else?
For example, now we have Zika. Why don't we try to attract those top scientists to move from their own interests -- Ebola, MERS -- to come to work on Zika or something else? I think by use of their expertise,
their experience on the gain-of-function research or natural selection research for some virus, to work on
some newly-emerging virus. And maybe that's the solution for the scientists' usage because a lot of our
research is driven by the usage, just when the scientists have the interest. So this is my first point I want to
make.
Because in this session we are discussing international governance, now I want to give you an example.
International collaboration, coordination, and governance are very important. We need to harmonize all these efforts together.
In 2012, when I'd just moved from the Chinese Academy of Sciences into the China CDC, we had one
case and two scientists -- one in China CDC, the other one from Harvard -- so one American, one Chinese
scientist. We are working together. So they are doing some tests for the golden rice in China. Golden rice,
or transgenic, is something good. I don't want to speak more about the GMO (inaudible), so that's another
topic.
However, what I want to say, international governance is important because when you look at the
agreement in Chinese and in English, they are different copies. So when they tried to sign an agreement
with the parents of the Chinese kids, they did the test; they claimed something else. Instead, when they
made advocates for (inaudible) for the grant, they launched something the other way. So that means we
didn't communicate well.
The scientists -- like this morning we discussed -- scientists, they are human beings, so sometimes they
want to hide something. This is why yesterday I also said something about we needed to do the oversight
or supervision of the process, not just the [particular situation] so give them the chance or right to do
some experiments. We need to do the monitoring of the whole process; that's also very, very important.
So this is what I want to say for all of this first.
Then, how can we do the international governance? I believe we need some meeting like this. So I'm here
today only on my behalf, maybe a little bit on the Chinese Academy of Sciences. I would encourage we
should have some government-level meetings, like at least the National Academy of Sciences here with
the Chinese Academy of Sciences, have the third -- this is the second -- symposium together. So our
voice must be heard by the top officials, not just in this country but also in China and also any other
countries because this is very important for the policymakers.
In China, we also are facing some problems. For example, by the regulation of the Ministry of Health, any
scientist, you are not allowed to mix two viruses together in a single cell. However, there's no such
regulation in the agriculture sector. So nowadays, new Zika virus is very difficult to see. It's human-positive, also animal-positive. And we are also required to do some harmonizing of regulation between the
agricultural sector and the human or health sectors. At the moment, we have some problem for that.
So my last point I want to say: a gain-of-function experiment must be highly regulated. We do want the scientists to do all of these kinds of experiments because of their usage, so maybe we need a highly-regulated lab, like the U.S. CDC or Army lab -- or in China it's the same, the Army lab or China CDC -- to set up some kind of experiment, to attract the scientists to get into these highly-regulated labs to set up all of these experiments.
Anyway, those are the points I want to make. I'm ready to discuss with the audience after all my friends
here make their points. Thank you very much.
Atlas: Thank you.
Next Gabriel and we'll get a perspective from Hong Kong.
Gabriel Leung: Thank you, Ron. And thank you to the National Academy of Sciences for bringing me
here all this way.
I'm going to focus my remarks on the following headings. First, I'm going to argue that we should put
gain-of-function research in context. Second, I'm going to talk a little bit about the global dimension,
which I believe is critically important to our considerations here. Third, I should like to talk about a
reality check. And finally, I'm going to have a few specific reactions to the past 48 hours and some of the
points that have been made.
Let me start with context. I just alluded to the fact that I just got off a plane and am about to go back on a
plane in a few hours. So let me take the 35,000-foot view of things.
Our discussion is one of risk to human and, to a lesser extent, ecological security. Second, by their very
nature, infectious pathogens do not respect national borders, hence are global. And if we put together
those two facts, or at least assertions, we are therefore considering a global security issue and not simply
a U.S. national health security concern here. As such, I would encourage you to consider reading the
National Academy of Medicine's recent report, The Neglected Dimension of Global Security, a framework
to counter infectious disease crises.
Now, if we take what I've just said in the preamble as motivation, the primary outcome of interest is
surely how to keep the global population safe from the potential consequences of gain-of-function
research; that is, how to keep the global population safe from dangerous pathogens -- and by dangerous, I mean either highly virulent, highly transmissible, and/or highly resistant, according to the NSABB
report -- that may eventually enter into human circulation.
Now, we agree that that is the primary outcome of interest. At this point, I should like to introduce the
concept of hazard analysis and critical control point, or HACCP, which would be very familiar to those of
you who come from a food safety enterprise perspective. In fact, the HACCP idea really dates back to the
'60s, when NASA invited Pillsbury to supply food for space missions. In a nutshell, HACCP -- almost if you take it literally -- really means that they're looking for the weakest link along the entire supply
chain.
So where is the weakest link in keeping the global population safe from the impact of dangerous
pathogens, including those that may arise from gain-of-function research? I would argue that it would be
the lack of country preparedness, as per the international health regulation requirements, let alone the
more stringent GHSA framework.

When a significant minority of countries around the world self-declare that they're not yet IHR compliant,
and with a further self-reported but unverified compliant proportion likely not reaching the accepted
standards, I think herein lies the Achilles heel of keeping the global population safe from dangerous
pathogens.
Now, while this should in no way be an excuse for more loosely considering gain-of-function research of
concern, we should view the potential risks of gain-of-function research in context. That is, there is plenty
we can and, in fact, must do first.
For example, strengthening the health protection function of national and subnational health systems
would be much more impactful in relative terms toward the same end of keeping the global
population safe from dangerous bugs. In fact, by the third criterion of what may constitute gain-of-function research of concern, pushing for universal IHR compliance would, as a happy corollary, boost
the mitigation capacity such that the proportion of gain-of-function research that would be deemed of
concern is minimized.
Now, I move on to my second heading of the global dimension. Because of the second assertion I made in the
preamble -- that is, the high degree of externality inherent in the nature of infectious agents -- a
globally harmonized order on gain-of-function research is a prerequisite to achieving global human
security.
And on this point of internationalism, I note, and have learned from the discussions yesterday, that EASAC
issued a relatively more permissive set of recommendations a few months ago, whereas largely tacit
responses remain from other countries whose labs have been involved in gain-of-function research thus
far. Some have even called it a deafening silence from the rest of the world.
So while we are engaged in this deliberation primarily for the purposes of U.S. domestic concerns, and
while NIH, NSABB, the U.S. Government, and the regulations that we are talking about carry enormous
sapiential authority around the world, one really shouldn't overestimate the impact. Going back to this
idea of where the weakest link is along the entire spectrum: even if we do our level best
and put in very stringent regulations in this country, or for that matter stop all gain-of-function research of
concern, or stop all gain-of-function research period, it's not going to stop the risk to the global population
of having an incident elsewhere in the world, where these regulations don't and cannot
reach.
Now I go to my third heading of a reality check. I'm a self-declared realist. Overly burdensome across-the-board restrictions may well lead to unanticipated consequences. Human frailties will reveal
themselves; and oftentimes, the most dangerous gain-of-function research and experiments will be driven
below the radar, whether outside the purview of public sponsorship or moved entirely outside of the U.S.,
where there will be less stringent oversight and monitoring. So we should really worry about what we
don't see and not necessarily what we do.
I also worry about the big chill specter over all of gain-of-function and related research, which is not at all
healthy for science, mostly due to the uncertainty in how the broadly articulated findings and
recommendations of NSABB will be translated into policy guidance, then executed by IBCs and IRBs on
the ground at the coalface; i.e., where is the proverbial red line going to be drawn? And I think here,
whatever your position, clarity must be key.
Now, some specific reactions to what I've heard and what I've read and what I've learned from all of you.
Two fundamental issues that I should like to comment on. The first is the fundamental question of really
the raison d'etre of gain-of-function research. In other words, is there any unique scientific value or value-addedness of gain-of-function research; or can alternative methods exhaustively derive the same
knowledge set?
I'm not a molecular biologist and would defer to experts who are much more qualified to comment than I.
But as a general principle, it would be a brave pronouncement to preclude the possibility of gain-of-function research's unique contribution. I've learned from colleagues who do gain-of-function research, or
at least are familiar with that line of work, about the two famous, or perhaps infamous, H5N1 studies that
have essentially brought us here together today. The mutations identified in proximity to the hemagglutinin
fusion peptide have since prompted further characterization of the fusion pH of the ferret-adapted
virus.
And in turn, they have provided a link to other, follow-up research that has facilitated the current
understanding that acid-stable pH is important in transmissibility in ferrets and, by extension, in
humans. And this understanding has now allowed us to use phenotypic assays to assess the fusion pH of
the hemagglutinin rather than individual mutations as one of the markers of mammalian adaptation.
The second fundamental issue that I should like to discuss regards the third dimension of
defining gain-of-function research of concern -- that is, resistance, or not being amenable to control by existing
mitigation measures, or pandemicity. I'm not entirely sure, first of all, which of the three is going to end
up being on that particular dimension or that particular axis.
But the question I have is: Is this third dimension really orthogonal to the other two dimensions of
transmissibility and virulence? Is there mileage, for example, in better defining or making more
encompassing the two truly orthogonal axes of, one, ease of spread and, two, lethality, as the product of
interactions between agent and host as opposed to being just innate properties of the agent only?
Now, regarding recommendation two of the four in the NSABB report, which states, and I quote,
"Oversight mechanisms for gain-of-function studies of concern should be incorporated into existing
policy frameworks" -- in other words, do we need another layer of special oversight regulations?
I haven't thought about it thoroughly enough nor long enough to come down with a decision, and I keep a
very open mind. But I'm always wary of possible unintended, or perhaps even intended, consequences. It
may well become an implicit or tacit way of bogging things down in a death spiral of red tape. And if
so, we should be at least explicit and upfront about it.
Besides, practically speaking, there is really no stronger enforcement instrument than a public funder --
i.e., the NIH -- in terms of ensuring compliance. Financing has always been a powerful behavioral
modifier.
As for the worry about NIH or other potential funders having a role conflict in judging gain-of-function
research of concern, given the NIH's statutory remit as a publicly funded Federal body whose mission is
to serve the public interest above all, I should hope that it shouldn't have any conflict of interest.
And my last observation or reaction: given that the only quantitative assessment is that of the biosafety
component, whereas the biosecurity and benefit assessments were both qualitative or at least only semi-quantitative, the risk/benefit analysis -- i.e., the risk versus benefit tradeoff -- is far from a trivial
subtraction exercise. So in other words, while the exercise by Gryphon was a necessary and very useful
step, it has not, at least to me, resolved definitively the original questions posed. So it makes it doubly
difficult for us, really, to draw direct lessons from the RBA exercise, except that we now know much
more explicitly how much more we don't really know and how much more we can't quantify.

And here are my final thoughts. While what I have just said may appear to cast me on the side of the
permissive, for the record, I actually agree with and am very sympathetic to many of the excellent arguments
of the Cambridge Working Group. In fact, I cannot be a more emphatic advocate for responsible science
with proper and sufficiently robust oversight, but not so much that it would squeeze the lifeblood out of bona
fide scientific inquiry.
Where that most delicate of balances pivots or, in other words, where the red line is drawn, should be made
crystal clear down to the littlest detail and subject to continuous fine-tuning as we learn how to achieve an
optimal balance, whatever that may mean.
Thank you very much.
Atlas: Next I'm going to ask Nisreen to give us a perspective out of the Middle East.
Nisreen AL-Hmoud: Good afternoon. I'd like first to thank the organizers for inviting me to this meeting
to give the perspective from the Middle East regarding gain-of-function research of concern.
I'm the Director of the Center of Excellence in Biosafety, Biosecurity, and Biotechnology at the Royal
Scientific Society of Jordan. And RSS is the largest applied research institution in the region and the regional hub
for biosafety and biosecurity activities. And based on that, I was asked to discuss the following questions.
The first question was: What are some of the opportunities and mechanisms for discussion and potential
action regarding the governance of gain-of-function research?
The second one: What do you think are the risks and benefits of gain-of-function research on MERS
virus?
And the last question: What processes do you think should be put in place in the MENA region to ensure the
safety of gain-of-function experiments on these pathogens?
I'd like first to start by saying that while the life sciences are becoming an ever more important part of our
lives for health and nutrition, fuels, and industrial materials, the Middle East and North Africa region lags
behind other parts of the world in addressing issues in life sciences and research to ensure that natural
diseases are detected and contained as soon as possible; that harmful and unintended consequences of
research are minimized; that labs operate safely both for their workforces and for communities in which
they are situated; and that plans and infrastructure are in place to respond effectively to biological
emergencies.
The ongoing controversy surrounding highly pathogenic avian influenza virus and MERS coronavirus
research has generated considerable discussion and debate among biologists, public health scientists,
biosafety and biosecurity experts in the United States and other parts of the world. However, in many
regions there is considerable need for awareness raising of gain-of-function research of concern, not just
for life scientists but also for lab directors and policymakers.
Scientists and policymakers often have difficulty defining the risks that need to be
assessed or addressed in relation to life science research, and particularly gain-of-function research of
concern. For maximum benefit to society, policies and practices aimed at reducing and managing
biological risks should be planned in a holistic, whole-of-government manner as part of national biosafety
and biosecurity strategies.

So many countries, such as the United Arab Emirates, are fairly advanced in their federal and (inaudible)
planning. Others have not yet begun the process of creating a national strategy. National action, while
absolutely necessary, cannot always be sufficient to contain or manage biological risks. The need for
effective, concerted, supranational efforts means that cooperating countries need to have a common
understanding of the global and regional risks, which in turn requires a common risk assessment
methodology and common prevention activities.
While biological risks do vary from region to region and from one country to another in the same region,
without a common methodology for assessing risks and for relating appropriate policies and practices to
manage and mitigate these risks, any international effort will be neither concerted nor effective.
So the need for comprehensive biosafety and biosecurity strategies has been explicitly recognized by the
countries of the MENA Region. And this has happened at the first Biosafety and Biosecurity International
Conference held in Abu Dhabi in 2007, where the countries of the region acknowledged their need for,
and the desire to, establish such strategies nationally and regionally. The conference ended with a final
statement, which listed many areas in which the region needed to develop capacities (inaudible) with
regard to biosafety and biosecurity.
In late 2008, the list was developed further by a core of a group of interested experts and champions from
the region into a framework document. The document maps a process by which countries of the region
can develop their national and regional biosafety and biosecurity strategies. The development of a
national and regional biosafety and biosecurity strategy enables the MENA countries to identify the biological
risks to which they are exposed and to address them through the development of appropriate legislative
systems and human and physical infrastructure, and by improving national preparedness and contingency
planning.
The approach is a holistic, whole-of-government one, with a view of all biological risks across the
spectrum of natural, accidental, and intentional threats as they pertain to humans, animals, plants, and the
environment, including water.
While MERS coronavirus is a potential threat to global public health, it's not the only coronavirus
threatening the Middle East region. Little is known about the antigenic relationships among the different
coronaviruses or how these relationships influence the capacity of different genetic strains to emerge in
human populations. Diagnostic tools and some information on clinical features of, and risk factors for,
MERS coronavirus are now available. However, there is limited information on the sensitivity and
specificity of the diagnostic tools; and many clinical questions remain unanswered, including the route and
time course of infection, pathogenesis of disease, and treatment options.
Also, epidemiological questions are not fully answered, including identification of animal reservoirs and
possible intermediate sources of human infection, and the relative importance of different modes of human-to-human transmission -- for example, fomites and aerosols -- as risk factors for transmission and infection.
Thus, basic research priorities for MERS coronavirus could include interpretation of pathogenesis;
understanding coronavirus biodiversity and the mechanisms that regulate
potential for cross-species transmission; constructing panels of representative heterologous viruses to
design, develop, and test broad-based vaccines and therapeutics; and finally, improving translational
outcomes of vaccines, therapeutics, and diagnostics.
Building scientific expertise via research collaboration and training opportunities is essential in order to
address the above research priorities. In this regard, a process needs to be developed that would enable
responsible and rapid sharing of research resources and data among the scientific community.

Potential issues of biosecurity and select agent status would also need to be addressed. Life sciences
professionals in the region need to learn from the experiences of other regions of the world, to
adopt best practices, and to develop networks of experts.
In MENA countries, there is a necessity to disseminate best practices in research institutions; strengthen
human and lab capacity for handling, importing, and exporting biological agents; improve standards and
oversight of gain-of-function research; and involve practitioners in the development of better regulations
at facilities.
However, to implement a systematic program for gain-of-function research that is effective and sustainable,
a certain infrastructure at the national level has to be in place as a first step to support and implement
these programs. Such programs should have considerable regional and international benefits in the form
of reduced risks of pandemics and epidemics of any nature and from any source, be it natural, accidental,
or deliberate, by enabling earlier detection of, and reaction to, outbreaks, resulting in earlier control and
elimination, and so fewer casualties and a considerably lower risk of the outbreak spreading to other
regions of the world.
Second, better responses to biological crises through better risk protocols, better education, and better
preparedness reduce unintended consequences of research activities and policies.
Third, a greater awareness of the issues and the policies through better communication and wider
adoption of best practices and codes of ethics.
Fourth, better governmental policymaking and policy choices reduce biological accidents through better
biosafety and biosecurity standards and practices; and finally, reduce the risk of intentional biological crises
through better design of security systems and procedures at biological facilities.
Finally, a step forward is to identify or establish such partners or channels that can assist in the
implementation of these programs regionally and internationally. Thank you.
Atlas: Michael, an Australian ethical perspective.
Selgelid: Thank you, Ron, and thank you to the National Academies.
The ethics analysis white paper that I wrote pointed out numerous ways in which it's important to have an
international outlook when thinking about the ethics of gain-of-function research. And that can include
things like the need for engagement with other countries, harmonization of practices or policies across
nations, and maybe even international decision-making.
One thing that was revealed in the literature review I did was that there was a common theme, or a
consensus, that we need broader consultation in decision- and policymaking regarding gain-of-function
research. So one kind of criticism about things that had transpired earlier in the debate was the concern
that too much of the decision-making or debate had taken place among scientists, primarily
microbiologists, and that there hadn't been enough input from the general public. And that in moving
forward, we need decision-making and debate that involves a much broader array of stakeholders. And
many authors explicitly said that should include involvement of stakeholders from other countries
because gain-of-function research affects countries globally.
So that's one thing that we might be thinking about and asking for and expecting; and that is, input from
other countries. But is that enough? There's a real question about who is or what is the legitimate
authority for making decisions and policy pertaining to gain-of-function research? Could it be the U.S.,

for example, so long as it's getting input from other countries? Is that enough? Or is there reason for
thinking that at least some decisions should be made by an international body? The point being that gain-of-function research -- the benefits and the risks of gain-of-function research -- affects the global
community at large, and that might give some concern if there's a perception that a particular country is
making key decisions that affect all countries.
Yesterday, Dr. Stanley raised the question about whether or not some especially difficult decisions should
be made at the institutional level or the federal level. Should we, for example, have a FACA committee making
some key decisions or involved in some key decisions?
We should keep in mind that another alternative is decision-making at an international level by an
international body -- an already-existing international body, or one that might need to be
formed -- to answer at least some kinds of questions.
One reason why gain-of-function policymaking requires an international outlook is that gain-of-function
research poses issues of global justice. Justice is about the fair sharing of the burdens and benefits of
societal cooperation. There might be concern about gain-of-function research that imposes risks on all
countries if the benefits that might result from that research aren't going to be fairly shared. For example,
if the new pharmaceutical or vaccine products that might be made available end up being unaffordable to
some of the countries that shouldered the risks required for making those benefits possible. So this is an
issue of global justice regarding benefit sharing.
There is also the matter of the fair sharing of risks. Some countries are likely to be exposed to greater
risks from gain-of-function research than others. That will especially be the case in situations where the
gain-of-function research involves pathogens for which there are control measures -- cures, treatments, or
vaccines -- that are available in rich countries but aren't so affordable or available in poor
countries. Gain-of-function research on pathogens like that will arguably pose greater risks to poorer
countries.
And even in situations where the gain-of-function research might result in a new pandemic pathogen
for which there aren't existing control measures, specific treatments, or vaccines, it might pose more risks
to some countries than to others.
At the last National Academies meeting on gain-of-function research, it was suggested that in cases like
that, the risks might be equally shared among all countries. But even if we're talking about pathogens or
diseases for which there aren't cures or vaccines, often having good basic health care can make some
people less vulnerable to the diseases in question. So in the case of Ebola, for example, we've seen that
supportive care makes some difference. So that's a reason for thinking that the risks might not be fairly
shared, and that's a matter of international justice.
Another reason why an international outlook is important is because when we're thinking about what the
standards should be for things like biosafety, we should keep the international arena in mind and the
debates about the ferret flu H5N1 studies and the questions about whether or not research like that should
be done in BSL4 as opposed to BSL3.
Some authors pointed out that it might be setting too high of a standard to demand that research like that
gets done in BSL4 because if we set a standard like that, we would be ruling out relevant research in less
wealthy countries. And there was the suggestion that that would be inequitable. So there are a whole
bunch of reasons for thinking about an international governance and having an international outlook when
we're thinking about decision and policymaking regarding gain-of-function research.

These matters have been previously recognized. There was an earlier report by the National Academies, the
Lemon/Relman report, which very much emphasized, front and center, the extent to which doing this
research raises international problems needing international solutions. And gain-of-function research is a
subset of dual-use research.
There is also an important WHO document called Responsible Life Science Research that was published
in 2010. And this document was really quite advanced and ahead of its time in some ways. It was largely
a document providing guidance about dual-use research, but it was already talking about lots of aspects
of biosafety concerns. There was the dual-use debate, which largely focused on concerns about
malevolent use; and then there was what I call the gain-of-function turn, which is when more attention
started getting placed on biosafety concerns as well as concerns about malevolent use.
So this WHO document, which is international in outlook, was already taking both kinds of concerns into
account. There are several important messages in that document, some of which were echoed in things
said yesterday; and that is, we shouldn't necessarily expect that there is going to be, or should be, a
one-size-fits-all approach to the governance of research raising biorisks. The language of "cookie cutter" was
used yesterday; we shouldn't have a cookie-cutter approach to this. Different solutions might be
more appropriate, or more feasible, or make more sense in some places rather than others.
Another suggestion in the WHO document Responsible Life Science Research is that we should always be
looking to piggyback on existing regulations and/or governance structures. So an example is that if we need
additional oversight of research with an eye to concerns about biosafety or biosecurity, well, maybe we
can expand the work of IBCs or research ethics committees rather than setting up new committees to
scrutinize research for threats pertaining to biosafety or biosecurity.
This guidance document also highlighted different levels in the governance hierarchy for research raising
biorisks and dual-use research in particular. Dual-use research raises questions for individual scientists,
and there are certain kinds of decisions that individual scientists need to make about how to manage
potential problems associated with dual-use research.
Research institutions -- that's another level of governance for dual-use research. And so there are decisions that
institutions need to make about how they're going to manage these kinds of dangers, insofar as they're at
liberty, given whatever the regulatory regime is.
Another level of governance is scientific communities, and scientific communities need to answer key questions --
scientific bodies, for example, need to decide whether or not they want to develop and
promulgate codes of conduct, and what the content of such codes of conduct would or
should be.
And then there is the level of domestic governance, and particular decisions governments need to make. For
example, to what extent do they want to be regulating -- imposing hard restrictions, requiring education or
oversight, and so on and so forth? And one thing specifically included was the idea that there's a key role
for funding bodies in this, and the suggestion that funders should be taking the risks of research into account
when deciding which projects to fund; i.e., other things being equal, it's better to fund a less risky project
than a more risky project. So these were quite advanced, I think, suggestions in that document.
One of the things I was asked to talk about today is: How should we move forward? How can we achieve
international governance of gain-of-function research?

And as I said at the very beginning, there are different things that international governance might consist of. An international
outlook could involve getting input from other countries when a country like the U.S. is deciding what to
do with regard to gain-of-function research.
And another thing we might hope for with regard to international governance is harmonization -- just
coordinated policies and oversight mechanisms and so on -- in different countries.
And then a third thing is actual international governance, and that is international decision-making and
policymaking or standards setting, and so on and so forth.
Yesterday, Keiji Fukuda talked about one possible way of achieving the third, which would also provide
harmonization and involve consultation and input from different countries. And that is the idea of -- he
used the language of frameworks, but -- having a treaty or strengthening existing treaties.
So it could be strengthening existing treaties to deal with these kinds of problems, treaties like the
Biological and Toxins Weapons Convention, or setting up special new international agreements to deal
with this kind of thing.
Harvey Rubin, who is a previous member of the NSABB, had this idea about an infectious disease
compact; so different countries would agree to work together in various ways to deal with problems of
emerging and reemerging infectious diseases and infectious disease research. And he thought that
something like that could play a key role in addressing problems about dual-use research.
And as Keiji noted yesterday, this kind of approach could be something that would be very difficult to
achieve; a lot of work would be involved. It might be hard to bring about. And it's not obvious to me what
the content of such international agreements should actually be, or look like, to address in a robust way the
kinds of problems that we've been talking about the past couple of days. So what should such a
treaty look like, even if we could hypothetically achieve one?
What might be another kind of approach? Well, there's a great example of international governance that's
already been achieved, and that is research ethics. And how was this achieved? We now have
quite widely respected international standards, guidelines, and principles that govern biomedical
research involving human subjects. How did that come about?
Well, it largely came about from the Nuremberg Code, an international standard that was set about what
should be expected with research involving human subjects, followed by the Declaration of Helsinki,
which was, again, an international standard. And in this case, it was an international professional body,
the World Medical Association, that said these are the guidelines and principles that should govern
research involving human subjects. And a body like that issuing an international global standard like that
has done a lot of work in bringing about harmonization and actual respect and practice of those standards
and principles.
So maybe we can hope for the same kind of thing in the context of gain-of-function research or research
involving biorisks more generally; i.e., if we could have some setting of international standards or
international guidelines or principles for gain-of-function research or dual-use research, et cetera, then
that might be an effective way of achieving global governance. WHO is one body we might look to, to
play a key role in the establishment of such standards.
In the case of the Declaration of Helsinki, we're talking about the World Medical Association; so that's
largely doctors and health care workers, whereas a lot of the research we're talking about doesn't
necessarily involve the same kinds of professionals. And it's not obvious what other professional body
exists that could be the international-standard-setting body. But WHO has played a big role in setting

standards for research ethics involving human subjects, and would be a potentially obvious body to turn
to for setting standards for this kind of research.
Research ethics then could provide a precedent for achievement of global governance of gain-of-function
research. It could provide a model for it; i.e., we could achieve it arguably in the same kind of way by
having international standards set or guidelines or principles set that end up being widely recognized
because of the authoritative nature of the body issuing them.
But beyond that, research ethics could potentially be a chassis for the achievement of global governance
of gain-of-function research; i.e., we have this existing research ethics governance regime, which is largely
concerned with human subjects protection. But what we're talking about here is largely research ethics, so
that regime could be expanded to take on other kinds of research ethics beyond human subjects
protection. So rather than having a new kind of regime that's like research ethics in some ways, we could
arguably just expand research ethics.
Marc Lipsitch and some colleagues have often made the case that there are some principles and guidelines
in research ethics that apply to gain-of-function research. And so one idea might be, okay, when we
make principles and guidelines for gain-of-function research, we can have similar principles. But another
kind of approach would be to make the existing research ethics governance regime explicitly say, when
appropriate, that those principles apply to gain-of-function research and other kinds of research raising
biorisks, and that such principles and guidelines need not be limited to human subjects' protection; i.e., we
could aim for more robust research ethics.
Last but not least, some hard decisions, as I suggested at the very beginning, might need to be made by an
international body. So if there are especially hard cases of research and there is the question about
whether or not that research should go forward or whether or not that research should be published, if the
U.S. is involved somehow, should we think that the U.S., when making such a decision, should be getting
input from other countries; or should there be an international body making those decisions and playing a
key role in that decision-making?
And again, the WHO could arguably establish relevant bodies that could be, say, analogous to the
NSABB but at an international level, and maybe with decision-making authority rather than having a
mere advisory role. There is an existing, ongoing body that plays a key role in deciding what research
should happen or not happen with the smallpox virus. So that, I think, could be a good model for an
international body playing a key role in decision-making about gain of function more generally.
I'll leave it at that. Thank you.
Atlas: We'll open it up for comments and questions, if you'd go to the microphones and also identify
yourself.
Don Burke, University of Pittsburgh, member of the organizing committee: A question for Michael on
your last comments about the potential use of ethics as a scaffold or a chassis for the operationalization of
gain-of-function oversight. Could you say a little bit more? You can look at that from the standpoint of
operational advantages, since we've got an existing set of structures. But also, is there a tactical case: is it
more acceptable to do it that way? Is it more politically expedient? Would there be broader international
acceptability of that framework rather than other frameworks? It's a general question about whether there
are other advantages to approaching it from an ethical framework.
Selgelid: Maybe it's just largely a practical suggestion. It's funny; in my white paper, I've provided all
sorts of reasons for thinking that we need an international outlook. What ethics does is look at or analyze
what should be the case. And at the last NSABB meeting, there was criticism that, oh, the white paper
didn't say anything, or much, about how to achieve these things. Well, that's not really the expertise or
emphasis of ethics, like how to do things. In my case, I'm not an engineer; I aim to be practical, but
working out how to do such things is not my specialty.
And, yes, the motivation in my mind for thinking about the chassis idea is, one, maybe that's a real way
of achieving something when otherwise it might not be so obvious how we could achieve it. The treaty idea
seems really hard, whereas this is, gosh, there's something there. There's a precedent. It's in line
with this piggybacking idea that has been discussed in terms of committees. So there's been thought
about taking advantage of existing structures in the way of committees, but here's a governance regime
whose existence we could take advantage of and build upon.
And it seems like that regime has gaps. If it's about research ethics, well, there are a whole lot of things
that are really important pertaining to research ethics beyond subjects' protection. And right now, insofar as
different things do get looked at, it's in a stovepiped way. So why shouldn't it be integrated? A lot of the
same overarching values should govern the different kinds of ethical issues that arise in research.
The NSABB report, for example, very much adopts key overarching values or principles that are in the
Belmont report. So it kind of makes sense to have research ethics, and the different aspects of research
ethics, under one umbrella. So I think it makes sense sort of conceptually. I think there would be
efficiencies and feasibilities and all sorts of good reasons. It just seems like a natural thing to do.
I don't know if that answers your question.
Atlas: Jerry and then Marc.
Jerry Epstein, DHS: This may be the wrong panel to ask this question of; you're the international panel,
and maybe I should reserve this for the full discussion later. But it seems to me that, in terms of the degree of
international engagement that the United States seeks in developing its policies, there are kind of three
cases.
Case one is the U.S. makes policy recognizing that there are international perspectives, having made
some effort to collect them. When it goes off and makes its decision, it will do so knowing what at least
some international voices have had in mind.
Case two is that the United States says, well, we have an urgent need to come up with something because
the moratorium depends on it. So we'll have some sort of contingent policy; but, recognizing there are
international implications, we will continue discussing a more formal approach with the international
community. So there would be issuance of some policy, maybe with a marker that things might change,
and equivalent communication to the scientific community that things might change.
And the third choice is we shouldn't do this alone. We should take a full international approach and just not
answer the mail right now, rather than going ahead with one of the first two approaches, in order to get a
more robust international solution. So: act now, act now with a marker for international engagement, or act
internationally. I'd like to know what your views are on those three approaches.
Atlas: Anybody want to react to that, or are we just going to let it stand?
Selgelid: I'm happy to say something about it.
Atlas: You are, okay.
Selgelid: I think we should resort to the third; that need not always be the case, but it should in the hardest
cases, the most important cases. Yesterday I suggested there are two kinds of cases, research of concern
and research that isn't of concern. Really, there's a spectrum; and at one end of the spectrum is the most
concerning research.
I think in cases where it's the most concerning research, especially if it really does affect the international
community at large, then why should the United States Government think that it has the authority to be
making those decisions? Just like we think, oh, sometimes scientists shouldn't be making those decisions
themselves because they affect so many others. They should be giving the decision to some other body.
That's the appropriate authority. So there might be some cases, at the heart of the research that's most
concerning, where maybe no one country should think it has the authority to make the decision; it should
hand the decision over to some international body if the consequences are international.
Atlas: Let me follow that up with a question: Who should initially flag research as rising to that level of
concern? And is the definition of gain-of-function research of concern presented by the NSABB an
adequate screen to know what to look at? Say we accepted the idea that there has to be an international
dimension to this (which is not to answer Jerry's question, but to suppose you did that): how would you
get what you need, and know what to get? We've heard that the real research of concern is a very narrow
body, and then we hear other examples which might broaden it; but it's presumably some narrow subset
of research that would be of concern, and an even narrower subset that would rise to the sort of level you
describe. I'm not sure how you would suggest getting there, but certainly that would be something that
might be useful for the NSABB to consider.
Selgelid: So my idea is not that research would either be of concern or not; it would be of more or less
concern. And there would not be just two kinds of responses, with more scrutiny if it's of concern and no
extra scrutiny if it's not. Rather, the more concerning it is, the more scrutiny it gets. So there would be all
these different levels of concern and different levels of governance and extra scrutiny. Maybe if it's a bit
of concern, that gets it special scrutiny at the institutional level. If it's even more of concern, maybe that
gets it scrutiny at the national level. And if it's really, really of concern (i.e., pandemic potential, some
extreme level of pandemic potential), then that gets it to the international level of scrutiny. So multiple
levels of concern lead to multiple levels of extra scrutiny at different levels of the international governance
hierarchy; it's scalar through and through, scalar with regard to level of concern and scalar with regard to
response.
Leung: Can I just try and perhaps address the question that was just posed? I think that for anything that
has got an international external dimension, like bugs, I don't think approach number one is ever a good
idea. So it really falls to either approach number two or approach number three. And I think whether you
pick number two or number three really depends on your tradeoff between efficiency and sort of a
consensual type of approach to doing things.
If you pick approach number three (that is, you get everybody around the table internationally, and you
don't try to come down with a definitive solution unless you actually have everybody nodding their
heads), then it becomes a highly political process; and it becomes very, very difficult. It's almost like
trying to take everything to the United Nations, and not even that includes all countries in the world in
every case. And inevitably, not only does efficiency suffer but, in many instances, the content gets diluted
in order to arrive at a consensus.
If you take approach number two, I think it's the more pragmatic way to go in most instances. Take the
example of, say, the Global Health Security Agenda (GHSA). GHSA, I think, currently has just
under one-third of the countries in the world signed on to it. It's slowly gaining traction; and it's one
instance where the U.S., along with others, has taken a lead, not really saying this is the definitive
framework or the definitive version of it, but rather: let's start with something, and then let's get it going.
And hopefully people will come and join and then try and fine-tune it. So that would be the sort of
approach, I think, that I would recommend for our present purposes.
Atlas: Thank you. On this side, I think Marc.
Marc Lipsitch from Harvard School of Public Health: I want to take issue with my good friend Gabriel
Leung and thank him for his kind words about the Cambridge working group and many of the sensible
things he said. But I want to take issue with the claim that regulation of gain-of-function approaches will
chill research in the long term. I think that claim is both unsubstantiated and also beside the point. It's
unsubstantiated in the sense that there's just no evidence. It's said often; but there's no evidence that
restrictions on this single technique, which affects about 5% of the NIAID research portfolio on
influenza, chill anything except for this specific technique.
Historically, the main restriction on science has been cost. Certain techniques just can't be used because
they're too expensive. And what that does is stimulates clever scientists to find alternative approaches that
in that case are less costly, and in this case would be less dangerous. Stimulating next generation
sequencing because it was too expensive to use Sanger sequencing for the human genome is perhaps a
good example.
Secondly, if the research puts significant numbers of lives at risk, chilling research is not actually a very
important counterbalancing consideration. Nobody criticizes the Helsinki Declaration on the grounds that
it's a mixed blessing because it stops a lot of bad things, but it also chills research. Finally, the example of
studying pH of activation as a marker of human adaptation is not unique to gain-of-function research and
has been studied through just such alternative approaches, including comparative analysis of wild-type
viruses. I'd also like to thank Gabriel again for all the other things he said.
And then I have a question for Dr. Gao. If the U.S. establishes a set of principles for gain-of-function
research and what can and can't be done with U.S. funding, what impact, if any, do you think it will have
on China; and what would be the reception?
Leung: Shall I try and respond to Marc? Thank you, Marc. I think this is probably an instance where it's
almost tautological, isn't it? The fundamental assumption, or the fundamental argument or condition, is
that there is always at least, A, another way of getting to the same knowledge set. And I suppose the
argument that Marc just laid out is, well, if you stop all gain-of-function research, then there will be
scientific ingenuity to try and get around it and therefore arrive at the same knowledge set. But that
presupposes that there is indeed no unique value added by the gain-of-function methodology; that is, that
it's wholly replaceable. So I suppose we will never really know the answer, because this is actually a
circular argument.
I suppose the line of argument I'm trying to advance (and I don't know whether I'm right or whether
you're right) is that there is, at least in the medium term, a unique contribution of gain-of-function
research. Now, of course in the long term, with technical advancement, God knows we will have different
ways of getting to know something. But at least in the short to medium term, there is a unique knowledge
set that gain-of-function research will and can provide and is uniquely positioned to provide. And if that
is our starting position, then I think that we should care about the big chill.
And the big chill is not really about the 5% of NIAID research in gain of function in flu. It's really about
the big chill of -- we were shown the big Venn diagram, where there is gain-of-function research, that

50

Gain of Function Research: The Second Symposium, Day 2


universal set. And then there is a very small circle within it that's of concern, the relative sizes
notwithstanding. What I think the big chill falls on is not even the big, universal set of gain-of-function
research but the fringe of that big universal set, where nobody, especially the junior faculty or the junior
scientists, will go near even the outskirts or the fringe of gain-of-function work or gain-of-function-like
work, because they know of the multiple hoops that you would actually have to jump through.
And again, I suppose there is nothing wrong with that, as you argued, if we suppose that there is always
an alternative way of actually arriving at the knowledge set. But if that condition doesn't hold, then it
would not be a good idea to impose this big chill. So I suppose this is where we agree to disagree.
Gao: Marc, I think what we agree on here and all the research in China we are carrying on, I think we are
the newcomers. I guess whatever you agree on in this country, the United States or the audience here, I
think definitely we can do something and we can follow in the interim as well. So that's an open question,
and I think it's possible.
Atlas: Kavita?
Kavita Berger from Gryphon Scientific: I actually wanted to ask a question about this international sort of
framework that has been discussed here and in other panels. These ideas have come up over and over for
the past 15 years or so, and never really quite been implemented or materialized. And one of the things that
makes me sort of wonder whether it's even possible, feasible, or even worth doing is sort of the years of
experience we've had with the haves and the have nots, the developed and the underdeveloped worlds, the
resources that are available and that are not available in these different countries. And it's really striking
to see the last panel against this panel, where you see research being done for economic value, for health
value, for agriculture, or whatever it might be. And you see a lot of talk about how we need to somehow
monitor, oversee, and restrict some of that.
And in still thinking of the experiences we had with viral sample sharing a few years ago, where it
became extremely political, an extremely diplomatic sort of challenge to deal with at the World Health
Assembly, how can you actually establish some sort of international body that would actually review
research in countries that have a real, live public health need or agricultural need or whatever it might be?
How can you tell a sovereign nation that they can't do their research to help their own citizens? Where's
the balance? How do you reconcile that?
Selgelid: How do we achieve global governance? Well, one question is: Who is the legitimate authority?
Who can we turn to and say, this is your responsibility; you need to own this and take it on? One
suggestion is that the answer to that question sounds like the question itself: Who is the legitimate
authority? One might say WHO is the legitimate authority. But this raises the question about what the
WHO mandate is, and whether this fits the WHO mandate. Is there a more obvious body that we should
be turning to, to say this is yours to own and take on?
As Keiji pointed out yesterday, it's hard for WHO to do much without support of member states. So
maybe catalyzing this requires member states pressuring WHO to take this on. This is really important;
the potential consequences of gain-of-function research could be globally catastrophic. This isn't a
hypothetical matter; this is something that could be tragic. And it would be tragic if we don't get global
governance until it's too late. If whatever international body is responsible for this isn't going to do
anything about it until something bad happens, well, when something bad happens, that might be too late.
So the member states maybe need to be pressuring WHO, if it is WHO that's the legitimate body, saying,
hey, this is within your mandate. You're the legitimate authority for this. This is something important, and
something needs to be done.
Now, if some of the member states aren't going to be on board because they don't realize how it's relevant
and important to them, well, maybe they need to be made better aware of how it's relevant and important
to them. And it could be relevant and important to them in ways that might not be so obvious. I mean, one
of the things that international governance should be doing is making sure that states with less resources
and infrastructure and so on would receive benefits from this, and that risks to them would be reduced and
so on. So it could be to the advantage and maybe not very costly for less powerful or wealthy member
states to be on board with getting global governance happening.
Leung: It's as Marc just said about the Declaration of Helsinki: who enforces it for clinical trials or
clinical research anywhere in the world? It's not the WHO. I think WHO has got a good platform, but I
think its hands are rather full; and with its current budget and its current governance structure, I'm not
sure that it can take this on effectively, because there's no leverage.
I think it's rather that it's the peer pressure of moral responsibility, and that culture of the international
scientific community that we heard about earlier this morning. That, I think, is ultimately the long-run
determinant of whether we can come to some sort of global harmonized governance structure for gain-of-
function research and for others, CRISPR being the other obvious example. And I think that's sort of the
long-run control knob or lever.
But in the short run, there are things one can do in terms of capacity building, technology transfer,
research grants, financial sponsorships. These are all shorter term and more immediate levers that one has
in fostering that global harmonized culture in getting a handle on gain-of-function research.
Atlas: Let me ask you whether you would look sooner to a WHO as a governance sort of body or to
organizations like the National Academies around the world to try to harmonize relations in this area.
Which would be more effective, in your view -- a scientifically-based set of bodies that considers this and
how to harmonize it, or a health organization, or a different organization altogether?
Leung: I think all of the above. I think all of the above. But if we look to either the National Academies
around the world or to WHO and its regional offices, I don't think their role is the policeman. So
execution and enforcement is not going to be their role. I think in coming sort of to a consensus on global
harmonization, yes. But in terms of policing and enforcement, it would still have to be down to the IBCs.
Atlas: Thank you. Yes, sir?
David Stanley, Future of Life Institute in Boston: I'm mainly just here to relay a proposal from the Future
of Humanity Institute (FHI) and the Global Priorities Project, both in Oxford, for making decisions about
whether to fund gain-of-function research. This proposal is in line with comments made by Michael and
others earlier about funding agencies taking more of a role in making these decisions.
Specifically, yesterday we spoke about risk/benefit analysis as a tool for making funding decisions.
Pertaining to this method, Gryphon Scientific showed that risks could be quantitatively estimated to some
extent; however, benefits could only be estimated qualitatively.
The FHI's proposal is to utilize the scientific granting process to help make decisions about gain-of-
function research projects of concern. The specific proposal is to price the expected value of any
damages that could result from GOF research into the price of the grant being considered. Then they
could either require grantees to purchase liability insurance to cover the possible damages from this or,
alternatively, require a payment to the state or non-state body to cover the expected cost of that research.
The key advantage of this proposal is that it keeps decision-making in the hands of scientists who are best
suited to evaluate the benefits of gain-of-function research, which we saw are difficult to quantify.
Clearly, potential issues associated with this are finding a practical means to provide this liability
insurance, and also attaining some international adoption of the pricing of the associated risks. So I
basically just wanted to get your feedback on how to deal with these particular international issues.
Thanks.
Atlas: I'm not sure if anyone wants to respond to that or we just want to have in the record; and in the
final session, maybe the Chairs will take that up. I've got four more people and five minutes left in the
session.
Chris Park from the Department of State: This was a very useful discussion; but while there's been a lot of
talk about a need for an international perspective, there's been a real lack of clarity on why. If it's, well,
we need to understand what the international community thinks as we shape our own domestic policy,
yes, that's useful. It's certainly important to be aware of what second-order effects there might be, et
cetera.
But I'm looking at this as a practitioner. I, or someone like me, will be stuck with the job of going off
and persuading other governments or other actors to take some action internationally if we decide we
need to do something. So what would help me quite a bit is better clarity about the what. In other words,
the problem may be that we are actually concerned that such research is being done more broadly than
just funded by the United States or conducted in the United States; and we know that's true. We know that
a certain amount of this research is being funded by (inaudible). We've seen articles published about
research that was done in China and a number of other places. So it's not exclusively a U.S. issue.
So if we think this is a problem that needs to be addressed more broadly in order to mitigate the risks,
that's an important piece of information. That says, all right; that's a different solution set than just
consulting widely before we go off and do what we're going to do.
But then you get to the question of, well, what's the problem you're trying to solve? If you are concerned
mostly that what you want to do is drive the behavior of individual researchers, then I certainly agree that
going the route of professional societies and things like that can be a very useful tool. And it can also be
something that's used in tandem with other approaches.
If your concern, however, is with the behavior of governments, the need for some form of oversight or
setting a very high bar before one funds or something of that nature, then that one probably doesn't work
by itself; you need to look at something else.
And maybe you get into the question of, well, am I most concerned about security risks? The original flap
over the Fouchier and Kawaoka papers was about the potentially enabling impact of this information, and
the worry that bad guys might get hold of it; that's one problem set.
If you're actually driven much more by the concern about inadvertent release, then you get into questions
of, well, under what circumstances should the research be done? What should be the tests applied before
it's conducted at all? What might be the containment conditions? All those elements tell you different
things about what entity is the most appropriate to try and tackle the problem. You might need to divide
up and address a couple of different problems in a couple of different settings.
You also get into some very practical questions about, well, do we go to a U.N. style organization where
you've got 190-plus members? And I can tell you from bitter experience, it is very hard to get that many
people in a room and get them to agree on whether or not it is day or night.
Or do you focus on the 80% solution; if there are five countries funding the vast majority of the research,
and funding is a key element of what we think needs to get addressed, then maybe you focus on a much
smaller group and try to establish a de facto pattern of this as the sort of behavior; and then you try to
push it out gradually in larger circles.
Again, I'm not trying to advocate anything in particular, but to lay out some very practical considerations
and urge those who are interested in this from the standpoint of ethics, from the standpoint of the science,
etc., to take that into their thinking and give us some more thoughts because it would be very helpful.
Thank you.
Atlas: Let me take the others very quickly; we've got two minutes.
Catherine Rhodes from the University of Cambridge: I have two comments. One is, again, relating to the
venue in which international governance might take place. And I wondered, given some of the concerns
about the extension into things like animals and livestock, whether it would be appropriate to place it
within the trilateral cooperation that takes place between the World Health Organization, the World
Organisation for Animal Health, and the Food and Agriculture Organization. That might help to bring
greater resources to bear if you've got three organizations taking an interest.
And then the second point is, again, on resources. To establish anything at the international level,
whichever route you go for, whether it's the treaty, the standards, or the research ethics area, you need to
consider the resourcing for it. And if you aren't going to get sufficient and sustainable resourcing, then it
may be better not to start.
Atlas: Keiji?
Keiji Fukuda, WHO: Let me maybe give a couple of observations, then maybe some ways to help think
this through. I think that at the global level, there are different kinds of issues. And some of the
discussions are more successful, and others are less successful.
If we look at some of the more successful discussions, for example, the development of the International
Health Regulations, this was successful because the heart of that discussion was that countries need to be
better prepared. It was not an argument that anyone had to be convinced of; it was an accepted argument.
And it really became more or less of a discussion about how do you achieve that. And there was, of
course, a lot of discussion. It was not so straightforward, but that was the heart of the discussion.
If you look at the pandemic influenza preparedness framework, again something which took about five
years of very difficult negotiations, the heart of the issue was very clear. This is an unfair situation; can it
be made more fair? And so, again, a very difficult discussion about how do you achieve that fairness; but
still, the basic need for the discussion was not questioned.
And I think in the current era now, the issue that I focus a lot of time on, antimicrobial resistance, even
though it's complex, again the fact that you cannot have a situation where you don't have effective
antibiotics is not really questioned by anybody. It's that given all the complexities, how do you achieve it.
You then have discussions which are very difficult. For example, well, I won't go into these examples.
But there are discussions which are very difficult because it's not clear whether there is an issue that
everybody agrees upon that in fact is worth discussing or is of importance. And when I think about this
issue and the genesis of it over the past four or five years, particularly when we had the H5N1 studies, the
difficulty of this discussion has been how do you weigh risks and benefits of the situation in a societal
context?
Among the influenza scientists, you have one culture, which I understand very well. Even among other
scientists, however, the culture is not very much the same. But once this became an issue in which you had
the media, you had very prominent personalities arguing it, you had export regulations brought in, you had
the Commerce departments and so on, you had Foreign Affairs involved in the discussion, it quickly
became a very difficult discussion, because how you measure these things is not straightforward.
And I think that the measurement of risk in a very narrow sense and quantitatively, in a political sense, it
is a very different set of (audio interference). You look ahead for other kinds of issues. So the
measurement of risk is not so easy, the measurement of benefits is not so easy, and it very much depends
on the perspective you're bringing.
In a lot of ways, I think this issue is not yet mature enough to come to the global level in terms of
governance, except at the level that there needs to be a scientific consensus. I think right now it is not clear
whether there is scientific consensus across so many different societies, or why. And I think the issue may
be a microbiological issue. It certainly (audio interference) of the gain-of-function issue. But I think it's a
little bit farther than that in reality. And I think some of the things that were raised about (audio
interference) among scientists who are dealing with human (inaudible) microbiologists working.
But I think an issue in which there is no broad scientific technical consensus is in a very difficult place if
you then try to bring the political elements which inevitably any global covering this discussion brings in.
So anyway, my two cents.
Atlas: Thank you. One last comment because we're already into the break time.
Silja Voneky of University of Freiburg: One short remark and one question. The short remark would be that I would like to add to the international dimension, and I very much support a lot of the things Michael Selgelid said about public international law in regard to the liability or responsibility of states. If a state allows an agent to escape from a state laboratory and cause damage in other states, there is a good case to argue that the state is responsible to pay reparations. And this is not a question of ethics; this is a question of customary international law. And I have colleagues who argue it this way. And I think, in regard to the risk/benefit assessment at a government level, one should take this risk into account as well.
And my question would be to Nisreen and to George whether they think that their governments would
push an international initiative to regulate gain of function at the WHO level or UNESCO level or United
Nations level. Thank you.
Atlas: Anyone?
AL-Hmoud: Regarding Jordan: Jordan, as you know, is a signatory of many international conventions. Regarding gain of function, we still do not have any experiments of concern in Jordan or in the region regarding MERS. What we have is surveillance for cases in the country and in the region. We do have an International Health Regulations point of contact at the Ministry of Health on an annual basis; and if anything happens, we report to the WHO through the International Health Regulations mechanism.

55

Gain of Function Research: The Second Symposium, Day 2


No more comments? I'll just thank the panel then, and thank the audience and everyone who got to make a comment or ask a question. We will have a short break, and then we will follow with the last session. So thank you again.

Session 9: Summing Up

Harvey Fineberg, Planning Committee Chair: We've had a very full two days of discussion. Our plan in
this final session is to invite the moderators from the several sessions through the days to offer their
perspectives, summarizing and perhaps adding from their perspective points that were raised in each of
the sessions.
I do want to allow, then, an opportunity particularly for our friends from the NIH and NSABB, for whom, in effect, we are gathering ideas and assembling these thoughts, to raise any issues, topics, or questions on which they would like to invite further comment from those of us who are here in the program today and in the audience.
And then we'll open the microphones for additional comments, suggestions, and ideas that anyone present or on the web would like to include in the record. So that's our plan. I'm going to launch right in, in order. I'm going to turn first, therefore, to you, Chuck, Chuck Haas, for the overview on the policy framework and key questions.
Haas: Okay. Next slide. So I want to give some big-picture summaries of my takeaways, with one editorial comment. So data gaps, particularly on laboratory safety, were thought to limit the ability to do an absolute risk assessment. And we had, I think, a good set of questions from the floor about the need to develop scholarship and support for those studies. My comment is that those data are not totally absent, and it might have been informative to use whatever data are out there, even though they are poor and not exactly from the laboratories we're talking about, to bound the potential risks that could occur.
If a pure risk-acceptability rule is used as the decision rule, we lack information on what the level of acceptability should be.
Rocco presented an updated analysis using new data on seasonal versus 1918 influenza, which raises a broader point: risk assessments in general need to be living, need to be adaptable to new information as it comes along. And then finally, leaving uncertainty out, and this is Adam Finkel's direct quote, "is a violation of first principles." Next slide.
Another quote from Adam is that the statement "Is it safe?" is a vapid question, meaning it's a question intrinsically without meaning absent a reference level. A hierarchy of potential judgment rules exists. Both Tony Cox and Adam Finkel made that clear, and the explicit judgment of what rule is to be used needs to be made. Cara Morgan called this "deciding how to decide," and what seems to be missing from the discussion is an explicit statement of that rule. And stakeholder input needs to be included to develop the decision rules. The decision analysis community has rich scholarship which needs to be brought to bear, and again that's from Cara. Next slide.
Tony called out the fallacy of coherence, and I'd use the phrase "because it has been accepted doesn't mean it's acceptable." Just because a risk has been accepted in the past does not mean that an informed judgment going forward would make that same numerical risk acceptable. A useful task would be to
assess whether or not collecting more information would make a decision better. And, again, there's a rich literature on the concept of value of information in this regard. Next.
A couple of miscellaneous problems. I think this is from Rocco. Bench researchers may not be familiar
enough with epidemiological parameters to assess transmissibility. Risk-benefit analysis could be used to
improve the risk profile of proposed experiments, in other words envisioning an iterative process of some
sort. The third bullet is from Adam. Risk and benefit analyses should be balanced, humble, and explicit
about value judgments.
And then finally we had a discussion, I think from the audience, that long-term benefits in particular may be especially difficult to value and highly uncertain. My editorial comment on that is that while this may very well be true, it shouldn't mean that you should walk away from the effort to quantify them using whatever information you have now. And I think that's my last slide.
Fineberg: Wonderful. Thank you very much, Chuck. Very cogent and quite focused in the comments.
Barry, we'll turn next to you. And if you would, I know that you're going to be able to share with us some reflections based on both the session that Michelle Mello moderated on the U.S. landscape as well as the one that you did on the international dimensions.
Bloom: Thanks, Harvey. And I thank Michelle for providing notes on her views on the U.S. policy landscape. Her first point is basically that there is no set of policies at the moment that targets the broadly conceived group of pathogens defined as gain of function and gain of function of concern, and she strongly supports the hope that the NSABB report will generate such a framework. She's concerned that existing law doesn't really reach research that is not conducted with federal funding; that raises questions, and there needs to be thought about mechanisms by which such research can be regulated.
I think, and I'll put it as her panel recognized it, that the time to regulate is as close as possible to the time the research is conceived as ideas. The later it comes, the more difficult it is to deal with. An issue that was raised is the general issue of liability: whether in this country it is always guaranteed with respect to institutional liability; and in a global context it isn't clear that there is a mechanism for indemnification of liability if dreadful things happen.
Another point that she raised is that it would be very important for regulators, both institutional and at federal agencies, to be in consultation with scientists concerned with this kind of work. She sees a tension between the desire for transparency and the risks of publication of sensitive information, and a tension as well between the need for common standards that can be applied in a reliable fashion and the fact that different institutions have different capacities, capabilities, and individual practices, which came out clearly in her panel, and how that all may be accommodated. So those would be, I think, Michelle's comments.
My comments are even more simple-minded. It was clear from the very beginning of the sessions on the first day that everyone involved in this meeting recognizes that the science and its risks and benefits have global implications, and gain of function clearly raises global concerns. We had major presentations on the groundbreaking progress made by the European Union, which showed that it was possible to have discussions and bring country policies from 28 countries into common focus, and to bring scientific academies in almost all of those countries to a consensus on scientific policies that would govern this.
They emphasized the need to expand and extend the discussion between countries in Europe. They'd be very interested in discussions after the United States policies are formulated, and are keen to find ways in which discussion and consultation can be expanded to be inclusive of all countries.
In this context, we heard the very important discussion of the InterAcademy Partnership, a global network of science and medical academies that now links academies in 128 countries and four regions. That could serve as a useful focus for extending the discussions of gain of function in a coherent way to responsible scientific bodies that already exist, and perhaps should be thought of in moving forward.
The recommendation from that discussion was that probably the best place to start is discussion within the scientific communities, rather than going directly to policymakers one country at a time, until there is some general understanding and agreement within the scientific community, and then to simplify the complexities of those dialogues and discussions to a level that could gain understanding and support from the political leaderships.
We also heard about the value of not just pontificating but having important partnerships and collaborations that enable transparency, technology transfer, and training to occur, and that can also be a way of maintaining standards and identifying low standards.
A personal comment, from my reflection on this meeting, is that I've come to the view that process is probably as important as principles. It is not clear, given the technicalities of the science, that the lay public, and even government officials, are going to understand the technicalities. But if the processes at every level are transparent, maybe that's the best way to gain trust within the scientific community and within the public at large. And that means the process, as we are conceiving it and as the NSABB conceives it, is a set of tiered processes that occur at multiple levels: the investigator, the IBC, the institutions, study sections, and all the way up to the higher levels of policy.
A second reflection on this meeting is that whatever we do, we have to recognize that science is changing dramatically, so that policies really can't be fixed in time; we can't predict what possibilities, opportunities, technologies, and threats will be coming in the future. So the policies need to be flexible in some way, to accommodate new knowledge and adapt to new opportunities and possibilities, and yet provide a clear-cut framework that people can work with.
And lastly I would support Gabriel Leung's comment. When you ask the questions, why does the Biological Weapons Convention, as far as we know, largely work? Why do the Helsinki principles actually govern how human experimentation is done? I would say it's less legal liability and lawsuits than the principal constraints on scientists, and that has to do, in general, with constraints of reputation, credibility, integrity, and respect in the scientific community. And Matt Meselson, when he was asked how you could possibly engage one more action to enforce the Biological Weapons Convention, raised the interesting possibility of making it impossible for scientists to travel internationally, as another constraint that would be of high value for scientists. So I think enforcement at a moral level is highly possible.
Fineberg: All right. Thank you both for your comments on the discussion and your own reflections. Your notes about process, I think, are a great bridge to Baruch's session on the culture of safety and public participation, so Baruch, we'll turn to you.
Fischhoff: Our session was about informing policy design, and (inaudible) represented the participants. I thought I would start with a bit of nomenclature. So I'm going to be using the term social science, for those of you who are not familiar with our part of the world, to include social, behavioral, and decision science. Behavioral science is the study of individuals: it's psychology, microeconomics, neuroscience, and other such sciences. In the larger grouping it's sociology, anthropology, political science. And decision science is management science: the cost, risk, and benefit analysis, that form of applied mathematics that takes human behavior into consideration. And with problems of this complexity and subtlety, you need them all. Next slide, please.
So the framing of the human dimensions that I believe came out of our session is, first, that reducing the risks and realizing the benefits of these technologies depend on people, at the level of individuals, organizations, and policies.
Second, in designing and evaluating the systems that deal with these technologies, it's natural to rely on our intuitions, but that's unfortunate, because those intuitions are often wrong or imprecise.
Third, the biological research community faces the challenge of not having what some economists call the absorptive capacity for social science. That is, there is nobody on the inside who can tell when they have a social science problem, define it in terms that would be recognizable to a social scientist, and find somebody who will help them to work the problem. That's on the demand side.
On the supply side, the social science community may lack the incentives for addressing biological science issues, because our incentive scheme is to publish on relatively narrow topics. I think we were fortunate to have three distinctive speakers today who have built that bridge, which requires them to draw on different social sciences as well as to see the value for the basic science and engage in applied problems. Next slide, please.
But what are the kinds of issues that you would find if you brought the social sciences to bear? One is to identify the places in which scientific judgment affects the prediction of outcomes. Many of the statements we've heard had to do with scientists anticipating how transmissible something would be. Given that this is a discovery process, there are likely to be surprises. So it's smart to recognize that these are scientific judgments and elicit them in the best, most accountable way possible.
Second, there are the ethical judgments in the analysis: how you define the issues, whom you share them with, where various publics are engaged in the process.
Third is the communication to and from stakeholders, so that you can develop the technologies in the ways that are most sensitive to their needs and keep them properly apprised.
A fourth problem, more from the social sciences, is the normalization of pathology, and of virtue. We can get accustomed to "best practices" that are terrible by any absolute standard. We just get used to them. And, thinking of Ruthanne's talk, and actually Barry's comments just now, there's the possibility of the normalization of virtue: that these are things that you just don't do, and this is part of the kind of bottom-up process of acculturation that Ruthanne talked about.
Fifth, you can have a mismatch between the technology and the regulatory mechanism, in terms of not just government regulation but also the societal controls that one has over it. You can have regulatory control mechanisms that don't have the requisite variety for technologies that, as Barry was saying, are moving very quickly, and we've developed our institutions for a different environment.
And another problem that one runs into is the neglect of opportunity costs. We know a lot about the technologies in which we've invested and much less about the ones in which we haven't. Next slide, please.
So I'd like to offer, in the spirit of Barry's two personal comments, two suggestions, recommendations. One is that, given the difficulty of bridging the basic social sciences and the application here, there's some value in centers that would serve as a kind of clearinghouse: helping interested biologists find social scientists who could help them to work their problems, helping social scientists find the people with whom they're willing to work, and maybe helping us make the case to
department heads that this is a worthy pursuit, to spend, say, as much time as all three of our speakers have spent working with clients. And so that's just to apply the social science that we have, and then to create the needed evidence for what some people call adaptive management: though we don't know exactly where we're going and things keep changing, we need to be on top of it.
And second is to develop shadow or alternative evaluation processes. That is, if our current mechanisms are not up to it, we need alternative mechanisms. We have principles, and I think particularly from Monica's talk you saw all the expertise that we know of. So you could bound the set of deliberative mechanisms whereby this might work, but we don't really know how they would work until you get people with the different kinds of expertise and cultural experiences together and see how they work. And one might hope that if you had some worked examples, maybe like some of the conventions that people have talked about, they would eventually just become the thing that people do. It's very hard to get people to repeal regulations that promise safety, but sometimes they just atrophy. And maybe they'll go away if we have something better. Thank you.
Fineberg: Thank you very, very much for those reflections. Phil, let me turn to you now for a discussion of some of the points of view raised by what we might call interested parties. So let us hear your reflections on the discussion.
Dormitzer: Will do. I guess the first of these interested parties was Michael Callahan, who pointed out that the EU and the US are not the future epicenter, and may not even be the present epicenter, of gain of function research. And similarly, government funding may not necessarily be the dominant mode of funding of this research. We need to expand our thoughts about how we might influence these processes.
One very interesting point he raised involved some case studies where control of infectious agents of concern was lost, not due to any mal-intent, but often due to the necessities of people operating under difficult circumstances. And designs of vaccines that may not be optimal for maintaining virological sensitivity were due to the necessities of trying to produce extremely inexpensive vaccines in developing countries. So there are (inaudible) where consultative mechanisms might help, where forms of assistance might help, and also where incentives need to be created, so that we can incentivize people to limit risks when we're not in a position to regulate.
Robert Fisher discussed the conflict, at a regulatory level, between the need for evidence-based decision making, which requires evidence gathering that is rigorous, time consuming, and expensive, and the need, often, to act quickly, particularly in these emerging or outbreak situations. This is an inherent conflict that has to be reconciled, and the considerations we bring to gain of function play into that. And particularly, coming back to a consideration that came up earlier in the meeting, the estimation of risk can really only be judged in a context of expected benefit. Without benefit, why would you take any risk? And so these things play into the sorts of mechanisms that we might pursue to try to control the risks of gain of function research of concern.
Jonathan Moreno did something that I really liked: he tried to identify where there are areas of consensus. And I don't know if everyone agreed on those areas of consensus, but I think some are close enough to bear mentioning. There is consensus that there are times when we need to move quickly, but also consensus that some regulation is needed. There is consensus that (inaudible) containment is imperfect; that risk mitigation heavily involves human factors, and increasingly so as the mechanical and environmental factors come under better control; that it would be desirable to have alternatives to risky experiments; and that gain of function experiments are not fully predictable, but probably improving.
He also had a very interesting proposal for something he calls RBATS, and I do not recall exactly what RBATS stands for. But the notion is that there would be real-time, ongoing, interactive evaluation of experiments of concern, or of experiments that may not yet be of concern but could venture into that area, so that (inaudible) is not simply a checkpoint, for example at the time of funding and another at the time of publication, but an ongoing, interactive, and cooperative process, which I thought was actually a very intriguing idea that might not take care of the whole issue but, I think, could be a very solid contributor.
And then Ethan Settembre discussed some of the lessons of the 2009 H1N1 pandemic and the 2013 H7N9 outbreak response, I think making the point that although we think of gain of function research primarily as long-term exploratory research, in fact gain of function not of concern is an inherent part of the routine business of vaccine production. Therefore those impacts have to be considered.
I do want to raise just a couple of points from the discussion that followed. First was a question from Gerald Epstein. I was very tempted to just blurt out an answer, but I felt that in my role as moderator I really shouldn't; now I have the pulpit, however, so I will answer it. The question was: what is the evidence of benefit of gain of function for surveillance? We talked about benefit for, for example, vaccine design, but what about surveillance? And today, one of the things that gain of function can give you is sequence signatures of risk for, for example, high pathogenicity.
Today sequence analysis is a part of risk analysis and vaccine virus selection, but it is secondary at this point to phenotypic, clinical, and epidemiologic considerations. I do think, however, that that will start to shift over time. It is certainly never going to be the case that sequence analysis can replace epidemiologic, clinical, and phenotypic characterization. However, the volume of relevant sequence data is likely to increase dramatically. It's now possible to sequence flu strains directly from harvested secretions; you don't need to grow the virus. The ability to do that sequencing is becoming increasingly widespread, and it's really quite conceivable that it will be done on sort of hand-held devices in the coming decade. And there has been an explosion of genetic sequence data coming in in real time that allows monitoring of epidemiologic events. And I know of two major private efforts and at least one public effort to really apply bioinformatics to glean more information from those data, primarily around antigenic change, but one could also do some around pathology. So I do think that even though sequence analysis, which gain of function research can inform, is a relatively small component of risk assessment today, it's likely to increase over time. So that would be my answer to that question.
The other point I wanted to bring up before closing with some personal observations: another very good point made during the discussion is the increasing need to consider integration of the multiple biosafety and biosecurity regimes. We have gain of function, we have DURC, we have select agents, we have routine biosafety, we have agricultural biosafety. And that is, I think, sort of calling out for, at some point in the not-so-distant future, some sort of integration.
So, to end with some personal comments: I guess the first personal comment is also my way of disclosure. This was a panel of interested parties, and I'm certainly an interested party. I'm currently Chief Scientific Officer for Viral Vaccine Research and Development at Pfizer, and before that I had a similar role at Novartis. So when I first heard about the Kawaoka and Fouchier experiments, I had very mixed emotions. Maybe the first one was sort of an expletive, but it wasn't because I wasn't interested. I really wanted to know what those experiments showed, because there was information there that could be useful for knowing, you know, what is the threat of H5 actually spreading, and what are the signatures? I did want to know.
However, there was a thought that this could really blow back on the whole community in a very big way. And, indeed, there was a decision, you know, as a company man in some ways: why would one wade into this field if you don't actually ever intend to do gain of function research of concern? And look at what those initial experiments were: they made some point mutations, and then they passaged the virus in ferrets a bunch of times. (Inaudible) point mutations, and we do it to attenuate, not to increase virulence. We passage virus and infect ferrets all the time. The distinction between what was done there and what is routinely done to make vaccines lies in the intended outcome: in one case you're trying to increase pathogenicity and in the other you're trying to attenuate. Yet the fundamental manipulations are not fundamentally different, and the instruments of policy to affect what is done are blunt instruments. And I saw a very high risk in the attempt. There is no dispute with the notion that we need to do something to mitigate the big risk, not only because of the risk itself but because of what everyone thinks of that risk; there's certainly a perception of risk, and certainly on the part of a pharmaceutical company. I think most people assume that academic institutions are benign. We don't quite have that benefit, I'm afraid.
And so the notion is that not only can you not do things that are evil, but you don't want to do things that are perceived as evil. And you do have to make sure that the mechanisms are there so that people in the public also trust that you're not doing these sorts of things. It is important that these things be instituted; I think just the level of interest that we see is clear evidence that something has to be done. However, it's also very important that we not throw the baby out with the bath water, and that the large bulk of work done both experimentally and in terms of medical countermeasure production is not unnecessarily inhibited, so that we actually come out with a net positive from this whole effort in terms of public safety. So I'll stop there.
Fineberg: Thank you very much for your reflections as well as your summary of what was discussed at the session. Ron, a recurrent theme has been the question of governance and the international status of managing this problem. Your session faced that front and center. I'm eager to hear your reflections.
Atlas: It did. And what the session told me is that there is an international dimension to this entire debate about gain of function research, risks, and benefits that cannot be ignored. We heard a number of possible ways of approaching that on an international scale. One was to go to a non-regulatory framework: to take ethics or other sorts of systems that have gained traction and are accepted across the biomedical field, build on those, and essentially build a culture of responsibility within the community that would assure the public that everyone was taking the appropriate, whatever that means, mitigation steps. Another was to simply accept that nations carrying out gain of function research would develop their own regulatory frameworks, for example by taking the recommendations from the NSABB, raising them to the level of U.S. government policy, and stopping there. Another was to allow the efforts that are ongoing in areas like the United States and the EU to begin to cross-fertilize each other and bring together groups that would then allow for voluntary harmonization, without going to an international organization like the WHO. And then at the higher level, you go to the UN or the WHO and you try, perhaps the impossible task we heard, to come up with a global regulatory scheme.
Gabriel Leung suggested the middle ground would be the most effective: that nations begin talking, whether through national academies or through scientific or other organizations. I'd suggest maybe, for some nations, the OECD or bilateral agreements, where conversations go on with the aim of harmonization.
Now I'll give a personal thought to the NSABB, and that is that we've heard repeatedly that the gain of function experiments of concern represent a very narrow sphere out of all the experiments. And we've been given, possibly, a three-axis sort of way of judging that. Personally, I'm still having trouble knowing what to place within that narrow sphere, and whether what I would place in there would be the same as what you
would place in there, particularly when we look across different nationalities and go to the international scale. The more the NSABB can do to really define that narrow sphere, the better.
Now, the issue with that, as we've heard, is that it has to be adaptive. As I started listening to yesterday's discussions, it seemed like we were going to get there, that we were going to be able to agree upon a narrow scope. As I kept hearing more and more discussion, particularly today, about what is going on in China and at the industrial scale, it suddenly got broader, and the boundaries were definitely not clear to me. So I think that's a challenge to the NSABB.
A second personal observation: I'm glad the Academy has not been asked, in this case, to achieve a consensus, and I feel sorry for the NSABB, which does have to develop that consensus.
So I think the other point that came up in our session that's very important, which Keiji raised, is: if we're going to go to the international scale, what's the real issue that would bring people together? What is it that we're trying to do or not do in this discussion? Now, I gave one answer to Keiji privately, and that was trying to prevent a global pandemic. And that may mean that the research is absolutely necessary, because it will provide the vaccines, the surveillance, whatever, to prevent the pandemic. Or take the opposite side: that the research itself is a risk, that something gets out and causes a pandemic. And therein, in fact, is the dilemma that we are facing with this entire debate. And I'm not sure that we're ever going to come up with an answer that's satisfactory to everybody. So, again, I'm glad you're charged with the consensus building and I'm not.
Fineberg: Thank you. Well, we would be satisfied if you formed a consensus in your own mind about this topic.
Atlas: You'll have to talk to my other pocket.
Fineberg: We'll have to talk to your other pocket. This has been a marvelous summary, truly, and some very, very cogent remarks. Let me invite now any representatives from NIH, NSABB, if you would care to, to offer either a reflection, a request, or an observation; we would welcome it at this point. And I'll just go in order as you come to the microphone. Joseph, it looks like they're pointing to you up front.
Kanabrocki: I first want to thank Harvey and the Academies for holding the session. I've found it extremely valuable. I want to thank the panelists, all the speakers, and all the participants. I think all the feedback we've heard has been immensely helpful.
We obviously want to provide some of our feedback on your feedback and also highlight some additional things that we are still tossing around in our heads. So, just some observations to begin with. First, I'm heartened in thinking that we don't have any major missteps, I think, at least to this point in time. I mean, we're not totally off on the wrong track. And clearly I'm pleased to hear that.
Another thing I'm pleased about, and it's been implicit here but not really explicitly mentioned, is that we've moved away from a list-based system to a phenotypic system, which I think has been called for for a number of years by the NSABB. So I think those are real positive aspects of where we're headed.
Now I'll speak personally. I heard a number of things that I personally would like to see added to our report. And, again, these are things that we have not yet deliberated on, so I'm speaking just as an individual member. These include mechanisms that call for incident reporting;
obviously the risk-benefit assessment highlighted the paucity of data around incidents, and I think that kind of data would be immensely valuable to us. Clearly there is a need for harmonization, both on the national level and on the international level; that's a theme that we've heard time and again, and I think something that we probably should call for more explicitly in our final report.
A concept that came up today a number of times that I personally like is this concept of a code of conduct, a code of ethics, if you will, for scientists engaged in this type of research, and I certainly would love to see us make a recommendation in that realm.
And then finally, again, how to evolve the international component of the problem, I think, is something that we will be struggling with. I like the way Jerry phrased it earlier this morning, though. I'm thinking about this myself as a step in a process, and clearly I think more work needs to be done after we finish our task. So, again, I think international engagement is something that we yet need to do much more completely. But I would not want to wait for that before we issue recommendations. Again, personal feeling.
So I'm going to bring us back full circle. In the first session, you know, Sam gave basically the summary of our approach and our findings to date and recommendations. And one of the first questions really asked was about this idea of the three phenotypes that we've mentioned in our report. And I want to come back to this because, again, I think in the original language we gave as an example resistance to countermeasures. And it was really intended to be only an example. Unfortunately it seems to be the one piece about phenotype three that folks have really grabbed onto. I just want to again remind folks that for me, and I think for most of the committee, that third phenotype is what makes this an issue of pandemic potential. I think traits one and two really go to the animal-pathogen interface, and trait three is really where you talk about the human public health, the societal aspects of a pandemic. And I think that third trait remains critical, in my own opinion. Whether we can revise the language in a way that is more palatable remains to be seen, but conceptually I think that third phenotype is really about pandemic potential.
So, again, we would love more feedback on that question, and I know that my colleagues have other things they'd like additional feedback on. So I'll stop here. Maybe we should just put our issues on the table and then... is that
Fineberg: Exactly. Fine? Okay. That would be very good. Thank you.
Wolf: So I'd like to get back to oversight design for a second, to bounce two ideas off of you and the audience. It's crystallized again by that figure that, sadly, you didn't have ahead of time, but perhaps you recall the flow chart that we're playing with in order to visually communicate the oversight we're envisioning. So I have a question or a thought at the institutional level and at the federal level.
Somebody usefully used the term on-ramp. What's the on-ramp for this? Who says, hey, we've got one of these? And what's envisioned so far is that the PI and local oversight authorities, which would include the IBC presumably, are an on-ramp. They may say, we see it. And, of course, the IBC would then particularly be involved, if there's agreement that it's there, in ongoing management prospectively if the protocol is funded.
If they fail to spot it, it may be spotted at the federal level. The idea is that it would go through scientific merit review, and then, if it survived that, there would again be another look. One of my concerns about this at the institutional level, particularly hearing some of the thoughtful comments that we've gotten, is that we're in danger of recapitulating in IBCs the history of IRBs: a 40-year history of being very slow to really design, much less effectuate, a kind of learning oversight system where we were systematically, and probably still don't adequately, looking across IRBs, harvesting, in a way that's
shareable with other IRBs, what the conclusions were and on what basis, and spotting unjustified variation. We all know that situating IRBs, and IBCs, locally is in part meant to be responsive to local conditions and local ethics, but you can reach the point of unjustified variation.
So I am thinking about how NSABB moves the ball forward so that we don't recapitulate that history, so that we learn from the IRB experience, albeit with a different set of issues than human subjects, etc., but it's still local oversight. And I think there's a huge opportunity, because now we do have an empirical literature. The IOM has gone at this over and over again. And now we have the Notice of Proposed Rulemaking and that whole effort at HHS and other agencies. So local review is one set of concerns I have: really making that state of the art and making that an effective on-ramp.
The other concern I have is at the federal level, that loop out to the right. So now you've triggered. You've gone up the on-ramp, you've got (inaudible). Who looks at it at the federal level, post merit review? And I just want to circle back and see whether we can end up harvesting your wisdom on this idea that perhaps it should not just be an intramural governmental process, but it should also be a Federal Advisory Committee Act process. Sure, there are going to be circumstances where you can't review a protocol in full view because of the biosecurity concerns. But there are exceptions in the Federal Advisory Committee Act, as you know, for proprietary information, etc., and in any case, NSABB is being asked for its recommendation on how the federal government should design this review.
But when you look at the seven principles that the current draft urges the federal authority, whoever it is, to consider, those seven principles include the ethical acceptability of the protocol: whether, in fact, the prospective benefits justify the risks. These are considerations that I, just as an individual, find it hard to imagine the public trusting if there were no public visibility, no public input, nothing, you know, required on the record.
So those are my two pressure points where further input would really be wonderful: the local review and the federal review design.
Fineberg: Thank you very, very much. An additional comment?
(Inaudible), also an NSABB member. I actually had some of the same questions about the value of a federal advisory committee akin to the RAC, which was obviously put in place for the recombinant DNA issues and I think over the years has really been adaptive and functioned in many different capacities, and certainly has achieved the public engagement that we are seeking.
But I have another question I wanted to bring up, and that's the issue that, as our recommendations currently stand, they only really capture things that go through federal funding mechanisms. And we all know that there is more and more university money, start-up money, for example, so projects can get started before funding is even sought; if you have a young investigator, they typically have very big (inaudible). You also have more and more industrial interest in university research. And, of course, there is the whole private sector that will not be captured by any of these recommendations. It seems that, if we're talking about potential pandemic risk, maybe we are not doing our job if we don't at least deal with that part of the equation. So I'm just wondering what your insight and kind of view on that might be.
Fineberg: Thank you. Well, we've had a number of very concrete topics. And, excuse me, I didn't mean to not recognize you on this side.
I'm Jim LeDuc, also an NSABB member. I'm the Director of the Galveston National Laboratory at the University of Texas Medical Branch. This is a biocontainment lab that has containment levels through and including (inaudible). So I'm especially interested in the issue of risk mitigation. Going back
to Ron's comment, at the end of the day our goal is to prevent a pandemic. We've talked a lot about risk mitigation, and clearly many of the threats that we're identifying can be mitigated through biocontainment (inaudible). So my question to the panel is, how do we create a foundation upon which a policy can be built that clearly articulates the requirements for biosafety and biosecurity, and, importantly, a culture of responsibility that spans the scope from the individual scientist clear through the institutional leadership? And, realizing that each experiment is going to be different, the requirements for each experiment are going to vary. We don't want to be prescriptive, requiring very specific guidelines. On the other hand, I think we need to pay close attention to the conditions under which this work is done, both nationally and, clearly, internationally.
Fineberg: Thank you very much. A number of the comments, especially from our last two speakers, had to do with issues around institutional and research community responsibility over and above pure federal funding, and over and above, frankly, even just narrowly-defined gain of function research, if I may suggest. And so one of the topics that has been raised is the question of how to generate, how to engender, how to support this broader and deeper culture of awareness and safety that will help mitigate risk. So that is one theme.
I think another that we heard is the need for help around thinking critically, and in a circumscribed fashion, about what it is that qualifies for attention in the first place as potentially deserving of some special consideration. That's separate from, but importantly goes along with, what ought to be the criteria for deciding what then actually happens. And separate again, but related, is the set of questions that have been raised around who decides: how is the participation organized, and at what levels, both within the government processes and around them?
So all of these questions are still with us, not fully resolved but certainly commented upon. But now we have an opportunity for those who are here, and also any who are on the stage, to offer their personal reflections on these additional specific points. Would anyone like to start, please?
Atlas: Can I start? I want to react to the question of the IBC versus the national level, and suggest that we learned an awful lot with the RAC in the early days. We basically sent cases to the full national board until we were able to demonstrate to the local IBCs what was of greater concern and what wasn't of greater concern. We refined the principles there. And in some ways, you know, we need to do the same thing here. And it becomes a learning process, an iterative process, where there's appropriate consultation from the national back to the local, and eventually the local learns how to handle some things and you lessen the burden on the national board.
The other thing was, we had exactly the same question with the RAC, where it involved federal funding and did not impose regulatory authority over the private sector, or the non-federal government. There was fear that that meant they would escape from the framework and do bad things. In reality, the first cases that came to the RAC came from industry. They wanted the national approval. They didn't want to go around the system; they wanted to become part of the system, even though they weren't mandated to do so. I have no reason to think the same thing would not happen here. Whether for liability reasons or otherwise, individual companies, or those not forced to join the system, would ask to join the system. Now that's just a personal view, but it does have a historical base. And the same questions were rampant at that point.
Dormitzer: I can certainly speak to having been in companies when there are national standards, accepted standards: even when not required to follow them, in general companies want to do so. And in fact, the most distressing situations are those where you lack clarity about what the expectations are. And that's why these notions of advisory boards, and groups you can turn to to ascertain what those standards are, even if compliance is voluntary, are useful, and I think you would find widespread desire to meet those standards.
Fineberg: Thank you. One comment that I would add: we've had a lot of discussion about the importance of the scientific community building and reinforcing a culture of safety. And that's, of course, central and critical going forward. We also had, I thought, a very informative and stimulating session on the importance and practicality of public engagement with the various types of publics. And it does seem to me that, in the thinking of NSABB going forward, a model that incorporates, at an appropriate level, including a FACA-like model, relevant public participation would be very salutary in building the kind of larger trust, and frankly in reinforcing the community of safety, both within and around the scientific community, on which success ultimately will depend. So, putting together a variety of the things we've heard, I believe that a FACA would be a valuable addition to the thinking going forward in connection with the NSABB. Other comments or reflections?
Fischhoff: This last discussion has reminded me of a process I was involved with at FDA over the last few years, in which FDA's Center for Drug Evaluation and Research developed a benefit-risk framework. You can find it online. It was developed jointly with its staff and, as rewritten, it's the guidance to the advisory committees for summarizing information. It leads to a table that looks much like the table in Cara Morgan's third option. It was designed to help people tell their story in a way that one could see what the logic was, one could compare across decisions, one could find the decisions that were, as somebody said, anomalous; that gave industry a clearer target for the kinds of things that FDA was approving. And they were comprehensible, while at the same time able to protect what proprietary information there was. So something that had those properties, of enabling people to get their hands around, get their minds around, the basis for the decision, could be a contributor to this process.
Fineberg: I might make another observation, if I may, on this first and fundamental question of phenotypic inclusiveness or exclusiveness. One of the things that I heard repeatedly in the course of the discussion is the importance of circumscribing the domain of concern so that neither the scientific community, nor the regulatory authority, nor, frankly, the interested publics, are needlessly burdened with a wide variety of questions that truly do not rise to a level of concern. At the same time, we had a lot of discussion as to whether the current formulation, where the requirement is that a given experiment affect all of the elements, is a sufficient degree of circumscription.
Now, I think the real challenge for the NSABB is to reflect in its description the actual intent of the NSABB, and to do it in a way that is clear and understandable over time. So, for example, and I will speak again for myself, I think we can be overly fixed on the models that depend on our familiarity with influenza, and on influenza as the case, when in fact the policy that will be promulgated ultimately needs to be capable of dealing with gain of function, and increasingly, I would say, with experiments that intend to develop entirely novel organisms with capacities and capabilities that are not currently even expressed in existing microorganisms. And if you think that broadly, defining a phenotypic space that involves virulence, and involves transmissibility, and involves resistance to treatment, if that's how you wish to characterize it, you could imaginably place any organism at a point in a space that has those three attributes defined. And if you think of it that way, there's an aspect of this space where you would not want research to go at all. There's an aspect of that space where you wouldn't want to require further review. And then there's an aspect of that space, depending on your starting point and the direction of the experiment, and this is where vaccine development comes in so importantly, the direction of the experiment to make it worse or to make it better, which would dictate that it may, then, be a topic that requires consideration as gain of function.
And I hope that it will be possible for NSABB to mull this question further, to think about ways to characterize and describe exactly what it believes should determine a consideration for gain of function. Perhaps to be explicit about excluding vaccine development research, which is so fundamental to
protection, and actually the contrary of the worry. And to be able to apply the principles more generally as new ideas with different organisms naturally arise in the creative minds of science.
Joseph, the floor is yours.
Joe Kanabrocki, NSABB: So we agree. In short, I just again want to clarify that, you know, I think Ken Berns said it best yesterday: we're not really worried about what goes in, it's really what comes out. And we are not saying that the experiments of concern are only those that contribute the three phenotypes. What we're saying is that the experiments of concern are those that result in an organism that displays those three phenotypes, and there's a difference. Because you could begin with two of the three, contribute the third, and you're in the game. So, again, I think we agree in concept; it's just a matter of choosing our words carefully, in my opinion.
Fineberg: That's very helpful. And I think it speaks to a need for even further clarity in this description. Let me invite any further reflections that any of the panelists would care to offer. Is there any point that anyone has felt a burning need to contribute, that you would like to add to the record before we conclude, so as to benefit the NSABB as much as we can? We'll start here and then we'll come over. Please.
Wendy Hall, Homeland Security: My question was more in terms of precedent. One, how important is it that you all, as an academic body, have full awareness of the gain of function experiments being proposed throughout a bunch of different labs in the United States? Because right now the people with visibility are the government funders, but I'm not sure there's clarity across the academic community at any one point in time about who's planning to do what or who's doing what. And my second question is, with the select agent rules, where they were implemented across a range of 300 labs, we saw a range of performance. Some labs were the gold standard, and they came to us and complained about other labs that were a little bit lax; they were worried about liability: if some lab does it badly, it's going to have a bad impact on the whole community. So there was a range, as one would expect for any system with 300 components.
In gain of function research, is there any precedent where, if the academic community themselves had full visibility, peer-to-peer, institution-to-institution, there could be corrective elements from the institutional bodies with each other to redirect or course-correct, sort of, the lower, more sloppy practice in gain of function research, such that the government doesn't have to come down with really tough, restrictive language across the board in case one or two, you know, make a sloppy enough error that it makes the mainstream press and everybody else?
Fineberg: Thank you, Wendy, for both the comment and, really, a suggestion. I think it emphasizes, it reinforces, the importance of the scientific community itself coming together in a coherent way on this and related issues of safety and security. And from a personal point of view, I don't think the government alone can accomplish this. And I don't think the community, acting without the guidance of shared standards, will be able to do it nearly as well. So I think they will be mutually reinforcing.
Monica Schoch-Spana, UPMC Center for Health Security: I know we're sort of in a regulatory, let's-prevent-bad-things-from-happening mindset. But I wonder if we could pick up a point that Marc Lipsitch made around the capacity for innovation. Are there things that could incentivize, not simply in your proposal, where you say, well, I can't do it any other way, and you have to provide the rationale for potentially conducting a GOF experiment? Is there, and perhaps this is out there already, special research financing to offer an incentive to find a different way? I think we have to think about fostering innovation, not just holding back the bad stuff, but incentivizing new ways of thinking about this.
If these systems are put in place, and there is data gathered about the kinds of experiments that get the thumbs down, that data at least could be synthesized to say, okay, you know, this line of research really needs to be replaced with something safer. Let's incentivize innovation from a financing point of view, you know, funds via NIH, etc. So thank you.
Fineberg: Thank you, Monica. Please.
Nicolas Evans, University of Pennsylvania: I have two what I consider errata and then three quick points.
The first is regarding the Declaration of Helsinki. The point is well taken that the Declaration of Helsinki was a great initial work in establishing norms in human subjects research and biomedical ethics. But it is worth noting that the FDA's removal of the Declaration of Helsinki from its regulations is an indicator that, as a model for governing the life sciences, we should be especially careful about the way that we seek this type of international collaboration. Because if the U.S. is going to set up, or attempt to initiate, other arrangements for governing gain of function research, only to pull out of them because it doesn't want dynamic reference in its own legislation, that's a huge problem.
The second, and this is based on the comments of my colleague Marc Lipsitch: the critique that IRBs and biomedical ethics chill biomedical research has definitely been made again and again and again and again in the literature. Two recent works on that include Carl Schneider's The Censor's Hand and Robert Klitzman's The Ethics Police?
Now my comments, quickly. We seem to be living in a bit of an acronym soup at the moment regarding GOF and GOFROC and GOFOC and everything else, and we might need to distinguish them. I think this is really important conceptually: there is gain of function, which everybody believes, as a technique, is valuable for a lot of reasons. Then there is gain of function research resulting in the creation of novel pandemic pathogens that is beneficial. And there is the same kind of research that is uniquely beneficial. Gryphon Scientific's tables on the unique benefits of these kinds of research said that three of 13 coronavirus studies that they looked at were uniquely beneficial, and nine of 24 influenza gain of function studies were uniquely beneficial. So we should really be careful to specify exactly what we're talking about here.
I've noted that the consultation of healthcare workers, if we're talking about engagement, is almost entirely absent from this discussion. I'm not simply talking about MDs, but EMTs, RNs, NPs: the people who bear the disproportionate burden of risk in the event of an infectious disease outbreak. So when we're talking about what benefits we might want from scientific research, we should really, I think, be consulting with the people who are going to use those interventions in their day-to-day work of actually saving lives.
Finally, on innovation: it occurs to me that if we're going to, over the last kind of half decade, spend $820 million on synthetic biology funding, we might also want to spend a small amount of money innovating in our applied biosafety. I mean, this should be an area for innovation, not just in terms of social science and human factors, but the actual physical technology; so reinvest in materials science to better our PPE. This is not only great research, but it would also be a robust solution for gain of function, for laboratory-acquired infections, and for disease pandemics.
And finally, in terms of the social sciences, I absolutely agree that social sciences should be incentivized, as it was put, to participate in issues around biology, but would note that part of that comes down to cash, and the funding of social sciences research, which has historically been dismissed despite its value. Thanks.
Fineberg: Thank you very much.
Ogilvie: I bring two questions from the web that, due to time, couldn't be asked yesterday, but I want them to be on the record. The first is from Gregory Kamoula: Do current oversight frameworks provide adequate treatment of novel pathogens that were never seen before and are not on the pathogen lists mentioned in the draft recommendations? For example, if a new potentially pandemic pathogen like MERS is identified, would gain of function research with this pathogen fall under the proposed regulation?
And the second question is from John Cavauny, directed at Dr. Casagrande: It is prompted by publications suggesting that gain of function research has the characteristics of so-called, quote/unquote, normal accidents, those in which a technology combines highly negative outcomes, like a nuclear plant meltdown, with unquantified and perhaps unquantifiable scenarios falling outside even the most complete probabilistic risk analysis. Gryphon's work suggests that such scenarios may be relevant, with pandemic risk the extreme negative outcome. Does Dr. Casagrande have an opinion on this characterization of gain of function research? Is it correct in some respects, as it may be for some contemporary technologies? Or might this characterization be fueling clashing perceptions of gain of function risk?
Fineberg: Thank you for both of those. I think the first question relates very directly to how NSABB will come to define what is meant by the research that falls under gain of function. And the second certainly bears directly on the issues about the definition and application of the risk and benefit analyses in these assessments.
Let's go to the next comment, please.
Casagrande, Gryphon Scientific: I'm actually commenting not in my capacity as PI of the Risk and Benefit Assessment. I wanted to push back a little bit on several comments I heard in the last couple of days about what we can learn from the successes of the Biological Weapons Convention. I'd like to turn that argument on its head, because insofar as there have been successes in biological nonproliferation, such as the limited use of biological agents in warfare, I think those could be, if they have to be ascribed to a political instrument, maybe the Geneva Protocol is the better exemplar there, which banned first use of bacteriological warfare. Whereas the way the Biological Weapons Convention was additive and unique was that it prohibited activities short of warfare, such as stockpiling and research, that were offensive. And I think in that respect we've seen the Biological Weapons Convention as a spectacular failure, in that several states parties have violated it, including one of its depositaries. So I think what we can learn is not from its successes but from its failures, such as the lack of a verification and inspection regime that has teeth, and the lack of an enforcement capability that's relevant internationally. And I think both of those can be used to draw lessons for our discussion today.
Fineberg: Thank you very much. Please.
Michael Selgelid, Center for Human Bioethics, Monash University: Just a small but, I think, important point. Some risks of gain of function research we might be able to do a reasonable job of predicting and/or assessing, and some benefits, potential benefits, of gain of function research we might be able to do a fairly reasonable job of predicting and/or assessing. The important point was made today, and we were reminded of it in the summary at the beginning of this session, that there are some benefits that are very hard to predict and/or assess, and therefore hard to value. So that's a good point, but let's not let that bias us with the assumption that, oh, any given gain of function research project, we should assume
that it has some benefits that we're not taking into account because they're hard to value and predict and so on. Because the same is true of risks: just as we're talking about epistemic benefits that are hard to predict and assess, there can be epistemic risks that are hard to predict and assess, so it works both ways. Thank you.
Fineberg: Thank you very much.
Marc Lipsitch: There's been a lot of talk, especially in the coffee breaks, about what are the things that we all agree on: are there any experiments that we all agree would never be acceptable, and are there any experiments that we all agree we should never impede, or any activities we should never impede? I don't know whether Ebola plus flu is in the first category of never acceptable; it was one of the examples that people gave. Certainly I think experiments, developments, other work to enhance vaccine yield in PR8 influenza A virus or some other safe background is the sort of thing we would never want to impede, and it should be a green line if the other is a red line.
So whatever the regulatory framework or oversight framework that's developed, I think it would be incredibly helpful to have at least those two kinds of cases spelled out with some examples, in order to build our intuition for the next time something comes up that isn't envisioned yet.
And then maybe to give some more contestable case studies. So: some things that should never enter the on-ramp, some things that should never be allowed to progress, if there are such things, and some things in the middle that should be reviewed, and what should be the questions asked. Should there be a standard question asked: could this be done in the PR8 background or some other safe background to get similar knowledge? The same way that we do, for example, on animal protocols, where we have to explain why we're doing it in animals and what is unique about that. Maybe there are other questions. But a set of case studies that say this is okay and should not be impeded, this is not allowable, and then some guidance for how to handle the middle ones, which we may not agree on.
Fineberg: Thank you.
Joe Kanabrocki, NSABB: I just wanted to respond to that. We had a number of calls where we tried
to think of experiments that absolutely should not be done. And, quite honestly, every single example that
came up was an experiment that lacked scientific merit. And so I would just leave it there and let you
think about that, but I really struggle to think of experiments that have scientific merit that shouldn't be
done. Again, that's my personal view. And those that shouldn't be done are technically those that do not
have scientific merit. So, again, I'm just sharing with you some of our experience and our thinking about
that issue.
Fineberg: Thank you very much. Please.
Jerry Epstein, DHS: Just on the same topic, I think it's useful to go back to the HHS framework that Larry
Kerr described yesterday and the tests that a proposed project would have to satisfy before it was deemed
acceptable for funding. One was that the pathogen to be constructed had some ability (and I'm not
quoting this exactly) to arise by some natural process, so that you
were creating something which you had a reasonable expectation nature might beat you to. And I think
something that might fall on the not-acceptable side, at least under that rule if it were carried over
here, is something where we might learn some science by building some construct and asking does it
work and how does it behave. But if it's not something nature might ever do on its own, you can't argue
you're defending yourself from nature, because nature wasn't going to get there. So that might be an
example of something on the other side of the line, at least from the precedent of the existing HHS
framework.
Fineberg: Thank you very much for that addition.
Any further comments? If not, I would just like to say that we first want to express our deep appreciation
to the several dozen participants who spoke and shared their views with us. I want to thank also the
dozens of members of the audience who were here, and the several on the web who participated and
contributed their ideas. Truly, our admiration and appreciation for the work of the NSABB and its
colleagues at the NIH in trying to come to grips with these complicated questions is perhaps matched
only by our wonderment and admiration, Volker, for the work that you and your colleagues have done in
Europe to actually complete a phase of assessment and a coherent set of policies and strategies that now
apply in the European context. This points to the very great challenge of harmonization within a country,
such as the U.S. now, but also across the globe. Clearly a policy about gain of function that applies only
to one country is not a policy that will work for the safety of the world. And that is something that we
have to be very, very mindful of.
I think it's also evident from all of the discussion that whatever is the next iteration of conclusions and
recommendations that emerges from the NSABB will really be one step in a process that is likely to
continue, require continued refinement, require the engagement of the scientific community, and require
creative ways for the interested and affected public to be involved and creatively engaged in the
process of decision making going forward.
I want to express personally my sincere and very deep appreciation to the members of our planning
committee, some of whom are here on stage. Also Don Burke, who's here. A few who are not otherwise
recognized include Ruth Berkelman and Sir John Skehel, who did a wonderful job in helping to think
through the organization of topics that would be most revealing.
And I want especially to call out and express all of our appreciation to the exceptional staff of the Board
on Life Sciences at the National Academies, including Joe Husbands, Fran Sharples, Aanika Senn, Jenna
Ogilvie, Brendan McGovern, Robin Winter, Andrea Hodgson, and Audrey Thevenon, who did such a
marvelous job in helping us prepare and enabling us to have such a fruitful and enjoyable experience over
these two days.
Thank you all very much for participating. We are adjourned.