Blog


The morality of nature

posted Jan 20, 2017, 3:15 PM by Kevin Esvelt   [ updated Jan 20, 2017, 3:15 PM ]

I've received many communications in response to the profile by Michael Specter in the New Yorker; more than I can answer. They've all been quite positive except for one topic: my comments on the morality of the natural world. One reader wrote:

'One topic that has been really bothering me though is your views on morality and nature. One should not anthropomorphize nature or natural selection and look at it in terms of good or bad, nature just IS, nature is balanced, that's it. As soon as humans intervene is when it becomes unbalanced, its up to you to decide how unbalanced we are willing to make it.

I guess my point is gene editing can be an incredible tool for good to humans but someone with your intelligence and power shouldn't be saying things like “the ridiculous notion that natural and good are the same thing.” or “Natural selection is heinously immoral.” Saying nature is immoral is just as wrong as equating nature to godliness.'

This is an issue deserving of in-depth exploration, so I'm sharing a slightly edited version of my reply:

I agree that nature is value-neutral. I normally use the term amoral rather than immoral, but that holds only as long as we lack the power to influence it. Once we do, it becomes a test of our own moral character.

If failing to save a drowning child when we could have done so makes us responsible for that child's death, then acquiring the ability to mitigate animal suffering renders us morally responsible if we choose not to use it.

This is not a comfortable moral position. Right now, we spend ~$2.5 billion per year in the US on animal shelters and trap-neuter-release programs for stray and feral dogs and cats. Three weeks ago I rescued a limping stray cat. Had she not received care and antibiotics, she would likely have lost the ability to hunt and slowly starved due to her badly infected wound. How is that stray cat different from an ocelot cub stricken with a screwworm infection, which is unimaginably more painful? We didn't deliberately create either, yet we can do something about the stray cat but cannot aid the ocelot. Of course, that will not be true for much longer: we eradicated the screwworm from North America with the sterile insect technique, and with a gene drive we could do the same for South America. Should we?

We must face this question openly. It doesn't obviate the other challenges, such as weighing humans against animals or deciding how much more of a say South Americans should have in the case of the screwworm. But now that we are developing the power to intervene, it becomes a moral issue where previously it was not.

That's why we need to discuss these challenges now, and why all research on these questions should be done in the open. It's not my decision to make, or yours, but everyone's. I need you as a check on my own intuitions, because I am not confident in my own ability to make wise decisions on this scale. We must face it together.

Why Open Science?

posted Dec 8, 2016, 11:39 AM by Kevin Esvelt   [ updated Dec 26, 2016, 8:12 AM ]

Most researchers keep their plans to themselves for a very good reason: the system punishes us if we don't. Share a brilliant idea, and another laboratory can throw more money and hands at the problem, publish first, and claim all of the credit.

Secretive research is not just wasteful and inefficient from the perspective of society. It's actively miserable for practicing scientists. Because no one shares the results of failed experiments, different labs fall into the same pit traps over and over again. We never hear about new successes or techniques until the project is complete, so it can take years to find out that the potential collaborator with a key piece of the puzzle was always just down the street. Since we only have the vaguest idea what others are working on, we always worry that someone else might be working on the same project. If they get there first, we've been 'scooped' and typically get nothing. Even if we publish first, we've still wasted years on a project that someone else would have completed soon afterwards. Paranoid secrecy may be fun in a game; less so when your life's work is at stake.

The harsh truth is that no one would rationally design the current scientific enterprise. It evolved in the time before modern communication technologies, and persists due to cultural traditions and a collective action problem. It's as though we're still sending out competing teams of explorers who insist on returning with maps and reports only every few years... even though all of them now have satellite uplinks.

That would be bad enough, but there's more. Our civilization, being unsustainable, quite literally depends on new technological advances. Those advances are getting more powerful with time. But who decides whether an advance will be positive or negative? The small team of ultra-specialist explorers, who can't reliably anticipate the consequences on their own.

There's evidence that risk analyses involving local citizens produce more comprehensive results than those produced by teams of expert risk analysts, even as judged by those same experts. But in an age of increasing dependence on ever more powerful technologies, we still practice science much as we did a century ago – that is, mostly blind to the ongoing efforts of others and to any attempt to assess consequences. It's mind-bogglingly shortsighted, and a testament to the power of status quo bias and collective action problems.

Of course, this doesn't mean we should reform all of science immediately, even if we could. We have a limited ability to predict the consequences of altering complex systems, including the scientific enterprise. That means we should start small, carefully measure outcomes, and only scale up if warranted.

The field of gene drive research is an ideal test case. Conducting gene drive experiments behind closed doors risks affecting the shared environment and the lives of others without their knowledge or consent. It denies other scientists and interested citizens the opportunity to voice suggestions or concerns that could improve safety and accelerate progress. And it greatly reduces our ability to build support for beneficial applications. In short, there are compelling moral and practical reasons for ensuring that gene drives and other ecological technologies are developed in the open light of day.

Opening science from the earliest stages will enhance safety and reliability by encouraging collective scrutiny of safeguards and research plans. It will accelerate scientific progress by enabling coordination among researchers. And it can improve public confidence and the likelihood of balanced assessment by actively inviting and addressing concerns early in development. For the field of gene drive research, open and responsive science is a moral and practical necessity.

That's why we're sharing all of our research proposals that involve gene drive here. Special thanks to all the other members of Sculpting Evolution for their courage - they're the ones placing their careers on the line in order to do what is right.

We hope that this step will help encourage journals, funders, holders of intellectual property, and policymakers to change scientific incentives in favor of open gene drive research.

Boldly Illuminating a Better Future

posted Nov 16, 2016, 6:35 PM by Kevin Esvelt   [ updated Dec 30, 2016, 1:31 PM ]

We've submitted an application to the Roddenberry Prize competition on behalf of the Responsive Science Project. Our platform isn't live just yet, but our goal - an open and community-responsive scientific enterprise - is perfectly aligned with the shared vision set out by Gene Roddenberry and his successors.

More details on our reasoning, as well as copies of our grant proposals involving gene drive, are available on our proposals page.

Belated update

posted Oct 25, 2016, 9:42 AM by Kevin Esvelt   [ updated Nov 18, 2016, 12:20 PM ]

As is hopefully evident from the site changes and main news page, we've now launched the Sculpting Evolution Group at the MIT Media Lab. We're at the Lab because it's one of the very few places where we can do our work in a thoughtful and ethical manner. That is, a place that will actively support us in evaluating the consequences of our technologies and in pioneering new ways to make science better serve society rather than focusing strictly on scientific publications.

In the tumult of setting up the lab and attempting to guide gene drive technology and new developments, posting has suffered more than a bit. Recent thoughts have been published at Project Syndicate and Nature, although most of my output has been in the form of talks - 38 of them so far this year. I'm hoping to make most of the presentations available. As always, everything is CC-BY.

Discussing gain-of-function research

posted Sep 8, 2015, 1:06 PM by Kevin Esvelt   [ updated Oct 25, 2016, 9:28 AM ]

In the course of promoting safeguards and transparency in the field of gene drives, I've often encountered comparisons to so-called “gain-of-function” (GOF) research on influenza and other potentially pandemic pathogens. Like gene drives, GOF research has the potential to affect people outside the laboratory if something goes wrong. I haven't previously touched on the subject, but Marc Lipsitch and Thomas Inglesby invited me to join them in highlighting the need to engage those most likely to be affected – in this case the clinical community, which has heretofore been largely silent on the issue – and in holding up CRISPR and gene drives as an example to follow. Our work was published today in the Annals of Internal Medicine.

As an evolutionary biologist, my skepticism of GOF influenza research should not be surprising. These studies typically seek to evolve more virulent strains of influenza in the laboratory in order to provide advance warning of which mutations we should look out for in the wild. They are controversial because they deliberately create the very agents we fear, and by this point we know that mistakes are inevitable when relying on barrier confinement. Perhaps not this year, or this decade, but they will occur.

Yet there is also a cost of doing nothing. The question is whether the risk is worth it. Is the knowledge of which mutations we should fear sufficiently valuable?

First, it is not at all clear that advance warning that a virus is near to becoming a pandemic would allow us to do something about it. Second, we have abundant evidence that evolution is stochastic. That is, if we run an evolution experiment in the laboratory many times under conditions as identical as possible, the iterations will frequently come up with different solutions. This is true whether one is evolving individual proteins or entire genomes. The resulting behavior of the evolved variants may be functionally the same – especially for whole-genome evolution studies where many genes are evolving in concert – but the underlying mutations producing that outcome commonly differ. This certainly came up in our PACE study examining evolutionary pathways and stochasticity. It's not true for every system, but does seem to be the case for many. Precisely why this would be different for influenza is not clear.
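
To make the stochasticity point concrete, here is a minimal toy sketch (my own illustration with invented numbers, not a model of influenza biology): if many mutations each confer a similar benefit, replicate adaptive walks that reach the same fitness will typically do so via different mutation sets.

    import random

    # Toy adaptive-walk sketch (illustrative only; all parameters are invented).
    # Assume 25 hypothetical mutations, each conferring a similar fitness gain.
    # Each replicate fixes beneficial mutations one at a time until it reaches
    # a target fitness; which mutations arise first is a matter of chance.
    random.seed(1)
    MUTATIONS = {f"m{i:02d}": random.uniform(0.08, 0.12) for i in range(25)}
    TARGET_FITNESS = 1.5

    for rep in range(1, 6):
        fitness, fixed = 1.0, []
        available = list(MUTATIONS)
        while fitness < TARGET_FITNESS:
            m = random.choice(available)   # chance decides which mutation fixes next
            available.remove(m)
            fixed.append(m)
            fitness += MUTATIONS[m]
        print(f"replicate {rep}: fitness {fitness:.2f} via {sorted(fixed)}")

Every run ends at roughly the same fitness, but the mutation lists differ - the same pattern that replicate evolution experiments commonly show.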

I am consequently not at all concerned by the current moratorium on funding gain-of-function studies. In my view, the odds of discovering which mutations to fear – and being able to use that knowledge to prevent a pandemic – are very slim, and consequently not worth the attendant risk of accidentally causing the same pandemic we seek to prevent.

Not only might a human-created accidental pandemic cause many unnecessary deaths and much suffering; we must also consider the effects of such an event on public confidence in scientific research more generally. In a post-pandemic world, it would be very easy to ask why the government should fund life scientists at all if their judgment was so poor as to knowingly risk creating the pandemic that killed a million people. This goes double if superior safeguards were available but were not employed. In short, reprise all the public-trust reasons why we should take stringent precautions to avoid an accidental gene drive release, then add the far more harmful consequences of a pandemic human virus.

Yet there are other reasons to perform gain-of-function studies. The Kawaoka laboratory just published research identifying superior influenza viral backbones for rapid vaccine production. This is important because current influenza vaccines require us to generate large quantities of virus, which is subsequently inactivated and injected to provoke an immune response to the viral proteins.

Historically, we generated those quantities by leveraging chicken eggs – which we have in abundance – as growth chambers. Yet egg-based vaccine production is slow, vulnerable to disruptions in the egg supply, can select for mutations specific to replication in eggs, and typically produces viruses that are contaminated with egg proteins capable of causing allergic reactions in sensitive people. There is consequently a movement to switch influenza vaccine production to cultured mammalian cells.

The problem is that vaccine virus yields are quite low in cultured cells. For some vaccine candidates, yields can even be low in eggs. The researchers consequently sought to evolve a backbone capable of efficient virus replication in cultured cells. They succeeded.

On a technical level, this was truly impressive work. Using library generation and screening, introducing known mutations from the literature, and combining these hits to maximize viral replication required a tremendous amount of dedicated effort. The resulting backbones increased production in both mammalian cells and eggs, in some cases by up to 200-fold. As is usual for directed evolution experiments, the mechanism remains unclear.

This work was performed before the US government imposed a moratorium on gain-of-function research until an independent assessment could evaluate its safety. But the new backbones did increase the severity of disease in mouse models, if only marginally, so the experiments producing them qualify as GOF. Predictably, the study is being held up as an example of why the moratorium was a mistake, or at least applied too broadly - though the director of NIAID maintains that an exemption would have been granted.

Why an exemption would have been granted in this specific case is not clear, but it may have had to do with the fact that the researchers used a form of intrinsic confinement. Specifically, they altered a sequence in the hemagglutinin gene so that the resulting viruses were of low pathogenicity and consequently exempt from Select Agent status. They did not specify how many known mutations would be required to regain pathogenicity (and obviously there may be unknown mutational routes as well). It's certainly better than no intrinsic confinement.

But it may not be as good as alternative confinement approaches that could make all influenza virus research safer, and is certainly not as good as stacked confinement strategies. What might those be? While I'm not a practicing virologist, several do come to mind. For example, researchers could insert a sequence into the viral backbone that would be targeted by RNAi in human cells, then produce the virus in non-human cells. Since the goal is to generate inactivated virus, you don't need it to replicate in human cells to generate the immune response. Alternatively, produce it in human cells with the relevant RNA knocked out in order to permit viral replication specifically in those cells. There's this technology called CRISPR that makes it easy to do such things now...

If the RNAi trick doesn't work for some reason - this is biology, after all - there are other options. One might instead encode one of the viral proteins in the genome of the producing cell and delete it from the viruses being tested, thereby ensuring that the virus can only replicate in cells containing the missing component. Sure, it's possible that the viral replication cycle might require protein production to be delicately timed and responsive to the viral copy number. If so, this could be corrected using a trans-splicing intein to assemble one of the viral proteins once each half is produced. Yes, the devil is certainly in the details, but the bottom line is that there are several potential ways of incorporating intrinsic confinement.

While these aren't necessarily trivial endeavours, the authors of this work are clearly highly skilled; I am confident they could succeed. Once available, the safeguards could then be used for other types of experiments with these viruses, dramatically reducing the chance of accidentally causing a pandemic. The resulting risk reduction might be sufficient to make these kinds of studies worth pursuing. I personally would be much happier if someone tried.

One last point. According to Science News, “the researchers contend that their findings may help bring future pandemics under control faster.” This is indeed a critical challenge – I would prioritize the ability to quickly make and scale up a vaccine to any given virus extremely highly – but the approach still assumes that we will be able to make a suitable vaccine strain in the first place. That itself can take time – at least, if one relies on the current paradigm. This is an excellent reason to move beyond the current paradigm.

Instead of simply injecting killed virus into patients and waiting for the appropriate immune response, we might use vectored immunoprophylaxis (non-integrating gene therapy) to instruct cells to produce the appropriate antibodies. This approach has the greatest potential for a quick turnaround, as it only requires identifying functional antibodies from the first infected individuals, screening them for binding to the virus (or other agent) to isolate the most protective ones, and then making lots of DNA encoding those antibodies for delivery into patients. In principle, you could start cranking out “vaccine” inside of a week and scale up much faster than current vaccine production allows.

The overall lesson is that we should look further ahead towards more broadly applicable technologies. Investing in better safeguards can reduce risks across the board, potentially enabling highly beneficial experiments that would otherwise be too risky. And developing new technologies can offer benefits – in this case, truly rapid response to new and dangerous viruses – that current approaches could never hope to match. Combined, these two advances could enable swifter (albeit still highly flawed) detection of potentially dangerous viruses and let us do something about them.


Tags: engagement, transparency, responsibility, innovation, gain of function

Safeguards for laboratory gene drive research

posted Jul 31, 2015, 7:15 PM by Kevin Esvelt   [ updated Oct 25, 2016, 9:29 AM ]

I recently convened a diverse group of scientists from relevant fields, including those who published a drive-based genome editing method at UCSD in March, to agree on recommended safeguards for gene drive research in the laboratory. After months of discussion, the resulting manuscript was published in Science today.

It's clear that our recommendations were the product of a committee, or at least the academic equivalent. Opinions differed, and we had to compromise. It's fair to say that were it up to me alone, the recommendations would have erred a bit more on the side of caution. Perhaps that's because I've had more time to think about the implications and the morality of the situation, perhaps it's a generational thing, or perhaps I simply take the issue more seriously because in many ways I am responsible: by detailing RNA-guided gene drives, I chose to open the box.

In any case, the space constraints imposed by Science were a major obstacle to our effort, which was intended to prevent accidents while the formal National Academy of Sciences panel (which I was privileged to address yesterday) develops definitive guidelines. I've consequently written a more detailed analysis of the issues in order to assist that effort. I will revise this analysis periodically as the field advances in theory and experiment.

Sculpting Futures

posted Jul 28, 2015, 4:15 AM by Kevin Esvelt   [ updated Oct 25, 2016, 9:29 AM ]


I was fortunate to attend the MIT Media Lab's Knotty Objects celebration, which brought scientists and engineers together with artists and designers to spark creativity and new approaches. For this particular event, much of the focus was on design. As someone who knew very little about the field and its culture, I found it intriguing to hear the ways in which designers conceptualize problem-solving and imagination. The final event was a debate pitting critical design, which emphasizes imagination and possibility, against practical design, which aims to solve real-world problems. It was clearly a bit scripted to emphasize the contrast, but the sound and fury helped define the deeper relationship.

Our task is to build a better world. We must imagine possible futures, create technologies and conceptual interventions designed to realize those futures, model the likely outcomes, and use those projections to decide upon courses of action based on people's values.

With respect to the Creativity Compass often cited by Joi Ito, imagination (most emphasized by art and science) must discover alternative states, while practicality (highlighted by engineering and design) must realize them. Imagination is enlightening, yet powerless alone; practicality is efficient, but shortsighted. Achieving the best of possible worlds requires both.


Yet this clarity came with an unsettling and unwelcome counterpoint from a less-admired field. While actions must be informed by projected outcomes, decisions will ultimately be determined by people's values. How we balance opposed value systems to determine which possible world to realize can pose a coordination problem dwarfing the technical requirements. As our technologies increase in power and complexity, the area in which we most urgently need breakthroughs may be politics and governance.

Tags: creativity, compass, imagination, practicality, insight, politics, governance, coordination, technology

Gene drives for kids

posted Jun 24, 2015, 8:03 AM by Kevin Esvelt   [ updated Oct 25, 2016, 9:29 AM ]

An explanation courtesy of the Up Goer Five Text Editor and inspired by xkcd:

It would be nice if we could change how some animals and other living things act towards us and towards other living things. We could make sure people don't get sick when they get bitten, make our food grow better by keeping animals from eating it, and help save some kinds of animals that are in trouble.

We know how to change one animal at a time, but it's hard to change all the animals of that same kind. That's because most of our changes make it harder for the animals to have children, so over time fewer and fewer animals will have the change.

We thought of a new way to change an animal so that all of its children will have that same change. This could let us change a small number of animals and let them go outside, where their children will have the change, and their children will have the change, and so on. After this has happened enough times, we will have changed all the animals of that kind.

Now we need to make sure it works and start getting people together to decide whether, when, and how to change different kinds of living things in order to make the world a better place.
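
For readers who want a more concrete sense of how fast such a change could spread, here is a minimal toy simulation (an illustrative sketch with invented numbers, assuming an idealized population with random mating, non-overlapping generations, no fitness cost, and a perfectly efficient drive):

    import random

    # Toy gene drive spread sketch (illustrative only; parameters are invented).
    # With a perfectly efficient drive, every offspring with at least one
    # carrier parent inherits the change.
    POP_SIZE = 10_000
    INITIAL_CARRIERS = 100   # a small number of altered animals released
    GENERATIONS = 12

    random.seed(0)
    carrier_freq = INITIAL_CARRIERS / POP_SIZE
    for gen in range(1, GENERATIONS + 1):
        # Each offspring draws two parents at random from the previous generation.
        new_carriers = sum(
            1 for _ in range(POP_SIZE)
            if random.random() < carrier_freq or random.random() < carrier_freq
        )
        carrier_freq = new_carriers / POP_SIZE
        print(f"generation {gen:2d}: {carrier_freq:6.1%} carry the change")

Under these idealized assumptions, a change released in one animal out of a hundred reaches essentially the whole population within about a dozen generations; real populations and real drives would behave less neatly.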

Tags: education, transparency, responsibility, innovation

Empirical grounding, legitimacy, and trust

posted May 18, 2015, 6:17 PM by Kevin Esvelt   [ updated Oct 25, 2016, 9:30 AM ]

The question of whether it is ethical to insist that decision-making be empirically grounded remains difficult. To reprise: is it fair for us to set the terms of the debate, making sure that “no” is a legitimate answer and inviting suggestions for which experiments should be done to better analyze the risks and improve our models, while specifically attempting to ground the decision-making process in empirical reality?

This approach may be just like democracy - it's a terrible system, except for all the others. That said, there's a clear catch if we're going to insist on empirical grounding: we must recognize the legitimacy of other types of concerns. The concern itself may not be empirically grounded, but the person expressing it is quite real. This person may be factually mistaken. Their own model of reality may be woefully inaccurate. They may even have a value system that would make many recoil. Yet none of that matters, because everyone should have a voice when it comes to shaping the future.

Ensuring that discussions are broadly inclusive is not only the right thing to do from a moral perspective; it also has practical benefits. First, ignoring concerns is a recipe for mistrust, resentment, and determined opposition, even in those who might otherwise be neutral or supportive. If future advances are to be built upon a foundation of trust – and most will be – actively inclusive listening is a cornerstone.

Second, every so often an unexpected voice will catch something the rest of us missed. The more complex, ambitious, and failure-prone the technological undertaking, the more important it is to solicit a wide variety of opinions. For the most important technologies, the ones that will likely affect nearly everyone, we should make a point of inviting every critic, Luddite, and fanatic to speak, and do our best to distill their objections down to something empirically actionable. There's simply no better way to simultaneously identify failure modes and update our priors to reflect the beliefs of others.

All of this represents a dramatic departure from conventional approaches to technology development and public engagement. Historically, troubleshooting was limited to small panels of the technically qualified, while popular opinion and the public interest were seldom consulted when it came to innovation. But we now live in a more connected and less trusting world. The cost of communication has decreased dramatically, making it far easier to collaborate and solicit feedback from people with diverse viewpoints and expertise. At the same time, trust is more valuable – and more easily lost – than ever before. And the single best way to promote trust is to be radically transparent. For some powerful technologies, the relevant stakeholders may include all the citizens of the globe. As has been demonstrated again and again, the world does not readily change without popular support. In this age of greater connectedness and skepticism, that support cannot be taken for granted - for trust must be earned.

Tags: transparency, legitimacy, responsibility, innovation, philosophy, connectedness, trust

Responsibility & legitimate debate

posted Feb 3, 2015, 11:21 AM by Kevin Esvelt   [ updated Oct 25, 2016, 9:30 AM ]

Responsible innovation involves ensuring that we have transparent, broadly inclusive, well-informed discussions on whether to move forward with a given technology, and if so, how. For that to happen, we need science to build accurate models of the world to tell us what the likely outcomes are, what might go wrong, and whether our safeguards will be effective. Based on those models, we can collectively choose the future we want to live in – which may not involve our technology.

But it's hard to even have that discussion if we hit trigger words that force people to choose a position based on their sense of who they are. Is it wrong to deliberately avoid those words? Is that “selling” our technology, or is it simply trying to have an open discussion on the empirical merits of the proposal? As a scientist, I find it hard to engage with the concerns of people who refuse to vaccinate their children out of fear that vaccines might cause autism, simply because their model of the world isn't supported by science. Is it fair for us to set the terms of the debate, making sure that “no” is a legitimate answer and inviting suggestions for what experiments should be done to better analyze the risks and improve our models, yet deliberately attempt to ground the decision-making process in empirical reality?


Tags: responsibility, innovation, collective technologies, philosophy, empiricism, engagement
