Covid-19 pandemic

posted Mar 8, 2020, 9:04 AM by Kevin Esvelt   [ updated Mar 8, 2020, 9:04 AM ]

Message sent to the MIT Media Lab list on preparing for the nascent pandemic on March 4, 2020

This message is intended to provide a general assessment of the global situation and help organize our collective response.

There is a great deal of information floating around on COVID-19, only some of it accurate. Many people claim that it's just a bad flu and the world is overreacting. Others believe it’s more dangerous than the 1918 influenza pandemic that caused the deaths of 50 million people. The truth is almost certainly somewhere in between.

Before I summarize what is known: this is a societal challenge, and the Lab can help.

This is not a foreign issue. It has nothing to do with race or ethnicity or cultural background.

The virus is in over 76 countries, and is now in the Boston area. Let's not panic. Let’s take action.

Agenda item 1: Discuss how we can help keep our community members safe. Item 2: Identify all relevant Lab projects and find ways to leverage them swiftly to help others. Bring your creativity and a list of ongoing efforts or expertise that you think might be useful.


What we know

  • The novel coronavirus SARS-CoV-2, which causes COVID-19 disease, is spreading worldwide

  • Everyone appears susceptible: the virus is new to humans, having jumped from an animal

  • The typical victim infects 2-2.5 other individuals depending on patterns of contact

  • Symptoms include fever (87.9%), dry cough (67.7%), fatigue (38.1%), sputum in the lungs (33.4%), shortness of breath (18.6%), sore throat (13.9%), headache (13.6%), bone pain (14.8%), chills (11.4%), nausea or vomiting (5.0%), nasal congestion (4.8%), diarrhea (3.7%)

  • 8 of 10 victims experience mild symptoms; 13.8% moderate to severe; 6% need hospital care

  • People without symptoms shed virus and can infect others

  • Risk level increases very sharply with age and ill health; children are almost never seriously affected

  • ~6.7% of victims ages 15-49 need hospital care, but only 2% of those need an ICU

  • Some regions have shown that good hospital care can keep the overall death rate below 2%

  • The local mortality rate can rise to 5.8% if the healthcare system is overwhelmed

  • Imposing cordons sanitaires and isolation measures can halt transmission for the duration

  • A vaccine will not be widely available for 12-18 months; antivirals may arrive earlier

  • Those recovered are likely to be resistant for at least a year, but probably not more than 3 years

Therefore, our goal as a society is to keep too many people from getting infected at once. We need to ensure that there will be enough hospital beds for everyone who falls ill if the pandemic continues to spread, which appears likely but not certain. 

That starts here at the Lab.

Community health

We can protect our friends and colleagues by minimizing local transmission. For example:

  • Never shake hands! An elbow bump or a polite bow will suffice

  • Frequently wash your hands for 20 seconds with soap and water

  • Use hand sanitizer often, including to clean your phone

  • Clean commonly used objects with wipes and keep doors propped open as much as possible

  • Get a flu shot now if you haven't had one, as flu patients often also need hospital beds

  • If you feel sick in any way for any reason, stay home!

  • If you need healthcare, ask to be tested for infection, and alert MIT Medical if the result is positive

If you can’t find hand sanitizer in stores, you can make it yourself:

Hand sanitizer recipe:

  • 2/3 cup 99% isopropanol

  • 1/3 cup aloe vera gel

  • Mix thoroughly and pour into a small bottle to carry
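As a quick sanity check on the recipe above (my own back-of-the-envelope sketch, not part of the original message), we can verify that a 2:1 mix of 99% isopropanol and aloe gel stays above the commonly cited ~60% alcohol threshold for an effective sanitizer:

```python
def final_alcohol_fraction(alcohol_parts, gel_parts, alcohol_purity=0.99):
    """Approximate alcohol fraction of the final mixture, by volume."""
    total = alcohol_parts + gel_parts
    return alcohol_parts * alcohol_purity / total

# 2/3 cup isopropanol + 1/3 cup aloe gel = a 2:1 ratio
concentration = final_alcohol_fraction(2, 1)
print(f"Final concentration: {concentration:.0%}")  # roughly 66%
```

At roughly 66% alcohol, the mixture clears the ~60% bar with some margin; diluting the recipe further (say, 1:1) would not.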

What about cleaning agents?

  • Alcohol inactivates >99.9% of coronaviruses in <1 sec

  • Bleach is less effective, requiring longer to take effect, but most other disinfectants work well

  • Using wipes and cleaners is important, as coronaviruses can linger on surfaces for 2-9 days 

Risk assessment

Statistically, we’re unlikely to have been exposed as yet, but we should take precautions immediately.

There are almost certainly unidentified carriers in the Boston metro area, and the number is likely to grow rapidly, as occurred in Seattle. Keep in mind that the U.S. has not been testing widely, meaning that we should not expect to detect community transmission until there are hospital cases with no obvious source.

If we conservatively assume each victim infects two others 7 days after being infected, and there are 20 carriers in the Boston area this week, then there will be 20,000 newly infected individuals in ten weeks. That assumes transmission does not abate in the interim, which may occur; coronavirus is somewhat seasonal, but much less so than flu. We should hope for the best, but prepare for the worst.
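The doubling arithmetic above can be sketched in a few lines (my own illustration, not the author's model; it ignores seasonality, interventions, and the eventual depletion of susceptible people):

```python
def projected_cases(initial_carriers, weeks, weekly_multiplier=2):
    """Naive exponential projection: each carrier infects two others
    about 7 days after being infected, so cases double weekly."""
    return initial_carriers * weekly_multiplier ** weeks

print(projected_cases(20, 10))  # 20480, i.e. roughly 20,000 cases
```

Twenty carriers doubling weekly yields 20 × 2^10 ≈ 20,000 after ten weeks, matching the figure in the text.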

One can hope that local officials will take action before we reach 20,000 active new cases in the area. Those preventive actions will keep us safer, but are also likely to impact our research. We should plan for that.


It’s fairly likely that public transit will be closed, and large gatherings discouraged. A (non-peer reviewed) assessment determined that those were probably the most effective of the measures imposed in China.

MIT may advise nonessential personnel to avoid campus this summer, or even sooner. University campuses in China have been closed for some time, with classes conducted remotely. Just in case, it may be prudent to focus on those aspects of your research that require your physical presence, and start thinking about how you could work remotely. Again, hope for the best, but prepare for the worst.

It's possible that schools will close. They are now closed in China, South Korea, Japan, and Italy, but have reopened in Taiwan. Whether Taiwan’s rate of transmission differs may determine whether they are also closed here. Since high schoolers are known to be vulnerable, it’s more likely that high schools will be closed than elementary schools, but we don’t know as yet. This may impact the parents among us. It’s also one of the areas where the Lab’s expertise could make a considerable difference.

Finally, anyone who is infected will need to self-isolate in their home for at least 14 days. Water shouldn’t be a problem, and food deliveries are likely to continue, but it’s not a bad idea to buy a little extra when grocery shopping to prepare, and it's worth ensuring that you have a reasonable supply of any essential medications. As always, hope for the best, but prepare for the worst, at least within reason.



These events may have a major impact on society and our daily lives. Our actions and especially our research can help mitigate those impacts, if not here then elsewhere in the world. We have an opportunity to make a difference. 



I am not a medical doctor or epidemiologist, but a generalist whose research overlaps with a number of relevant fields, including pandemic disease. I do not speak for the Lab’s Executive Committee or for MIT leadership. All assessments and opinions are my own or linked from a more knowledgeable source.

Let me know if you have any questions or concerns. 



Kevin M. Esvelt, Ph.D.

Leader, Sculpting Evolution Group

Assistant Professor, MIT Media Lab

Massachusetts Institute of Technology

Moral responsibilities for animal suffering

posted May 28, 2018, 6:43 PM by Kevin Esvelt   [ updated May 28, 2018, 6:52 PM ]

There is a clear moral case for Africans to make use of gene drive to help eradicate malaria, but what of the non-humans who suffer? In Leaps Magazine, I ask difficult questions concerning the power of new technologies and our responsibility for the consequences of choosing not to use them - in this case, for animal suffering.

In tribute

posted Jan 9, 2018, 7:15 AM by Kevin Esvelt   [ updated Jan 9, 2018, 7:15 AM ]

The currents of time carry us onwards, until the day comes when we can swim no longer. We are not wholly gone, for the ripples of our actions continue to touch the lives of others, a pattern whose breadth and beauty defines us.

Few patterns are as beautiful as that of Ben Barres.

I met Ben in person only once, in 2014. After that singular meeting we continued to correspond, often enough that I considered him a mentor. Even so, I cannot say I knew him well.

One need not know a person well to see their inner light.

Much has been written of Ben's passion, courage, and brilliance. There are several lovely epitaphs to his life and work – ethical, social, and scientific – woven from tales shared by those whose lives he touched.

One of his last efforts was a public call to ensure that postdocs can take their projects with them when they start their own labs, free of competition from their former advisor. In Ben's memory, this is a pledge I make gladly.

We will not see his like again.

Aotearoa: Mistakes and Amends

posted Jan 9, 2018, 7:13 AM by Kevin Esvelt   [ updated Jan 9, 2018, 7:13 AM ]

Publicly acknowledging a failure and trying to make amends can strengthen moral resolve. On November 16, I failed my partners.

Researchers should hold themselves morally accountable for all of the consequences of their work. That can require publicly acknowledging when we have done wrong and striving to make amends.

In trying to remedy a past error - my suggestion that self-propagating CRISPR gene drive might be used for invasive species control - I singularly failed to uphold the ideals of Responsive Science.

It was inexcusably wrong of me to publish a manuscript relating to gene drive and Aotearoa New Zealand without thinking to invite my new Māori partners to offer suggestions during the revision stage. My mistake could jeopardize a primary goal of our partnership: to ensure that daisy drive can only be developed and considered for use in Aotearoa under a co-governance model with Māori, the kaitiaki (caretakers) of the sacred taonga species.

As Melanie Mark-Shadbolt said of me (quoting with permission):

"... his naivety of the political situation Māori are in, and the publication of this paper without talking to the other partner (Māori) more than likely will have consequences for that partner (Māori) that the author (Esvelt) did not consider."

Her words are searingly true. This is precisely why Māori co-governance of daisy drive development is necessary if the method is ever to be used in Aotearoa: I know as little of the local ecosystem as I do of the local politics, meaning that I cannot possibly evaluate the likely consequences. It is why the matauranga Māori, the wisdom and way of knowing of the Māori people accumulated over generations in Aotearoa, will be essential to help ensure that any action taken is in the best interests of the taonga species.

She goes on:

"Pessimistically, it is now possible that Māori may never get co-governance in the discussion and/or development of gene-drive technologies in Aotearoa. Optimistically, however, this paper could be a wake-up call that the science sector in Aotearoa New Zealand needs to work in partnership with Māori and preserve New Zealand’s leadership on the world stage."

I hope that the latter will be true, but hope is not enough. Since I did not think to invite suggestions on a matter relevant to our partnership, I must hold myself responsible for any consequences of my thoughtlessness. That requires publicly acknowledging my failure, attempting to make amends, and striving to better uphold my duties in future.

Acknowledging mistakes is painful, but honesty requires it: better that we publicly admit wrongdoing and strengthen our resolve than continue to do wrong.

In February 2018, I will be meeting with Te Tira Whakamātaki (the Māori biosecurity network) and the Te Herenga Māori (the Māori National Network). At those meetings, I will bring up this story of my failure because it illustrates the necessity of inviting local wisdom and governance. It is my hope that hearing diverse Māori and New Zealander perspectives will not only aid my understanding of whether and how my inventions may benefit Aotearoa, but also shed light on how I might best fulfill my broader responsibilities to the world beyond.

The morality of nature

posted Jan 20, 2017, 3:15 PM by Kevin Esvelt   [ updated Jan 9, 2018, 7:12 AM ]

I've received many communications in response to the profile by Michael Specter in the New Yorker; more than I can answer. They've all been quite positive except for one topic: my comments on the morality of the natural world. One reader wrote:

'One topic that has been really bothering me though is your views on morality and nature. One should not anthropomorphize nature or natural selection and look at it in terms of good or bad, nature just IS, nature is balanced, that's it. As soon as humans intervene is when it becomes unbalanced, its up to you to decide how unbalanced we are willing to make it.

I guess my point is gene editing can be an incredible tool for good to humans but someone with your intelligence and power shouldn't be saying things like “the ridiculous notion that natural and good are the same thing.” or “Natural selection is heinously immoral.” Saying nature is immoral is just as wrong as equating nature to godliness.'

This is an issue deserving of in-depth exploration, so I'm sharing a slightly edited version of my reply:

I agree that nature is value neutral. I normally use the term amoral, not immoral, but only as long as we lack the power to influence it. Once we do, it becomes a test of our own moral character.

If failing to save a drowning child when we could have done so makes us responsible for that child's death, then acquiring the ability to mitigate animal suffering renders us morally responsible if we choose not to.

This is not a comfortable moral position. Right now, we spend ~$2.5 billion per year in the US on animal shelters and trap-neuter-release programs for stray and feral dogs and cats. Three weeks ago I rescued a limping stray cat. Had she not received care and antibiotics, she would likely have lost the ability to hunt and slowly starved due to her badly infected wound. How is that stray cat different from an ocelot cub stricken with a screwworm infection, which is unimaginably more painful? We didn't deliberately create either, yet we can do something about the stray cat and nothing for the ocelot. Of course, that will not be true for much longer: we eradicated the screwworm from North America using the sterile insect technique, and with gene drive could do the same for South America. Should we?

We must face this question openly. It doesn't obviate the other challenges, such as weighing humans against animals or how much more of a say South Americans should have in the case of screwworm. But since we are now developing the power to intervene, it becomes a moral issue where previously it was not.

That's why we need to discuss these challenges now, and why all research on these questions should be done in the open. It's not my decision to make, or yours, but everyone's. I need you as a check on my own intuitions, because I am not confident in my own ability to make wise decisions on this scale. We must face it together.

Why Open Science?

posted Dec 8, 2016, 11:39 AM by Kevin Esvelt   [ updated Dec 26, 2016, 8:12 AM ]

Most researchers keep their plans to themselves for a very good reason: the system punishes us if we don't. Share a brilliant idea, and another laboratory can throw more money and hands at the problem, publish first, and claim all of the credit.

Secretive research is not just wasteful and inefficient from the perspective of society. It's actively miserable for practicing scientists. Because no one shares the results of failed experiments, different labs fall into the same pit traps over and over again. We never hear about new successes or techniques until the project is complete, so it can take years to find out that the potential collaborator with a key piece of the puzzle was always just down the street. Since we only have the vaguest idea what others are working on, we always worry that someone else might be working on the same project. If they get there first, we've been 'scooped' and typically get nothing. Even if we publish first, we've still wasted years on a project that someone else would have completed soon afterwards. Paranoid secrecy may be fun in a game; less so when your life's work is at stake.

The harsh truth is that no one would rationally design the current scientific enterprise. It evolved in the time before modern communication technologies, and persists due to cultural traditions and a collective action problem. It's as though we're still sending out competing teams of explorers who still insist on returning with maps and reports every few years... even though all of them now have satellite uplinks.

That would be bad enough, but there's more. Our civilization, being unsustainable, quite literally depends on new technological advances. Those advances are getting more powerful with time. But who decides whether an advance will be positive or negative? The small team of ultra-specialist explorers, who can't reliably anticipate the consequences on their own.

There's evidence that risk analyses involving local citizens produce more comprehensive results than teams of expert risk analysts, even as judged by those same experts. But in an age of increasing dependence on successively more powerful technologies, we still practice science much as we did a century ago – that is, mostly blind to the ongoing efforts of others and any attempts to assess consequences. It's mind-bogglingly shortsighted, and a testament to the power of the status quo bias and collective action problems.

Of course, this doesn't mean we should reform all of science immediately, even if we could. We have a limited ability to predict the consequences of altering complex systems, including the scientific enterprise. That means we should start small, carefully measure outcomes, and only scale up if warranted.

The field of gene drive research is an ideal test case. Conducting gene drive experiments behind closed doors risks affecting the shared environment and the lives of others without their knowledge or consent. It denies other scientists and interested citizens the opportunity to voice suggestions or concerns that could improve safety and accelerate progress. And it greatly reduces our ability to build support for beneficial applications. In short, there are compelling moral and practical reasons for ensuring that gene drives and other ecological technologies are developed in the open light of day.

Opening science from the earliest stages will enhance safety and reliability by encouraging collective scrutiny of safeguards and research plans. It will accelerate scientific progress by enabling coordination among researchers. And it can improve public confidence and the likelihood of balanced assessment by actively inviting and addressing concerns early in development. For the field of gene drive research, open and responsive science is a moral and practical necessity.

That's why we're sharing all of our research proposals that involve gene drive here. Special thanks to all the other members of Sculpting Evolution for their courage - they're the ones placing their careers on the line in order to do what is right.

We hope that this step will help encourage journals, funders, holders of intellectual property, and policymakers to change scientific incentives in favor of open gene drive research.

Boldly Illuminating a Better Future

posted Nov 16, 2016, 6:35 PM by Kevin Esvelt   [ updated Dec 30, 2016, 1:31 PM ]

We've submitted an application to the Roddenberry Prize competition on behalf of the Responsive Science Project.  Our platform isn't live just yet, but our goal - an open and community-responsive scientific enterprise - is perfectly in alignment with the shared vision set out by Gene Roddenberry and his successors.

More details on our reasoning, as well as copies of our grant proposals involving gene drive, are available on our proposals page.

Belated update

posted Oct 25, 2016, 9:42 AM by Kevin Esvelt   [ updated Nov 18, 2016, 12:20 PM ]

As is hopefully evident from the site changes and main news page, we've now launched the Sculpting Evolution Group at the MIT Media Lab. We're at the Lab because it's one of the very few places where we can pursue our work in a thoughtful and ethical manner. That is, a place that will actively support us in evaluating the consequences of our technologies and pioneering new ways to make science better serve society rather than strictly focusing on scientific publications.

In the tumult of setting up the lab and attempting to guide gene drive technology and new developments, posting has suffered more than a bit. Recent thoughts have been published at Project Syndicate and Nature, although most of my output has been in the form of talks - 38 of them so far this year. I'm hoping to make most of the presentations available. As always, everything is CC-BY.

Discussing gain-of-function research

posted Sep 8, 2015, 1:06 PM by Kevin Esvelt   [ updated Oct 25, 2016, 9:28 AM ]

In the course of promoting safeguards and transparency in the field of gene drives, many have drawn a comparison to so-called “gain-of-function” (GOF) research on influenza and other potentially pandemic pathogens. Like gene drives, GOF has the potential to affect people outside the laboratory if something goes wrong. I haven't previously touched on the subject, but Marc Lipsitch and Thomas Inglesby invited me to join them in highlighting the need to engage those most likely to be affected – in this case the clinical community, which has heretofore been largely silent on the issue – and holding up CRISPR and gene drives as an example to follow. Our work was published today in the Annals of Internal Medicine.

As an evolutionary biologist, my skepticism of GOF influenza research should not be surprising. These studies typically seek to evolve more virulent strains of influenza in the laboratory in order to provide advance warning of which mutations we should look out for in the wild. They are controversial because they deliberately create the very agents we fear, and by this point we know that mistakes are inevitable when relying on barrier confinement. Perhaps not this year, or this decade, but they will occur.

Yet there is also a cost of doing nothing. The question is whether the risk is worth it. Is the knowledge of which mutations we should fear sufficiently valuable?

First, it is not at all clear that advance warning that a virus is near to becoming a pandemic would allow us to do something about it. Second, we have abundant evidence that evolution is stochastic. That is, if we run an evolution experiment in the laboratory many times under conditions as identical as possible, the iterations will frequently come up with different solutions. This is true whether one is evolving individual proteins or entire genomes. The resulting behavior of the evolved variants may be functionally the same – especially for whole-genome evolution studies where many genes are evolving in concert – but the underlying mutations producing that outcome commonly differ. This certainly came up in our PACE study examining evolutionary pathways and stochasticity. It's not true for every system, but does seem to be the case for many. Precisely why this would be different for influenza is not clear.

I am consequently not at all concerned by the current moratorium on funding gain-of-function studies. In my view, the odds of discovering which mutations to fear – and being able to use that knowledge to prevent a pandemic – are very slim, and consequently not worth the attendant risk of accidentally causing the same pandemic we seek to prevent.

Not only might a human-created accidental pandemic lead to many unnecessary deaths and suffering; we must also consider the effects of such an event on public confidence in scientific research more generally. In a post-pandemic world, it would be very easy to question why the government should fund life scientists at all if their judgment was so poor as to knowingly risk creating the pandemic that killed a million people. This goes double when superior safeguards were available but were not employed. In short, reprise all the public-trust reasons why we should take stringent precautions to avoid an accidental gene drive release, then add the far more harmful consequences of a pandemic human virus.

Yet there are other reasons to perform gain-of-function studies. The Kawaoka laboratory just published research identifying superior influenza viral backbones for rapid vaccine production. This is important because current influenza vaccines require us to generate large quantities of virus, which is subsequently inactivated and injected to provoke an immune response to the viral proteins.

Historically, we generated those quantities by leveraging chicken eggs – which we have in abundance – as growth chambers. Yet egg-based vaccine production is slow, vulnerable to disruptions in the egg supply, can select for mutations specific to replication in eggs, and typically produces viruses that are contaminated with egg proteins capable of causing allergic reactions in sensitive people. There is consequently a movement to switch influenza vaccine production to cultured mammalian cells.

The problem is that vaccine virus yields are quite low in cultured cells. For some vaccine candidates, yields can even be low in eggs. The researchers consequently sought to evolve a backbone capable of efficient virus replication in cultured cells. They succeeded.

On a technical level, this was truly impressive work. Using library generation and screening, generating known mutations from the literature, and combining these hits to maximize viral replication required a tremendous amount of dedicated effort. The resulting backbones increased production in both mammalian cells and eggs, in some cases by up to 200-fold. As is usual for directed evolution experiments, the mechanism remains unclear.

This work was performed before the US government imposed a moratorium on gain-of-function research until an independent assessment could evaluate safety. But the new backbones did increase the severity of disease in mouse models, if only marginally, so the experiments producing them qualify as GOF. Predictably, the study is being held up as an example of why the moratorium was a mistake or at least applied too broadly - though the director of NIAID maintains that an exemption would have been granted.

Why specifically an exemption would have been granted in this case is not clear, but it may have had to do with the fact that the researchers used a form of intrinsic confinement. Specifically, they altered a sequence in the hemagglutinin gene so that the resulting viruses were of low pathogenicity and consequently exempt from Select Agent status. They did not specify how many known mutations would be required to regain pathogenicity (obviously there may be unknown mutational routes as well). It's certainly better than no intrinsic confinement.

But it may not be as good as alternative confinement approaches that could make all influenza virus research safer, and is certainly not as good as stacked confinement strategies. What might those be? While I'm not a practicing virologist, several do come to mind. For example, researchers could insert a sequence into the viral backbone that would be targeted by RNAi in human cells, then produce the virus in non-human cells. Since the goal is to generate inactivated virus, you don't need it to replicate in human cells to generate the immune response. Alternatively, produce it in human cells with the relevant RNA knocked out in order to permit viral replication specifically in those cells. There's this technology called CRISPR that makes it easy to do such things now...

If the RNAi trick doesn't work for some reason - this is biology, after all - there are other options. One might instead encode one of the viral proteins in the genome of the producing cell and delete it from the viruses being tested, thereby ensuring that the virus can only replicate in cells containing the missing component. Sure, it's possible that the viral replication cycle might require protein production to be delicately timed and responsive to the viral copy number. If so, this could be corrected using a trans-splicing intein to assemble one of the viral proteins once each half is produced. Yes, the devil is certainly in the details, but the bottom line is that there are several potential ways of incorporating intrinsic confinement.

While these aren't necessarily trivial endeavours, the authors of this work are clearly highly skilled; I am confident they could succeed. Once available, the safeguards could then be used for other types of experiments with these viruses, dramatically reducing the chance of accidentally causing a pandemic. The resulting risk reduction might be sufficient to make these kinds of studies worth pursuing. I personally would be much happier if someone tried.

One last point. According to Science News, “the researchers contend that their findings may help bring future pandemics under control faster.” This is indeed a critical challenge – I would prioritize the ability to quickly make and scale up a vaccine to any given virus extremely highly – but the approach still assumes that we will be able to make a suitable vaccine strain in the first place. That itself can take time – at least, if one relies on the current paradigm. This is an excellent reason to move beyond the current paradigm.

Instead of simply injecting killed virus into patients and waiting for the appropriate immune response, we might use vectored immunoprophylaxis (non-integrating gene therapy) to instruct cells to produce the appropriate antibodies. This approach has the greatest potential for a quick turnaround, as it only requires identifying functional antibodies from the first infected individuals, screening them for binding to the virus (or other agent) to isolate the most protective ones, then making lots of DNA encoding those antibodies for delivery into patients. In principle, you could start cranking out “vaccine” inside of a week and quickly scale up much faster than current vaccine production allows.

The overall lesson is that we should look further ahead towards more broadly applicable technologies. Investing in better safeguards can reduce risks across the board, potentially enabling highly beneficial experiments that would otherwise be too risky. And developing new technologies can offer benefits – in this case, truly rapid response to new and dangerous viruses – that current approaches could never hope to match. Combined, these two advances could enable swifter (albeit still highly flawed) detection of potentially dangerous viruses and let us do something about it.

Tags: engagement, transparency, responsibility, innovation, gain of function

Safeguards for laboratory gene drive research

posted Jul 31, 2015, 7:15 PM by Kevin Esvelt   [ updated Oct 25, 2016, 9:29 AM ]

I recently convened a diverse group of scientists from relevant fields, including those who published a drive-based genome editing method at UCSD in March, to agree on recommended safeguards for gene drive research in the laboratory. After months of discussion, the resulting manuscript was published in Science today.

It's clear that our recommendations were the product of a committee, or at least the academic equivalent. Opinions differed, and we had to compromise. It's fair to say that were it up to me alone, the recommendations would have erred a bit more on the side of caution. This may be because I've had more time to think about the implications and the morality of the situation, it could be a generational thing, or I might simply take the issue more seriously because in many ways I am responsible: when it comes to detailing RNA-guided gene drives, I chose to open the box.

In any case, the space constraints imposed by Science were a major obstacle to our effort, which was intended to prevent accidents while the formal National Academy of Science panel (which I was privileged to address yesterday) develops definitive guidelines. I've consequently written a more detailed analysis of the issues in order to assist that effort. I will revise this analysis periodically as the field advances in theory and experiment.
