Failing to imagine the consequences of the Human Genome Project

Back when the Human Genome Project was just getting underway, a significant number of biologists opposed it. They argued that the project was too expensive, that the technology of the day might not be up to the task, and that even if the project were completed, the data would be uninterpretable or useless. Today it’s easy to recognize the lack of imagination behind those criticisms. The Human Genome Project initiated a thorough transformation of the relationship between technology and the life sciences. While technology has always been important in biology, as in any science, the pace and scale of technological change over the last two decades is genuinely astonishing. We shouldn’t fault those early critics too much for not being able to see it coming.

We should draw an important lesson from one of those early criticisms of the Human Genome Project, because it illustrates the problem of thinking too narrowly about new technologies and the data they might generate. Some biologists thought that the data produced by the Human Genome Project would be largely useless because the knowledge and the computational tools to interpret it didn’t yet exist. Moreover, they argued, a large fraction of the genome likely had no function and would therefore offer no useful information. The critics who made this argument missed the point, however, because they viewed the reference genome simply as a dataset to be interpreted rather than as what it turned out to be: a foundation for new analyses, new assays and experimental methods, and new technologies that measure biology at greater scale and higher resolution. Without the Human Genome Project, there would be no genome-wide association studies, no analyses of ancient DNA, no CRISPR-based gene therapies, no synthetic biology, and no recognition of the vast diversity of cell types and states now revealed by single-cell ‘omics.
