In the early days of the semiconductor industry, integrated circuits were designed by one or two engineers working with slide rules: layouts were drawn by hand on paper and then handed to a lithographer to print onto silicon wafers. As circuits became more complex, blueprints gave way to software. These digitally represented designs were much more than a reproduction of a pencil sketch: productivity, design quality, and communication all improved rapidly thanks to software’s ability to codify desired behaviors into actionable layouts, while also allowing for easy, iterative design improvements.
Today, large teams of engineers design circuits using high-level languages that automate the process, and chip layouts more detailed than a street map of the entire U.S. can be generated automatically. The result has been a revolution in engineering and design, manifesting as Moore’s Law and, ultimately, the Information Age.
A similar revolution is now happening in biology, most notably in the field of synthetic biology. And comparisons between computer-aided design (CAD) and computer-aided biology (CAB) are hardly accidental.
Biology, like integrated circuit design, is complicated.
In recent years, automation has revolutionized how we “do” biology: driving down the cost of sequencing, facilitating open-source science, and pushing screening and many other processes towards higher throughput. In parallel, this trend has pushed biological experimentation into the realm of “big data,” where the inherent complexity of biology is finally beginning to be codified in the form of large datasets from increasingly optimized experimentation.
However, the engineering and synthetic biology world has not quite been able to harness and systematize these developments into a sustainable positive feedback loop. Single-factor experiments, such as the one described above, remain the norm because of how this automation has scaled: as liquid handling robots or electronic “lab notebook” technology, for example, but not at the foundational level of experimental design.