Dr. Sarah Richardson earned a B.S. in biology from the University of Maryland and a Ph.D. in human genetics and molecular biology from Johns Hopkins University School of Medicine. She specialized in the design of genomes: as a DOE Computational Science Graduate Fellow she designed a synthetic yeast genome, and as a Distinguished Postdoctoral Fellow of Genomics at the Department of Energy’s Lawrence Berkeley National Laboratory she worked on massive-scale synthetic biology projects and the integration of computational and experimental genomics. As the CEO of MicroByre, she led the integration of machine learning and microbial genomics for industrial biotechnology.
Talk Synopsis
Artificial Intelligence is the phrase of the day, and even if we aren’t eager to deploy it, we’re under significant pressure to. Unfortunately, this puts scientists in a tough position, because most of them are not ready to use AI at all. The algorithms that appear to be transforming the capabilities of Silicon Valley require a specific kind of infrastructure, a vast amount of properly curated data, and a willingness to force or coerce the accessibility of that data — three things that are missing from many of our scientific endeavors.
The hype isn’t helping us be judicious about how we apply AI to science. We first have to address the fact that we can’t deploy LLMs usefully, that our data isn’t in a state to support supervised learning, and that our workforce isn’t sufficiently incentivized to confront or overcome those hurdles. Machine learning has helped us innovate before, and it can again, but only if we put in the effort now to prepare ourselves for it.