The Art of Timing in Emerging Technologies in Biotech
We’re well into the winter break, and I’ve been keeping myself busy building a pipeline to calculate 10x Visium spot uncertainty scores (it will be posted to my GitHub soon). It’s been much more time-consuming than I initially expected, with new concerns appearing after each small step forward. But that’s science, I guess; every answer produces another question.
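For the curious, here’s a minimal sketch of the kind of per-spot score I have in mind. To be clear, this is a hypothetical illustration and not the actual pipeline: it assumes a simple Poisson noise model where a spot’s relative error scales as 1/sqrt(total UMI count), and the function name and simulated data are my own inventions.

```python
# A toy per-spot uncertainty score for 10x Visium data; an illustrative
# sketch only, assuming Poisson counting noise, not the real pipeline.
import numpy as np

def spot_uncertainty_scores(counts: np.ndarray) -> np.ndarray:
    """counts: (spots x genes) UMI count matrix; returns one score per spot."""
    depth = counts.sum(axis=1).astype(float)  # total UMIs per spot
    depth = np.maximum(depth, 1.0)            # guard against empty spots
    return 1.0 / np.sqrt(depth)               # Poisson relative error

# Quick demo on simulated counts: 5 spots x 100 genes
rng = np.random.default_rng(0)
demo = rng.poisson(lam=2.0, size=(5, 100))
print(spot_uncertainty_scores(demo))          # higher score = noisier spot
```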
One such concern that popped into my head is that spatial transcriptomics might never become widespread enough to make this pipeline particularly useful. But since I’m already three weeks into the project, I’m going to argue against this concern by reviewing the history of omics technologies. Today, you can sequence a human genome at good depth for a couple hundred dollars, but when Illumina’s first sequencer launched in 2006, sequencing a single human genome cost around $300,000. I’m sure plenty of professionals back then assumed that the cost made Illumina sequencing unusable for drug development. However, early adopters like the Broad Institute invested in the technology, believing it would follow a cost trajectory similar to Moore’s law. They were right: the NHGRI famously tracked sequencing costs dropping faster than Moore’s law from 2007 onward (although the decline has reportedly started to level off). Broad’s investment in sequencing centers, pipelines, and expertise allowed them to scale massive genomics projects once costs dropped and high-throughput sequencing became routine. Their belief in the long-term trajectory of the technology paid off, positioning them to lead the field once sequencing became affordable and ubiquitous.
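To put the Moore’s law comparison in perspective, here’s a rough back-of-the-envelope check (my own arithmetic using the $300,000 figure above, not NHGRI data): if costs had merely halved every two years, a genome would still cost several hundred dollars today, so costs dropping faster than that mid-decade is what makes the NHGRI plot so striking.

```python
# Back-of-the-envelope: what a Moore's-law trajectory (cost halving every
# two years) would predict for genome sequencing, starting from the
# ~$300,000 figure in 2006. Illustrative arithmetic only, not NHGRI data.
start_cost, start_year, halving_years = 300_000, 2006, 2.0

for year in (2010, 2015, 2020, 2024):
    cost = start_cost / 2 ** ((year - start_year) / halving_years)
    print(f"{year}: ~${cost:,.0f}")

# 2010: ~$75,000   2015: ~$13,258   2020: ~$2,344   2024: ~$586
```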
Now, I don’t want to go so far as to say that spatial transcriptomics costs will follow exactly the same path as Illumina’s, but it leads me to wonder how we can determine the potential value of future technologies. I feel like in biotech, this often comes down to two related questions: will this technology succeed, and when does it make sense to engage with it?
It’s tempting to jump straight to asking, “Will this technology succeed?” but that question isn’t so straightforward or helpful in real time. Success is easy to identify in hindsight, but early on, promising technologies in every field tend to carry a mix of hype, uncertainty, and incomplete data. A more useful lens might be to ask what capabilities a new technology enables, even if it never becomes cheap or ubiquitous. Even niche, partial success can still shift workflows, reveal new questions, or create opportunities that weren’t possible before. Self-driving cars are an example: they still aren’t clearly financially successful, but they’re enticing because of how they are advancing sensors, mapping systems, and AI.
The true skill in biotech and venture capital isn’t predicting whether a technology will be profitable, but judging when the right time is to engage with it. You have to weigh the cost of waiting against the cost of being wrong, and I’m sure there are quant finance people out there with models just for understanding that balance, but in practice, I’d wager it comes down mostly to experience and intuition. Waiting too long on a technology could mean lost expertise, slower iteration, and reliance on tools built by others, whereas acting too early risks investing time and resources into technologies that may plateau or fail completely. Based on these factors and a bit of research, there are some things I would argue are important to look for, such as:
The pace of iteration – how quickly a technology is evolving can be a good signal of both hype and functional development, but it should be interpreted cautiously, since rapid updates don’t always guarantee long-term success or stability.
The maturity of the supporting ecosystem – the availability of tools, standards, and experienced practitioners can dramatically reduce the cost of adoption and accelerate meaningful progress.
The scope of new questions the technology enables – the more a technology opens doors to novel research directions or capabilities, the greater its potential impact, even if it doesn’t immediately achieve mainstream adoption.
As for the timing of spatial transcriptomics, it seems to be in an awkward middle phase of its lifecycle. Against the factors I mentioned, spatial transcriptomics has a fast pace of iteration, with new platforms and protocols announced regularly. The supporting ecosystem, however, is still fairly undeveloped, given the cost of the tool and the lack of standardization. Despite this, the questions enabled by spatial transcriptomics could be transformative if the supporting ecosystem improves. This ecosystem struggle was something I heard over and over again at the 3rd Annual Spatial Transcriptomics Summit, and to me, it’s the limiting factor for the technology’s success. If we as a scientific community are able to overcome it, it will happen through the gradual adoption of standardized tools, software, quality control metrics, and so on.
So, looking at this from an investment point of view, I would say it is not too late to buy into spatial transcriptomics. History shows that early engagement, even without complete knowledge, can create opportunities that waiting for certainty would miss. The lesson I’ve taken from writing this post is that thoughtful risk-taking and careful observation are invaluable, and that kind of risk-taking relies on being attentive to how technologies develop over time in order to best position oneself for the future.