Why the FDA says the future of clinical science is built on data, not animals

Introduction

For decades, animal models have been the default bridge between discovery biology and human trials. If you wanted to understand safety, dosing, or risk, you went through animals. That workflow was expensive, slow, and ethically fraught, but largely unquestioned.

The FDA’s Roadmap to Reducing Animal Testing in Preclinical Safety Studies, published in 2025, signals that this era is ending. Not because regulators are taking a moral stance, but because science has moved on.

In this article, we’ll walk through what the Roadmap says and what it means for integrating computational workflows into drug development pipelines.

First-in-Human Dosing Without Animals

What is most striking about the Roadmap is not its intent to reduce animal use. It is what the FDA explicitly endorses in its place: bioinformatics and in silico modeling as decision-enabling evidence.

Consider first-in-human dosing. Historically, animal pharmacokinetic studies were treated as mandatory. In the Roadmap, the FDA states that physiologically based pharmacokinetic (PBPK) models may be used to inform first-in-human dosing decisions and, in some cases, to justify waiving animal studies altogether. These models simulate absorption, distribution, metabolism, and excretion directly in humans, rather than extrapolating from another species.
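
To make that concrete, here is a minimal sketch of the kind of simulation PBPK workflows build on: a one-compartment model with first-order absorption, solved as an ordinary differential equation. Every parameter value below is invented for illustration; a real PBPK model uses many physiological compartments and human-derived parameters.

```python
# Minimal sketch: one-compartment PK model with first-order absorption.
# A toy stand-in for the simulations PBPK workflows build on; real PBPK models
# use many physiological compartments and measured human parameters.
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters (assumed values for illustration only)
ka = 1.0         # absorption rate constant (1/h)
CL = 5.0         # clearance (L/h)
V = 40.0         # volume of distribution (L)
dose_mg = 100.0  # oral dose (mg)

def pk_model(t, y):
    """y[0]: amount in gut (mg), y[1]: amount in central compartment (mg)."""
    gut, central = y
    dgut = -ka * gut
    dcentral = ka * gut - (CL / V) * central
    return [dgut, dcentral]

t_eval = np.linspace(0, 24, 200)
sol = solve_ivp(pk_model, (0, 24), [dose_mg, 0.0], t_eval=t_eval)

conc = sol.y[1] / V  # plasma concentration (mg/L)
print(f"Cmax ~ {conc.max():.2f} mg/L at t ~ {t_eval[conc.argmax()]:.1f} h")
```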

Immunogenicity as a Sequence Problem

The same logic appears in immunogenicity assessment. The FDA acknowledges that animal immune responses to human monoclonal antibodies are often misleading and not predictive of human outcomes.

Instead, the Roadmap highlights machine learning models that analyze antibody amino acid sequences to predict immunogenicity risk. Rather than asking whether an animal reacts, regulators are increasingly willing to ask whether the sequence itself looks problematic.
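
As a rough illustration of what “asking the sequence” can look like, the sketch below featurizes antibody sequences by amino acid composition and trains a simple classifier. The sequences, labels, and feature choice are placeholders; published immunogenicity predictors are trained on curated clinical datasets and use richer features such as predicted T-cell epitopes.

```python
# Minimal sketch: sequence-based immunogenicity risk scoring.
# Sequences, labels, and the model are placeholders for illustration only.
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition_features(seq: str) -> list[float]:
    """Fraction of each amino acid in the sequence (a deliberately simple featurization)."""
    counts = Counter(seq)
    return [counts.get(aa, 0) / len(seq) for aa in AMINO_ACIDS]

# Hypothetical training data: (sequence fragment, observed immunogenicity label)
train = [
    ("EVQLVESGGGLVQPGGSLRLSCAAS", 0),
    ("QVQLQESGPGLVKPSETLSLTCTVS", 0),
    ("DIVMTQSPDSLAVSLGERATINCKS", 1),
]
X = [composition_features(s) for s, _ in train]
y = [label for _, label in train]

model = RandomForestClassifier(random_state=0).fit(X, y)
candidate = "EVQLVESGGGLVQPGGSLRLSCAASGFTFS"
risk = model.predict_proba([composition_features(candidate)])[0][1]
print(f"Predicted immunogenicity risk: {risk:.2f}")
```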

Off-Target Risk at the Proteome Scale

Off-target risk is another clear example. Traditionally, this was assessed using animal tissue cross-reactivity panels. The Roadmap explicitly describes bioinformatics approaches that screen a therapeutic’s sequence against the entire human proteome to identify unintended binding. This is not a refinement of animal testing; it is a computational replacement that is broader, faster, and more human-relevant.
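
A toy version of that idea: scan the therapeutic’s sequence for short stretches it shares with human proteins. The sequences and k-mer length below are assumptions for illustration; production screens run against the full human proteome using alignment tools such as BLAST and increasingly structure-aware methods.

```python
# Minimal sketch: shared k-mer scan between a therapeutic sequence and human
# proteins, a crude proxy for proteome-wide off-target screening.
def kmers(seq: str, k: int) -> set[str]:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def scan_proteome(therapeutic: str, proteome: dict[str, str], k: int = 8) -> dict[str, set[str]]:
    """Return, per human protein, the exact k-mers it shares with the therapeutic."""
    query = kmers(therapeutic, k)
    hits = {}
    for protein_id, seq in proteome.items():
        shared = query & kmers(seq, k)
        if shared:
            hits[protein_id] = shared
    return hits

# Hypothetical inputs for illustration only
therapeutic_seq = "GILGFVFTLTVASSLEMQWNST"
human_proteome = {
    "P12345": "MKTLLLTLVVVTIVASSLEMQWNSTAFHQ",
    "Q67890": "MGSSHHHHHHSSGLVPRGSHM",
}
for pid, shared in scan_proteome(therapeutic_seq, human_proteome).items():
    print(pid, sorted(shared))
```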

Virtual Humans and Systems-Level Modeling

The FDA also elevates quantitative systems pharmacology (QSP) models, which simulate how drugs perturb human biological pathways. These models can explore efficacy and toxicity trade-offs in silico, reducing reliance on animal disease models that often fail to translate. In effect, the FDA is endorsing the idea of testing drugs in “virtual humans” rather than diseased animals.
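
The sketch below shows the flavor of such a model, assuming an invented pathway in which declining drug exposure inhibits production of a biomarker. Real QSP models couple pharmacokinetics to dozens of mechanistic equations calibrated against human data.

```python
# Minimal sketch: a toy QSP-style model where drug exposure inhibits synthesis
# of a pathway biomarker. All parameters and the pathway itself are invented.
import numpy as np
from scipy.integrate import solve_ivp

k_syn, k_deg = 10.0, 0.5   # biomarker synthesis and degradation rates (assumed)
IC50 = 2.0                 # drug concentration giving 50% inhibition (assumed)

def drug_conc(t):
    """Assumed exponentially declining drug exposure after a single dose."""
    return 10.0 * np.exp(-0.2 * t)

def pathway(t, y):
    biomarker = y[0]
    inhibition = drug_conc(t) / (drug_conc(t) + IC50)  # simple Emax-style inhibition
    return [k_syn * (1 - inhibition) - k_deg * biomarker]

t_eval = np.linspace(0, 48, 300)
sol = solve_ivp(pathway, (0, 48), [k_syn / k_deg], t_eval=t_eval)
print(f"Biomarker nadir: {sol.y[0].min():.1f} (baseline {k_syn / k_deg:.1f})")
```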

Toxicity Prediction as a Data Problem

Perhaps most telling is how the agency frames toxicity prediction. The Roadmap notes that AI models trained on large historical datasets can perform as well as, and sometimes better than, animal studies. Importantly, the FDA’s response is not to defend animal testing, but to invest in larger shared databases to further improve these models. The confidence is clearly placed in computation that improves with data.
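
In spirit, that workflow looks like the sketch below: train a classifier on historical compound data and estimate its predictive performance by cross-validation. The descriptors and labels here are synthetic placeholders standing in for the curated safety databases the FDA wants to grow.

```python
# Minimal sketch: a toxicity classifier trained on historical compound data.
# Features and labels are synthetic placeholders; real efforts use curated
# safety databases and chemistry-aware molecular representations.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_compounds, n_descriptors = 200, 16
X = rng.normal(size=(n_compounds, n_descriptors))  # placeholder molecular descriptors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_compounds) > 0).astype(int)  # placeholder labels

model = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```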

Making Animal Studies the Exception

These approaches are already being operationalized. The FDA proposes reducing or eliminating long-duration primate studies when modeling and other non-animal data indicate low risk, and explicitly states a long-term goal of making animal studies the exception rather than the norm.

Taken together, the message is unmistakable: regulatory science is becoming computational science.

What This Means for Bioinformatics Teams

This shift has practical consequences. In silico approaches require high-quality data, reproducible pipelines, validated models, and clear interpretation for regulators. They are not one-off analyses; they are living systems that evolve as new data arrive.
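
One small but concrete habit that supports this: recording the provenance of every analysis, such as input file hashes and software versions, so a result presented to regulators can be reproduced later. The sketch below is one illustrative way to capture that record; the file names are hypothetical, and most teams would fold this into their workflow manager.

```python
# Minimal sketch: capture basic provenance (input hashes, Python version,
# timestamp) for a regulator-facing analysis run. File names are hypothetical.
import hashlib, json, sys
from datetime import datetime, timezone
from pathlib import Path

def file_sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def provenance(inputs: list[str]) -> dict:
    return {
        "run_at": datetime.now(timezone.utc).isoformat(),
        "python": sys.version.split()[0],
        # Only hash files that actually exist in this illustrative run
        "inputs": {p: file_sha256(Path(p)) for p in inputs if Path(p).exists()},
    }

record = provenance(["assay_results.csv", "model_config.json"])  # hypothetical inputs
Path("run_provenance.json").write_text(json.dumps(record, indent=2))
```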

Bioinformatics is no longer something that happens after the experiment. It shapes which experiments happen at all.

By placing computational evidence at the center of regulatory decision-making, the FDA is quietly redefining what credible preclinical science looks like. The teams that succeed will be those that can integrate biology, data, and modeling into coherent, defensible narratives. As regulators increasingly trust data over animals, the teams that invest in strong bioinformatics partnerships will move faster and with more confidence.

Click here to schedule a free introductory call with a member of our team to talk about how we can help optimize your drug development pipeline for the future.

Originally published by Bridge Informatics. Reuse with attribution only.