Five stages of study design
Across every program type, the design of a study follows the same arc — from the foundation of the hypothesis through to the interpretation of results.
We examine and document the scientific basis of the program — building the structured argument on which every downstream design decision depends.
- Mechanistic rationale
- Signal and biomarker landscape
- Methodological validity
- Evidence synthesis
- Assumption map
We design the scientific architecture of the study — the question, the endpoints, and the definition of what a meaningful result looks like, grounded in the biology.
- Study rationale
- Endpoint selection
- Success criteria
- Study type design
- Confounder strategy
We specify every operational element of the study — who is enrolled, what is measured, when, and how — each decision traced back to a scientific rationale.
- Population definition
- Sample collection design
- Assay specifications
- Specimen management
- Site capability criteria
We define the analytical framework before data collection begins — locking the questions, the methods, and the decision rules the analysis will follow.
- Analytical question specification
- Analytical method design
- Subgroup rationale
- Data quality criteria
- Missing data strategy
We evaluate the data against the mechanistic predictions that opened the study — producing a scientific interpretation and the program recommendations that follow from it.
- Results evaluation
- Endpoint assessment
- Subgroup analysis
- Unexpected findings investigation
- Program recommendations
- Scientific narrative
What makes the data interpretable
The four methodological conditions.
The analytical plan is specified before the first sample is collected. An analysis planned only after the data exist can generate a hypothesis, not a finding.
Endpoints are selected on the strength of mechanistic and biological insight, not because they are convenient or previously accepted.
Every assumption the study requires is documented before the protocol is written. Hidden assumptions cannot be tested or defended.
A result is valuable only if it is actionable when positive and decisive when negative. Statistical significance is not a substitute for design rigour.
The studies we design
Our practice spans the full research arc — from the earliest mechanistic discovery studies through to clinical trials.