
Method2Model
Solutions & Evidence
A unified library of decision-focused modeling for medical research: use cases, case studies, and articles showing how high-stakes protocol decisions become explicit, reviewable computational models and simulations.
You don’t need more “services”
You need to make the next high-stakes decision without betting months of work on hidden assumptions
Most studies don’t fail because the science is wrong.
They fail because one or two implicit assumptions quietly break in the real world, and the design decisions built on them only get questioned after time, budget, animals, or patients are already committed.
Start with Stage 0: a short written feasibility verdict (modelable / modelable with changes / not a fit right now) plus the lowest-effort next step.
What this prevents (the expensive version of “learning”):
- Protocol rewrites and approval delays after “final” lock
- “Null” results driven by variability, timing, or adherence instead of biology
- Budget burn on measurements that never change the decision
What decision are you facing right now?
Pick the situation that matches where you are today:
- About to lock a protocol (design risk / feasibility)
- Finalizing sample size & budget (power / detectability)
- Preparing IRB / grant / sponsor review (defensibility)
- Mid-study drift or amendment (least-risk adjustment)
- Results diverged (interpretation / next-step decision)
- Planning a redesign / next study (learning / v2 decision)
- Building a reusable asset (reuse / continuity)

Phase A — Before you run it (Design / De-risk)
Before you run it: scan for protocol blind spots (find what breaks first)
Decision moment: 2–6 weeks before protocol lock or submission.
You’re here if: the protocol feels “scientifically sound,” but you can’t defend why it will hold in real life.
Most studies don’t fail because the science is wrong.
They fail because one or two implicit assumptions quietly break in the real world.
We turn your protocol into an explicit, reviewable model and run focused “what-if” scenarios—so design risks surface before time, budget, animals, or patients are committed.
Blind spots we catch:
- Recruitment collapse — feasible on paper, impossible in practice
- Endpoint mismatch — sensitivity that misses the real effect
- Power erosion — how variability and non-adherence quietly kill signal
The outcome: Assumptions Map (MRR) • Model Architecture • Risk Simulations
Used before: experiments • submissions • funding decisions
For protocol lock & feasibility risk
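To make the “what-if” idea concrete, here is a minimal sketch of a recruitment feasibility scenario. Every number in it (screening rate, eligibility and consent fractions, target N, recruitment window) is a hypothetical placeholder, not a recommendation.

```python
# Minimal "recruitment collapse" what-if sketch (hypothetical numbers throughout).
# Question: under plausible screening, eligibility, and consent rates, how often
# does the study reach its target enrollment before the planned deadline?
import numpy as np

rng = np.random.default_rng(seed=1)

TARGET_N = 120          # planned sample size (placeholder)
WEEKS = 52              # recruitment window (placeholder)
SCREENED_PER_WEEK = 8   # average patients screened per week (placeholder)
P_ELIGIBLE = 0.45       # fraction passing inclusion/exclusion (placeholder)
P_CONSENT = 0.60        # fraction of eligible patients who consent (placeholder)
N_SIMS = 5_000

def enrolled_by_deadline() -> int:
    screened = rng.poisson(SCREENED_PER_WEEK, size=WEEKS)
    eligible = rng.binomial(screened, P_ELIGIBLE)
    consented = rng.binomial(eligible, P_CONSENT)
    return int(consented.sum())

totals = np.array([enrolled_by_deadline() for _ in range(N_SIMS)])
print(f"Median enrollment after {WEEKS} weeks: {np.median(totals):.0f}")
print(f"P(reach target of {TARGET_N}): {np.mean(totals >= TARGET_N):.2f}")
```

When a scan like this shows the target is reached in only a minority of runs, the conversation shifts from “is the science right?” to “which assumption has to change?”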
Before you commit to a sample size: validate power against real-world noise
Decision moment: finalizing N, budget, timeline, or sponsor approval.
You’re here if: the power math looks fine, but reality will add heterogeneity, missingness, and behavior.
Most studies don’t fail because the effect isn’t there.
They fail because the power calculation assumed a clean world—and the real world adds heterogeneity, missingness, and behavior that quietly kill signal.
We translate your design into an explicit, reviewable model and run realistic simulation scenarios—so you see whether your study is powered for the world you’ll actually face, not the world a formula assumes.
Power killers we simulate:
- Heterogeneity dilution — responders/non-responders blur the average effect
- Variance inflation — noise, measurement variability, or site effects widen uncertainty
- Missingness & non-adherence — dropouts and protocol deviation quietly erode detectability
The outcome: Real-world Power Map • Sample Size Scenarios • Design Trade-offs
Used before: finalizing sample size • budgeting & timelines • protocol lock / sponsor approval
For sample size, budget, and sponsor sign-off
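A minimal sketch of what a real-world power check can look like, assuming a two-arm design with a continuous endpoint and a Welch t-test. The effect size, responder fraction, variance inflation, and dropout rate below are hypothetical placeholders; in practice they come from your protocol and prior data.

```python
# Monte Carlo power sketch: textbook "clean world" vs. a scenario with
# non-responders, inflated variance, and dropout. All parameters are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)

N_PER_ARM = 60        # candidate sample size (placeholder)
EFFECT = 0.5          # assumed mean difference, in SD units (placeholder)
P_RESPONDER = 0.6     # fraction of treated patients who actually respond (placeholder)
SD_INFLATION = 1.3    # variance inflation from sites/measurement (placeholder)
P_DROPOUT = 0.15      # missing-at-random dropout per arm (placeholder)
ALPHA = 0.05
N_SIMS = 2_000

def one_trial(clean: bool) -> bool:
    """Simulate one two-arm trial; return True if the Welch test is significant."""
    sd = 1.0 if clean else SD_INFLATION
    control = rng.normal(0.0, sd, N_PER_ARM)
    if clean:
        treated = rng.normal(EFFECT, sd, N_PER_ARM)
    else:
        responds = rng.random(N_PER_ARM) < P_RESPONDER          # responder mixture
        treated = rng.normal(np.where(responds, EFFECT, 0.0), sd)
        treated = treated[rng.random(N_PER_ARM) >= P_DROPOUT]   # dropout, each arm
        control = control[rng.random(N_PER_ARM) >= P_DROPOUT]
    return stats.ttest_ind(treated, control, equal_var=False).pvalue < ALPHA

for label, clean in [("clean-world", True), ("real-world", False)]:
    power = np.mean([one_trial(clean) for _ in range(N_SIMS)])
    print(f"{label} power at N={N_PER_ARM} per arm: {power:.2f}")
```

The gap between the two printed numbers is the power the formula promised but the real world will not deliver; closing it is a design decision (larger N, enrichment, better measurement), not a statistics footnote.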
Before you measure everything: stop paying for low-information data
Decision moment: endpoint strategy alignment, lab planning, and feasibility budgeting.
You’re here if: you’re collecting “a lot” but not sure it changes the decision.
Most studies don’t suffer from too little data.
They suffer from the wrong data at the wrong time—expensive panels, noisy biomarkers, and schedules that miss the window where the signal actually appears.
We formalize your study question and mechanism into an explicit, reviewable model, then test candidate biomarkers and sampling schedules to identify what delivers the most information per cost, burden, and risk.
Measurement failures we prevent:
- Low-information markers — biomarkers that add noise without improving inference
- Timing blind spots — sampling schedules that miss peak response or early warning signals
- Over-collection — unnecessary tests that increase cost, burden, and missingness
The outcome: Measurement Priority Map • Optimal Timing Windows • Minimal “must-measure” set
Used before: protocol finalization • lab planning • budget/feasibility decisions • endpoint strategy alignment
For minimal, high-information measurement planning
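As an illustration, the sketch below compares two hypothetical sampling schedules for a transient biomarker response. The response curve, assay noise, schedules, and detection threshold are all placeholders; the point is the comparison, not the specific numbers.

```python
# Do two candidate sampling schedules actually catch a transient signal?
import numpy as np

rng = np.random.default_rng(seed=3)

def true_response(t_days, peak_day=5.0, width=2.0, amplitude=2.0):
    """Hypothetical transient biomarker response (rises, peaks, decays)."""
    return amplitude * np.exp(-0.5 * ((t_days - peak_day) / width) ** 2)

SCHEDULES = {
    "sparse (days 0, 14, 28)": np.array([0.0, 14.0, 28.0]),
    "early-weighted (days 0, 3, 5, 7, 14)": np.array([0.0, 3.0, 5.0, 7.0, 14.0]),
}
NOISE_SD = 0.5     # assay noise (placeholder)
THRESHOLD = 1.0    # "response detected" cutoff (placeholder)
N_SIMS = 5_000

for name, days in SCHEDULES.items():
    detected = 0
    for _ in range(N_SIMS):
        measured = true_response(days) + rng.normal(0.0, NOISE_SD, days.size)
        detected += measured.max() >= THRESHOLD
    print(f"{name}: detection probability ≈ {detected / N_SIMS:.2f}")
```

A schedule that samples outside the response window buys data but almost no information; the same logic, applied per marker and per timepoint, is what produces the “must-measure” set.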
Before you lock a regimen: stress-test robustness under adherence & variability
Decision moment: dose justification, safety planning, go/no-go, optimization.
You’re here if: the “optimal” regimen may not survive real-world adherence, exposure drift, or variability.
Most dosing problems don’t come from choosing the “wrong drug.”
They come from choosing a regimen that doesn’t hold up under real-world variability—patient-to-patient differences, exposure drift, missed doses, and the practical limits of compliance.
We build a transparent PK/PD (or exposure–response) layer aligned with your protocol, then run regimen scenarios to quantify trade-offs and identify dosing strategies that are robust—not just optimal on paper.
Regimen risks we stress-test:
- Exposure variability — metabolism, body size, comorbidities, interactions, site effects
- Compliance realism — missed doses, delays, interruptions, dose reductions
- Efficacy–toxicity trade-offs — gain in response vs. increase in adverse events
The outcome: Regimen Scenario Matrix • Robust Dose/Interval Options • Trade-off Summary
Used for: protocol design • dose justification • safety planning • go/no-go and optimization decisions
For regimen robustness & defensible dose choice
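A minimal sketch of an adherence stress test, using a deliberately simplified one-compartment model with instantaneous absorption. The dose, half-life, interval, adherence rate, and target window are hypothetical placeholders, not recommendations.

```python
# How much does imperfect adherence erode time-in-window for a regimen?
import numpy as np

rng = np.random.default_rng(seed=11)

DOSE = 100.0           # mg (placeholder)
V_D = 50.0             # volume of distribution, L (placeholder)
HALF_LIFE_H = 12.0     # elimination half-life, h (placeholder)
INTERVAL_H = 12.0      # scheduled dosing interval, h
N_DOSES = 28           # 14 days of twice-daily dosing
P_MISSED = 0.15        # probability each dose is skipped (placeholder)
WINDOW = (1.0, 4.0)    # target concentration window, mg/L (placeholder)
KE = np.log(2) / HALF_LIFE_H
N_SIMS = 1_000

t_grid = np.arange(0.0, N_DOSES * INTERVAL_H, 0.5)   # half-hour time grid

def concentration(taken_mask):
    """Superpose exponential decays from each dose that was actually taken."""
    c = np.zeros_like(t_grid)
    for i, taken in enumerate(taken_mask):
        if taken:
            t_dose = i * INTERVAL_H
            after = t_grid >= t_dose
            c[after] += (DOSE / V_D) * np.exp(-KE * (t_grid[after] - t_dose))
    return c

def time_in_window(c):
    return np.mean((c >= WINDOW[0]) & (c <= WINDOW[1]))

perfect = time_in_window(concentration(np.ones(N_DOSES, dtype=bool)))
real = np.mean([
    time_in_window(concentration(rng.random(N_DOSES) >= P_MISSED))
    for _ in range(N_SIMS)
])
print(f"Time in window, perfect adherence: {perfect:.2f}")
print(f"Time in window, {P_MISSED:.0%} missed doses (mean over sims): {real:.2f}")
```

A regimen whose time-in-window collapses under realistic adherence is “optimal on paper” in exactly the sense this block is designed to catch.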
Before you generalize: prevent “works here, fails there” surprises
Decision moment: translational planning, phase transitions, rollout, scaling/investment.
You’re here if: you need to know what transfers vs what breaks across populations/settings.
Most translational failures don’t happen because the biology is irrelevant.
They happen because the context changed: species differences, baseline risk, dosing/exposure, care pathways, adherence, measurement timing, or patient heterogeneity. What worked in one setting can underperform—or reverse—in another.
We formalize the differences between settings as explicit model assumptions and run constrained scenarios—so you can identify which components are likely transferable, which are context-dependent, and what needs adjustment before scaling.
Transferability breaks we diagnose:
- Population shift — baseline risk, comorbidities, heterogeneity, prior treatments
- Exposure shift — dose, schedule, bioavailability, adherence, real-world interruptions
- Measurement & workflow shift — endpoints, timing, follow-up, care pathway effects
The outcome: Transferability Map • Context Adjustment Scenarios • “What transfers / what doesn’t” summary
Used before: translational planning • Phase transition decisions • real-world rollout • investment/scaling decisions
For transferability risk & context adjustments
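The smallest version of a population-shift scenario is plain arithmetic: hold the relative effect fixed and let baseline risk change. The relative risk and baseline risks below are hypothetical placeholders; exposure and measurement shifts are modeled in the same spirit, just with more structure.

```python
# Same relative effect, different setting: absolute benefit and NNT shift with baseline risk.
RELATIVE_RISK = 0.75   # treatment cuts event risk by 25% (placeholder)

settings = {
    "trial population (high baseline risk)": 0.20,
    "target population (lower baseline risk)": 0.05,
}

for name, baseline in settings.items():
    treated_risk = baseline * RELATIVE_RISK
    arr = baseline - treated_risk   # absolute risk reduction
    nnt = 1.0 / arr                 # number needed to treat
    print(f"{name}: ARR = {arr:.3f}, NNT ≈ {nnt:.0f}")
```

Nothing about the biology changed between the two lines of output; only the context did, which is why “works here, fails there” so often has a boring, modelable explanation.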
Before you change a pathway: simulate bottlenecks and drop-offs before rollout
Decision moment: quality improvement, implementation planning, ops decisions, scaling.
You’re here if: the pathway is clean on paper but breaks at handoffs, capacity limits, or variability.
Most clinical protocols look clean on paper.
They break in practice—at decision points, handoffs, capacity limits, and real-world variability that silently drives delays, drop-offs, and inconsistent outcomes.
We convert your care pathway (guideline, SOP, or clinical protocol) into an explicit, reviewable model, then simulate bottlenecks and failure modes—so improvement ideas can be tested before rolling out changes across a clinic or hospital.
Pathway failures we map:
- Decision-point inconsistency — different choices under the same scenario
- Bottlenecks & capacity limits — queues, delays, and resource constraints
- Drop-off points — where patients miss steps, follow-up breaks, or referrals fail
The outcome: Pathway Model • Bottleneck Map • Intervention “what-if” Scenarios
Used for: hospital quality improvement • implementation planning • clinical operations decisions • scaling & deployment risk reduction
For pathway reliability before rollout
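As a sketch, the loop below simulates a single capacity constraint in a referral pathway and tracks waits and drop-offs. The referral rate, daily capacity, and the drop-off rule are hypothetical placeholders.

```python
# One capacity constraint can quietly create long waits and lost patients.
from collections import deque
import numpy as np

rng = np.random.default_rng(seed=5)

DAYS = 180
REFERRALS_PER_DAY = 9.5   # mean daily referrals (placeholder)
SLOTS_PER_DAY = 10        # appointment capacity per day (placeholder)
MAX_WAIT_DAYS = 14        # beyond this, assume the patient is lost (placeholder)

queue = deque()           # each entry: the day the referral arrived
waits, dropped = [], 0

for day in range(DAYS):
    for _ in range(rng.poisson(REFERRALS_PER_DAY)):
        queue.append(day)
    served = 0
    while queue and served < SLOTS_PER_DAY:
        arrived = queue.popleft()
        wait = day - arrived
        if wait > MAX_WAIT_DAYS:
            dropped += 1          # missed the window: counts as a drop-off
        else:
            waits.append(wait)
            served += 1

print(f"Mean wait for patients seen: {np.mean(waits):.1f} days")
print(f"Patients lost past the {MAX_WAIT_DAYS}-day window: {dropped}")
```

Running the same loop with a proposed change (an extra slot, a triage rule, a different referral cap) is the “what-if” version of an improvement idea, tested before anyone reorganizes a clinic around it.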
Not sure which block fits your situation?
Stage-0 helps you identify which decision risks actually matter before you invest further. You get a short written verdict — modelable as-is, modelable with changes, or not a fit right now — plus the lowest-effort next step.
Phase B — Before you submit it (Defensibility / Review)
Before you submit it: avoid the rewrite loop (IRB/grant/sponsor)
Decision moment: when scrutiny decides your timeline.
You’re here if: key assumptions are still implicit and you need reviewer-proof logic.
Most protocols don’t get delayed because the idea is weak.
They get delayed because key assumptions stay implicit—and no one can defend the logic under review.
We turn your protocol into an explicit, reviewable structure—so scrutiny becomes a clean evaluation, not a rewrite loop.
Failure points we harden:
- Unstated assumptions — what the design is silently betting on
- Logic discontinuities — unclear eligibility, timing, or decision rules
- Claim–endpoint mismatch — endpoints that don’t actually test the question
The outcome: Assumptions Log (MRR) • Review-ready Architecture • Defensible Logic Pack
Used before: IRB / ethics review • grant submission • sponsor sign-off
For reviewer-ready logic & defensibility
Phase C — During the study (Mid-course decisions)
During the study: make the amendment that won’t invalidate it
Decision moment: DSMB discussions, sponsor updates, amendment pressure.
You’re here if: reality drifted and you must adjust without introducing hidden bias.
Most studies don’t fail at the start.
They fail mid-way—when response rates, toxicity, adherence, or enrollment drift away from assumptions, and teams are forced to make decisions under pressure.
We translate your protocol and interim signals into an explicit, reviewable model and run constrained “what-if” scenarios—so you can see which adjustment is most consistent with the original logic and least likely to introduce bias or invalidate the study.
Mid-study risks we help navigate:
- Response/toxicity drift — when observed rates diverge from planning assumptions
- Enrollment/adherence changes — when feasibility shifts mid-course
- Rule changes with hidden costs — adjustments that seem small but break interpretability
The outcome: Interim Consistency Snapshot • Low-risk Scenario Options • Decision Rationale Note
Used during: DSMB discussions • protocol amendments (with oversight) • sponsor updates
Important: We do not replace clinical judgment or regulatory/ethics oversight. We provide transparent scenario analysis to support defensible decisions.
For amendment safety & interpretability
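A minimal sketch of an interim consistency snapshot: compare the observed interim response count with the range the planning assumptions predicted. The planning rate, interim N, and observed count below are hypothetical placeholders.

```python
# Is the interim response rate still consistent with the planning assumptions?
import numpy as np

rng = np.random.default_rng(seed=23)

PLANNED_RESPONSE_RATE = 0.40   # rate assumed at design time (placeholder)
INTERIM_N = 50                 # patients evaluable at the interim look (placeholder)
OBSERVED_RESPONDERS = 12       # what was actually seen (placeholder)
N_SIMS = 20_000

# Predictive distribution of interim responders if the planning assumption held.
predicted = rng.binomial(INTERIM_N, PLANNED_RESPONSE_RATE, size=N_SIMS)
lower, upper = np.percentile(predicted, [2.5, 97.5])

verdict = "consistent" if lower <= OBSERVED_RESPONDERS <= upper else "drifted"
print(f"Planning assumption predicts {lower:.0f}-{upper:.0f} responders at N={INTERIM_N}")
print(f"Observed: {OBSERVED_RESPONDERS} ({verdict} relative to the planning assumption)")
```

The same comparison, repeated for toxicity, adherence, and enrollment, turns “something feels off” into a documented basis for choosing (or declining) an amendment.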
Phase D — After the study (Interpretation / Validation / Learning)
After you run it, turn “null results” into a clear next-step decision
Decision moment: reviewer replies, go/no-go, redesign planning.
You’re here if: you have a result, but you’re not sure what it truly means.
Most studies don’t end with certainty.
They end with a result—and a lingering question: can we trust what it means?
We turn your study logic into an explicit, reviewable model and test logical consistency between observed outcomes and underlying assumptions—so divergence becomes visible, explainable, and actionable.
Divergences we map:
- Null or weaker-than-expected effects — when design or variability quietly erodes signal
- Conflicting subgroup patterns — when an unmodeled factor reshapes response
- Outcome drift — when timing, adherence, or workflow changes what was actually measured
The outcome: Consistency Check • Gap Analysis • Hypotheses Map
Used for: reviewer replies • next-study redesign • go/no-go decisions
For divergence explanation & next-step clarity
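One small but telling piece of this is asking what the realized study could actually have detected. The sketch below compares the planned minimum detectable difference with the one implied by the achieved sample size and outcome variability; all numbers are hypothetical placeholders.

```python
# A "null" is only informative relative to what the realized study could detect.
import math

ALPHA_Z = 1.96   # two-sided 5% significance
POWER_Z = 0.84   # 80% power

def min_detectable_diff(n_per_arm: int, sd: float) -> float:
    """Smallest true mean difference detectable at 80% power (two arms, equal N)."""
    return (ALPHA_Z + POWER_Z) * sd * math.sqrt(2.0 / n_per_arm)

planned = min_detectable_diff(n_per_arm=60, sd=1.0)    # design assumptions (placeholder)
realized = min_detectable_diff(n_per_arm=44, sd=1.4)   # after dropout and extra noise (placeholder)

print(f"Planned minimum detectable difference:  {planned:.2f}")
print(f"Realized minimum detectable difference: {realized:.2f}")
```

If the realized value is larger than the effect the study cared about, the “null” says little about biology; it mostly reflects eroded detectability, and the next-step decision changes accordingly.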
After a study, prove the result holds or learn exactly why it doesn’t
Decision moment: manuscript robustness, internal QA, sponsor confidence, audit readiness.
You’re here if: stakeholders ask “would this hold if we reran it?”
Most results don’t get challenged because they’re “wrong.”
They get challenged because no one can clearly show that the outcome is logically consistent with the inputs, assumptions, and protocol—especially when reviewers, collaborators, or stakeholders ask, “Would this hold if we reran it?”
We take your study inputs and a summary of the observed data, translate the method into an explicit, reviewable model, and compare model outputs against real outcomes—so logical consistency can be assessed in a transparent, defensible way.
What we validate:
- Input-to-outcome traceability — can the result be explained from the stated method and inputs?
- Consistency under plausible ranges — does the finding persist across realistic parameter uncertainty?
- Reproducibility of reasoning — can others follow (and critique) the logic without guessing?
The outcome: Reproduction Report • Consistency Metrics • Sensitivity Notes
Used for: manuscript robustness • internal QA • sponsor confidence • audit-ready documentation
For reproducibility & defensible results
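A minimal sketch of the “consistency under plausible ranges” check: re-derive the headline quantity across plausible values of the assumptions it depends on, and report how often the conclusion survives. The observed difference, bias range, adherence range, and decision threshold below are hypothetical placeholders.

```python
# Does the headline conclusion survive plausible ranges of its inputs?
import numpy as np

rng = np.random.default_rng(seed=13)

OBSERVED_DIFF = 3.0   # observed improvement, in outcome units (placeholder)
N_DRAWS = 10_000

# Plausible ranges for two assumptions the conclusion quietly depends on:
bias = rng.uniform(0.0, 2.0, N_DRAWS)        # residual confounding, outcome units (placeholder)
adherence = rng.uniform(0.6, 1.0, N_DRAWS)   # fraction of the effect actually delivered (placeholder)

adjusted = (OBSERVED_DIFF - bias) * adherence   # effect under each assumption draw

holds = np.mean(adjusted >= 1.0)   # conclusion: "at least 1 unit of benefit" (placeholder)
print(f"Conclusion holds in {holds:.0%} of plausible-assumption draws")
print(f"Adjusted effect, 5th-95th percentile: "
      f"{np.percentile(adjusted, 5):.2f} to {np.percentile(adjusted, 95):.2f}")
```

A result that holds across most of the plausible range is easy to defend; one that flips inside it tells you which assumption needs to be pinned down before the claim is repeated.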
When simulation and reality disagree: find the hidden variable shaping outcomes
Decision moment: next-study design, biomarker selection, mechanistic refinement, go/no-go clarity.
You’re here if: the model–reality gap is telling you something important, but you can’t name it yet.
Most teams panic when a model doesn’t match observed outcomes.
But in practice, a model–reality gap often points to something unaccounted for: a confounder, a missing biological pathway, a site/workflow effect, or a measurement limitation that quietly shaped the result.
We compare your observed outcomes against a transparent model of the study logic, then map where divergence concentrates and what kinds of hidden variables or missing relationships could plausibly explain it, so the next step becomes testable, not speculative.
Hidden sources we help surface:
- Confounding pathways — an unmeasured factor driving subgroup patterns or response
- Missing mechanism/link — a biological or behavioral pathway not represented in the logic
- Measurement limits — timing, sensitivity, or proxy endpoints masking the true effect
The outcome: Divergence Map • Candidate Confounders List • Testable Hypotheses Plan
Used for: next-study design • biomarker/feature selection • mechanistic refinement • go/no-go clarity
For hidden-variable discovery & testable next steps
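A small simulation shows why a hidden variable is often the right suspect. Here a never-recorded “severity” variable influences both who gets treated and the outcome, so the naive comparison looks much weaker than the effect actually built into the data. All parameters are hypothetical placeholders.

```python
# An unmeasured confounder can make observed outcomes diverge from the modeled expectation.
import numpy as np

rng = np.random.default_rng(seed=17)
N = 20_000

severity = rng.random(N)                          # hidden variable, 0..1 (never recorded)
treated = rng.random(N) < (0.2 + 0.6 * severity)  # sicker patients treated more often
TRUE_EFFECT = -1.0                                # treatment improves the outcome (placeholder)
outcome = 5.0 + 4.0 * severity + TRUE_EFFECT * treated + rng.normal(0, 1, N)

naive = outcome[treated].mean() - outcome[~treated].mean()
print(f"Effect built into the simulation:      {TRUE_EFFECT:+.2f}")
print(f"Naive treated-vs-untreated difference: {naive:+.2f}")

# Stratifying on the hidden variable (possible only in simulation) pulls the estimate
# back toward the built-in effect; this is the kind of candidate confounder a
# divergence map is meant to surface as a testable hypothesis.
for name, mask in [("low severity", severity < 0.5), ("high severity", severity >= 0.5)]:
    diff = outcome[treated & mask].mean() - outcome[~treated & mask].mean()
    print(f"Within {name}: {diff:+.2f}")
```

The output is the shape of the problem in miniature: the divergence is real, it is not noise, and it points to a specific measurable variable the next study can capture.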
Phase E — Build the asset (Reuse)
After one project: stop rebuilding from scratch and reuse what you already proved
Decision moment: when continuity and reuse matter more than heroics.
You’re here if: logic and code are fragile, scattered, or stuck as tribal knowledge.
Most teams don’t lose progress because the science is hard.
They lose it because the logic and code stay fragile—spread across notebooks, hard drives, and “tribal knowledge” that can’t be reviewed, reused, or defended when the team changes.
We turn your model into a documented, versioned, reproducible package—so your lab or team can rerun, adapt, and extend it across future studies without rebuilding the foundation.
Failure modes we prevent:
- Model drift — the “same model” produces different results over time
- Unreviewable logic — assumptions hidden in code or in one person’s head
- Non-reproducible runs — outputs that can’t be rerun, audited, or shared
The outcome: Reusable Model Package • Documentation & Assumptions Log (MRR) • Versioned Repo Structure
Used for: lab continuity • multi-project reuse • onboarding collaborators • sponsor/reviewer confidence
For reuse, continuity, and defensible delivery
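At its smallest, that discipline looks like this: every run takes an explicit config, fixes its seed, and stamps its output with a hash of the inputs that produced it. The file names, config fields, and stand-in model below are hypothetical placeholders.

```python
# Minimal reproducible-run pattern: explicit config, fixed seed, traceable output.
import hashlib
import json
import random

CONFIG = {
    "model_version": "0.3.1",   # placeholder version tag
    "seed": 42,
    "n_patients": 500,
    "effect_size": 0.4,
}

def run_model(config: dict) -> dict:
    """Stand-in for the actual model: anything deterministic given config + seed."""
    rng = random.Random(config["seed"])
    outcomes = [rng.gauss(config["effect_size"], 1.0) for _ in range(config["n_patients"])]
    return {"mean_outcome": sum(outcomes) / len(outcomes)}

# Stamp the result with a hash of the config, so any output file can be traced back
# to the exact assumptions that produced it, and reruns can be verified.
config_hash = hashlib.sha256(json.dumps(CONFIG, sort_keys=True).encode()).hexdigest()[:12]
result = {"config": CONFIG, "config_hash": config_hash, **run_model(CONFIG)}

with open(f"run_{config_hash}.json", "w") as f:
    json.dump(result, f, indent=2)
print(f"Wrote run_{config_hash}.json: mean_outcome = {result['mean_outcome']:.3f}")
```

The same pattern scales up to the full package: versioned code, a logged assumptions file (MRR), and outputs that can be rerun and audited by someone who was not in the room when the model was built.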
Evidence
Selected examples
A selection of computational modeling case studies where we turned medical and pharmaceutical methods into working models, simulations, and tools that support specific protocol and pathway decisions. Client-identifying details are removed; technical documentation and code are available on request for most projects.
If you’d like to see what a similar model could look like for your own project, you can start your own computational modeling case from our Stage 0 form.

Each example is sanitized to demonstrate deliverable structure; it is not clinical advice.
Featured examples
(methods + medical)
These are examples of how we document and verify logic under publication and review constraints.
They are not claims about your specific study; they illustrate the architecture, rigor, and documentation standard we apply.
- Dual-Platform ABCD1 mRNA Therapy for X-Linked ALD | Method2Model
Most ALD mRNA programs don’t fail on biology—they fail on decision ambiguity: unclear success definitions, CNS “signal” without thresholds, and…
- In-Vitro Screening System — Complete Architecture + Formal Formula Pack for Deterministic QC and Selection (5HN & CD8)
This case study documents an end-to-end, stage-locked in-vitro screening system for sOMF exposure in 5HN and CD8 models. The work…
- Equilibrium Window of Health: Early-Drift Monitoring Core (Stage-Locked, Deterministic, Event-Aware)
Equilibrium Window of Health is a stage-locked, event-aware early-drift monitoring core designed for real-world longitudinal data. It infers a patient-specific…
All Modeling Case Studies
Browse the full library of computational modeling case studies, including dynamic health models, trial simulations, device models, and more.
Science & Articles
Brief articles and reflections at the intersection of computational modeling, medicine, and study design, many of which are also shared on LinkedIn and Medium. Together with the case studies above, they show how computational modeling can support real medical and pharma decision-making, from design and power to interpretation and reuse.

Featured Articles
- A Bridging mRNA Strategy for ALD: How “Model-First” Design Prevents Million-Euro Mistakes
Dual-platform mRNA programs multiply uncertainty—and teams often pay for that uncertainty with months of experiments, expensive animal cohorts, and late-stage surprises.…
- Targeted Delivery of Chemotherapy and Immunotherapy: From Concept to Clinic
Targeted delivery systems for chemotherapy and immunotherapy aim to flip the traditional risk–benefit balance of cancer treatment: instead of flooding the…
- mRNA Therapy as a Bridging Strategy in Adrenoleukodystrophy (ALD): Stabilizing Patients While Awaiting Curative Treatment
A time-limited “bridge” therapy with ABCD1 mRNA could help stabilize boys with cerebral ALD while they wait for definitive treatment with…
All Articles
All articles related to computational modeling, targeted therapies, and the future of human-relevant preclinical science.
Code & Reproducible Work (GitHub)
Each computational modeling case study here is backed by real code. For most projects, we keep a dedicated GitHub repository linked from the case page. If you want to explore the broader codebase and reusable modeling components, you can browse our GitHub profile.
The repositories are organised by project, matching the case studies on this page (mRNA modeling, trial simulation, device models, etc.).
