
Make your protocol defensible before reviewers send it back.
You own the biological story. We protect the integrity of the design logic.
We turn protocol choices into explicit, reviewable decision models—so your assumptions survive IRB/grant/sponsor scrutiny.
Review-ready, forwardable outputs—built for IRB, grant, and sponsor scrutiny. Not a CRO. Not a biostats service. We formalize decision logic.
~1 business day • no sensitive data required • we test decision logic, not outcomes
Most review delays aren’t about the idea. They’re about defensibility gaps.
Protocols get slowed down when key assumptions stay implicit: why this endpoint, why this timing, why this population, why this sample size, why this decision rule.
Reviewers don’t need you to be “right.” They need the logic to be explicit, consistent, and defensible.
When the logic stays implicit, you rarely get a clean failure—you get an ambiguous readout. Ambiguity is where your review cycle expands and your study story weakens.

If you don’t fix this
- You enter a rewrite loop (IRB / grant / sponsor) because the logic isn’t written in a reviewable way.
- You get “one killer question” you can’t answer cleanly (endpoint choice, window timing, feasibility, detectability).
- You lock a fragile design and discover it only after animals/patients/time are committed.
- You burn credibility: the story looks like post-hoc patching instead of pre-specified logic.
What you buy: reviewer-proof logic in writing
Assumptions Log (MRR / Logic Risk Register)
What the design is betting on—explicitly.
Review-ready Protocol Logic Map
Eligibility, timing, endpoints, decision rules—expressed as a coherent structure.
Defensibility Note (Rational Design Justification)
A forwardable memo that answers “why this design?” before the reviewer asks.
Artifacts are inspectable (LaTeX/PDF), versioned, and reviewable—not slide decks or black-box outputs.
Audit tiers (pick the depth you need)
Tier A — Protocol Blind-Spot Scan
+ Real-World Power Check (lite: variance + missingness + effective N)
Best for:
An upcoming IRB / grant / sponsor review, when you need a fast, defensible view of what will break—or get attacked—first.
Outputs:
- Assumptions & Fragility Register (MRR) — lite, 1-page artifact
- Reviewer Attack Map — top questions + clean responses
Scope:
Top assumptions + first-break analysis + power realism under real-world drift.
Typical turnaround:
Fast (days)
Best when you need:
A clean, defensible story before your next submission or committee meeting.
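For illustration only (a hypothetical sketch with assumed numbers, not our scenario engine), this is the shape of a lite real-world power check: missingness shrinks effective N, site variability inflates variance, and heterogeneity dilutes the effect, so a nominally well-powered design can quietly drop below coin-flip power.

```python
from statistics import NormalDist

def approx_power(n_per_arm, effect, sd, alpha=0.05):
    """Normal-approximation power for a two-arm comparison of means."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    se = sd * (2 / n_per_arm) ** 0.5
    return NormalDist().cdf(abs(effect) / se - z_crit)

# Assumed planning numbers: 60 per arm, effect 0.5, SD 1.0.
planned = approx_power(60, effect=0.5, sd=1.0)

# Assumed real-world drift: 20% missingness cuts effective N,
# site-to-site variability inflates SD by 15%,
# heterogeneity dilutes the effect by 15%.
effective_n = int(60 * 0.8)
drifted = approx_power(effective_n, effect=0.5 * 0.85, sd=1.0 * 1.15)

print(f"planned power: {planned:.2f}")   # ~0.78 on paper
print(f"drifted power: {drifted:.2f}")   # ~0.44 under drift
```

The point is not the specific numbers (all assumed): modest, realistic drift in three assumptions at once is enough to turn a defensible design into an ambiguous one.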
Tier B — Everything in Tier A, plus:
+ Reviewer-Ready Logic Pack
+ Measurement Decision Design (included when measurement choice is decision-critical)
Best for:
Protocol lock and submission cycles where a rewrite loop would be costly.
Outputs:
- Assumptions & Fragility Register (MRR) (full)
- Assumptions Map (a deeper view of the same artifact family, not a separate deliverable)
- Defensibility Note (Rational Design Justification)
- Eligibility, timing, and decision rules expressed as a traceable single-page decision graph
- Measurement priority & timing memo (when decision-critical)
Includes (standard scenario stress tests):
- Missingness + informative dropout
- Site-to-site variability / measurement drift
- Heterogeneity + effect dilution
Typical turnaround:
1–2 weeks
Best when you need:
A protocol that is reviewer-proof and operationally realistic.
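As a toy illustration of what the informative-dropout stress test probes (all numbers assumed; this is not a deliverable), outcome-dependent dropout alone can inflate an observed treatment effect:

```python
import random

random.seed(1)

# Assumed scenario: 2,000 treated subjects, true mean benefit 0.5 (SD 1.0).
outcomes = [random.gauss(0.5, 1.0) for _ in range(2000)]

# Informative dropout: subjects doing poorly are likelier to leave the study
# (assumed retention: 90% if improving, 50% if not), so completers
# are a biased sample of the population.
observed = [y for y in outcomes if random.random() < (0.9 if y > 0 else 0.5)]

true_mean = sum(outcomes) / len(outcomes)        # close to the true 0.5
completer_mean = sum(observed) / len(observed)   # biased upward by dropout alone

print(f"all subjects:    {true_mean:.2f}")
print(f"completers only: {completer_mean:.2f}")
```

No biology changed between the two numbers; only who stayed in the study did. That is exactly the kind of gap a reviewer will probe, and the kind the stress tests surface before submission.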
Tier C — Everything in Tier B, plus:
+ Mid-Study Amendment Decision (pre-plan)
+ Transferability / Scaling (if bridging contexts)
+ Optional: Reusable model asset (documented I/O + scenario engine; code only if needed)
Best for:
High-stakes or late-stage studies where interpretability, reuse, and multi-site complexity truly matter.
Outputs:
- Full Assumptions Map + Architecture + Scenario Pack (audit-grade)
- “What changes conviction” thresholds (what would actually flip the decision)
- Optional: reusable model asset (documented I/O + scenario engine; code only if needed)
Includes:
Expanded scenario coverage + an audit trail suitable for sponsor, partner, and committee scrutiny.
Typical turnaround:
Project-based
Best when you need:
A readout that holds up under scrutiny and variance, where the downside isn’t being wrong but becoming ambiguous.
Scope discipline: Stage-0 and audits test decision logic—not outcomes.
What decision are you facing right now?

Pick the gate you’re at today:
Modules (same library, packaged for PI reality)
- Design • Review • Mid-Study • Post-Readout • Asset
Phase A — Before you run it (Design / De-risk)
PI question:
What will a reviewer attack first? What breaks first in real life?
Why you need it:
“Sound” science often fails on one hidden assumption (feasibility, endpoint behavior, detectability).
If you don’t do it:
you find fragility after recruitment/experiments start—when changes are expensive or impossible.
You get:
Assumptions Log + reviewable architecture + focused what-if scenarios.
PI question:
What should we measure—so it actually changes the decision?
If you don’t do it:
you over-collect low-information biomarkers → cost, burden, missingness, noise.
You get:
minimal must-measure set + timing windows + measurement ROI notes.
PI question:
Will the regimen hold under adherence and variability?
If you don’t do it:
a viable mechanism “fails” because regimen fragility erased signal.
You get:
regimen scenarios + robust options + dose rationale memo.
PI question:
What transfers across populations/settings—and what breaks?
If you don’t do it:
you generalize too early and get “works here, fails there” surprises.
You get:
transferability map + adjustment scenarios + what must change.
PI question:
Will the pathway break at handoffs/capacity limits?
If you don’t do it:
implementation variance dominates outcomes and damages inference.
You get:
bottleneck map + drop-off points + what-if interventions.
Not sure which block fits your situation?
Stage-0 helps you identify which decision risks actually matter before you invest further. You get a short written verdict — modelable as-is, modelable with changes, or not a fit right now — plus the lowest-effort next step.
Phase B — Before you submit it (Defensibility / Review)
PI question:
Can we defend “why this design?” without a rewrite loop?
If you don’t do it:
IRB/grant/sponsor cycles delay your timeline because logic gaps stay implicit.
You get:
assumptions log + review-ready architecture + defensibility pack.
Phase C — During the study (Mid-course decisions)
PI question:
What amendment reduces risk without invalidating interpretability?
If you don’t do it:
you “fix” under pressure and end with a study that can’t be interpreted.
You get:
constrained scenarios + low-bias amendment options + rationale note.
Guardrail:
We don’t replace ethics/regulatory oversight—scenario transparency only.
Phase D — After the study (Interpretation / Validation / Learning)
PI question:
Null because biology failed—or because design drifted?
If you don’t do it:
you kill a viable line or repeat the wrong next study.
You get:
consistency check + gap analysis + next-step hypotheses map.
PI question:
Would this hold if rerun? Can others follow the logic?
If you don’t do it:
reviewers challenge robustness and you can’t defend it cleanly.
You get:
reproduction report + consistency metrics + sensitivity notes.
PI question:
What missing variable is shaping the outcome?
If you don’t do it:
you repeat the same blind spot at higher cost.
You get:
divergence map + candidate confounders + testable next-step plan.
Phase E — Build the asset (Reuse)
Inspect our math
Public, versioned artifacts—reviewable by peers.
De-identified examples on Zenodo and accompanying GitHub repos. Outputs are delivered in research-grade formats (LaTeX/PDF) suitable for citation and review—not marketing decks.
Enter Stage-0 (PI Intake)
Stage-0 is the first gate in our process. We review protocol logic and assumptions only—no outcome optimization.
You’ll get a short written verdict in ~1 business day: modelable as-is, modelable with changes, or not a fit right now, plus the lowest-effort next step.
Form fields (minimal):
- Study type (preclinical / translational / clinical)
- Decision gate (protocol lock / power / review / mid-study / post-readout / asset)
- Short protocol synopsis (non-sensitive)

No sensitive data required for Stage-0.
