
Method2Model
Process
From protocol logic to defensible decisions, reviewable models, and reproducible code — locked and verified at every stage.
We turn complex medical methods into transparent computational decision models and executable code. Each step includes a clear “approval gate” so the logic stays consistent, scope stays controlled, and the decisions built on the model remain explainable and reviewable.
The golden rule of the method to model process
We never start coding or simulation work without a reviewed and approved decision definition and method. Stage 1 (Method Review) is mandatory for every project. Without a clear, model-ready method and a clearly stated decision, any method-to-model process would rest on shaky ground.

Stage 0: Free Feasibility Check
“Is modeling the right tool to support this decision?”
Before any commitment, we assess modeling potential using a non-sensitive summary—no NDA, no identifying details required.
Inputs
- 5–10 lines describing your question and the decision you need to make
- A non-sensitive protocol/method summary (abstract-level)
Outputs
- Feasibility Note: Yes / Yes with changes / Not a fit now (with brief rationale)
- Decision Use-Case Selection: recommendation of 1–2 best-fit decision use cases from our 12 solutions
- Minimum Inputs List: the minimum information needed to start Stage 1 for that decision
Cost: Free | Privacy: No NDA | Turnaround: ~24 hours
Stage 1: Method Review
“Turn text into explicit, reviewable decision logic.”
This is the most important stage: we extract hidden assumptions, define inputs, select the best-fit use case, and lock the model architecture around the decision you need to defend—before any formulas or code.
Privacy
NDA is included in the Stage 1 intake form.
Deliverables
(Stage 1 outputs)
Use-Case Confirmation Lock (Scope Lock)
the chosen use case + clear scope boundaries + success criteria
Assumptions Map / Log (MRR)
assumption IDs, evidence level, impact, sensitivity, and validation paths
Input Spec Sheet (ISS)
variable definitions, units, timing, ranges, expected missingness/noise
Model Architecture (Reviewable)
inputs → states → transitions → outputs, dependency map, fixed vs fitted parts
Scenario Pack v1 (Definition-level)
baseline + stress scenarios, parameter ranges, evaluation metrics
Verification Plan (Key differentiator)
how we will prove in Stage 3 that the delivered code matches this architecture
- Architecture-to-code traceability requirements
- Acceptance tests and scenario equivalence checks
- Version-lock rules (architecture/formula/code IDs)
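For illustration only (field names are hypothetical, not a fixed Method2Model schema), a single Assumptions Map entry can be captured as structured data rather than prose, so completeness is checkable before the architecture is locked:

```python
# Hypothetical sketch of one Assumptions Map (MRR) entry.
# Field names and values are illustrative examples only.
assumption = {
    "id": "A-003",
    "statement": "Clearance is constant over the dosing interval",
    "evidence_level": "literature",   # e.g. literature / expert / data-derived
    "impact": "high",                 # effect on the target decision if wrong
    "sensitivity": "test in Scenario Pack v1",
    "validation_path": "compare against observed trough concentrations",
}

REQUIRED_FIELDS = {"id", "statement", "evidence_level", "impact",
                   "sensitivity", "validation_path"}

def is_complete(entry: dict) -> bool:
    """An entry is review-ready only if every required field is filled in."""
    return REQUIRED_FIELDS <= entry.keys() and all(entry[f] for f in REQUIRED_FIELDS)

assert is_complete(assumption)
```

Keeping entries machine-checkable like this is one way the Stage 3 traceability matrix can later point each code component back to a specific assumption ID.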
Result
You receive a review-ready logic + architecture package for the target decision, which you can share internally (team, collaborators) or externally (sponsor / review planning).
Approval gate
We proceed only after the Architecture + Scenario Pack v1 + Verification Plan for that decision are approved.
Stage 2: Formula Pack & I/O Contract
“Convert architecture into solid mathematics.”
We translate the locked architecture into explicit equations and formalize the input/output definitions as an official schema.
Deliverables
(Stage 2 outputs)
- Formula Pack: full mathematical formulation, parameters, constraints, boundary/initial conditions
- I/O Contract (Official Schema): types, shapes, ranges, file structures for inputs/outputs
- Formula Lock: sign-off-ready confirmation that definitions are fixed for coding
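As a sketch (the variable names, units, and ranges below are hypothetical), an I/O Contract entry pins down type, unit, and admissible range for each input so the coding stage has no room for interpretation:

```python
# Hypothetical I/O Contract sketch: each input variable gets an explicit
# type, unit, and admissible range before any code is written.
IO_CONTRACT = {
    "dose_mg":        {"type": float, "unit": "mg",    "range": (0.0, 5000.0)},
    "weight_kg":      {"type": float, "unit": "kg",    "range": (1.0, 300.0)},
    "n_compartments": {"type": int,   "unit": "count", "range": (1, 3)},
}

def validate_inputs(inputs: dict) -> list[str]:
    """Return a list of contract violations (an empty list means the inputs conform)."""
    errors = []
    for name, spec in IO_CONTRACT.items():
        if name not in inputs:
            errors.append(f"missing: {name}")
            continue
        value = inputs[name]
        lo, hi = spec["range"]
        if not isinstance(value, spec["type"]) or not (lo <= value <= hi):
            errors.append(f"out of contract: {name}={value!r}")
    return errors

assert validate_inputs({"dose_mg": 500.0, "weight_kg": 70.0, "n_compartments": 2}) == []
```

Because the contract is explicit, the same spec can drive both the Stage 3 code's input validation and the acceptance tests that verify it.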
Goal
Prevent scope creep and eliminate ambiguity before implementation — and make sure the way the decision is computed cannot drift as the project moves into code.
Approval gate
We move to coding only after Formula Pack + I/O Contract are approved.
Stage 3: Code Package & Verification
“Deliver a reproducible decision model—and prove it matches the locked logic.”
We implement the approved formulas as a professional Python package and deliver evidence that the code is faithful to the Stage 1 architecture.
Inputs
(Stage 3 requirements)
- A sample dataset (synthetic or de-identified is usually sufficient)
- If you plan to run locally: your compute specs (OS/CPU/RAM/GPU constraints)
Deliverables
(Stage 3 outputs)
- Python Code Package / Repo (versioned): structured, documented, and reusable
- Runbook + Config Templates: install/run instructions and scenario configs
- Sample Run Outputs: a validated example run on your data format + output schema
- Verification Evidence (“Proof of Match”):
- completed traceability matrix (architecture → code components)
- acceptance test report (pass/fail)
- scenario equivalence outputs (baseline checks)
- version IDs linking architecture ⇄ formulas ⇄ code
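A minimal sketch of what a scenario-equivalence check can look like (the formula and tolerance here are illustrative, not a real Formula Pack): the delivered code is run on a baseline scenario and compared point-by-point against the locked reference formulation.

```python
import math

# Hypothetical locked formula from the Formula Pack: simple exponential decay.
def reference_concentration(dose: float, ke: float, t: float) -> float:
    return dose * math.exp(-ke * t)

# Stand-in for the delivered implementation under test.
def implemented_concentration(dose: float, ke: float, t: float) -> float:
    return dose * math.exp(-ke * t)

# Scenario-equivalence check: baseline scenario, fixed tolerance.
baseline = {"dose": 100.0, "ke": 0.1}
for t in (0.0, 1.0, 6.0, 24.0):
    ref = reference_concentration(baseline["dose"], baseline["ke"], t)
    got = implemented_concentration(baseline["dose"], baseline["ke"], t)
    assert math.isclose(ref, got, rel_tol=1e-9), f"mismatch at t={t}"
```

In the delivered package these checks run as part of the acceptance test report, so a pass/fail result is reproducible on the client's side as well.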
Result
You receive a long-term digital decision asset: code you own and can run, audit, and defend when reviewers, sponsors, or auditors ask how the decision was made.
Stage 4: Managed Runs
“We run it for you—securely—and deliver decision-ready outputs.”
If you prefer to focus on interpretation, we execute the model on your real data and deliver final reports.
Deliverables
(Stage 4 outputs)
- Simulation Results: raw outputs + structured result files
- Final Decision Notes: concise, decision-support interpretation linked back to Stage-1 assumptions and architecture
- Run Certificate: code version + scenario config + run metadata
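As an illustrative sketch (all field names and values are hypothetical), a Run Certificate is simply structured metadata that ties one managed run to the frozen versions it was produced from:

```python
import json

# Hypothetical Run Certificate: links one managed run to the frozen
# architecture, formula, and code versions plus the scenario config used.
run_certificate = {
    "code_version": "pkg-1.2.0",
    "architecture_id": "ARCH-v1",
    "formula_pack_id": "FP-v1",
    "scenario_config": "baseline.yaml",
    "run_timestamp": "2025-01-15T10:30:00Z",
    "input_checksum": "sha256:<checksum of the exact input files>",
}

# Delivered as a plain JSON file alongside the result files.
certificate_json = json.dumps(run_certificate, indent=2)
assert json.loads(certificate_json)["code_version"] == "pkg-1.2.0"
```

Because every field points at a frozen, versioned artifact, any output file can be traced back to the exact logic that produced it.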
Scope clarity
Managed runs are scoped upfront (datasets, scenario count, iterations), with clear pricing tied to that scope.
Why this process is research-grade
- No black box: every assumption and decision rule is visible and reviewable.
- Traceability: you can track any output back to protocol logic, assumptions, and the decision it supports.
- Defensibility at code level: the Verification Plan + Evidence prove that the implementation faithfully encodes the Stage-1 decision logic.
- Ownership & security: you receive the code at Stage 3; sensitive runs happen only if you choose Stage 4 under NDA.

Approvals, Change Requests, and Versioning
To keep every Method-to-Model project traceable and defensible, we treat the work like a reviewable product—not a loose set of files. Every major decision artifact (MRR, Architecture, Formula Pack, Code Package) is versioned and frozen when approved, so later choices can be defended.
Versioned deliverables
Each major deliverable has a clear version number (e.g., MRR, Model Architecture, Formula Pack, Code Package). This makes discussions, reviews, and revisions unambiguous—everyone knows exactly which version is being referenced.
Freeze after approval
When you approve a deliverable, it is marked as frozen. Future work builds on that frozen version, so the project stays consistent and the logic—and the decisions derived from it—don’t drift over time.
See an example: Stage-1 record with versioned PDFs (Zenodo)
Change Requests (CR)
If something substantial needs to change after a deliverable is frozen, we log it as a Change Request with a clear description of what changes, why it changes, and the expected time/cost impact—so scope stays controlled and previously approved decisions stay defensible.
“Before a deliverable is frozen, revisions are part of the normal stage loop. Change Requests apply only after approval/freeze.”
Data, IP, and Privacy
We design the workflow to minimize sensitive exposure while keeping results usable for real research decisions.
Data minimization by default
Whenever possible, we work with de-identified, aggregated, or synthetic data. Identifiable patient data is rarely necessary for the types of modeling, simulation, and consistency checks we deliver.
NDA when appropriate
We do not require an NDA for Stage 0. If you choose to move forward, NDAs and data-processing terms can be put in place starting Stage 1, when sensitive materials may be shared.
Client-first privacy
By default, your materials, model logic, and outputs remain private to you. We do not publish or share anything publicly unless you provide explicit written permission and the content has been sanitized appropriately.
Policies
If you need more detail on data handling, security expectations, and publication rules, please see our Policies page.
“We avoid identifiable data by default. If sensitive data is required for managed runs, it happens only in Stage 4 under NDA/DPA.”
Not sure which stage fits your situation?
Start with Stage 0 (Free Feasibility Check).
Send a short, non-sensitive summary of your study or protocol, and within one business day you’ll receive:
- A clear feasibility answer (modelable / modelable with changes / not a fit now)
- The most relevant decision use-case from our 12 modeling solutions
- A minimum inputs list for Stage 1, so you know exactly what’s needed next to support that decision

