You Are Here—The Pathfinder Phase: A Systems-Level Framework for Fault-Tolerant Quantum Computing

Insider Brief:
- The Novo Nordisk Foundation Quantum Computing Programme has entered its Pathfinder Phase, outlining a structured roadmap to utility-scale fault-tolerant quantum computing by 2035.
- The Pathfinder Framework organizes development through three interlinked systems: the Mission Ladder, Mission-Driven Project Management, and a Data-Driven Workflow Framework.
- The Mission Ladder defines progress across algorithm and hardware development using formal metrics of interest (quality, speed, and scale) linked to project milestones.
- The program uses data-driven methods and machine learning to optimize design and fabrication processes, while coordinating interdisciplinary teams through shared, mission-aligned objectives.
Fail to plan, plan to fail: a familiar adage invoked in moments of perceived failure, and a reminder that structure and organization matter above all. In a world where action and execution are king, even the most difficult tasks can be broken into smaller iterations and gradually completed over time. The monumental challenge of developing fault-tolerant quantum computing may be no exception.
At least, that’s the assumption behind the Novo Nordisk Foundation Quantum Computing Programme (NQCP) and its newly launched Pathfinder Phase. According to a program statement, the Pathfinder Framework is designed to define a clear and measurable roadmap toward utility-scale fault-tolerant quantum computing (FTQC) by 2035. It is the second phase in a multi-decade plan led by NQCP and Quantum Foundry Copenhagen, following a Preparation Phase that concluded in 2024. A final Scaling Phase is expected to follow.
The mission is ambitious, but the path is laid out with notable precision. The Pathfinder Framework divides its work into three distinct but interlinked systems: the Mission Ladder, Mission-Driven Project Management, and a Data-Driven Workflow Framework. Together, these components provide a scaffolding for the program’s R&D, ensuring progress isn’t merely forward-moving and left largely to chance, but informed, measurable, and adaptable.
Mission Ladder: Structuring the Ascent to FTQC
At the center of the framework is the Mission Ladder, which organizes both algorithmic and hardware development efforts into levels, each with specific metrics and goals. On the software side, the Applications & Algorithms track spans six levels, ranging from early use-case identification to the estimation of quantum resources required for execution. In parallel, hardware R&D is structured into nine levels, modeled loosely on traditional Technology Readiness Levels.
Technology Readiness Levels are a standardized way of measuring how mature a particular technology is, originally developed by NASA and now widely used in industry and research. They range from TRL 1 (basic principles observed) to TRL 9 (actual system proven in operational environment). Here’s a quick breakdown:
- TRL 1–3: Early-stage research — basic science and proof-of-concept.
- TRL 4–6: Development — lab testing, component validation, and subsystem integration.
- TRL 7–9: Deployment — prototype demonstration, full system validation, and operational use.
In the context of the Pathfinder Framework, the hardware development ladder is modeled loosely on this scale, with each level representing increasing confidence in performance, scalability, and integration of quantum processors.
The purpose of the ladder is to map dependencies between algorithmic requirements and hardware capabilities, and to define the necessary targets in terms of three key metric pillars: quality, speed, and scale. These are not general descriptors, but formalized metrics of interest (MOIs) that evolve alongside project milestones. According to the framework document, bidirectional dependencies are especially critical for systems involving long-range qubit connectivity, whether within a single QPU or across distributed architectures.
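To make the idea of levels paired with quality/speed/scale targets concrete, here is a minimal sketch of how such a ladder level could be modeled. This is an illustration only: the class names, level description, and numeric thresholds are invented for the example and do not come from the framework document.

```python
from dataclasses import dataclass


@dataclass
class MOITargets:
    """Metrics-of-interest targets for one ladder level (illustrative units)."""
    quality: float  # e.g., minimum two-qubit gate fidelity
    speed: float    # e.g., minimum operation rate, ops/second
    scale: int      # e.g., minimum qubit count


@dataclass
class LadderLevel:
    level: int
    description: str
    targets: MOITargets

    def is_met(self, quality: float, speed: float, scale: int) -> bool:
        """A level is cleared only when all three metric pillars hit target."""
        t = self.targets
        return quality >= t.quality and speed >= t.speed and scale >= t.scale


# A hypothetical mid-ladder level, loosely in the spirit of TRL 4-6.
level5 = LadderLevel(
    level=5,
    description="Subsystem integration demonstration (hypothetical)",
    targets=MOITargets(quality=0.999, speed=1e5, scale=100),
)

print(level5.is_met(quality=0.9995, speed=2e5, scale=128))  # True: all pillars met
print(level5.is_met(quality=0.995, speed=2e5, scale=128))   # False: quality falls short
```

The design choice worth noting is the conjunction: advancement requires all three pillars simultaneously, which is what distinguishes formalized MOIs from a loose checklist.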
Coordinated Complexity: Mission-Driven Project Management
Project coordination in this context isn’t managed through traditional timelines alone, but instead through alignment with specific mission ladder levels. Each project must define deliverables in terms of its position on the ladder and the associated MOI targets. As noted in the framework, this allows teams across different subdomains, such as fabrication, control, and modeling, to understand how their objectives contribute to the broader system.
In other words, progress is framed by problem-space advancement rather than arbitrary dates. This structure also provides built-in visibility across interdependent teams and projects, which helps prevent misalignment between engineering, theory, and use-case validation.
Data-Driven Workflows: A Foundation of Feedback
While the mission ladder sets the roadmap, the data-driven framework keeps the car on the road. Here, the focus is on reproducibility, process control, and optimization through parameter correlation analysis.
The program outlines four key parameter classes: design parameters, process control parameters, monitors, and metrics. Input parameters (the DPs and PCPs) are systematically varied, while outputs (monitors and metrics) are observed to infer correlations. This structure allows both process stabilization and component optimization. The goal, ultimately, is to automate the identification of optimal system configurations by linking low-level parameter changes with high-level performance indicators.
All data is stored in a central warehouse, with critical parameters curated in a labeled database. According to the framework, supervised machine learning techniques will eventually be used to close the loop on optimization, speeding up design cycles and increasing experimental efficiency.
Toward a Systems-Level Approach
A wide range of qubit modalities are being considered under the Pathfinder Framework, and the program does not appear to be betting on a single hardware winner. Instead, the structure itself allows candidate platforms to be evaluated under shared metrics, compared according to the requirements of mission-driven algorithms. This avoids the pitfall of technology-specific benchmarks and keeps the program oriented toward its end goal: useful, fault-tolerant systems that can solve real problems in materials and life sciences.
If “doing hard things” is possible through methodical planning, NQCP’s approach is designed to test that hypothesis. It also reflects a growing trend in quantum R&D: the shift away from siloed research and toward systems engineering practices familiar in aerospace and complex software development.
The field has long understood, or at least broadly agreed on, what needs to be built. The challenge now is to structure the path so we can actually get there.