Surface Code & Compilation Explorer

Interactive tools for understanding QEC scaling and chiplet compilation

1. Logical Error Rate vs Code Distance

The surface code suppresses logical errors exponentially with code distance \(d\). The leading-order formula is:

\[p_L \approx \left(\frac{p}{p_\text{th}}\right)^{\lfloor d/2 \rfloor}\]

where \(p\) is the physical error rate per gate and \(p_\text{th} \approx 1\%\) is the threshold. Each +2 in distance increases the exponent by one, suppressing \(p_L\) by another factor of \((p/p_\text{th})\).
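
In code, the estimate is a one-liner. Here is a minimal sketch in Python; the function name and the default threshold value are illustrative, not from any library:

```python
def logical_error_rate(p: float, d: int, p_th: float = 0.01) -> float:
    """Leading-order estimate p_L ~ (p / p_th)^floor(d/2).

    Illustrative only: real prefactors come from simulation (e.g. Stim).
    """
    return (p / p_th) ** (d // 2)

# Example: p = 0.001 against a 1% threshold
for d in (3, 5, 7, 9):
    print(f"d={d}  p_L ~ {logical_error_rate(0.001, d):.1e}")
```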

Note

Formula caveat: This is a leading-order approximation. Stim-based simulations give slightly different values — the formula captures the qualitative scaling but not the exact prefactors.

Tip

For algorithms requiring billions of gates, you need \(p_L \lesssim 10^{-12}\) per logical gate. With \(p = 0.001\) and \(p_\text{th} = 0.01\), that requires \(d \geq 25\). Notice how the distance requirement scales: halving the physical error rate (to 0.0005) lets you drop two distance levels for the same \(p_L\).
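
Inverting the formula gives the smallest odd distance that meets a target \(p_L\). A hypothetical helper, mirroring the arithmetic in the tip above:

```python
def required_distance(p: float, target_pL: float, p_th: float = 0.01) -> int:
    """Smallest odd code distance d with (p/p_th)^floor(d/2) <= target_pL."""
    d = 3
    while (p / p_th) ** (d // 2) > target_pL:
        d += 2
    return d

print(required_distance(0.001, 1e-12))   # 25, matching the tip above
print(required_distance(0.0005, 1e-12))  # 21: two distance levels lower
```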


2. Below vs Above Threshold

The threshold \(p_\text{th}\) is a phase transition: on one side, more distance helps; on the other, it hurts.

When \(p < p_\text{th}\): the ratio \(p/p_\text{th} < 1\), so raising \(d\) drives \(p_L \to 0\). QEC works.

When \(p > p_\text{th}\): the ratio \(p/p_\text{th} > 1\), so raising \(d\) makes \(p_L\) explode. You are encoding an error into larger and larger structures.

Tip

Try sweeping p across the threshold at 1%. Watch the curve flip from downward-sloping (exponential suppression) to upward-sloping (exponential amplification). This is the single most important phenomenon in all of QEC: below threshold, spending more qubits always helps. Above it, spending more qubits makes things worse.
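
The same formula makes the flip easy to see numerically. A sketch; the swept \(p\) values are arbitrary points around the 1% threshold:

```python
# Sweep p across the 1% threshold: below it, p_L falls with d; above, it grows.
p_th = 0.01
for p in (0.002, 0.005, 0.02):
    trend = [(p / p_th) ** (d // 2) for d in (3, 7, 11)]
    regime = "suppression" if p < p_th else "amplification"
    values = ", ".join(f"{x:.1e}" for x in trend)
    print(f"p={p}: p_L at d=3,7,11 -> {values}  ({regime})")
```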


3. Physical Qubit Cost

Every logical qubit requires many physical qubits. For the rotated surface code (the variant Chipmunq targets):

  • Data qubits: \(d^2\)
  • Ancilla qubits: \(d^2 - 1\)
  • Total per logical qubit: \(2d^2 - 1\)
  • Lattice surgery CNOT (3 patches: control, target, ancilla): \(3(2d^2 - 1)\)

Tip

Key insight: The cost grows as \(\approx 2d^2\) — quadratic, not linear. Going from \(d=7\) to \(d=15\) (roughly doubling distance) roughly quadruples the qubit count per logical qubit. Yet the LER suppression per \(+2\) in distance is always the same multiplicative factor \((p/p_\text{th})\). This is the fundamental trade-off in fault-tolerant computing.
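
Both counts and the \(d=7 \to d=15\) comparison are easy to check. The helper names below are illustrative, not from any Chipmunq API:

```python
def qubits_per_logical(d: int) -> int:
    """Rotated surface code patch: d^2 data + (d^2 - 1) ancilla qubits."""
    return 2 * d * d - 1

def lattice_surgery_cnot(d: int) -> int:
    """Three patches (control, target, ancilla) for one CNOT."""
    return 3 * qubits_per_logical(d)

print(qubits_per_logical(7), qubits_per_logical(15))  # 97 vs 449 (~4.6x)
print(lattice_surgery_cnot(15))                       # 1347
```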


4. Compilation Impact on Logical Error Rate

Compilation is not free: every routing decision, every SWAP insertion, affects the physical noise profile the surface code must correct. The question is whether QEC still works after compilation.

The paper measures:

  • Chipmunq: 2.2× LER increase over the ideal (no-compilation) baseline
  • LightSABRE: 128× LER increase — at this level, QEC is qualitatively broken

Note

These multipliers come from Stim-based simulation. They are applied to the approximate formula here for illustration; the qualitative behavior is accurate, but the exact values are simulation-derived.

Tip

Key insight: LightSABRE’s 128× LER degradation is not just a quantitative setback — it is a qualitative failure. Once LER hits the noise floor (the physical error rate \(p\)), increasing \(d\) provides no further benefit. All the physical qubits spent on error correction are wasted. Chipmunq’s 2.2× overhead, by contrast, just means you need 1–2 extra distance levels to hit your target — a manageable constant-factor cost.
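
The constant-factor regime can be made concrete by folding the multiplier into the distance requirement. A sketch only: a simple multiplier cannot capture the noise-floor saturation that breaks the 128× case, so this model applies to the below-floor regime:

```python
def distance_for_target(p: float, target: float, overhead: float,
                        p_th: float = 0.01) -> int:
    """Smallest odd d with overhead * (p/p_th)^floor(d/2) <= target.

    Multiplier model only: it does NOT capture the noise-floor
    saturation that makes the 128x case qualitatively broken.
    """
    d = 3
    while overhead * (p / p_th) ** (d // 2) > target:
        d += 2
    return d

print(distance_for_target(0.001, 1e-12, 1.0))  # 25: ideal baseline
print(distance_for_target(0.001, 1e-12, 2.2))  # 27: one extra distance level
```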


5. Noise-Aware Routing

Chipmunq’s router assigns costs to edges proportional to their expected noise:

\[c(e) = -\log(1 - p_e)\]

This is the only cost function where the total path cost equals \(-\log P(\text{no error on path})\), making shortest-path algorithms equivalent to minimum-error routing.

Tip

Why \(-\log(1-p_e)\)? The probability of an error-free path through edges \(e_1, e_2, \ldots\) is \(\prod_i (1-p_{e_i})\). Taking the log turns this into a sum: \(\log P = \sum_i \log(1-p_{e_i}) = -\sum_i c(e_i)\). Minimizing \(\sum c(e_i)\) is exactly maximizing the probability of an error-free path. Standard Dijkstra’s algorithm, applied to this cost function, becomes a minimum-error router.
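
A self-contained sketch of such a router: plain Dijkstra over \(-\log(1-p_e)\) edge costs. The graph and error rates below are hypothetical, not Chipmunq's actual coupling map or API:

```python
import heapq
import math

def min_error_path(edges, src, dst):
    """Dijkstra with edge cost c(e) = -log(1 - p_e).

    `edges` maps undirected pairs (u, v) to physical error rates p_e.
    Returns the path maximizing P(no error) and that probability.
    """
    graph = {}
    for (u, v), p in edges.items():
        cost = -math.log1p(-p)  # -log(1 - p_e), stable for small p_e
        graph.setdefault(u, []).append((v, cost))
        graph.setdefault(v, []).append((u, cost))
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        c, u = heapq.heappop(heap)
        if u == dst:
            break
        if c > dist.get(u, math.inf):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            if c + w < dist.get(v, math.inf):
                dist[v], prev[v] = c + w, u
                heapq.heappush(heap, (c + w, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], math.exp(-dist[dst])

# Hypothetical 3-edge coupling map: the noisy direct link loses to a detour.
edges = {("a", "b"): 0.05, ("a", "c"): 0.005, ("c", "b"): 0.005}
print(min_error_path(edges, "a", "b"))  # (['a', 'c', 'b'], ~0.990)
```

Using math.log1p(-p) rather than math.log(1 - p) keeps the cost accurate when edge error rates are very small, which is exactly the regime this router cares about.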