
Chiplet Architecture Explorer

Interactive tools for understanding chiplet yield, defects, and placement strategies

1. Yield vs Scale

Manufacturing large quantum chips is hard — a single defective qubit can render the entire chip unusable. Chiplet architectures sidestep this by splitting the device into smaller, independently testable chips.

The comparison:

- Monolithic chip: \(P(\text{works}) = q^N\), so a single defect anywhere kills it
- One chiplet of size \(N/k\): \(P(\text{works}) = q^{N/k}\), which is much more likely to be defect-free

where \(q\) is the per-qubit yield (probability that any given qubit is functional).
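As a quick sanity check of these two expressions, here is a minimal Python sketch; the function names and example values are mine, chosen to match the tip below.

```python
def p_monolithic(q: float, n: int) -> float:
    """Probability that a monolithic chip with n qubits is entirely defect-free."""
    return q ** n

def p_chiplet(q: float, n: int, k: int) -> float:
    """Probability that one chiplet holding n/k qubits is entirely defect-free."""
    return q ** (n / k)

q, n, k = 0.99, 1000, 4
print(f"monolithic (N={n}):       {p_monolithic(q, n):.2e}")  # ~4.3e-05
print(f"one chiplet (N/k={n//k}): {p_chiplet(q, n, k):.3f}")  # ~0.081
```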

Tip

Try q = 0.99, N = 1000, k = 4. The monolithic chip has a ~0.004% probability of being defect-free, so you'd need to fabricate ~25,000 chips to get a single working device. Each 250-qubit chiplet has an ~8% chance, so you need only ~50 chiplets to get 4 working ones. The chiplet approach is ~500× more manufacturable at this scale.
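To see where those fabrication counts come from, here is a short back-of-the-envelope calculation. The expected-attempt formulas 1/P and k/P are standard; the variable names and rounding are mine, and the result reproduces the ~500× figure up to rounding.

```python
q, n, k = 0.99, 1000, 4

p_mono = q ** n        # ~4.3e-05: chance a 1000-qubit monolithic chip is defect-free
p_chip = q ** (n / k)  # ~0.081:   chance a 250-qubit chiplet is defect-free

chips_needed    = 1 / p_mono   # expected monolithic chips per working device (~23,000)
chiplets_needed = k / p_chip   # expected chiplets fabricated to collect k working ones (~49)

print(f"{chips_needed:,.0f} monolithic chips vs {chiplets_needed:,.0f} chiplets "
      f"(~{chips_needed / chiplets_needed:,.0f}x fewer fabrications)")
```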


2. Break-Even Analysis

Where does chiplet modularity become decisively better? This plot shows how \(P(\text{defect-free})\) evolves with total qubit count \(N\).

Tip

At typical fabrication yields (q ≈ 0.99), the monolithic curve drops to a 1% probability of being defect-free around N = 460 qubits. The chiplet line (for k = 4 chiplets) doesn’t cross 1% until N ≈ 1850, i.e. 4× more qubits for the same manufacturing viability. Beyond a few thousand qubits, monolithic fabrication becomes essentially impossible without near-perfect per-qubit yield.
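These break-even points follow directly from solving \(q^N = p\) for \(N\). A minimal sketch, with a helper name of my own, lands close to the figures quoted above:

```python
import math

def break_even_qubits(q: float, p: float) -> float:
    """Solve q**N = p for N: the qubit count at which the
    defect-free probability falls to the threshold p."""
    return math.log(p) / math.log(q)

q, threshold = 0.99, 0.01
n_monolithic = break_even_qubits(q, threshold)  # ~458 qubits ("around N = 460")
n_chiplet = 4 * n_monolithic                    # ~1830 qubits, since each of k = 4 chiplets holds N/4
print(round(n_monolithic), round(n_chiplet))
```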


3. Defect Map Simulator

Real chiplets have defective qubits. Chipmunq’s size-aware placement shifts the patch position to maximize the number of working qubits underneath it, rather than always placing the patch at the chiplet’s center.

Tip

Click “Reroll defects” to generate a new random defect map. Notice how size-aware placement consistently finds a patch position that avoids more defects than center placement. With high defect rates (>15%), even size-aware placement may not find a fully clean patch — the code distance may need to be reduced.
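The sketch below is a toy version of this idea, not Chipmunq’s actual algorithm: sample a random defect map, slide a square patch over every valid offset, keep the offset that covers the fewest defects, and compare against the centered position.

```python
import random

def make_defect_map(size: int, defect_rate: float, seed: int = 0) -> list[list[bool]]:
    """True marks a defective qubit on a size x size chiplet."""
    rng = random.Random(seed)
    return [[rng.random() < defect_rate for _ in range(size)] for _ in range(size)]

def defects_under_patch(grid: list[list[bool]], row: int, col: int, patch: int) -> int:
    """Count defective qubits covered by a patch whose top-left corner is (row, col)."""
    return sum(grid[r][c] for r in range(row, row + patch) for c in range(col, col + patch))

def best_offset(grid: list[list[bool]], patch: int) -> tuple[int, int, int]:
    """Scan all valid offsets and return (row, col, defect count) of the best one."""
    size = len(grid)
    candidates = [(r, c, defects_under_patch(grid, r, c, patch))
                  for r in range(size - patch + 1) for c in range(size - patch + 1)]
    return min(candidates, key=lambda t: t[2])

grid = make_defect_map(size=12, defect_rate=0.05)
patch = 7
center = (12 - patch) // 2
print("center placement defects:    ", defects_under_patch(grid, center, center, patch))
print("size-aware placement defects:", best_offset(grid, patch)[2])
```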


4. Placement Overhead & Utilization Trade-off

Defects impose a routing overhead — the compiler must insert extra SWAPs to route around broken qubits. But there’s a catch: the strategy that minimizes overhead (size-aware) sometimes leaves chiplets partially empty, reducing utilization.

Note

Schematic curves: The shapes are illustrative, anchored to the paper’s reported values: size-aware incurs 40% less overhead than center placement at the reference defect density, and LightSABRE (no defect awareness) incurs 2× the overhead of center placement.

Tip

The trade-off in plain terms: Size-aware placement reduces overhead by 40% compared to center placement — but by shifting patches to avoid defects, it leaves some chiplet area unused. At 10% defect density, center utilization stays at ~70% while size-aware drops to ~50%. For resource-constrained systems, this gap matters. But the alternative — placing patches on broken qubits — silently corrupts the error correction, since defective qubits cannot participate in stabilizer measurements.
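One way to make the trade-off concrete is to put the quoted anchor values side by side. The "utilization per unit of routing overhead" figure of merit below is my own illustration, not a metric from the paper, and the numbers are simply the ones quoted above for ~10% defect density.

```python
# Anchor values quoted above for ~10% defect density.
# Overhead is relative to center placement; utilization is the fraction of
# the chiplet's qubits covered by the placed patch.
strategies = {
    "center":     {"overhead": 1.0, "utilization": 0.70},
    "size-aware": {"overhead": 0.6, "utilization": 0.50},  # 40% less overhead than center
}

for name, s in strategies.items():
    merit = s["utilization"] / s["overhead"]  # illustrative figure of merit only
    print(f"{name:10s} overhead={s['overhead']:.1f}x  "
          f"utilization={s['utilization']:.0%}  util/overhead={merit:.2f}")
```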