Problem
docs/transpiler-performance.md currently presents precise benchmark numbers, hardware details, and methodology claims (for example, averages over multiple runs), but this repo does not include:
- a benchmark harness
- raw benchmark outputs
- fixture apps / commands used to produce the numbers
- any reproducible source for the documented figures
That makes the guide hard to trust and impossible to refresh safely.
Suggested fix
Choose one of these directions:
- Add benchmark scripts, fixture apps, raw results, and a brief reproduction section to the repo.
- Rewrite the guide as qualitative guidance (tradeoffs, expected directionality, caveats) and remove exact numbers that cannot be reproduced from source control.
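If the first direction is taken, the harness does not need to be elaborate. A minimal sketch (the `benchmark` helper, label, and stand-in workload below are all hypothetical, not from this repo) that times a run several times, keeps the raw samples, and emits JSON that can be checked in alongside the doc:

```python
import json
import statistics
import time

def benchmark(label, run, warmups=1, repeats=5):
    """Time run() repeatedly; return summary stats plus raw samples."""
    for _ in range(warmups):
        run()  # discard warm-up runs (caches, JIT, filesystem)
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        run()
        samples.append(time.perf_counter() - start)
    return {
        "label": label,
        "repeats": repeats,
        "mean_s": statistics.mean(samples),
        "stdev_s": statistics.stdev(samples) if repeats > 1 else 0.0,
        "raw_s": samples,  # keep raw outputs so figures can be refreshed
    }

if __name__ == "__main__":
    # Stand-in workload; a real harness would invoke the transpiler on a
    # fixture app, e.g. via subprocess.run([...]) with a pinned command.
    result = benchmark("stand-in-workload", lambda: sum(range(100_000)))
    print(json.dumps(result, indent=2))
```

Committing the script, the fixture commands, and the JSON it produces would give every number in the guide a reproducible source.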
Why this matters
Performance claims age quickly. Without reproducible artifacts, the docs risk becoming stale in a way that looks authoritative.