Performance

This page benchmarks Panache's formatting and linting speed against popular alternatives on real Quarto and Markdown documents, highlighting its suitability for on-save formatting and fast repository-wide checks.

Overview

Panache is designed for speed without compromising on correctness. Built in Rust and compiled to native code, it delivers fast formatting with minimal startup overhead. In this document, we present benchmarks comparing Panache’s formatting and linting performance against popular alternatives such as Prettier, Pandoc, rumdl, mdformat, mado, markdownlint, and markdownlint-cli2 on a realistic corpus of Quarto and Markdown documents. We don’t include R Markdown benchmarks here because none of the compared tools support that format.

We have split the benchmarks into two suites: per-document benchmarks that run each formatter on each document in the corpus individually, and repository-wide benchmarks that run formatters on entire repositories of tracked documents. The former highlights raw formatting speed on a variety of real-world documents, which is what an editor’s on-save formatting experiences; the latter captures the overhead of processing many files in a single run, which is more representative of CI checks or large-scale linting. Caches are disabled for all benchmarks to show worst-case performance.

The numbers on this page are produced by the scripts in benches/ and read from the JSON files written next to this document. The benchmark chunks on this page are intentionally not executed during preview or render, so small content edits reuse the existing JSON files instead of rerunning the benchmarks. To pick up newly generated results, refresh the benchmark data with the commands at the bottom of this page, then delete docs/_freeze/guide/performance/ and re-render.

Formatting

Single-Document

In Figure 1, we compare formatting time per document across panache, Prettier, Pandoc, rumdl, and mdformat. Each dot is one document; the y-axis shows time relative to panache (×, lower is faster). Panache sits at 1× by construction (dashed baseline) and other formatters land above it. Hover over a point to see the absolute wall-clock time in milliseconds. Formatters are ordered left-to-right from fastest to slowest on average.

Figure 1: Formatting time per document, comparing panache to Prettier, Pandoc, rumdl, and mdformat. Each dot is one document; the y-axis shows time relative to panache (×, lower is faster).
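The relative metric used throughout these plots is straightforward to compute from absolute wall-clock times. Here is a minimal sketch; the function name and the example numbers are illustrative only, not part of the benchmark scripts or measured results:

```python
def relative_times(times_ms: dict[str, float], baseline: str = "panache") -> dict[str, float]:
    """Normalize absolute wall-clock times (ms) to a baseline tool.

    The baseline maps to 1.0 by construction; values above 1.0 are
    proportionally slower than the baseline.
    """
    base = times_ms[baseline]
    return {tool: t / base for tool, t in times_ms.items()}

# Illustrative numbers only, not measured results:
times = {"panache": 4.0, "prettier": 80.0, "pandoc": 24.0}
rel = relative_times(times)
# rel["panache"] -> 1.0, rel["prettier"] -> 20.0, rel["pandoc"] -> 6.0
```

Normalizing within each document (or each repository, in the repo-wide plots) is what lets documents of very different sizes share one y-axis.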

Repository-Wide

In Figure 2, we compare formatting time across panache, Prettier, rumdl, and mdformat on tracked Markdown files from several repositories.

Figure 2: Repository-wide formatting benchmarks on standard Markdown repos, comparing panache to Prettier, rumdl, and mdformat. Each dot is one repo/tool pair; the y-axis shows time relative to panache within that repo (×, lower is faster).

In Figure 3, we compare formatting time across panache and rumdl on tracked .qmd files from several Quarto repositories.

Figure 3: Repository-wide formatting benchmarks on Quarto repos. Each dot is one repo/tool pair; the y-axis shows time relative to panache within that repo (×, lower is faster).

Each dot is one repo/tool pair. Panache sits at 1× by construction (dashed baseline), and points above it are slower on that repository. Hover over a point to see the absolute wall-clock time in milliseconds and the corpus size.

Linting

Single-Document

In Figure 4, we compare linting time per document across panache lint, rumdl check, mado check, markdownlint, and markdownlint-cli2. Each dot is one document; the y-axis shows time relative to panache lint (×, lower is faster). Panache sits at 1× by construction.

Figure 4: Linting time per document, comparing panache lint to rumdl check, mado check, markdownlint, and markdownlint-cli2. Each dot is one document; the y-axis shows time relative to panache lint (×, lower is faster).

Repository-Wide

This suite benchmarks linting on the same standard Markdown repositories as the formatting comparison, using panache lint, rumdl check, mado check, markdownlint, and markdownlint-cli2. In Figure 5, each dot is one repo/tool pair; the y-axis shows time relative to panache lint within that repo (×, lower is faster).

Figure 5: Repository-wide linting benchmarks on standard Markdown repos, comparing panache lint to rumdl check, mado check, markdownlint, and markdownlint-cli2. Each dot is one repo/tool pair; the y-axis shows time relative to panache lint within that repo (×, lower is faster).

In Figure 6, we compare linting time across panache and rumdl on tracked .qmd files from several Quarto repositories.

Figure 6: Repository-wide linting benchmarks on Quarto repos. Each dot is one repo/tool pair; the y-axis shows time relative to panache lint within that repo (×, lower is faster).

Reproducing

All benchmarks are reproducible and require hyperfine. The scripts in benches/ are designed to be run from the repository root:

# Download test documents (idempotent)
cd benches/documents && ./download.sh && cd ../..

# Run comparison benchmark and write JSON
bash benches/compare_all.sh --json --out docs/guide/performance_data.json

# Run repository-wide Markdown formatting benchmark
bash benches/compare_repo_suite.sh --mode format --track markdown --out docs/guide/performance_repo_markdown_format_data.json

# Run repository-wide Quarto formatting benchmark
bash benches/compare_repo_suite.sh --mode format --track quarto --out docs/guide/performance_repo_quarto_format_data.json

# Run repository-wide Markdown lint benchmark
bash benches/compare_repo_suite.sh --mode lint --track markdown --out docs/guide/performance_repo_markdown_lint_data.json

# Run repository-wide Quarto lint benchmark
bash benches/compare_repo_suite.sh --mode lint --track quarto --out docs/guide/performance_repo_quarto_lint_data.json

# Run per-document lint benchmark
bash benches/compare_lint_single.sh --out docs/guide/performance_lint_single_data.json

# Or run the human-readable text variant
bash benches/compare_all.sh

# Re-render this page (uses freeze cache by default)
quarto render docs/guide/performance.qmd

# Force the benchmark to re-run by invalidating the freeze cache
rm -rf docs/_freeze/guide/performance
quarto render docs/guide/performance.qmd
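hyperfine handles warmup runs, outlier detection, and JSON export; conceptually, though, what it measures is repeated wall-clock timing of a command. As a rough stand-in, not used by the scripts above, the idea can be sketched in Python:

```python
import statistics
import subprocess
import sys
import time

def median_wall_clock_ms(cmd: list[str], runs: int = 5) -> float:
    """Run a command several times and return the median wall-clock time in ms.

    A crude stand-in for hyperfine: no warmup runs, no outlier detection,
    no statistical confidence intervals.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

# Example: time a no-op Python process (dominated by interpreter startup,
# which is exactly the overhead cold-start benchmarks are sensitive to).
print(f"{median_wall_clock_ms([sys.executable, '-c', 'pass']):.1f} ms")
```

Because the benchmarks above run with caches disabled, this kind of cold-start cost is included in every measurement.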