Reproducible performance benchmarks for key backend engineering decisions. Each benchmark isolates one variable and measures its impact.
## Available Benchmarks
| # | Benchmark | What It Measures | Key Finding |
|---|---|---|---|
| 01 | Thread vs Async vs Event Loop | Memory and CPU cost per concurrent task | |
| 02 | TCP vs HTTP Overhead | Protocol overhead per request | |
| 03 | JSON vs Protobuf | Serialization speed and wire size | |
| 04 | DB Indexing Impact | Query time with/without index | |
| 05 | N+1 vs Batching | Query count impact on latency | |
| 06 | Cache vs No Cache | Cache hit rate vs effective latency | |
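Each benchmark follows the same shape: isolate one variable and time both variants under an identical workload. As an illustration only, here is a minimal sketch in the spirit of benchmark 06 (cache vs no cache) — the function names and the simulated 1 ms fetch cost are hypothetical, not taken from the actual benchmark:

```python
import time
from functools import lru_cache

def slow_lookup(key: int) -> int:
    # Simulated backend fetch: fixed 1 ms cost per miss (hypothetical workload).
    time.sleep(0.001)
    return key * 2

@lru_cache(maxsize=None)
def cached_lookup(key: int) -> int:
    return slow_lookup(key)

def timed(fn, keys) -> float:
    # Time one full pass over the key stream.
    start = time.perf_counter()
    for k in keys:
        fn(k)
    return time.perf_counter() - start

# 10 hot keys repeated 1000 times -> high hit rate once the cache is warm.
keys = [i % 10 for i in range(1000)]
uncached = timed(slow_lookup, keys)
cached = timed(cached_lookup, keys)
print(f"no cache: {uncached:.3f}s  cache: {cached:.3f}s  speedup: {uncached / cached:.0f}x")
```

The point the real benchmark makes more rigorously: effective latency is dominated by the miss cost times the miss rate, so a skewed key distribution makes even a tiny cache pay off.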
## Running Benchmarks
Each benchmark directory contains a README.md with complete, runnable code. Requirements: Python 3.8+ (standard library only) unless otherwise specified.
## Interpreting Results
All benchmarks report:
- p50 (median): typical case
- p99: tail latency (worst 1%)
- Memory usage: peak heap during benchmark
- Throughput: requests per second at steady state
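All four figures can be derived from raw latency samples with the standard library alone. A minimal sketch (the sample data and wall time below are made up for illustration):

```python
import statistics

def summarize(latencies_ms, wall_time_s):
    """Compute p50/p99 latency and steady-state throughput from raw samples."""
    p50 = statistics.median(latencies_ms)
    # quantiles(n=100) returns the 99 percentile cut points; index 98 is p99.
    p99 = statistics.quantiles(latencies_ms, n=100)[98]
    throughput = len(latencies_ms) / wall_time_s
    return p50, p99, throughput

# Hypothetical sample: 99% fast requests, 1% slow outliers in the tail.
samples = [1.0] * 990 + [50.0] * 10
p50, p99, thr = summarize(samples, wall_time_s=1.5)
print(f"p50={p50:.1f}ms  p99={p99:.1f}ms  throughput={thr:.0f} req/s")
```

Note how p50 stays at 1 ms while p99 jumps to near 50 ms — exactly why both are reported. Peak memory can be captured separately with `tracemalloc` from the standard library.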
Run each benchmark 3 times and take the median. Results vary by hardware; focus on relative comparisons, not absolute values.
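The 3-runs-take-median protocol can be scripted in a few lines; a sketch, where `run_once` is a placeholder workload standing in for any benchmark's timed section:

```python
import statistics
import time

def run_once() -> float:
    """Placeholder workload; a real benchmark times its own critical section."""
    start = time.perf_counter()
    sum(i * i for i in range(200_000))
    return time.perf_counter() - start

# Three full runs; the median discards a single outlier run (e.g. a cold cache
# or a background process stealing CPU) without averaging it in.
runs = [run_once() for _ in range(3)]
median_s = statistics.median(runs)
print(f"runs: {[f'{r:.4f}' for r in runs]}  median: {median_s:.4f}s")
```

The median is preferred over the mean here because a single perturbed run skews a 3-sample mean badly but leaves the median untouched.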