Add reproducible latency benchmarks #56

@haasonsaas

Description

Context

Formal.ai claims sub-10ms p50 proxy overhead. Gate has no published latency numbers. Without benchmarks, Gate cannot compete in enterprise evaluations where latency is a hard requirement.

Proposal

Create a reproducible benchmark suite:

  • Benchmark script in scripts/benchmark.sh or internal/benchmark/
  • Scenarios: direct connection vs Gate-proxied for each protocol
    • PostgreSQL: simple SELECT, complex JOIN, INSERT batch
    • MySQL: equivalent queries
    • With and without policy evaluation (0 policies, 1 policy, 10 policies)
    • With and without masking enabled
  • Metrics: p50, p95, p99 latency; throughput (queries/sec); CPU/memory overhead
  • Hardware baseline: document the test hardware (e.g., M4 MacBook Pro, or 4-core EC2 instance)
  • Automated: runs in CI on demand (not every push) to catch regressions
  • Published: results in README or dedicated benchmarks page

Acceptance Criteria

  • Benchmark script is reproducible and documented
  • Covers PostgreSQL and MySQL with varying policy loads
  • Reports p50/p95/p99 latency and throughput
  • Results published with hardware specification
  • Can be run in CI for regression detection
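The "on demand, not every push" criterion maps naturally onto a manually triggered GitHub Actions workflow. A minimal sketch, assuming the repository uses GitHub Actions and the `scripts/benchmark.sh` entry point from the proposal (the workflow filename and job name are hypothetical):

```yaml
# .github/workflows/benchmark.yml (hypothetical)
# workflow_dispatch makes the run manually triggerable from the
# Actions tab instead of firing on every push.
name: benchmark
on:
  workflow_dispatch:

jobs:
  bench:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run latency benchmarks
        run: ./scripts/benchmark.sh
```

Results from each run could be attached as a workflow artifact and compared against the previous baseline to flag regressions.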

Labels

performance (Performance improvements)
