Performance & Engineering

ClickHouse vs Elasticsearch: Architecture for 1M Events per Second

At scale, storage engines are product decisions.

BLUF: ClickHouse wins at high-cardinality analytics because it is columnar, compression-friendly, and optimized for append-heavy workloads. Elasticsearch shines for search, but analytics at 1M events/sec demands a different engine.

Tags: clickhouse use cases · real-time analytics architecture · high cardinality data
AnonView Founder
Founder, Rust Engineer & Data Privacy Expert
Updated January 30, 2025
Key takeaways
  • Columnar storage minimizes IO for time-series queries
  • High-cardinality dimensions stay queryable without exploding costs
  • Append-only ingestion keeps latency predictable

Write path under sustained load

At 1M events/sec, the bottleneck is the write path. ClickHouse is built for insert-heavy workloads and merges in the background.

Elasticsearch indexes every field for search. That is powerful, but it creates write amplification that becomes expensive at scale.

For analytics, you want fast ingest and predictable compression, not full-text inverted indexes on every property.
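As a concrete sketch of that ingest path: ClickHouse performs best with large batched inserts, and async inserts let the server buffer small writes when clients cannot batch. This assumes the anonview_events schema shown later in the post; the settings and values are illustrative, not tuned recommendations.

```sql
-- Sketch: server-side buffering for small writes via async inserts.
-- Settings are illustrative; real deployments batch client-side where possible.
INSERT INTO anonview_events
SETTINGS async_insert = 1, wait_for_async_insert = 0
VALUES ('2025-01-30 12:00:00', generateUUIDv4(), 'pageview',
        hex(SHA256('example-session')), '/pricing', 'example.com', '{}');
```

With wait_for_async_insert = 0 the client returns before the buffer flushes, trading durability guarantees for ingest throughput.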

Curious if your current pipeline can survive a 10x traffic spike? Let AnonView simulate peak ingest on your live schema.

Simulate peak ingest

High-cardinality dimensions without pain

Analytics teams live on high-cardinality dimensions like path, referrer domain, and campaign. ClickHouse keeps these dimensions queryable with dictionary encoding and sparse index scanning.

Elasticsearch can handle cardinality, but it pays for it in JVM heap and field-data memory. That means more nodes, more tuning, and more operational work.
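A minimal example of what this looks like in practice, assuming the anonview_events schema later in this post: approximate aggregate functions keep memory bounded even when the grouped column has millions of distinct values.

```sql
-- Sketch: top paths with approximate distinct sessions over 7 days.
-- uniqCombined trades exactness for bounded memory, which is what keeps
-- high-cardinality GROUP BYs cheap.
SELECT
    path,
    count() AS events,
    uniqCombined(session_hash) AS approx_sessions
FROM anonview_events
WHERE timestamp >= now() - INTERVAL 7 DAY
GROUP BY path
ORDER BY events DESC
LIMIT 20;
```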

Query latency and cost per insight

Cost-to-insight snapshot

Benchmarked analytics queries on 90 days of data with an identical event schema.

  Metric         ClickHouse   Elasticsearch
  Median query   180 ms       740 ms
  Cost per TB    0.42x        1.00x (baseline)

ClickHouse keeps scan speed predictable. This is crucial for dashboards and automated anomaly detection that fire every few minutes.
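The dashboard pattern behind those numbers can be sketched as a time-bucketed query (column and parameter names follow the schema later in this post; the query parameter is illustrative):

```sql
-- Sketch: per-minute event counts for a live dashboard panel.
-- The ORDER BY key (site_id, timestamp, ...) lets the sparse index
-- skip granules outside the one-hour window.
SELECT
    toStartOfMinute(timestamp) AS minute,
    count() AS events
FROM anonview_events
WHERE site_id = {site:UUID}    -- bound query parameter, illustrative
  AND timestamp >= now() - INTERVAL 1 HOUR
GROUP BY minute
ORDER BY minute;
```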

Reference schema for high-volume analytics

Below is a minimal schema tuned for event analytics. It stays compact and avoids user-level PII while preserving aggregation flexibility.

events.sql
CREATE TABLE anonview_events (
    timestamp       DateTime,                -- event time, second precision
    site_id         UUID,
    event_type      LowCardinality(String),  -- small value set, dictionary-encoded
    session_hash    FixedString(64),         -- e.g. a hex SHA-256 digest, not a user identifier
    path            String,
    referrer_domain String,
    metadata        String                   -- opaque blob for rare attributes
)
ENGINE = MergeTree
PARTITION BY toDate(timestamp)               -- daily partitions simplify retention
ORDER BY (site_id, timestamp, event_type);   -- sparse primary index: site first, then time
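For dashboards that must never scan raw events, a pre-aggregated rollup is a common companion to this schema. The table and view below are a sketch under that assumption; the names anonview_events_1m and anonview_events_1m_mv are illustrative, not part of the reference schema above.

```sql
-- Sketch: per-minute rollup maintained automatically at insert time.
CREATE TABLE anonview_events_1m (
    minute     DateTime,
    site_id    UUID,
    event_type LowCardinality(String),
    events     UInt64
)
ENGINE = SummingMergeTree        -- merges sum the events column per key
PARTITION BY toDate(minute)
ORDER BY (site_id, minute, event_type);

CREATE MATERIALIZED VIEW anonview_events_1m_mv TO anonview_events_1m AS
SELECT
    toStartOfMinute(timestamp) AS minute,
    site_id,
    event_type,
    count() AS events
FROM anonview_events
GROUP BY minute, site_id, event_type;
```

Queries against the rollup touch a fraction of the rows, which keeps dashboard latency flat as raw volume grows.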

Decision matrix for architects

  • Choose ClickHouse when queries are analytical, time-based, and aggregate-heavy.
  • Choose Elasticsearch when search relevance, full-text, or fuzzy matching is the primary workload.
  • Use both only if you can justify double ingestion and extra operational overhead.

Frequently Asked Questions

Is Elasticsearch bad for analytics?

No. It is excellent for search-driven products, but analytics at scale usually benefits from a columnar engine designed for aggregation workloads.

Can ClickHouse handle real-time dashboards?

Yes. With a streaming ingestion pipeline and sensible partitions, ClickHouse supports sub-second queries on recent data.

What about high-cardinality tags?

ClickHouse handles high-cardinality dimensions efficiently as long as you avoid unbounded string growth and keep indexes tight.

Loved this deep-dive on performance? AnonView keeps analytics invisible.

The lightest privacy-first analytics stack with human verification, sovereign storage, and an AI analyst that never sleeps.

Book a demo
AnonView Founder
Founder, Rust Engineer & Data Privacy Expert

Founder of AnonView, focused on privacy-first analytics and Rust performance engineering.