Refactor: standard install/start/check/stop/load/query interface per system #860
Open
alexey-milovidov wants to merge 37 commits into main from …/data-size
Conversation
Each local system now exposes a small set of single-purpose scripts with a
stable contract, so they can be driven by a shared lib/benchmark-common.sh
and reused by external tooling (e.g. an online "run query against system X"
service):
install      env prep + system install (idempotent)
start        start daemon (idempotent; empty for stateless tools)
check        trivial query; exit 0 iff responsive
stop         stop daemon (idempotent)
load         runs create.sql + loads data, deletes source files, sync
query        SQL on stdin; result on stdout; runtime in fractional seconds
             on the last line of stderr; non-zero exit on error
data-size    prints data footprint in bytes (one integer to stdout)
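For concreteness, a minimal sketch of a conforming query script, assuming a hypothetical psql-style client (real per-system scripts differ):

```bash
#!/bin/bash
# Contract sketch: SQL arrives on stdin, result rows go to stdout,
# runtime in fractional seconds is the last line of stderr,
# and any failure exits non-zero (set -e aborts before the time is printed).
set -e
QUERY=$(cat)
START=$(date +%s.%N)
psql -h localhost -U test -t -c "$QUERY"
END=$(date +%s.%N)
awk "BEGIN { printf \"%.3f\\n\", $END - $START }" >&2
```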
Each system's old monolithic benchmark.sh is replaced by a 4-line shim that
sets a couple of env vars (BENCH_DOWNLOAD_SCRIPT, BENCH_RESTARTABLE) and
exec's lib/benchmark-common.sh. The shared driver runs the unified flow:
install -> start+check -> download -> load (timed) -> for each query
{flush caches; optionally stop+start to neutralize warm-process effects;
run query 3x} -> data-size -> stop. Output format ([t1,t2,t3], Load time,
Data size) matches the previous benchmark.sh exactly so cloud-init.sh.in's
log POST to play.clickhouse.com keeps working unchanged.
For dataframe/in-process systems (pandas, polars-dataframe, chdb-dataframe,
daft-parquet*, duckdb-dataframe, sirius), the engine is wrapped in a small
FastAPI server (server.py) so the start/stop/query interface still applies.
BENCH_RESTARTABLE=no for these (and for embedded CLIs like duckdb, sqlite,
datafusion, etc.) since restarting a single Python/CLI process between
queries would dominate query time.
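For these wrapped systems, the per-system query script can reduce to a thin HTTP client around the local server; a sketch, where the port and /query endpoint are assumptions for illustration rather than the actual server.py API:

```bash
#!/bin/bash
# Forward the stdin SQL to the in-process engine's local HTTP wrapper.
# Real scripts would likely report the engine's own measured runtime;
# the round trip is timed client-side here only for brevity.
set -e
START=$(date +%s.%N)
curl -sS --fail --data-binary @- http://localhost:8000/query
END=$(date +%s.%N)
awk "BEGIN { printf \"%.3f\\n\", $END - $START }" >&2
```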
Scope: 88 local systems refactored. Cloud/managed systems and a handful of
non-functional ones (csvq, dsq, locustdb, mongodb, polars CLI, exasol,
spark-velox) are intentionally left untouched.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Resolves conflict in clickhouse-datalake{,-partitioned}: upstream switched
the datalake variants from filesystem-cache to userspace page-cache (PR #818).
The refactored install/query scripts now adopt the page-cache approach.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
mongodb: query takes a MongoDB aggregation pipeline (Extended JSON, one line)
on stdin instead of SQL — these are the same canonical 43 ClickBench queries,
just expressed as mongo pipelines. queries.txt is generated from queries.js
(the source of truth) by replacing JS-only constructors (NumberLong, ISODate,
NumberDecimal) with their EJSON canonical form. The shim sets
BENCH_QUERIES_FILE=queries.txt to point the driver at it.

polars: wrapped in a FastAPI server analogous to polars-dataframe, but the
load step uses pl.scan_parquet (LazyFrame), so the parquet file remains
needed at query time — the load script does NOT delete hits.parquet.
data-size returns the on-disk parquet size since a LazyFrame has no
materialized in-memory size.

Both systems now expose the standard install/start/check/stop/load/query/
data-size scripts and a 4-line benchmark.sh shim, removing the old
benchmark.sh / run.js / query.py / formatResult.js paths.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…use in query
Per review: clickhouse-local persists table metadata in its --path dir, so
the CREATE TABLE only needs to run once during ./load. ./query just runs the
query against the persisted table.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…atively
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
… readiness
Per review (alexey-milovidov): clickhouse start leaves the system in the
desired state (server running) even when it returns non-zero with "already
running". Make the shared driver tolerate non-zero from ./start and rely on
bench_check_loop as the authoritative readiness signal. This lets per-system
start scripts stay simple — they just need to make a best-effort attempt to
launch.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
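The resulting driver-side sequence, sketched (loop shape illustrative; the real logic lives in lib/benchmark-common.sh):

```bash
# Best-effort start: "already running" may exit non-zero, and that's fine.
./start || true

# ./check, not start's exit code, decides readiness.
deadline=$((SECONDS + 300))
until ./check > /dev/null 2>&1; do
    if [ "$SECONDS" -ge "$deadline" ]; then
        echo "system did not become ready within 300s" >&2
        exit 1
    fi
    sleep 1
done
```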
prmoore77 added a commit to gizmodata/ClickBench that referenced this pull request (May 7, 2026):
…ouse#860)
Adopts the per-system 7-script interface from ClickHouse#860 for gizmosql/,
and replaces the Java sqlline-based gizmosqlline client with the C++
gizmosql_client shell that ships with gizmosql_server.

Scripts (matching the contract from lib/benchmark-common.sh):
  benchmark.sh - 4-line shim that exec's ../lib/benchmark-common.sh
  install      - apt + curl gizmosql_cli_linux_$ARCH.zip; no openjdk, no
                 separate gizmosqlline download
  start        - idempotent server bring-up (skips if port 31337 is open)
  check        - cheap TCP probe (auth-gated SQL would need credentials)
  stop         - kills tracked PID; pkill belt-and-braces fallback
  load         - rm -f clickbench.db, then create.sql + load.sql via
                 gizmosql_client; deletes hits.parquet and syncs
  query        - reads one query from stdin, runs via gizmosql_client with
                 .timer on + .mode trash; emits fractional seconds as the
                 last stderr line (parsed from "Run Time: X.XXs")
  data-size    - wc -c clickbench.db

Notes:
- BENCH_DOWNLOAD_SCRIPT=download-hits-parquet-single, BENCH_RESTARTABLE=yes
  (gizmosql is a server, so per-query restart neutralizes warm-process
  effects, matching the clickhouse/postgres pattern in ClickHouse#860).
- util.sh now exports GIZMOSQL_HOST/PORT/USER/PASSWORD - the env vars
  gizmosql_client reads natively, so query/load can call gizmosql_client
  with no flags. The server still receives the username via --username.
- PID_FILE moved to a stable /tmp path (was /tmp/gizmosql_server_$$.pid,
  which broke across the start/stop process boundary in the new layout).

This PR depends on ClickHouse#860 (which introduces lib/benchmark-common.sh
and the contract). Once ClickHouse#860 lands, this PR's diff against main
will be only the gizmosql/ files.

Validated locally on macOS with gizmosql v1.22.4: the query script produces
the expected fractional-seconds last line with correct stdout/stderr
separation, and exits non-zero on error paths.

See https://docs.gizmosql.com/#/client for gizmosql_client docs.
Resolves merge conflicts:
- Removed cedardb/run.sh, gizmosql/run.sh — superseded by the standard
query interface; the refactor branch already replaced them.
- Restored datafusion{,-partitioned}/make-json.sh, doris{,-parquet}/get-result-json.sh
with main's dated-results version. These are independent post-run JSON
builders, still referenced from the per-system READMEs.
- Kept the thin benchmark.sh shim in gizmosql/, spark-{auron,comet,gluten}/,
trino/. Per-system result-JSON auto-save (added on main while this branch
was in flight) is intentionally not carried over: under the new interface,
result.csv is the single timing artifact and JSON construction belongs in
separate tooling.
- gizmosql/{install,load,query,util.sh}: merge auto-took main's switch from
gizmosqlline (Java) to gizmosql_client (CLI shipped with the server),
but the refactor branch's load/query still referenced GIZMOSQL_SERVER_URI
and GIZMOSQL_USERNAME. Updated install to drop openjdk + gizmosqlline,
load to use gizmosql_client (and stop the server first to release the
database file), and query to drive gizmosql_client with .timer/.mode trash
and parse "Run Time:" instead of "rows selected (... seconds)".
…-system layout
These four entries were added on main while this branch was in flight (the
existing trino/ scripts here were a memory-connector stub that never worked
end-to-end). Rebuild each one against the new install/start/check/stop/load/
query/data-size contract so they share lib/benchmark-common.sh:
- trino, trino-partitioned: Hive connector + file metastore + local Parquet
  hardlinked into data/hits/ (matches main's working impl from PR #856).
- trino-datalake{,-partitioned}: same, plus the AnonymousAWSCredentials shim
  to read clickhouse-public-datasets/hits_compatible/athena from anonymous S3
  (the published bucket size is reported by data-size since the data is read
  on demand). BENCH_DOWNLOAD_SCRIPT="" — no local dataset to fetch.
- benchmark.sh in all four becomes a 4-line shim. Old run.sh deleted.
…r-system layout
These four entries were added on main while this branch was in flight.
Adapt them to the install/start/check/stop/load/query/data-size contract:
- presto, presto-partitioned: Hive connector + file metastore + local Parquet
hardlinked into data/hits/.
- presto-datalake{,-partitioned}: same plus the AnonymousAWSCredentials shim
(compiled in a throwaway trinodb/trino container, since the prestodb image
ships only a JRE) so the hive-hadoop2 plugin can read the public bucket
anonymously. BENCH_DOWNLOAD_SCRIPT="" — schema-only load against S3.
Each benchmark.sh becomes a 4-line shim. Old run.sh deleted.
These two entries were added on main while this branch was in flight. Adapt
to the install/start/check/stop/load/query/data-size contract:
- BENCH_DOWNLOAD_SCRIPT="" — the vortex bench binary fetches Parquet and
  converts to .vortex on first invocation.
- BENCH_RESTARTABLE=no — embedded Rust CLI; per-query restart would dominate
  query time.
- query: stages stdin into a temp queries-file and passes -q 0, since the
  bench binary addresses queries by index rather than reading SQL on stdin.
- The single variant uses the `clickbench` binary (vortex 0.34.0); the
  partitioned variant uses `query_bench clickbench` (vortex 0.44.0).
Old run.sh deleted.
Quickwit was added on main while this branch was in flight. Adapt to the
install/start/check/stop/load/query/data-size contract:
- BENCH_QUERIES_FILE="queries.json" — Quickwit accepts Elasticsearch-format
  JSON queries via the /_elastic compat API, not SQL. queries.json holds one
  ES query per line; queries not expressible in Quickwit are encoded as the
  literal "null".
- BENCH_DOWNLOAD_SCRIPT="" — the load script fetches hits.json.gz directly
  (there is no shared download-hits-json helper) and pipes it through
  `quickwit tool local-ingest`, since v0.9's sharded ingest-v2 endpoint caps
  single-node throughput at a few MB/s.
- BENCH_RESTARTABLE=yes — relies on the common driver's per-query restart to
  flush Quickwit's fast_field_cache and split_footer_cache (the result
  caches are already disabled in node-config.yaml).
- query: returns non-zero for "null" queries so the framework records null
  in the per-query timing array; otherwise reports .took (ms → seconds).
Old run.sh deleted.
The original used /tmp/gizmosql_server_$$.pid, where $$ is the calling
process's PID. That worked when benchmark.sh sourced util.sh and called
start/stop in the same shell, but under the new per-system layout each of
start, stop, load, and query sources util.sh in its own subshell — so
stop_gizmosql couldn't find the PID file written by start_gizmosql. Use a
fixed path under the system directory instead.

Also expose wait_for_gizmosql so callers (like load) can wait for readiness
without restarting.
Conflict only in gizmosql/benchmark.sh — kept the thin shim. Main switched
gizmosql to the official one-line installer (PR #879); fold that into
gizmosql/install so we stop hand-detecting arch and downloading the zip.

Other changes auto-merged: quickwit/index_config.yaml gained tag_fields on
CounterID + record:basic on text fields (PR #886), and assorted result JSONs
for ClickHouse Cloud / Citus / Cratedb / etc.
start/stop scripts may emit progress lines (clickhouse-server prints PID
table tracking, sudo's chown invocation, postgres's startup messages, etc.).
With BENCH_RESTARTABLE=yes those scripts run before every query, so their
output interleaves with the parseable [t1,t2,t3] / Load time / Data size
lines and breaks the cloud-init log POST to play.clickhouse.com.

Redirect both stdout and stderr from ./start and ./stop to /dev/null at the
three call sites in lib/benchmark-common.sh. The check loop is the
authoritative readiness signal, so losing start's output costs nothing in
steady state; for debugging, run ./start manually outside the driver.
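The silenced call sites then look roughly like this:

```bash
# Lifecycle noise goes to /dev/null so only the parseable benchmark lines
# ([t1,t2,t3], Load time, Data size) reach the captured log.
./stop  > /dev/null 2>&1 || true
./start > /dev/null 2>&1 || true
bench_check_loop   # readiness is still verified via ./check
```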
The DuckDB installer at install.duckdb.org drops the binary into ~/.duckdb/cli/latest/duckdb and only suggests adding that directory to PATH. Previously each install attempted a per-user symlink into ~/.local/bin, which silently no-ops when that directory isn't on PATH (default for root in cloud-init). The result was ./check failing for 300s with no useful error. Symlink to /usr/local/bin/duckdb via sudo right after install instead; that's on PATH for every user, and the symlink is itself idempotent.
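The fix is essentially a one-liner:

```bash
# /usr/local/bin is on PATH for every user (including root under cloud-init),
# and ln -sf makes repeated installs a no-op.
sudo ln -sf "$HOME/.duckdb/cli/latest/duckdb" /usr/local/bin/duckdb
```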
Ubuntu's docker.io ships the docker CLI without the v2 compose plugin, so
the existing `command -v docker` short-circuit skipped installation on boxes
that already had docker but no `docker compose`. ./start then ran
`docker compose up -d`, which silently failed, and ./check timed out at
300s. Fall back to docker-compose-v2 for the Ubuntu package name.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Throughput variant of ClickBench. N connections (default 10) hold open
sessions, and each picks a uniformly random query from the standard 43-query
set; the run goes for a fixed wall-clock window (default 600s) after a
warmup. Reports completed queries, QPS, latency p50/p95/p99, and per-query
mean.

Backends: ClickHouse over HTTP (stdlib http.client), StarRocks over the
MySQL wire protocol (pymysql). Each backend uses its system's recommended
path, so neither pays a wire-format penalty the other doesn't.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…ned}/query: pass query via temp file
`python3 - <<'PY' ... PY` directs the heredoc into python3's stdin so the
interpreter can read its program from there. Once the heredoc is fully
consumed, sys.stdin (the same FD) is at EOF — so sys.stdin.read() inside the
heredoc returned an empty string, and chdb / hyper / sail dutifully ran the
empty query and reported ~0.000s for every try.

Stage stdin into a temp file in bash before invoking the heredoc and pass
the path as argv[1]; the python script reads the query from that file.

Also include result materialization in the timing window for chdb/query and
chdb-parquet-partitioned/query (move `end = ...` past fetchall / str(res)) —
the timer was previously stopped before the result was realized, which would
have under-counted query time even when the stdin bug wasn't masking it
entirely.
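The failure mode and the fix, sketched:

```bash
# Broken: the heredoc *is* python3's stdin, so by the time the program runs,
# that FD is at EOF and the "query" is always empty.
python3 - <<'PY'
import sys
query = sys.stdin.read()   # always "" here; the heredoc was already consumed
PY

# Fixed: stage the real stdin into a temp file, pass the path as argv[1].
QUERY_FILE=$(mktemp)
cat > "$QUERY_FILE"
python3 - "$QUERY_FILE" <<'PY'
import sys
query = open(sys.argv[1]).read()   # the actual benchmark query
PY
```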
Right now ./check stderr is silently dropped while the loop retries for
300s, then we report "did not succeed within 300s" with no clue why. For
deterministic failures (missing env var like YT_PROXY for chyt, an install
step that didn't run, etc.) the user wastes 5 minutes and still has to dig
through the per-system check script to find out what happened. Capture the
last attempt's stderr and print it on timeout.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
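A sketch of the capture (names illustrative):

```bash
# Keep the most recent ./check stderr so a timeout is diagnosable.
err_file=$(mktemp)
deadline=$((SECONDS + 300))
until ./check > /dev/null 2> "$err_file"; do
    if [ "$SECONDS" -ge "$deadline" ]; then
        echo "check did not succeed within 300s; last stderr was:" >&2
        cat "$err_file" >&2
        exit 1
    fi
    sleep 1
done
```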
The upstream install path assumes RHEL/Rocky/Alma — yum, grubby, SELinux,
the wheel group, /data0. On Ubuntu/Debian the prereqs phase silently
half-completes (several `|| true` skips), the gpadmin user is sometimes not
created, and db-install would later die at `yum install -y go`. Either way
./check times out at 300s with no diagnostic. Bail with a clear "needs yum"
message before doing anything destructive, and call out the requirement in
the README.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Cloud-init runs scripts as root with HOME unset. Tools that follow
XDG-ish conventions then fall over: the GizmoSQL one-line installer
exits at line 32 with "HOME: parameter not set" (it runs under `sh -u`),
duckdb-vortex's `INSTALL vortex` writes to /.duckdb/extensions/... and
later fails to find it ("Extension /.duckdb/extensions/v1.5.2/..."),
and duckdb-datalake{,-partitioned} queries crash 43 times each with
"Can't find the home directory at ''" while autoloading httpfs.
Each affected install script tried to paper over this locally with
`export HOME=${HOME:=~}`, but the export only lives for that script —
the sibling load/query scripts the lib runs in fresh subprocesses still
see HOME unset. Set it once here so every per-system step inherits it.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
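In cloud-init the fix is roughly a one-liner (the fallback value is an assumption; cloud-init runs everything as root):

```bash
# Set HOME once, before any per-system script runs, so install/load/query
# subprocesses all inherit a sane value.
export HOME="${HOME:-/root}"
```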
apt's monetdb5-sql post-install creates /var/lib/monetdb as the monetdb
user's home dir, so the existing `if [ ! -d /var/lib/monetdb ]` guard
skipped `monetdbd create` and left the dbfarm uninitialized. ./check then
looped 300s on `mclient: cannot connect: control socket does not exist` and
the run died.

Probe the dbfarm marker file (.merovingian_properties) instead of the
directory, and explicitly `monetdbd start` after create — both are
idempotent, and a daemon that's already up just no-ops.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
paradedb/paradedb:0.10.0 (the prior pin) was rotated out of Docker Hub —
docker pull returned "manifest not found" and ./check timed out. The oldest
tags still hosted are 0.15.x, so move both directories onto a real
Postgres-version-specific tag (latest-pg17) that paradedb still maintains.
This unblocks the image pull.

NOTE: paradedb dropped its pg_lakehouse / parquet_fdw extension after 0.10.x
(the parquet_fdw_handler() function no longer exists), so create.sql still
needs to be reworked away from the foreign-table approach for queries to
succeed end-to-end. That's a separate change.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The prior URL (qa-build.oss-cn-beijing.aliyuncs.com
selectdb-doris-2.1.7-rc01) returned 404 — SelectDB stopped publishing free
standalone tarballs once the product moved fully to a managed-cloud
offering. VeloDB (the company that now stewards SelectDB) hosts the official
Apache Doris release binaries instead, which are functionally what SelectDB
ships today. Pin to the current stable (4.0.5) and use the symmetric
$dir_name path layout that doris/install already uses, instead of the
hardcoded selectdb-doris-2.1.7 segment.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The previous install used `add-apt-repository ppa:greenplum/db`, but that
PPA was unpublished and the upstream greenplum-db/gpdb repo was archived in
2024 — apt-get update fails with "ppa 'greenplum/db' not found" and there's
no other pre-built path to a working Greenplum on Ubuntu.

Drop the native install entirely and use the community-maintained
woblerr/greenplum:7.1.0-ubuntu22.04 Docker image, which bundles a
single-node Greenplum 7 cluster. This means the
install/start/check/stop/load/query/data-size scripts all run against a
container at localhost:5432 with the gpadmin user and `demo` database.
Loading via gpfdist relied on the native install's GPHOME layout, so swap it
for plain `COPY hits FROM '/tmp/hits.tsv'` after `docker cp`-ing the TSV in
(and drop the now-unused EXTERNAL TABLE from create.sql).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…ALIAS)
chyt's check/load/query/data-size scripts hard-require YT_PROXY (and read
YT_TOKEN, CHYT_ALIAS) to talk to the remote YTsaurus cluster — see
chyt/README.md for the demo-cluster setup. Cloud-init had no mechanism to
carry those through, so the bench loop just timed out for 300s on "YT_PROXY:
YT_PROXY is required" before the run-time-error fix landed.

Add a `@runtime_env@` placeholder to cloud-init.sh.in and have
run-benchmark.sh substitute it with `export VAR=...` lines for any of the
listed vars that are set in the operator's shell. Anything unset collapses
to an empty line, so unrelated systems are unaffected. Use awk + printf %q
so values with shell-special characters round-trip cleanly.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
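A sketch of the substitution-building side, using bash indirection where the actual patch uses awk (variable list as above):

```bash
# Build the @runtime_env@ replacement: one `export VAR=value` line per
# variable set in the operator's shell; %q keeps shell-special characters
# intact across the round trip.
runtime_env=""
for var in YT_PROXY YT_TOKEN CHYT_ALIAS; do
    if [ -n "${!var:-}" ]; then
        runtime_env+="export $(printf '%s=%q' "$var" "${!var}")"$'\n'
    fi
done
```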
Hologres is a managed cloud service — the host only needs a psql client to
talk to it. The previous `yum install postgresql-server` excluded
Debian/Ubuntu/etc. and installed an unnecessary server. Pull a postgres
docker image once and drop a `psql` shim into ./bin/ that wraps
`docker run`, then prepend ./bin to PATH so the rest of the script and
run.sh (including `command time -f '%e' psql ...`, which bypasses bash
function lookup) call psql normally.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Cloudberry's install assumed a RHEL-family host (yum, grubby, the wheel
group, /data0, sshd-on-localhost). Wrapping it in a Rocky 9 container makes
the benchmark portable to any host that runs docker.

install does the heavy lifting: pulls rockylinux:9, installs all the build
deps (with --allowerasing for the libcurl/libcurl-minimal conflict, plus
procps-ng / iputils / libyaml-devel / xerces-c-devel / pgdb that the
upstream list happens to need on a fresh Rocky 9), builds Cloudberry 1.5.3
from source (capped at -j16 so the link phase doesn't OOM big boxes), sets
up gpadmin + sshd + ssh-key auth, sets cap_net_raw on /bin/ping
(gpinitsystem pings localhost), then runs gpinitsystem. The container uses
bridge networking (not --network host) so the container's own port 22
belongs to its sshd, not the host's — otherwise gpinitsystem's
ssh-to-localhost lands on the host's sshd. Port 5432 is published.

start/stop/check/load/query are thin docker-exec adapters around the
existing gpadmin shell flow. load streams hits.tsv via `tar -h | docker exec
-i` so any host-side symlink to the dataset is dereferenced (docker cp
leaves dangling symlinks pointing at host paths).

Tested end-to-end on Ubuntu 26.04: install → start → load (hits.tsv,
99,997,497 rows) → check → query Q1/Q2/Q5 → stop → start → check.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
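The symlink-safe load stream amounts to (container name assumed):

```bash
# tar -h dereferences host-side symlinks while archiving, so the container
# receives the real file rather than a dangling link (unlike `docker cp`).
tar -chf - hits.tsv | docker exec -i cloudberry tar -xf - -C /tmp
```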
ParadeDB removed pg_lakehouse / parquet_fdw upstream after the 0.10.x line,
so the parquet-partitioned variant has no working replacement on any current
ParadeDB image. Annotate the existing 2024-07-13 result with a "historical"
tag and an explanatory comment, and replace the install script with a
fail-fast that refuses to run — that prevents future runs from silently
producing broken or non-comparable numbers under the same system label. The
single-table paradedb/ directory is being reworked separately to exercise
pg_search instead.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The previous benchmark loaded data via pg_lakehouse / parquet_fdw —
both of which were dropped from ParadeDB after 0.10.x. With no parquet
path left, lean on pg_search (the BM25-indexed full-text extension that
remains ParadeDB's distinguishing feature) instead:
- benchmark.sh: switch BENCH_DOWNLOAD_SCRIPT to download-hits-tsv.
- create.sql: CREATE EXTENSION pg_search and a regular Postgres hits
table (postgresql/'s schema).
- load: server-side `COPY hits FROM '/tmp/hits.tsv'` (TSV is docker
cp'd into the container first), VACUUM ANALYZE, then build a BM25
index over the text columns (URL, Title, SearchPhrase, Referer).
Index built post-load to avoid maintenance overhead during COPY.
- queries.sql: identical to postgresql/queries.sql — with proper
TIMESTAMP/DATE columns the parquet-era casts are no longer needed.
- data-size: pg_total_relation_size('hits') instead of the parquet
file size.
- template.json: drop "column-oriented" / "Parquet, single", become
plain "ParadeDB" with row-oriented + search tags reflecting what
the system is now.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…asses
Two follow-on bugs from the previous monetdb fix surfaced in the next run:
the install gate on `.merovingian_properties` is unreliable across MonetDB
packaging variants (the 15:58 UTC run skipped `monetdbd create` silently and
only worked because the package auto-started monetdbd), and the refactored
./check / ./start use raw `mclient -u monetdb`, which prompts for a password
the lib's check loop has no way to answer. The original benchmark
side-stepped both via the query.expect wrapper.

Make install always try `monetdbd create` + `monetdbd start` (both silently
no-op when already done) and stamp /root/.monetdb with the default
monetdb-user credentials. Pass `-P monetdb` to mclient in start/check so
they work even when $HOME is unset and the dotfile isn't found. Point
data-size at /var/lib/monetdb (the dbfarm the install actually creates)
instead of the obsolete /var/monetdb5/ path.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The 12:43 UTC run hit two distinct races. databend-meta and databend-query
were nohup'd back-to-back, so query's first dial of the raft port (9191)
regularly raced meta's bind, query exited, and the bench loop polled a dead
process for 300s. Wait for /dev/tcp on 9191 to accept before starting query.

The next run made it through warm-up but blew up mid-bench with `No file
descriptors available (os error 24)` from the object-store layer; bash's
default 1024 fd limit is well below what databend-query opens during ingest
+ concurrent reads. Bump to 65536 in start.

Finally, the original benchmark waited 600s for the HTTP API to appear; the
refactor's lib defaults to 300s. Restore the longer timeout via
BENCH_CHECK_TIMEOUT in benchmark.sh.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
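The two start-side fixes, sketched (host and polling style illustrative):

```bash
# Wait for databend-meta's raft port to accept before launching databend-query.
until (exec 3<>/dev/tcp/127.0.0.1/9191) 2>/dev/null; do
    sleep 0.2
done

# Raise bash's default 1024-fd ceiling for the object-store layer.
ulimit -n 65536
```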
Each of server / worker-* / daemon-manager waits for its dependency via
`curl --retry-max-time 120 --max-time 120` — only two minutes for
fdb→tso→server to come up on a fresh c6a.4xlarge. The 14:00 UTC run hit
exactly that: server's curl gave up before tso was ready, server exited, and
./check returned `service "server" is not running` until the bench loop's
300s timeout fired. Bump every wait to 600s, and raise BENCH_CHECK_TIMEOUT
to 1200s so the bench loop's budget covers the full bring-up.

Two adjacent fixes in the same area:
- ./start used to run hdfs/create_users.sh after `sleep 5`, racing the
  namenode container — move it into ./load, which only runs after ./check
  has confirmed the whole stack is up.
- hdfs/create_users.sh used unguarded `hdfs dfs -mkdir`, which fails on the
  second BENCH_RESTARTABLE iteration when /user/clickhouse already exists.
  Switch to `mkdir -p` and drop the redundant first line.

Drop the obsolete `version: "3"` key from docker-compose.yml (Compose v2
ignores it and warns).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The four download-hits-{tsv,csv,parquet-single,parquet-partitioned}
helpers used to live at the repo root next to ~100 system directories,
which made the top-level harder to navigate. They're shared library
code, so they belong with lib/benchmark-common.sh.
lib/benchmark-common.sh now resolves BENCH_DOWNLOAD_SCRIPT relative to
LIB_DIR. The handful of legacy monolithic benchmark.sh files (and two
doris loads) that called the helpers directly via ../download-hits-*
are updated to ../lib/download-hits-*.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
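Conceptually, the resolution change in the lib:

```bash
# Resolve the download helper next to the lib itself, so shims keep
# referring to it by bare name in BENCH_DOWNLOAD_SCRIPT.
LIB_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
if [ -n "${BENCH_DOWNLOAD_SCRIPT:-}" ]; then
    "$LIB_DIR/$BENCH_DOWNLOAD_SCRIPT"
fi
```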
Spark + Velox via Apache Gluten, using the existing spark-gluten/ shape
(install/start/stop/check/load/query, lib-based benchmark.sh shim).

Differs from spark-gluten/ only in:
- query.py pins the Gluten backend to "velox" via
  spark.gluten.sql.columnar.backend.lib, so the benchmark name reflects the
  engine actually doing the work (Gluten can in principle also use the
  ClickHouse backend).
- README.md and template.json identify the system as "Spark (Velox)".

Brings the unmerged add-spark-velox branch onto the refactor branch in the
per-system-script-interface format. ARM builds still need a custom
Gluten/Velox bundle (upstream limitation, called out in README).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Both files were byte-identical to the canonical query set except for the
missing final newline, which was enough to make them register as their own
variant. Adding \n folds them back into the canonical hash (45 → 47 systems
on it; total distinct variants 42 → 41).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Summary
Splits every local system's benchmark.sh into 7 single-purpose scripts
(install, start, check, stop, load, query, data-size) with a stable
contract, driven by a new shared lib/benchmark-common.sh.

Why
Previously, every system's benchmark.sh bundled installation, server
lifecycle, dataset download, data loading, and query dispatch into one
script — and run.sh hard-coded the per-query orchestration. There was no
programmatic per-query entry point, and run.sh ran all 3 tries inside a
single CLI invocation, so OS-cache warmth from try 1 leaked into tries 2/3.

The new per-system interface
install      env prep + system install (idempotent)
start        start daemon (idempotent; empty for stateless tools)
check        trivial query (e.g. SELECT 1); exit 0 iff responsive
stop         stop daemon (idempotent)
load         runs create.sql + loads data, deletes source files, sync
query        SQL on stdin; result on stdout; runtime in fractional seconds
             (e.g. 0.123) on the last line of stderr; non-zero exit on error
data-size    prints data footprint in bytes (one integer to stdout)

Each system's benchmark.sh becomes a 4-line shim that sets a couple of env
vars and exec's the shared driver:
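A representative shim, with illustrative values (each system sets its own):

```bash
#!/bin/bash
export BENCH_DOWNLOAD_SCRIPT="download-hits-tsv"   # which dataset helper to run
export BENCH_RESTARTABLE="yes"                     # servers restart between queries
exec ../lib/benchmark-common.sh
```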
The shared driver runs install → start+check → download → load (timed) → for
each query: flush caches; if BENCH_RESTARTABLE=yes, stop+start; run query 3×
→ data-size → stop. The output log shape (Load time:, [t1,t2,t3] per query,
Data size:) is identical to the old benchmark.sh, so cloud-init.sh.in's POST
to play.clickhouse.com keeps working unchanged.

BENCH_RESTARTABLE=no for embedded CLIs (duckdb, sqlite, datafusion, …) and
dataframe wrappers — restarting a single CLI/Python process between queries
would dominate query time. For these, OS caches are still flushed between
queries.
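The per-query portion of that flow, sketched (simplified; the real lib also handles logging and result capture):

```bash
set -o pipefail
while read -r query; do
    sync
    echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null   # flush OS caches
    if [ "$BENCH_RESTARTABLE" = "yes" ]; then
        ./stop  > /dev/null 2>&1 || true
        ./start > /dev/null 2>&1 || true
        bench_check_loop
    fi
    times=""
    for try in 1 2 3; do
        # The contract puts the runtime on stderr's last line; stdout (the
        # result) is discarded. A non-zero exit records null for that try.
        t=$(echo "$query" | ./query 2>&1 >/dev/null | tail -n 1) || t="null"
        times+="$t,"
    done
    echo "[${times%,}]"
done < queries.sql
```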
Scope

Refactored (88 systems):
Not refactored (intentionally out of scope):
Validated end-to-end on a 96-core / 185 GB ARM machine
Queries a system cannot run are recorded as null (framework's error path
works). All 88 refactored systems pass bash -n and have executable bits set
on the 7 scripts + benchmark.sh.

Bug fixes surfaced during validation
- lib/benchmark-common.sh: data-size now runs before stop (clickhouse and
  pandas need the server up to report size).
- clickhouse/start: idempotent (was erroring when already running).
- duckdb/load, sqlite/load: rm -f hits.db / mydb for idempotent reruns.
- postgresql/load: -v ON_ERROR_STOP=1 so COPY data errors actually fail the
  script instead of silently rolling back.
- BENCH_DOWNLOAD_SCRIPT may now be empty for systems that read directly from
  S3 datalakes / remote services (clickhouse-datalake*, duckdb-datalake*,
  chyt, …).

Flagged for follow-up review
- duckdb-memory — :memory: semantics force a per-query reload; will inflate
  timings vs. the original single-process flow.
- cloudberry, greenplum — multi-phase install (reboot between phases); the
  shim only runs phase 1.
- sirius — GPU-dependent; long-lived duckdb CLI subprocess proxy; review the
  stdin/sentinel protocol.
- paradedb*, pg_ducklake, pg_mooncake — Docker container created in install,
  then docker cp in load (small divergence from the original
  `docker run -v ...` due to the lifecycle order: start runs before
  download).

Test plan
bash -n on all 88 systems' scripts
🤖 Generated with Claude Code