Compare IfcOpenShell geometry output across versions, kernels, and iterator flags. Records per-element metrics (bbox, centroid, V/E/F, area, volume, manifold/closed/genus, vertex hash) and timing into a SQLite DB. Diffs and regressions become SQL queries.
ifcbench/
├── envs.yaml # how to build each IfcOpenShell environment
├── settings.yaml # which (kernel x flags) cells to run
├── envs/ # gitignored; one venv per IfcOpenShell version
├── results/ # gitignored; SQLite DB + raw probe JSONL
├── models.json # manifest of fixtures (sha256, schema, visibility)
├── models/ # gitignored; user-supplied IFC files (nested paths supported)
├── scripts/
│   └── index_models.py  # scan models/, populate models.json
└── src/ifcbench/
    ├── cli.py
    ├── runner.py        # outer driver: spawns probes, ingests JSONL
    ├── probe.py         # inner: runs in the env's venv, stdlib only + ifcopenshell
    ├── metrics.py       # geometry metric extractors
    ├── db.py            # SQLite schema + helpers
    └── config.py        # YAML/JSON loaders
$ pip install -e .

To learn about all capabilities, run ifcbench -h.
Models are mirrored online. A batch of models is described by a JSON index file (a hypothetical example is sketched below). Fetch them with:

$ ifcbench get-models --url https://cloud.thinkmoult.com/models/index.json
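The index format is not documented here; this is a minimal sketch assuming the fields tracked in the models.json manifest (sha256, schema, visibility), with name, path, and url as illustrative guesses:

```json
{
  "models": [
    {
      "name": "duplex",
      "path": "duplex/duplex.ifc",
      "url": "https://cloud.thinkmoult.com/models/duplex/duplex.ifc",
      "sha256": "<hex digest of the file>",
      "schema": "IFC4",
      "visibility": "public"
    }
  ]
}
```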
Configure envs.yaml to declare which testing environments you want to set up. Each entry in envs.yaml produces one venv at envs/<label>/ and supports the following keys (see the sketch after this list):

- install.pip: list of pip install specs (PyPI names, URLs, local wheel paths, source dirs). Pip handles all of them uniformly.
- install.paths: list of directories to prepend to sys.path, for source-tree installs that don't have a clean wheel. These are written into a .pth file in the venv's site-packages so probes pick them up automatically.
- python: the interpreter to base the venv on (must be ABI-compatible with any precompiled wheels).
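A minimal sketch of one entry, assuming a mapping from label to config; the exact layout is defined by the tool, so treat field placement as illustrative:

```yaml
# Hypothetical envs.yaml entry; the label matches the --env flag used later.
ifcopenshell-python-313-0.8.5-linux64:
  python: python3.13                # interpreter the venv is based on
  install:
    pip:
      - ifcopenshell==0.8.5         # any pip spec works: PyPI name, URL, wheel, source dir
    paths:
      - /path/to/ifcopenshell/src   # prepended to sys.path via a .pth file
```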
Then run ifcbench setup-envs to create the environments.
Edit settings.yaml to declare run rows (kernel x settings).
Each entry in settings.yaml is a (kernel, settings, num_threads) cell; one is sketched after the list below.
A "setting" is run against every selected env and every selected model (or a filtered subset).
Two timing modes:

- num_threads > 1: throughput. Per-element times are not meaningful; total wall time per file is.
- num_threads = 1: per-element timing. Slow but attributable.
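A minimal sketch of one cell; the label, kernel, settings, and num_threads fields follow the prose above, while the iterator flag shown is illustrative, not prescriptive:

```yaml
# Hypothetical settings.yaml cell; the label matches the --setting flag used later.
- label: opencascade
  kernel: opencascade        # geometry kernel to exercise
  settings:
    use-world-coords: false  # example iterator flag only
  num_threads: 1             # 1 = attributable per-element timing; >1 = throughput
```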
Do a run. This processes the geometry of all selected models with your environment and setting, and stores the results in the database.
$ ifcbench run --env ifcopenshell-python-313-0.8.5-linux64 --setting opencascade

Query the results:

$ ifcbench failures 1
$ ifcbench compare 1 2
$ ifcbench compare 1 2 --model foo

Export one element's raw mesh, a simple viewable mesh file, and a headless multi-view preview from a chosen benchmark env + setting:
$ ifcbench inspect-shape --env ifcopenshell-python-313-0.8.5-linux64 \
    --setting opencascade --model duplex --id 12345

Artifacts are written under results/inspect/.../ by default:
- mesh.json: structured raw geometry + metrics
- mesh.txt: human-readable V/F/E dump
- mesh.obj: simple 3D mesh for external viewers
- preview.svg: headless 4-view contact sheet
- request.json, stdout.log, stderr.log, result.json: execution diagnostics
inspect-shape defaults to num_threads=1 even if the selected benchmark
setting uses cpu_count, because single-element debugging is easier to reason
about and less brittle that way. Override with --num-threads if needed.
See src/ifcbench/db.py. Per-element row:
- counts: V, E, F, sub-mesh count
- topology: is_closed, is_manifold, euler_char, genus
- integrals: surface_area, volume
- bbox + centroid (vertex-mean and area-weighted)
- hash_quantised: 16-byte fingerprint of the sorted, rounded vertex set
- time_ns: populated when num_threads = 1
- error_kind / error_msg: failures are first-class rows
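Because everything lands in SQLite, diffs and regressions are ordinary queries. A sketch with hypothetical table and column names (the real schema lives in src/ifcbench/db.py):

```sql
-- Hypothetical: elements whose volume drifted between run 1 and run 2.
-- Table/column names are illustrative; check src/ifcbench/db.py for the real ones.
SELECT a.model, a.element_id,
       a.volume AS volume_run1,
       b.volume AS volume_run2
FROM element_results a
JOIN element_results b
  ON b.model = a.model AND b.element_id = a.element_id
WHERE a.run_id = 1
  AND b.run_id = 2
  AND ABS(a.volume - b.volume) > 1e-9;
```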