Commit fa7fb19
The in-memory DuckDB connection used for archive compaction ran with the
default memory_limit (80% of physical RAM), causing multi-GB spikes when
decompressing large Parquet archives. With 800 MB of compressed archives,
memory use easily hits 5-7 GB.
Set memory_limit=2GB so DuckDB spills to disk instead of consuming all
available RAM. Also set preserve_insertion_order=false to reduce memory
pressure since compaction has no meaningful row order.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Parent: 8d682ee
1 file changed, 8 insertions(+), 0 deletions(-)
Diff body not captured in this extract; the 8 added lines follow original line 311, occupying new lines 312-319.
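Since the diff body is missing, the snippet below is only a sketch of what the change plausibly looks like: it applies the two settings named in the commit message to an in-memory DuckDB connection via the Python duckdb client. The helper name open_compaction_connection is hypothetical and not from the repo; only memory_limit=2GB and preserve_insertion_order=false come from the commit message.

```python
import duckdb

def open_compaction_connection() -> duckdb.DuckDBPyConnection:
    # Hypothetical helper; the actual file and function names are not in the diff.
    con = duckdb.connect(":memory:")
    # Cap DuckDB's memory so decompressing large Parquet archives spills to
    # temporary disk files instead of spiking toward 80% of physical RAM.
    con.execute("SET memory_limit = '2GB'")
    # Compaction has no meaningful row order, so drop insertion-order
    # preservation to reduce buffering and memory pressure.
    con.execute("SET preserve_insertion_order = false")
    return con
```

With the cap in place, operations exceeding 2 GB spill to DuckDB's temporary directory; for in-memory databases recent DuckDB releases default this to .tmp, and it can be redirected with SET temp_directory.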