Cap DuckDB compaction memory at 2 GB (#758) #762

Merged

erikdarlingdata merged 1 commit into dev from fix/758-compaction-memory-limit
Mar 30, 2026
Conversation

@erikdarlingdata (Owner)

Summary

  • Set memory_limit = '2GB' on the in-memory DuckDB connection used for parquet archive compaction
  • Set preserve_insertion_order = false to further reduce memory during COPY operations
  • DuckDB automatically spills to disk when the limit is hit, so compaction still works correctly

Root cause: DuckDB's default memory_limit is 80% of physical RAM. Decompressing 800 MB of parquet archives (the user reported a 230 MB query_store_stats archive alone) easily expands to 5-7 GB in memory.
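The two settings amount to a pair of DuckDB pragmas run on the compaction connection right after it opens. A minimal sketch of that configuration (the surrounding connection code lives in the host application and is not shown here):

```sql
-- Cap working memory for the in-memory compaction connection.
-- Past this limit DuckDB spills to temporary files on disk instead
-- of growing toward the default (80% of physical RAM).
SET memory_limit = '2GB';

-- Compaction has no meaningful row order, so skip the buffering
-- DuckDB otherwise does to preserve insertion order during COPY.
SET preserve_insertion_order = false;
```

Disk spilling makes compaction slower in the worst case, but bounded memory is the right trade-off for a background maintenance task.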

Companion to #760 which fixed the long-running event handler and delta cache leaks.

Test plan

  • Builds clean
  • Launched Lite, startup footprint dropped from 349 MB to 258 MB
  • Open/close tabs stable, no growth

🤖 Generated with Claude Code

The in-memory DuckDB connection used for archive compaction was using the
default memory_limit (80% of physical RAM), causing multi-GB spikes when
decompressing large parquet archives. With 800 MB of compressed archives,
this easily hits 5-7 GB.

Set memory_limit=2GB so DuckDB spills to disk instead of consuming all
available RAM. Also set preserve_insertion_order=false to reduce memory
pressure since compaction has no meaningful row order.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
