Fixing memory issues in n8n is critical for ensuring stability—especially when dealing with large data sets, long workflows, or high execution volumes. Here’s a comprehensive guide to diagnose and fix memory leaks, crashes, or out-of-memory (OOM) errors in n8n:
💥 Symptoms of Memory Issues
- n8n crashes with “Out of Memory” or Killed errors (especially in Docker)
- Workflows become slower over time
- Node execution fails with "heap out of memory"
- Docker container restarts unexpectedly
- System becomes unresponsive during large executions
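If you self-host with Docker, a quick way to confirm that the kernel OOM-killer (rather than an n8n bug) ended the process is to check the container state and the host log. A minimal sketch, assuming the container is named n8n:

```bash
# Did Docker record an OOM kill for this container?
docker inspect --format '{{.State.OOMKilled}}' n8n

# Look for kernel OOM-killer messages on the host
dmesg | grep -i "out of memory"
```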
🧠 Common Causes and Fixes
🧱 1. Large Data Payloads in Memory
Problem:
- Passing huge arrays or objects between nodes
- Using a Function or Merge node on massive data sets
✅ Fix:
- Use SplitInBatches node to break large arrays into smaller parts
- Limit or filter incoming data using query params or Set nodes
- Avoid building large objects in a single Function node
🧪 2. Workflows Returning Too Much Data
Problem:
- Workflow returns massive data (e.g., all rows from Google Sheets, Airtable, etc.)
✅ Fix:
- Set Return All = false and add pagination
- Filter only necessary fields using the Set node
- Avoid passing unnecessary data downstream
🔁 3. Infinite or Unbounded Loops
Problem:
- Loops processing 10,000+ items
- Recursion with no base case
✅ Fix:
- Add loop guards (e.g., IF node to break after N iterations)
- Use SplitInBatches + Wait to spread load across time
- Move large processing logic to Execute Workflow sub-flows
📦 4. Large Executions Stored in DB
Problem:
- Execution data (especially from large executions) is saved to the database and accumulates over time
✅ Fix:
- Disable saving execution data unless needed:
EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
EXECUTIONS_DATA_SAVE_ON_ERROR=all
- Or prune old execution data with:
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168 # in hours (e.g., 7 days)
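If n8n runs in Docker, these settings are passed as container environment variables. A minimal sketch, assuming the official n8nio/n8n image and the default port:

```bash
docker run -d --name n8n \
  -e EXECUTIONS_DATA_SAVE_ON_SUCCESS=none \
  -e EXECUTIONS_DATA_SAVE_ON_ERROR=all \
  -e EXECUTIONS_DATA_PRUNE=true \
  -e EXECUTIONS_DATA_MAX_AGE=168 \
  -p 5678:5678 \
  n8nio/n8n
```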
🐳 5. Docker Memory Limits Too Low
Problem:
- Docker container killed due to memory overuse
✅ Fix:
- Add memory limits when running the container directly:
docker run --memory="2g" --memory-swap="2g" ...
- Or set a limit in docker-compose.yml:
services:
  n8n:
    mem_limit: 2g
- Monitor usage with docker stats or other system tools (see the snapshot command below)
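To check whether the container is actually approaching its limit, a one-off snapshot is usually enough (assuming the container is named n8n):

```bash
# Print current memory/CPU usage once instead of streaming continuously
docker stats --no-stream n8n
```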
⚙️ 6. Node.js Heap Size Too Small (Self-Hosted)
Problem:
- "JavaScript heap out of memory"
✅ Fix:
- Increase Node.js memory allocation:
export NODE_OPTIONS="--max-old-space-size=2048"
n8n start
- Use values like 1024, 2048, or 4096 (in MB), depending on available RAM
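For Docker-based installs, the same option can be passed as an environment variable. A sketch assuming the official n8nio/n8n image; keep the heap size comfortably below the container's memory limit:

```bash
docker run -d --name n8n \
  --memory="4g" \
  -e NODE_OPTIONS="--max-old-space-size=3072" \
  -p 5678:5678 \
  n8nio/n8n
```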
🧼 7. Memory Leak from Long-Running Executions
Problem:
- Long workflows accumulate memory over time
✅ Fix:
- Break workflow into smaller sub-workflows
- Add Wait or Webhook to pause and release memory
- Use external services (e.g., DB, cache) for temp storage instead of passing large JSONs
🧰 8. Use External Queues or DB for Large Payloads
Instead of processing large arrays in-memory:
- Save input data to a DB (e.g., PostgreSQL, MySQL)
- Use a Webhook or Cron to process rows in small chunks
- For messaging: use Redis, RabbitMQ, or Kafka (n8n has built-in nodes for these)
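As a rough illustration of the staging-table idea (the n8n_staging table name and the DATABASE_URL connection string are placeholders), the raw payload is written to PostgreSQL once, and a scheduled workflow reads it back in small pages:

```bash
# Hypothetical staging table for large payloads
psql "$DATABASE_URL" \
  -c "CREATE TABLE IF NOT EXISTS n8n_staging (id BIGSERIAL PRIMARY KEY, payload JSONB, processed BOOLEAN DEFAULT FALSE);"

# A Cron-triggered workflow can then fetch one small chunk per run, e.g.:
psql "$DATABASE_URL" \
  -c "SELECT id, payload FROM n8n_staging WHERE processed = FALSE ORDER BY id LIMIT 200;"
```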
✅ Memory Management Checklist
| ✅ Check | Description |
| ---------------------------------- | ---------------------------- |
| Use SplitInBatches? | Break up big arrays |
| Disable save on success? | Reduce DB memory usage |
| Limit Docker memory? | Allocate 2GB+ RAM |
| Increase heap size? | Use NODE_OPTIONS |
| Avoid heavy Function/Loop nodes? | Refactor to smaller flows |
| Offload to sub-workflows? | Modularize logic |
| Use database/cache for large data? | Offload memory load |
| Monitor usage? | docker stats, htop, logs |