How to Handle Large JSON Files Without Crashing Your Browser

JSON.parse() loads the entire file into memory before returning anything. For small files this is fine. For a 100MB database export, it will exhaust browser memory and crash the tab. This guide covers the practical strategies that actually work when JSON is too large to handle with standard tools.

The Problem: In-Memory Parsing

Every standard JSON parser works by reading the entire input, building a complete in-memory representation, and then returning it. A 100MB JSON file can expand to 500MB–1GB in memory because the in-memory object graph has overhead for each key, value, and reference.

Rule of thumb: expect the parsed representation to take roughly 5–10× the file's size on disk.
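
You can see the expansion directly with Python's tracemalloc module. A minimal sketch, assuming a local large-data.json (any sizable export will do):

import json
import os
import tracemalloc

path = 'large-data.json'  # hypothetical file; substitute any large JSON export

tracemalloc.start()
with open(path) as f:
    data = json.load(f)  # builds the complete object graph before returning
_, peak = tracemalloc.get_traced_memory()

print(f'file on disk: {os.path.getsize(path) / 1e6:.0f} MB')
print(f'peak parse memory: {peak / 1e6:.0f} MB')  # typically several times the file size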

Strategy 1: jq, the JSON Swiss Army Knife

jq is a lightweight command-line JSON processor, ideal for extracting just the slice you need so the full file never has to be opened in a browser. Note that by default jq still parses the whole document into memory (though far more compactly than a browser tab does); for files that won't fit at all, its --stream mode processes the input as a sequence of path/value events instead.

# Pretty-print the first 10 array elements
jq '.[0:10]' large-data.json

# Get all keys of the root object
jq 'keys' large-data.json

# Count array items
jq 'length' large-data.json

# Extract a specific field from every object in an array
jq '.[].email' users.json

# Filter: only objects where age > 30
jq '[.[] | select(.age > 30)]' users.json

# Get only specific fields from each object
jq '[.[] | {name, email}]' users.json

# Convert array of objects to CSV
jq -r '.[] | [.name, .email, .age | tostring] | join(",")' users.json

Strategy 2: Streaming Parsers (Node.js)

The stream-json npm package streams JSON token by token, so memory usage stays constant regardless of file size:

const { createReadStream } = require('fs');
const { chain } = require('stream-chain');
const { parser } = require('stream-json');
const { streamArray } = require('stream-json/streamers/StreamArray');

let count = 0;
chain([
  createReadStream('large-users.json'),
  parser(),
  streamArray()
]).on('data', ({ value }) => {
  console.log(value.email);
  count++;
}).on('end', () => {
  console.log(`Processed ${count} records`);
});

Strategy 3: Streaming Parsers (Python)

ijson iterates over items in a large JSON array without loading the whole file:

import ijson

with open('large-users.json', 'rb') as f:
    for user in ijson.items(f, 'item'):
        print(user['email'])
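
The same iterator makes filtering and aggregation cheap, since only one record is in memory at a time. A sketch mirroring the jq select(.age > 30) example above (the age field is assumed from that example):

import ijson

count = 0
with open('large-users.json', 'rb') as f:
    for user in ijson.items(f, 'item'):
        # same filter as jq's select(.age > 30), but in constant memory
        if user.get('age', 0) > 30:
            count += 1

print(f'{count} users over 30')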

Strategy 4: Split the File into Chunks

Using jq

# Split a 100,000-item array into chunks of 1,000
jq -c '.[]' large.json | split -l 1000 - chunk_
for f in chunk_*; do
  jq -s '.' "$f" > "${f}.json"
done

Using Python

import json

CHUNK_SIZE = 1000

with open('large.json') as f:
    data = json.load(f)

for i in range(0, len(data), CHUNK_SIZE):
    chunk = data[i:i + CHUNK_SIZE]
    with open(f'chunk_{i // CHUNK_SIZE}.json', 'w') as out:
        json.dump(chunk, out, indent=2)
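
The json.load() version above still reads the whole file into memory once, which is fine if the machine has the RAM and you only do it once. If even that fails, the ijson iterator from Strategy 3 can write chunks as it reads. A sketch under the same file layout assumptions:

import json
import ijson

CHUNK_SIZE = 1000

def write_chunk(chunk, index):
    with open(f'chunk_{index}.json', 'w') as out:
        # note: ijson can yield decimal.Decimal for numbers; convert them if json.dump complains
        json.dump(chunk, out, indent=2)

with open('large.json', 'rb') as f:
    chunk, index = [], 0
    for item in ijson.items(f, 'item'):
        chunk.append(item)
        if len(chunk) == CHUNK_SIZE:
            write_chunk(chunk, index)
            chunk, index = [], index + 1
    if chunk:  # write the final partial chunk
        write_chunk(chunk, index)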

Strategy 5: Import into a Database

SQLite with JSON support

-- Load a JSON array into a SQLite table
CREATE TABLE users AS
SELECT
  json_extract(value, '$.id')    AS id,
  json_extract(value, '$.name')  AS name,
  json_extract(value, '$.email') AS email
FROM json_each(readfile('users.json'));
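
Note that readfile() is a helper built into the sqlite3 command-line shell, and it still reads the whole file at once. For exports too large for that, you can stream the import from Python with the standard sqlite3 module plus ijson. A sketch, assuming the id/name/email fields from the example above:

import sqlite3
import ijson

conn = sqlite3.connect('users.db')
conn.execute('CREATE TABLE IF NOT EXISTS users (id INTEGER, name TEXT, email TEXT)')

with open('users.json', 'rb') as f:
    batch = []
    for user in ijson.items(f, 'item'):
        batch.append((user.get('id'), user.get('name'), user.get('email')))
        if len(batch) == 10_000:  # insert in batches to keep transactions manageable
            conn.executemany('INSERT INTO users VALUES (?, ?, ?)', batch)
            conn.commit()
            batch = []
    if batch:
        conn.executemany('INSERT INTO users VALUES (?, ?, ?)', batch)
        conn.commit()

conn.close()

Once the data is in SQLite, repeated queries hit the database instead of re-reading the original file.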

Strategy 6: JSONL (Newline-Delimited JSON)

In JSONL, each line is a complete, independent JSON object, so any tool can process the file one record at a time without ever parsing the whole thing.

# Each line is one JSON object:
{"id":1,"name":"Alice","email":"alice@example.com"}
{"id":2,"name":"Bob","email":"bob@example.com"}

# Process line by line in Python
import json

with open('data.jsonl') as f:
    for line in f:
        record = json.loads(line)
        process(record)  # replace with your own handling
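
If the data is currently one giant array, converting it to JSONL is itself a streaming job. A sketch using ijson (the jq -c '.[]' filter from Strategy 4 does the equivalent on the command line):

import json
import ijson

# Convert a top-level JSON array into one record per line
with open('large-users.json', 'rb') as src, open('data.jsonl', 'w') as dst:
    for record in ijson.items(src, 'item'):
        # note: ijson can yield decimal.Decimal for numbers; convert them if json.dumps complains
        dst.write(json.dumps(record) + '\n')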

Summary: Which Strategy to Use

File Size                  | Best Approach
Under 5MB                  | Browser formatter (use this tool)
5–50MB                     | jq to extract the section you need, then browser tool
50MB–1GB                   | Streaming parser (stream-json / ijson) or jq
Over 1GB                   | Database import (SQLite or PostgreSQL)
Any size, repeated queries | Convert to JSONL or import to database
