The Ripple Method

How Context Reads Context

March 28, 2026 · Martijn Aslander

Niklas Luhmann wrote 90,000 note cards by hand over 45 years. Bielefeld University has been transcribing them since 2015 at a rate of 5 cards per hour. After 11 years, they've verified about 27,000 cards to publication standard — roughly a third of the indexed 73,715.

We tried to use AI to read the rest. It didn't work. Then we found something unexpected: the system reads itself.

The problem

30,493 cards in Luhmann's Zettelkasten have no transcribed text. They exist as scanned images of handwritten German — dense, abbreviated, with specialized vocabulary like "Aequivalenzfunktionalismus" and "Abweichungsverstärkung."

We tried throwing AI at it. Here's what happened.

The journey

Attempt 1 — Bare OCR

Send the scan to Claude Vision. "Read this handwritten German text." Result: 163% Character Error Rate (CER). A CER above 100% means the errors outnumber the characters in the reference transcription, so this is worse than outputting nothing at all. The model hallucinates German-sounding words that aren't there.
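For reference, the metric behind these numbers: CER is the Levenshtein edit distance between the model's output and the ground-truth transcription, divided by the ground-truth length. A minimal sketch in pure Python (not the project's code):

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between a and b (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def cer(hypothesis: str, reference: str) -> float:
    """Character Error Rate: edit distance / reference length."""
    return levenshtein(hypothesis, reference) / max(len(reference), 1)
```

A hypothesis full of hallucinated insertions racks up more edits than the reference has characters, which is how CER climbs past 100%.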

Attempt 2 — Vocabulary constraint

Give the model Luhmann's known vocabulary (2.4 million words from the already-transcribed cards) and say: "Only use words from this list." Result: 139% CER. Slightly better, still unusable.

Attempt 3 — Visual few-shot

Send 3 already-transcribed cards as examples: "This is what Luhmann's handwriting looks like. Here's the correct reading. Now read this new card." Result: 149% CER. Worse. The model gets confused by multiple images.

Attempt 4 — Two-pass correction

First pass: raw OCR. Second pass: correct unknown words using vocabulary matching. Result: the model hallucinated corrections, making things worse.
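A deterministic version of that vocabulary-matching second pass can be sketched with the standard library's difflib; the function name and similarity cutoff here are illustrative, not the project's actual code (which let the model itself propose the corrections):

```python
import difflib

def correct_tokens(ocr_text: str, vocabulary: set[str], cutoff: float = 0.8) -> str:
    """Snap out-of-vocabulary tokens to their closest vocabulary word,
    leaving a token untouched when nothing is similar enough."""
    out = []
    for token in ocr_text.split():
        if token in vocabulary:
            out.append(token)
            continue
        match = difflib.get_close_matches(token, vocabulary, n=1, cutoff=cutoff)
        out.append(match[0] if match else token)
    return " ".join(out)
```

Even this deterministic variant shares the failure mode the article describes: when the OCR output is far enough from the truth, the "correction" confidently substitutes a plausible word for a wrong one.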

Attempt 5 — Semantic reconstruction

Stop trying to read individual words. Instead, tell the model everything we know about the card: its number, its section, the text of all neighboring cards, and the topics they discuss. Then show the scan and say: "Given this context, what does this card say?"

Result for card 21-3d5b11z: 27% CER. From 163% to 27%. The card had 8 transcribed neighbors providing rich context.

What happened

The breakthrough wasn't in reading the handwriting better. It was in not needing to read every word.

Card 21-3d5b11z sits in section 21 (Organisationstheorie). Its neighbors discuss Maruyama's theory of deviation amplification. Card 25/4b2m, which it references, discusses morphogenesis. The section is about the genesis of differentiation.

Given all that context, the model doesn't need to decipher every letter. It reconstructs the meaning from the structure — and gets most of the words right in the process.

The system reads itself. Each card makes its neighbors more legible.

The ripple method

This observation leads to a strategy:

Wave 1: The inner ring

Start with untranscribed cards that have the most transcribed neighbors — cards surrounded by known context. These have the highest chance of accurate reconstruction: perhaps 8-10 transcribed neighbors, rich section context, known cross-references.

Wave 2: The growing ring

The cards transcribed in Wave 1 become context for their own neighbors. Cards that were unreachable before now have transcribed cards next to them. The context grows. The ring expands.

Wave 3, 4, 5...

Each wave strengthens the next. The network fills in from the center outward, like a ripple on water. The last cards transcribed — the most isolated ones — have the least context and the lowest accuracy. But by then, most of the network is already mapped.
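Mechanically, the waves are a greedy frontier expansion over the card graph: take every untranscribed card whose count of already-transcribed neighbors meets a threshold, treat those as transcribed, repeat. A sketch, with all names hypothetical:

```python
def ripple_waves(neighbors: dict[str, set[str]],
                 transcribed: set[str],
                 min_context: int = 4) -> list[list[str]]:
    """Order untranscribed cards into waves.
    `neighbors` maps each untranscribed card id to its adjacent card ids
    (position neighbors plus cross-references). Each wave takes the cards
    with at least `min_context` transcribed neighbors, then counts them as
    transcribed so the next wave can reach further out."""
    done = set(transcribed)
    remaining = set(neighbors) - done
    waves = []
    while remaining:
        wave = sorted(c for c in remaining
                      if len(neighbors[c] & done) >= min_context)
        if not wave:          # only isolated cards remain; the ripple stops
            break
        waves.append(wave)
        done |= set(wave)
        remaining -= set(wave)
    return waves
```

Cards that never reach the threshold are exactly the isolated ones the article predicts will come last, and worst.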

This is exactly how Luhmann built the system in the first place — from a core of interconnected cards outward, each new card drawing meaning from its position in the network.

The numbers

| Approach | CER | Context used |
| --- | --- | --- |
| Bare OCR | 163% | None |
| Vocabulary constraint | 139% | Word list only |
| Visual few-shot | 149% | 3 example images |
| Semantic reconstruction (rich context) | 27% | 8 neighbor cards + section + references |

The pattern is clear: the more network context you provide, the better the reading. The relationship isn't linear — there's a threshold. Below 3-4 transcribed neighbors, the context isn't rich enough to constrain the reading. Above 6-8, accuracy improves dramatically.

What this means beyond Luhmann

The ripple method isn't specific to Luhmann's archive. It applies to any collection where:

• Items are partially legible or partially transcribed
• Items are connected to each other (references, links, proximity)
• Context narrows the space of possible interpretations

Historical archives, medieval manuscripts, personal correspondence, even degraded digital files — anywhere that items exist in a network rather than in isolation, context can fill the gaps.

The deeper principle

This connects to something I've been building in my own knowledge system (ThetaOS): the idea that connections compound. Each new link doesn't just connect two nodes — it strengthens every adjacent connection. I call this myelination, borrowing from neuroscience: the more a pathway is used, the faster and more reliable it becomes.

Luhmann's archive is proof of concept. The network structure he built in 1952 is helping us read his handwriting in 2026. Not through magic — through context. Through the accumulated weight of 45 years of cross-references making each card's meaning recoverable from its position in the web.

Luhmann called his Zettelkasten a "communication partner." Seventy years later, it's still communicating — helping us read what he wrote by showing us where he placed it.

Status

The ripple method is currently in testing. We've validated it on 10 ground-truth cards with promising results on high-context cards and predictably poor results on isolated ones. A full run on the 30,493 untranscribed cards would cost approximately $200-600 depending on the resolution and number of waves.

This is ongoing work. If you're interested in the approach — especially if you work on digital humanities, archival transcription, or handwriting recognition — the code is open source.
