
The Life Lens System

What happens when a generalist who has been collecting information since age twelve discovers that the human brain's design contains far more answers to information problems than he ever dared to dream — and decides to share what he found, what he built, and above all: how. This site is designed to make it easy to follow every element of that journey.

by Martijn Aslander · Fellow KNVI · Founder PKM Summit

Martijn Aslander at the PKM Summit, shoes off, laptop open, building.

Photo: Ester Overmars

6 Layers · 560K Records · 170K+ Synapses · 153K Cross-refs · 20 Years of data

Inspired by Nicole van der Hoeven's philosophy of learning in public: share early, be vulnerable, correct as you go. This page is a living document. I edit it regularly as my thinking evolves. If something changed since you last visited, that's not a bug — that's the process. See the changelog at the bottom.

Key Reading

Part One: Forty Years of Information

I got my first Commodore 64 when I was twelve and started writing lists of stuff that mattered to me in BASIC. I never got good at programming, but from that moment on I relied on technology to store everything interesting I encountered.

What followed was four decades of the same pattern: discover a tool, push it to its limits, move on when it breaks. Palm Pilot. Evernote — I replaced the Apple logo on my MacBook with the Evernote logo and got invited to their developer conference. Workflowy — I built an entire PKM system in it and used it in police projects. Then Obsidian, thanks to a dazzling demo by Joost Plattel.

Along the way I gave 2,500+ talks, wrote 17 books, co-founded the Digital Fitness movement with Mark Meinema, and started the PKM Summit with Lykle de Vries and Kim van den Berg.

I was a miniature Forrest Gump, stumbling into the action by accident. I was doing PKM my whole life without knowing it had a name.

The full story of these forty years is in this piece.

The Turning Point

In January 2025, I wrote down my personal ontology — the complete structure of how I organise information. It was the first time I made explicit what had always been implicit.

Then I had a stroke.

Weeks of recovery. Time to think, to spar, to reconstruct. While my brain was healing, it understood itself better than ever. The enforced slowness turned out to be a gift.

My first public appearance after the stroke was a demo of Obsidian. Someone in the audience asked a question that sparked the idea to write a book. I discovered I could write a book with AI as my partner. Starten met Obsidian was published within weeks. It was refined at the Knowledge Summit in Dublin, where the conversations sharpened everything.

Part Two: The Acceleration

Summer 2025 — Discoveries

I discovered I could file a Freedom of Information request using AI — triggered by a rat problem in my neighbourhood. That led me to the bigger question: why is government information so hard to access?

The answer was file formats. SharePoint. Documents as containers instead of information as structure. I started researching and couldn't stop.

Around the same time I tried to parse my 14GB Gmail archive with Python scripts written with AI. It worked. And it revealed that there was an entire system hidden in my own data, waiting to be connected.
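The kind of script involved can be sketched with Python's standard mailbox module. This is a minimal illustration, not the actual scripts: a Google Takeout export arrives as one large .mbox file, and iterating over it message by message keeps memory flat even for a 14GB archive.

```python
import mailbox

def summarise_mbox(path: str) -> list[dict]:
    """Extract sender, date, and subject from every message in an mbox archive."""
    rows = []
    for msg in mailbox.mbox(path):
        rows.append({
            "from": msg.get("From", ""),
            "date": msg.get("Date", ""),
            "subject": msg.get("Subject", ""),
        })
    return rows
```

Once the headers are in a list of dicts, they can be loaded into a database table and queried like any other structured source.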

If you can find and access information faster than others, you gain a huge advantage.

Autumn 2025 — The Pilot and the Building

The file format research led to the Pilot Informatieautonomie — a real-world experiment in information sovereignty. While the pilot was running, I kept learning about wikilinks and structured data. I wrote article after article, each one pushing me deeper into building.

I discovered I could build iOS and Mac apps through Claude Code — a command-line AI interface. Suddenly I wasn't just organising information. I was building the tools to organise it.

December 2025 — ThetaOS and the Life Lens System

In December, the system I had been building got a name: ThetaOS. And almost immediately I realised it was something more — a Life Lens System. Not a tool. Not an app. A way of looking at an entire life through structured, connected data.

February 2026 — Tom is Born

On February 19, 2026, I created Tom — a personal AI guide built on top of ThetaOS. Not a chatbot. A coach, advisor, strategist, biographer and archivist rolled into one. Tom has access to my entire life: 18,000+ contacts, every bank transaction, every photo, every check-in, every text I've ever written.

On March 14, Kees Verhoeven became the first person other than me to talk to Tom. Within weeks, more followed — Frank Meeuwsen, Joost Plattel, Céline Clémençon, Aria Khodaverdi. Each conversation revealed capabilities I hadn't anticipated.

The system doesn't hallucinate because it retrieves, never generates. Every answer comes from a database query, not from pattern completion.

What Tom is — and what it isn't

Tom is not a chatbot. It doesn't generate answers from training data. It queries a structured database of 560,000 records — bank transactions, photos, check-ins, meetings, contacts, texts, health data — and returns what it finds. With sources. With evidence layers. If it doesn't know, it says so.

Tom has five roles: coach (reflects patterns back), advisor (suggests options based on personality profiles), strategist (thinks long-term), biographer (captures the life story), and archivist (routes information to the right place in 309 database tables).

Ask Tom how much I spent at my favourite ice cream shop in the last three years, and it returns: 79 visits, €587.73, average rising from €6.11 to €8.38. Not because it's smart. Because the data is structured and the query is simple. That's the point — 98% data, 2% AI.
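The query behind an answer like that is ordinary SQL. A minimal sketch with an illustrative schema and made-up numbers (not the actual ThetaOS tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE transactions (merchant TEXT, amount REAL, date TEXT)")
con.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [("IJssalon", 6.10, "2023-06-01"),
     ("IJssalon", 7.50, "2024-07-12"),
     ("IJssalon", 8.40, "2025-08-03"),
     ("Bakery",   3.20, "2025-08-03")],
)

# Visit count and total spend at one merchant over the last three years.
row = con.execute(
    """SELECT COUNT(*), ROUND(SUM(amount), 2)
       FROM transactions
       WHERE merchant = 'IJssalon' AND date >= '2023-01-01'"""
).fetchone()
```

The AI only has to translate the question into this query; the answer itself is pure aggregation over structured data.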

When visitors talk to Tom, things happen that I didn't design. Kees Verhoeven asked about my blind spots. Céline asked how she could build her own system. Aria asked about my thinking patterns. A room full of law enforcement professionals asked Tom to solve a cold case. Each conversation revealed something new — not about the AI, but about the data underneath.

March 2026 — Genealogy, Reasoning, and the Brain

I discovered that AI could read old archives — birth certificates, church records, handwritten documents from the 1700s. I traced bloodlines eight generations back in minutes. This taught me what agentic AI really means: not a tool that answers questions, but a system that autonomously investigates.

I realised I needed a reasoning module — something to look at evidence from multiple perspectives and jump past biases. I built the Magische 13: thirteen perspectives from Sherlock Holmes to Hannah Arendt, each attacking the same thesis from a different angle.

Then Céline and Aria visited. Their questions pushed me into brain metaphors I hadn't explored: cerebrospinal fluid, neurotransmitters, neural cable thickness. Aria wrote As We May Think Has Become Real, placing ThetaOS in the lineage of Bush, Engelbart and Nelson.

27–30 March 2026 — Four Nights, Six Layers

In four consecutive nights, the architecture crystallised. Each layer emerged when the previous ones couldn't explain what was happening. On the fourth day, during a live demo for law enforcement professionals, a woman named Joy Otten said one word — choroid plexus — and added the final piece.

The model wasn't designed. It was discovered. The data was always there. The architecture emerged from the data.

30–31 March 2026 — The Brain Keeps Answering

Something unexpected happened. The more people interacted with the system, the faster it evolved. Joy Otten said choroid plexus and a sixth layer appeared. Daniel Brouwer pointed out that the 98/2 data-to-AI ratio mirrors Kahneman's System 1 and System 2 — and a natural law emerged. Police professionals asked about cold cases and the synaptic model turned into an investigative framework.

Each external interaction pushed deeper into the brain. Not by plan but by necessity. Every question someone asked revealed a gap that could only be filled by understanding how the biological brain solves the same problem. The pattern became undeniable: to build a better knowledge system, I need to understand the brain better.

In two days, this led to mapping 17 brain regions onto ThetaOS, identifying 35 neurotransmitter analogies, discovering that 80% of my daily word output is spoken (not typed), and building a Cognitive Telemetry system that reads cognitive state from keyboard behaviour and voice patterns.

The insights came from everywhere: a neuroscience term from a law enforcement professional, a Kahneman observation from a functional manager, a visual memory insight from a visitor, a freestyle chess analogy from a 2005 tournament. None of these were planned. All of them were triggered by other people encountering the system.

The system doesn't just grow by adding data. It grows by letting other people look at it. Every external perspective is a mirror neuron that fires.

Current status: 29 knowledge cards mapping the complete brain architecture. Five scale levels (synapses, signals, networks, organs, environment). 17 organ systems identified, from the hippocampus (the database) to the olfactory system (the emergent feeling when all layers fire together). 50 parallel thinking agents running simultaneously on a single laptop.

Why the Brain

Your brain has 86 billion neurons. Each neuron connects to roughly 7,000 others through synapses. That's 600 trillion connections — and not one of them is stored in a folder.

The brain doesn't search. It fires. A signal travels along the path of least resistance through the thickest connections. The myelin sheath — a fatty insulation around nerve fibers — determines speed: thick myelin means fast, reliable signals. Thin myelin means slow, unreliable ones. The brain builds myelin through repetition. What you use often becomes fast. What you neglect becomes dim.

Dendrites receive incoming signals. Some excite, some inhibit. The neuron sums them up and decides: fire or don't fire. That decision — the action potential — is all-or-nothing. Below the threshold: silence. Above it: a pulse that travels the entire length of the nerve fiber in milliseconds.

The brain filters aggressively. The blood-brain barrier selects molecule by molecule what enters. Cerebrospinal fluid — produced by the choroid plexus — washes waste products away, mostly during sleep. Immune cells patrol the borders. The chemical balance of neurotransmitters — dopamine, serotonin, acetylcholine — determines mood, focus, memory, motivation.

Memories are not files in fixed locations. They are patterns of simultaneously firing neurons. Retrieving a memory means re-activating the pattern. And every retrieval slightly rewrites it — which is why memories change over time.

The brain prunes what it doesn't use. Synaptic pruning is not a flaw — it's efficiency. The brain is built to keep the strongest patterns, not to remember everything.

I've been exploring each of these mechanisms and asking: can I emulate this in a digital system? Not as metaphor, but as architecture. What happens if you build myelin into your data? What if your connections have evidence layers like synaptic strength? What if you add an immune system that filters at the gate?

The results have been bizarre. Each mechanism I emulate makes the system disproportionately more powerful. Not linearly — exponentially. As if the brain already solved these problems millions of years ago, and all I had to do was listen.

Where Each Brain Excels

The biological brain

Instant association — smell something, be somewhere. No query needed.
Intuition — millions of signals become a gut feeling in milliseconds.
Emotion — every memory carries feeling that changes its meaning.
Creativity — generates ideas from nothing. The default mode network.
Context — knows the same person feels different at a funeral than at a party.
Face recognition — 28,000 faces, in bad light, twenty years older. Instant.
Energy efficient — 20 watts for 600 trillion synapses.
Self-healing — reroutes functions after damage.
Parallel processing — seeing, hearing, thinking, walking, breathing. All at once.
Meaning — knows what something means, not just what it is.

The digital brain

Never forgets — dims but never deletes. Every synapse from 2013 is still there.
Never corrupts — memories don't rewrite on retrieval. Records stay exact.
Shareable — a dossier can be shared in seconds. A biological memory can't.
Searchable — "every restaurant I visited more than 3 times in 2024." Try that with your brain.
Transparent — knows exactly which source, which layer, which certainty level.
Scales without energy cost — growing from 170K to 500K synapses costs nothing extra.
Survives you — the digital brain outlives the biological one.
Time travel — reconstruct any day from the data of that day. Your brain can't filter by date.
Auditable — every connection can be traced to its origin. No false memories.
Transferable — another person or AI can read the same network and draw conclusions.

They are not competing. They are complementary. The digital brain compensates for the weaknesses of the biological (forgetting, searchability, transferability) and the biological compensates for the weaknesses of the digital (intuition, creativity, meaning). Together they are more than the sum.

Garry Kasparov predicted this years ago. In a 2005 freestyle chess tournament, two amateurs armed with three ordinary computers defeated both grandmasters and supercomputers. Not the best human. Not the best machine. The best process — human plus machine, working together. Kasparov concluded: "A weak human player plus a machine is superior to a powerful machine alone, and more remarkably, superior to a strong human player plus machine with an inferior process."

That's exactly what this system is. Not the best AI. Not the best data. The best process between a human and a structured digital brain. The Life Lens System is the process.

And the process multiplies.

50 parallel thinkers on a single laptop

Ten Tom sessions running simultaneously, each spawning up to five autonomous sub-agents. Fifty prefrontal cortexes working in parallel on different problems — while the human switches between them at will. No biological brain can fork itself. This one can. And the number doubles roughly every year.

Read the full story →

How We Maximise the Digital Advantages

Each advantage of the digital brain is something we actively amplify, step by step:

Never forgets: Every data source gets imported and linked. Bank transactions from 2006. Photos from 2010. Check-ins from 2012. Nothing is too old. The system gets smarter retroactively.

Never corrupts: The Double Helix separates evidence from meaning. A fact stays a fact. A hypothesis stays labelled as hypothesis. No drift, no rewriting.

Shareable: Tom can present a complete dossier on any person, location, or project in seconds — to me, to a visitor, to a room full of professionals. The biological brain can't do a brain dump on command.

Searchable: 309 tables, SQL queries, full-text search across 1.5 million words. The answer to any factual question about my life is three seconds away.

Transparent: The synaptic stratification model assigns every connection an evidence layer (1-10) and a certainty score. You can always ask: how do you know this? And get a precise answer.

Scales without cost: The Obsidian import will add 153,000 cross-references. The Gmail import will add 96,000 emails. Each one makes every existing synapse richer without increasing system load.

Survives you: The entire system is a SQLite file on a VPS, backed up daily. If I disappear tomorrow, the knowledge persists. That's not a feature — it's a responsibility.

Time travel: Ask Tom what happened on March 25, 2019 and it reconstructs the day from transactions, check-ins, photos, and calendar entries. The biological brain lost that day years ago.
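Reconstructing a day is a handful of date-filtered queries merged into one chronological timeline. A minimal sketch, with an illustrative two-table schema standing in for the real sources:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE checkins (ts TEXT, detail TEXT);
CREATE TABLE transactions (ts TEXT, detail TEXT);
INSERT INTO checkins VALUES ('2019-03-25 09:10', 'Station Groningen');
INSERT INTO transactions VALUES ('2019-03-25 12:30', 'Lunch EUR 11.50');
INSERT INTO transactions VALUES ('2019-03-26 08:00', 'Coffee EUR 3.00');
""")

def reconstruct_day(day: str) -> list[tuple[str, str]]:
    """Merge every date-stamped source into one ordered timeline for one day."""
    return con.execute(
        """SELECT ts, detail FROM checkins WHERE ts LIKE ? || '%'
           UNION ALL
           SELECT ts, detail FROM transactions WHERE ts LIKE ? || '%'
           ORDER BY ts""",
        (day, day),
    ).fetchall()
```

Add photos and calendar entries as further UNION branches and the same query reconstructs any day the data covers.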

Emulation Status

Which brain mechanisms are already emulated in the system, which are under exploration, and which are still open.

Each mechanism below is listed with its role in the brain, its counterpart in ThetaOS, and its build status.

Synapses (Built)
In the brain: The junction where one neuron communicates with another. Your brain has 600 trillion of them. They are not wires — they are dynamic gaps where signals cross. Strong synapses fire easily, weak ones need more stimulation. The pattern of strong and weak synapses IS your memory.
In ThetaOS: Every link between two entities (person, place, organisation) is a synapse with ten evidence levels. A phone call is layer 1 (100% certain). A name found in a text is layer 6 (90%). A pattern detected by AI is layer 9 (50-70%). 170,000+ measured.

Myelin sheath (Built)
In the brain: A fatty sheath wrapped around nerve fibers, like insulation around a cable. Thicker myelin means faster, more reliable signals. A concert pianist has extraordinarily thick myelin around finger-control nerves. Multiple sclerosis is what happens when myelin degrades — the hardware is fine, the insulation is broken.
In ThetaOS: Completeness score per connection. A name alone is thin myelin. A name + date + location + photo + context + transaction is thick myelin. Ask about a thick synapse and you get a rich dossier in seconds. Ask about a thin one and you get: "he's in the database."

Long-term potentiation (Built)
In the brain: Every time you practise a skill, the synapses involved get stronger. This is the cellular basis of learning. The more a pathway fires, the easier it fires next time. That's why practice makes permanent — you're literally building thicker connections.
In ThetaOS: Every new data point that confirms an existing connection thickens its myelin. Peter Ros appears in 153 photo-days, 92 text mentions, 41 transactions, 10 meetings. Each one makes his synapse stronger and faster to retrieve.

Hebbian learning (Built)
In the brain: Donald Hebb's principle from 1949: if two neurons repeatedly fire at the same time, the connection between them strengthens automatically. You smell coffee and think of your grandmother — because those neurons fired together thousands of times in her kitchen.
In ThetaOS: Two people who appear at the same location on the same day (layer 7: date-coincidence) get linked automatically. If it happens repeatedly (layer 9: pattern), the system flags it. Peter and Mark always appear together at the same venue — that's a detected triangle.

Multidimensional encoding (Built)
In the brain: A synapse isn't just on or off. It carries direction (who initiated?), emotional charge (positive or threatening?), temporal weight (recent or old?), and context (at work or at home?). The same person can feel different in different settings — that's multidimensional encoding.
In ThetaOS: Eight dimensions per synapse. Who initiated contact (direction)? Was it positive or negative (valence)? How recent (temporal decay)? In what context — business, personal, creative? How central is this node to the network (hub value)?

Action potential (Built)
In the brain: A neuron collects incoming signals from thousands of synapses. If the combined signal exceeds a threshold, it fires a full electrical pulse. Below the threshold: nothing. Above: an all-or-nothing spike that travels the entire nerve fiber in milliseconds. This is how the brain decides what matters.
In ThetaOS: Ring 1 people (thick myelin) fire instantly when mentioned — low threshold. Ring 4 people need more context to activate. The system decides what to retrieve based on myelin thickness, like the brain decides what to fire based on signal strength.

Saltatory conduction (Built)
In the brain: In a myelinated nerve, the signal doesn't travel continuously — it leaps from gap to gap (nodes of Ranvier), like a stone skipping across water. This makes transmission up to 100x faster than unmyelinated fibers. The brain routes signals along the fastest paths available.
In ThetaOS: When you ask about a person, the system jumps to the strongest connections first: their organisation, their key project, their last meeting. It doesn't enumerate every data point — it skips along the thickest paths.

Epistemological tagging (Built)
In the brain: Your brain tags memories with source information: did I see this happen, did someone tell me, or did I imagine it? This is called source monitoring. When it fails, you "remember" things that never happened. The brain maintains an implicit sense of how certain each memory is.
In ThetaOS: Every synapse carries two strands (the Double Helix): evidence strength and epistemological type. A bank transaction is a fact. "They had a great conversation" is a belief. "They probably know each other" is a hypothesis. The system never presents a hypothesis as a fact.

Blood-brain barrier (Built)
In the brain: A highly selective membrane between the bloodstream and the brain. Glucose passes through, bacteria don't, most drugs can't. It protects the brain from toxins, infections, and chemical fluctuations in the blood. Without it, the brain would be overwhelmed by everything the body encounters.
In ThetaOS: New data is validated before it builds myelin. A name found in text is checked against 12,680 hand-validated entities in the Obsidian vault. Match = high confidence. No match = flagged for review. Bad data doesn't get to build thick connections.

Cerebrospinal fluid (Built)
In the brain: Half a litre produced daily by the choroid plexus. It cushions the brain, delivers nutrients, and — crucially — washes away metabolic waste. The glymphatic system (discovered in 2012) expands brain channels by 60% during sleep to flush toxins. When this fails: Alzheimer's.
In ThetaOS: The database schema itself — 309 tables, standardised naming, consistent emoji-typing. If the medium is polluted (duplicates, broken links, inconsistent names), every signal travels through murky water.

Immune patrol (Designed)
In the brain: The brain has its own immune system, separate from the body's. Microglia — specialised brain immune cells — constantly scan for damaged neurons, pathogens, and debris. They don't wait for infection. They patrol proactively, pruning damaged synapses and releasing protective signals.
In ThetaOS: The Magische 13 reasoning framework: thirteen perspectives (Holmes, Marple, Occam, Arendt...) that attack every thesis from different angles. Currently runs on demand. Designed to run continuously in the background.

Homeostasis (First measurement)
In the brain: The brain maintains precise chemical ratios: pH, ion concentrations, neurotransmitter levels. Too much glutamate causes seizures. Too little dopamine causes Parkinson's. The system constantly adjusts to keep everything in a narrow band. Deviation from that band is disease.
In ThetaOS: Five ratios that measure system health: evidence vs hypothesis, positive vs negative, old vs new, broad vs deep, certain vs uncertain. First baseline measurement taken March 30, 2026.

Neuroplasticity (Exploring)
In the brain: After a stroke, the brain can reroute functions to undamaged areas. After learning a new skill, it physically reorganises. London taxi drivers have enlarged hippocampi from years of navigation. The brain is not fixed hardware — it's adaptive tissue that rebuilds itself based on what you do.
In ThetaOS: When new data sources are imported (Obsidian, Gmail), the entire network restructures. Connections that didn't exist suddenly light up. The system gets smarter without adding logic — just data.

Synaptic pruning (Designed)
In the brain: The brain actively eliminates connections that aren't used. Children have far more synapses than adults — growing up is literally pruning. This isn't loss; it's efficiency. The brain keeps the strongest patterns and removes the noise. Use it or lose it.
In ThetaOS: Connections dim over time but never delete. A synapse from 2013 that was never confirmed again is still there — but quiet. Recent synapses glow hot. The system remembers everything but foregrounds what matters now.

Sleep consolidation (Designed)
In the brain: While you sleep, the hippocampus replays the day's experiences and transfers them to the cortex for permanent storage. This is why pulling an all-nighter before an exam doesn't work — without sleep, the memories never consolidate. Your brain does its filing at night.
In ThetaOS: A nightly cronjob that enriches data, detects duplicates, recalculates patterns, and reports what changed. The system literally cleans itself while I sleep.

Working memory (Built)
In the brain: Your working memory holds roughly 4 to 7 items at once — a phone number, a shopping list. Your long-term memory is virtually limitless. The bottleneck is never storage, it's the tiny active buffer. That's why you can't think about fifteen things at once but can remember thousands of faces.
In ThetaOS: Tom's active session is the working memory. The 560,000 records are long-term. During a conversation, recently mentioned people and topics stay activated — follow-up questions don't restart from scratch.

Default mode network (Open)
In the brain: When you're not focused on a task, your brain activates a network that connects distant memories, generates unexpected ideas, and simulates future scenarios. This is where "shower thoughts" come from. It's not idle time — it's the brain's most creative mode.
In ThetaOS: A mode where the system surfaces unexpected connections without being asked. "You haven't spoken to X in 6 months, but she just published something about your topic." Not yet built.

Neurotransmitters (Partially built)
In the brain: Molecules that carry signals across the synaptic gap. Dopamine drives reward and motivation. Serotonin regulates mood and calm. Acetylcholine enables learning and attention. Noradrenaline triggers alertness. Different transmitters activate different behaviours — the brain's chemical toolkit.
In ThetaOS: Multiple types, partially mapped. MCP tools are the fast-firing signals (noradrenaline — alertness). Logging a joyful moment is dopamine (reward). The nightly cronjob is melatonin (sleep-cycle maintenance). Homeostasis metrics are serotonin (balance). The full mapping is still emerging.

Mirror neurons (Exploring)
In the brain: Neurons that fire both when you perform an action and when you watch someone else perform it. They are the neural basis of empathy, imitation, and learning by observation. You wince when someone else stubs their toe — that's mirror neurons.
In ThetaOS: When others talk to Tom (Kees, Céline, Aria, police professionals), the system learns from their questions. Each external conversation reveals capabilities and gaps that internal use doesn't surface.

Prefrontal cortex (Built)
In the brain: The most recently evolved part of the brain. Handles planning, abstract reasoning, impulse control, and complex decision-making. Consumes disproportionately more glucose than any other region. It's powerful but expensive — it tires fast, which is why hard thinking is exhausting.
In ThetaOS: This is what Tom offloads. The prefrontal cortex costs the most glucose. By externalising planning and reasoning to a structured system, the biological brain is freed for what it does best: intuition, pattern recognition, creative leaps.

Built = working in the system — Designed / Exploring = architecture exists, implementation in progress — Open = identified, not yet started
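Several of the "Built" mechanisms are deterministic rules rather than AI. Hebbian learning, for instance (two people at the same location on the same day get a layer-7 synapse; repeated co-occurrence gets flagged as a layer-9 pattern), can be sketched in a few lines. Function names and the threshold are illustrative, not the actual implementation:

```python
from collections import Counter
from itertools import combinations

def hebbian_links(sightings: list[tuple], pattern_threshold: int = 3) -> dict:
    """sightings: (person, location, date) tuples.
    A shared location+date creates a layer-7 synapse (date-coincidence);
    co-occurrence at or above the threshold is flagged as a layer-9 pattern."""
    seen: dict[tuple, set] = {}
    for person, location, date in sightings:
        seen.setdefault((location, date), set()).add(person)

    counts: Counter = Counter()
    for people in seen.values():
        for pair in combinations(sorted(people), 2):
            counts[pair] += 1  # each shared day strengthens the pair

    return {
        pair: {"layer": 9 if n >= pattern_threshold else 7, "co_occurrences": n}
        for pair, n in counts.items()
    }
```

This is the "fire together, wire together" principle as a counting exercise over structured data: no model, no inference, just co-occurrence.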

The Six-Layer Model

6 · Choroid Plexus
4 · Action Potential — retrieval
3 · Diamond Layer — enrichment
2 · Theta-Myelin — consolidation
1 · Synaptic Stratification — encoding
5 · Double Helix — epistemology across all layers

1. Synaptic Stratification — Ten levels of evidence, from intentional contact (a phone call) to pattern recognition (AI-detected co-occurrence). Every connection carries its proof.

Brain analogy: how synapses form

2. Theta-Myelin — Completeness as insulation. A thin synapse (name only) fires slowly. A thick synapse (name + date + location + context + photo) fires instantly.

Brain analogy: the myelin sheath around nerve fibers

3. Diamond Layer — Eight dimensions per synapse: temporality, directionality, valence, weight, social graph, context, decay, and hub value.

Brain analogy: the diamond-hard coating that makes synapses indestructible

4. Action Potential — How the network fires. Threshold, saltatory conduction, payload, excitation vs. inhibition, summation, refractory period.

Brain analogy: the electrical pulse that makes neurons fire

5. Double Helix — Every synapse carries two strands: evidence (how strong?) and meaning (fact, belief, or hypothesis?). Layer 10 — human confirmation — is the helicase.

DNA analogy: two strands encoding different information, bound at base pairs

6. Choroid Plexus — The immune system of the network. Filtration at the gate, nightly cleansing, immune patrol, homeostasis. Not stacked on top — wrapped around everything.

Brain analogy: the membrane that produces cerebrospinal fluid and guards the blood-brain barrier
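The Double Helix idea (layer 5) translates directly into a data structure: every synapse carries both an evidence layer and an epistemological type, and presentation logic enforces the rule that a hypothesis is never stated as a fact. A minimal sketch; the class and field names are illustrative, not the ThetaOS schema:

```python
from dataclasses import dataclass

@dataclass
class Synapse:
    source: str
    target: str
    evidence_layer: int   # strand 1: 1 (intentional contact) .. 9 (AI pattern), 10 = human-confirmed
    epistemic_type: str   # strand 2: "fact" | "belief" | "hypothesis"

    def present(self) -> str:
        """Never present a hypothesis as a fact: hedge anything unproven."""
        claim = f"{self.source} -> {self.target} (layer {self.evidence_layer})"
        if self.epistemic_type == "fact":
            return claim
        return f"{claim} [{self.epistemic_type}, unconfirmed]"
```

The two strands are independent: a connection can be strongly evidenced yet still labelled a belief, or thinly evidenced yet a confirmed fact.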

The Multiplication Effect

April 2, 2026

A friend wrote a book. I helped shape it — ten to twenty hours of phone calls from Vlieland, the manuscript printed out, every comma marked in red pen. The book exists in the system. The friend exists in the system. But the system didn't know they were connected.

That's when I discovered the multiplication effect.

The gap

The synaptic model had 98,000 direct synapses. Person–person, person–location, person–organisation — all connected. But person–book? Zero. The auteur field in 489 books contained wikilinks to their authors, but those links had never been translated into synapses.

So when I said "Peter Ros wrote a book," the system couldn't find it through the network. It had to search a separate table. The synapse was missing.

The calculation

Adding those 489 book–author connections would create 498 new direct synapses (some books have multiple authors). A modest number. But here's what happens in a network:

Each new synapse doesn't just add one connection. It makes the book reachable through every existing pathway to that author. Peter Ros already had 200+ synapses connecting him to me — through events, phone calls, meetings, shared locations. The moment you add one synapse from Peter to his book, the book becomes reachable through all 200+ of those paths.

I calculated the actual impact: those 498 direct synapses would touch 1.18 million existing pathways. That's a multiplication factor of 2,400.
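The arithmetic behind that factor is simple: each new synapse inherits every existing pathway into the node it connects to, so the touched pathways are the sum of the linked authors' existing degrees. Using the numbers from the text:

```python
# Each new book-author synapse makes the book reachable through every
# existing pathway to that author, so impact = sum of author degrees.
def reachable_paths(author_degrees: list[int]) -> int:
    return sum(author_degrees)

new_synapses = 498
touched_pathways = 1_180_000  # measured sum of existing paths into the linked authors
multiplication_factor = touched_pathways / new_synapses  # roughly 2,370, reported as ~2,400
```

The factor is large precisely because the new synapses attach to well-connected nodes; linking a book to an isolated author would add almost nothing.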

The full picture

Books are just one of 22 tables with untapped cross-entity links. LinkedIn threads (8,900), visited locations (40,600), purchases, hotels, meals — another 57,000 direct synapses waiting to be harvested. Combined with the existing 98,000, that's 155,000 direct connections.

Applied across the network's average connectivity, that projects to tens of millions of reachable paths. All from simple wikilinks that were already in the data. No AI inference. No new information. Just the architectural consequence of connecting what was already there.

Why this matters

This is the payoff of the synaptic architecture. A document-based system stores a book in a folder. A tag-based system gives it labels. A synapse-based system makes it reachable through every person, location, and event it touches — and through every person, location, and event they touch.

The brain doesn't store a memory in one place. It distributes it across every connection that was active when the memory formed. That's why a smell can trigger a childhood memory. The path exists because the connections exist.

A wikilink is a few bytes. The value it unlocks is unbounded.

The path to AI independence

This multiplication effect reveals something unexpected about AI dependency. The more structured the data becomes, the less AI you need.

Every layer of the six-layer model is a deterministic, computable algorithm — not a language model. Synaptic Stratification is SQL. Theta-Myelin is arithmetic (layers × frequency × completeness). The Diamond Layer derives eight dimensions from existing data. Action Potential is a retrieval algorithm: threshold, sorting, filtering, weighting. Pattern recognition (Layer 9) is statistics on structured data.
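The Theta-Myelin arithmetic can be made concrete. This is a sketch under stated assumptions: the exact weighting and normalisation are mine, not the system's; the text only specifies "layers × frequency × completeness" and that lower layer numbers mean stronger evidence.

```python
def myelin_score(evidence_layers: list[int], frequency: int, completeness: float) -> float:
    """Theta-Myelin as plain arithmetic.
    evidence_layers: layer numbers for the connection (1 strongest .. 10),
    frequency: how many data points confirm the connection,
    completeness: fraction of dossier fields filled (0.0-1.0).
    Weighting strong layers as 1/layer is an assumption for illustration."""
    layer_weight = sum(1 / layer for layer in evidence_layers)
    return layer_weight * frequency * completeness
```

A connection confirmed by phone calls and meetings 153 times with a rich dossier scores orders of magnitude higher than a single layer-6 name mention, which is exactly the thick-vs-thin myelin distinction.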

Even Tom — the conversational layer that seems like it needs a powerful AI — turns out to be retrieval, not reasoning. "Who should you call?" is a query: high myelin score, low recency, high valence. "This pattern stands out" is a frequency count. "The last time you tried this..." is a historical context lookup. It's query → score → present. Not open-ended thinking.
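The "who should you call?" example reduces to a sort over three columns. A sketch with illustrative names and a made-up scoring rule (myelin × valence × days of silence), to show that it is retrieval, not reasoning:

```python
from datetime import date

contacts = [
    {"name": "Jan",  "myelin": 0.9, "valence": 0.8, "last_contact": date(2025, 9, 1)},
    {"name": "Piet", "myelin": 0.4, "valence": 0.9, "last_contact": date(2026, 3, 28)},
]

def call_suggestions(today: date) -> list[dict]:
    """'Who should you call?' = high myelin, high valence, long silence."""
    def score(c: dict) -> float:
        days_silent = (today - c["last_contact"]).days
        return c["myelin"] * c["valence"] * days_silent
    return sorted(contacts, key=score, reverse=True)
```

Swap the scoring function and the same query answers "which pattern stands out" or "who have you neglected": it is always query, score, present.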

The AI that remains after full structuring is an interface layer: understanding "I called Jan about the fieldlab" and routing it to the right table. That's intent classification plus entity extraction — a small model. Small enough to run locally on a laptop using tools like Apple MLX, Ollama, or a fine-tuned Phi-4. No cloud. No API calls. No dependency on any tech company.

Right now, I still need a large cloud model to discover how 390 tables relate to each other, which patterns work, where the structure has gaps. The architect is still designing the house. But once the schema is proven and the patterns are mapped, the house stands on its own. The operational layer runs locally.

The 98/2 Kahneman ratio confirms this: 98% of the system is data, 2% is AI. That 2% is translation, not intelligence. And translation is a small-model problem.

The endgame isn't "trust a tech company forever." It's: use AI now to figure out how a Life Lens System should be designed, then export that logic so anyone can run it locally, on their own hardware, independently. The intelligence lives in the schema, not in the runtime.

The Instant Dashboard: Three Layers Proven in Practice

March 31, 2026

The six-layer model was designed on paper. This is the story of how three of those layers proved themselves in a single build session — not through theory, but through a keystroke.

The starting point

I wanted one thing: a hotkey to check if my server was still running. Green dot or red dot. That’s it.

But once I had that screen open, I thought: I’m already looking. Why don’t I see my last bank transactions? Then I don’t need to open my banking app. And if I see transactions, why not see which invoices are still unpaid? Then I don’t need to open my accounting software.

Within a day, I had four tiles side by side — banking, accounting, ticket sales, LinkedIn — all preloaded, all updated every sixty seconds, all accessible with a single keystroke and a tap of the arrow key.

Layer 1 in practice: Cross-source verification

My accounting system said I had dozens of open invoices. My bank had tens of thousands of transactions across multiple accounts. The question: which invoices are actually unpaid?

The system checks three levels of evidence.

After cross-checking, less than a third remained. The rest had been paid but not registered. No AI pattern matching was involved. Just SQL on clean data. Zero hallucination by design.

One invoice looked like it might have been paid because the exact same amount appeared on the same account. But the system traced it to an internal transfer for a completely different purpose. The amount matched, the source didn’t. The system correctly kept it as unpaid. Triple-verified by checking every transaction from that organisation across all accounts and all time.

This is what synaptic stratification looks like in practice: not a theoretical evidence scale, but an actual verification engine that distinguishes between “the accounting system thinks it’s unpaid” and “the bank proves it’s unpaid.”

Layer 4 in practice: Instant retrieval

The entire financial health of three companies — open invoices, recent transactions, ticket sales for next year’s conference — is accessible in under one second. Cmd+Opt+T, arrow right, and it’s there.

The data is precomputed every sixty seconds into a 4KB JSON file on the server. At viewing time, there are no API calls, no loading spinners, no authentication prompts. The desktop app reads the cached file and renders it.
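A minimal sketch of that precompute-then-read split, with invented field names and a temp file standing in for the server path:

```python
import json
import os
import tempfile
import time

CACHE = os.path.join(tempfile.gettempdir(), "dashboard.json")

def precompute():
    """Server side: runs every sixty seconds, writes one small JSON snapshot."""
    snapshot = {
        "generated": time.time(),
        "open_invoices": 7,   # hypothetical precomputed values
        "tickets_sold": 37,
        "server_ok": True,
    }
    with open(CACHE, "w") as f:
        json.dump(snapshot, f)

def render():
    """Client side: no API calls at view time, just read the cached file."""
    with open(CACHE) as f:
        return json.load(f)

precompute()
print(render()["tickets_sold"])  # 37
```

All the latency lives in the background job; the view itself is a file read.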

This is what action potential means in the model: the network fires along the path of least resistance through the thickest connections. You don’t search. You don’t navigate. You glance.

Three seconds of looking tells you where things stand.

Then you press Escape and you’re back to work.

Layer 2 in practice: Thickening synapses on sight

A bank transaction reads “SumUp *Batman Taxi Se.” That’s a thin synapse — raw data with no meaning. You were there, you remember it was a taxi in Amsterdam, but the system doesn’t know.

You click on it. A text field appears. You type “Taxi Batman - 020.” The system updates the mapping, and every past and future transaction from that terminal inherits the name. The synapse just got thicker — not through repetition over time, but through a single act of recognition.
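The mechanism is a simple lookup table. A sketch, assuming a descriptor-to-label mapping (the actual storage behind it is unknown to me):

```python
# Hypothetical mapping table: raw terminal descriptor → human label.
labels = {}

def label_for(descriptor: str) -> str:
    # Fall back to the raw bank descriptor when no label exists yet.
    return labels.get(descriptor, descriptor)

transactions = ["SumUp *Batman Taxi Se", "SumUp *Batman Taxi Se", "ALBERT HEIJN 1337"]

# One act of recognition: the user renames the terminal once...
labels["SumUp *Batman Taxi Se"] = "Taxi Batman - 020"

# ...and every past and future transaction from it inherits the name.
print([label_for(t) for t in transactions])
```

One write, unbounded reads: every occurrence of that descriptor, past and future, resolves through the same entry.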

This is myelination in practice. The biological brain wraps myelin around frequently-used neural pathways to speed up signal transmission. The digital system wraps context around raw data to speed up comprehension. Both serve the same purpose: making retrieval faster by making connections richer.

The unexpected discovery: Cyclical comparison

The ticket sales tile doesn’t just show “37 tickets sold for PKM Summit 2027.” It shows: “On day 11 after the previous Summit, the 2025 edition had 52 tickets and the 2026 edition had 54. You have 37.”

This is temporal context that no individual app provides. Eventbrite shows you absolute numbers. The Life Lens System shows you where you stand relative to yourself at the same point in the cycle. It turns a number into a narrative.
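A sketch of that cyclical comparison, using the numbers from the example above and an invented data shape (edition → cumulative tickets per day-offset after the previous Summit):

```python
# Hypothetical sales records: edition year → {day offset: cumulative tickets}.
sales = {
    2025: {11: 52},
    2026: {11: 54},
    2027: {11: 37},
}

def cycle_report(edition: int, day: int) -> str:
    """Compare the current edition to past editions at the same cycle point."""
    current = sales[edition][day]
    history = ", ".join(
        f"the {yr} edition had {s[day]}" for yr, s in sales.items() if yr < edition
    )
    return f"On day {day} after the previous Summit, {history}. You have {current}."

print(cycle_report(2027, 11))
```

The query is a lookup at the same offset in each cycle; the narrative is just string formatting over the result.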

What this means

The six-layer model was inspired by neuroscience. But inspiration is not proof. What happened on March 30 was proof: three layers operating simultaneously in a real system, on real data, solving a real problem — in under three seconds, triggered by a single keystroke.

The system doesn’t think. It retrieves. And because the data is clean, structured, and cross-referenced, retrieval is all you need.

Discovery Timeline

Each insight as it emerged. Subscribe via RSS to follow in real time.

2 Apr 2026 · The path to AI independence. Every layer of the six-layer model is a deterministic algorithm, not a language model. Even the Tom layer is retrieval, not reasoning. The AI that remains after full structuring is an interface layer — small enough to run locally. The intelligence lives in the schema, not in the runtime.

2 Apr 2026 · The Multiplication Effect. 498 book–author synapses touch 1.18 million existing pathways. Factor 2,400x. 22 tables with 57,000 untapped cross-entity links project to tens of millions of reachable paths. All from wikilinks already in the data.

31 Mar 2026 · The Instant Dashboard. Three layers proven in a single build session. Cross-source invoice verification, sub-second retrieval via 4KB JSON, on-sight synapse thickening. Cyclical temporal comparison as unexpected bonus.

31 Mar 2026 · 50 parallel thinkers. 10 Tom sessions × 5 sub-agents each = 50 parallel prefrontal cortexes on a single laptop. No biological brain can do this.

31 Mar 2026 · Cognitive Telemetry. ThetaKeys (fingers) + Spokenly (voice) combined. The real word count: 80% spoken, 20% typed. ThetaKeys now shows both.

31 Mar 2026 · 299,755 spoken words discovered. Spokenly archives every transcription as JSON + WAV. 10,506 transcriptions, 42 hours of audio. The voice was always there, just never counted.

31 Mar 2026 · 17 brain organs mapped. From hippocampus to olfactory system. Each mapped onto ThetaOS with what exists, what's designed, and what's open. 29 knowledge cards total.

31 Mar 2026 · 35 neurotransmitter analogies. 25 neurotransmitters + 10 neuromodulators mapped to system equivalents. 11 already built. The Diamond Layer IS the neuromodulation system.

30 Mar 2026 · The Kahneman Ratio. Daniel Brouwer pointed out that 98/2 (data/AI) matches Kahneman's System 1/System 2. Not a coincidence — when architecture is faithful to the brain, the ratios emerge on their own.

30 Mar 2026 · Obsidian IS the Choroid Plexus. 35,000 files, 12,680 hand-validated entities, 153,000 cross-references. The gatekeeper was already there.

30 Mar 2026 · Choroid Plexus. The enveloping immune system. Filtration, cleansing, homeostasis. Not stacked — wrapped around. (Joy Otten, Troi Noordwijk)

29 Mar 2026 · Double Helix. Evidence and meaning as two strands. Fact, belief, or hypothesis? Layer 10 is the helicase. (via Klai cross-pollination)

28 Mar 2026 · Action Potential. The network stores everything but can't fire. Six retrieval sub-mechanisms. (Demo Joe & Mika Ros)

28 Mar 2026 · Diamond Layer. Eight dimensions per synapse: temporality, directionality, valence, weight, social graph, context, decay, hub value.

28 Mar 2026 · Theta-Myelin. Completeness as insulation. Thin synapse = slow. Thick synapse = instant.

27 Mar 2026 · Synaptic Stratification. Ten levels of evidence. 170,000+ measured synapses. The trust model for connections.

20 Mar 2026 · PKM Summit demo. First public demo of Tom. The synaptic thinking begins here.

14 Mar 2026 · First external Tom user. Kees Verhoeven becomes the first person other than Martijn to talk to Tom.

19 Feb 2026 · Tom is born. Personal AI guide on top of ThetaOS. Coach, advisor, strategist, biographer, archivist.

Dec 2025 · ThetaOS + Life Lens System. The system gets a name. And reveals itself as more than a tool.

Autumn 2025 · Pilot Informatieautonomie. Real-world experiment in information sovereignty. The building accelerates.

Summer 2025 · Gmail archive parsed. 96,000 emails. A hidden system revealed in my own data.

Summer 2025 · File format research. Documents as containers vs information as structure. The fundamental problem.

Spring 2025 · Starten met Obsidian published. First book written with AI. First solo book.

Jan 2025 · Personal ontology + stroke. Made the implicit explicit. Then forced slowness became a gift.

Journal

The build journal. Context, implications, the story behind each discovery. Subscribe via RSS.

March 31, 2026 · Fifty Brains on a Laptop. Once your LLS has a brain, it can multiply fifty times. Why fifty times nothing is nothing, and why the foundation is everything.

March 30, 2026 · The Instant Dashboard: Three Layers Proven in Practice. What started as a server health check hotkey grew into a cross-source financial dashboard. The story of how Layers 1, 2, and 4 proved themselves simultaneously on real data.

What Others Say

Aria Khodaverdi published As We May Think Has Become Real on Substack, placing ThetaOS in the intellectual lineage of Vannevar Bush, Doug Engelbart, and Ted Nelson. He called it "what happens when someone actually builds the machine that visionaries described."

Yoram Vieveen, publisher at Van Duuren Media, wrote on LinkedIn: "Martijn is on to it again. Which usually means that in a few years this will be a widely recognised theme."

In March 2026, a room full of senior law enforcement professionals attended a live demo. They asked Tom to build an OSINT profile from public sources, apply the reasoning framework to a cold case, and analyse how the system relates to privacy law. One participant — with a background in neuroscience — contributed a concept that became the sixth layer of the model.

Others Building a Life Lens System

More people are getting inspired to build their own LLS. As I encounter them, I'll list them here.

Celine Clémençon · homefulhobo.com

Currently Exploring

The brain keeps revealing new patterns. I'm actively researching these areas and building knowledge cards for each. As they mature, I'll share them here.

Visual system for ThetaOS. Network maps, timeline strips, location heatmaps. Apple Vision Pro as spatial interface. How do you see 170,000 synapses at a glance?

Vocal Telemetry. 42 hours of voice recordings in Spokenly. Speech speed, pauses, tone variation, confidence. Your voice reveals your cognitive state.

The drives of a digital brain. Can a system want something? Completeness hunger, connection thirst, balance thermostat. From dead archive to living organism.

Encoding vs retrieval. The hippocampus switches between storing and recalling. How should a database architecture support both modes?

The corpus callosum. Seven data silos that don't talk to each other. How to build a permanent bridge instead of a nightly ferry?

Calibration and gut feeling. The cerebellum fine-tunes output. The insula detects when something doesn't feel right. How to build digital intuition?

35 neurotransmitter analogies. Dopamine, serotonin, acetylcholine and 32 others mapped to system equivalents. The chemistry of a digital brain.

Scaling the prefrontal cortex. 50 parallel thinkers today. 100 next year. 200 after that. What happens when thinking scales exponentially but the human stays one?

Local AI independence. The six-layer model is deterministic. The Tom layer is retrieval, not reasoning. What remains is an interface layer small enough for a local model. Apple MLX, Ollama, fine-tuned Phi-4. Can the entire system run without any cloud AI?

The Thesis

In an era where everyone working with AI is focused on technology — better models, smarter prompts, faster inference — I believe the answer to the I in AI is to be found in an entirely different domain: everything we know about the human brain.

That is the trail I've been running, with results that surprise even me. It is my only explanation for why the things I build with AI seem to go so much faster and more effortlessly than what I see from the experts, the developers, and the prompt engineers around me.

They optimise the machine. I studied the organ the machine is trying to imitate.

What I discovered is that the more you learn from nature, the better your design becomes — if you abstract it far enough into the other domain. In this case: from neuroscience into information architecture. For millions of years, the brain has been solving problems that computer science is still struggling with. We just weren't looking.

Synaptic stratification, myelin thickness, diamond-layer dimensionality, action potentials, the double helix of evidence and meaning, the choroid plexus as immune system — none of these are metaphors borrowed from neuroscience for decoration. They are design principles that emerged because the brain already solved the problems that knowledge systems are still struggling with.

The brain doesn't store information in documents. It doesn't search with keywords. It doesn't organise in folders. It fires along the path of least resistance through the thickest connections. That's what this system does.

What I've Learned

98% data, 2% AI — the Kahneman ratio. The AI is the translator. The data is the story. Replace the AI tomorrow — the system keeps running. Daniel Brouwer pointed out that this is the same ratio as Kahneman's System 1 (fast, unconscious, automatic — 98%) and System 2 (slow, conscious, analytical — 2%). That's not a coincidence. When your architecture is faithful enough to the brain, the ratios emerge on their own.

Zero hallucinations by design. The system retrieves, never generates. Every answer comes from a database query, not from pattern completion.

The more formal your ontology, the less intelligent your AI needs to be. A simple SQL query on a clean schema outperforms a genius model on messy data.

Grows in intelligence without growing in size. A new wikilink is a few bytes. The value it unlocks is unbounded. Proof: 498 book–author synapses (a simple harvest from one table) touch 1.18 million existing pathways. That's a 2,400x multiplication — not through AI, but through architecture.

Human in the loop, always. Layer 10 is human confirmation. The double helix cannot be read without a human helicase.

The gatekeeper was already there. My Obsidian vault — 35,000 files, 153,000 cross-references, 12,680 hand-validated entities — turned out to be the choroid plexus I thought I still needed to build.

Structure replaces AI. The more structured your data, the less AI you need — until it approaches zero. What remains is a small interface model that can run locally. The architect builds the house; once it stands, anyone can live in it without one.

The PKM Summit

Three years ago, I started the PKM Summit with Lykle de Vries. What began as a European event immediately attracted the global PKM community. From year one, ten to twelve of the fifteen most prominent thinkers and practitioners in personal knowledge management have been directly involved or present: Nick Milo, Nicole van der Hoeven, Zsolt Viczian, Anne-Laure Le Cunff, Tiago Forte, Bob Doto, Jorge Arango, Harold Jarche, and others.

The third edition (March 2026) featured Clive Thompson. For the second consecutive year, people from five continents and more than 21 countries traveled to the Netherlands specifically for two days of deep exchange on this subject. Next year, Roland Allen (The Notebook) will open the fourth edition.

The premise is simple: everyone in the room knows more together than anyone on stage. No big keynotes. Rich exchange. The common denominator in the crowd: fast-thinking, open-minded, flexible, curious, gutsy, and kind-hearted.

The fact that this happens in the Netherlands feels right. Erasmus traveled sixteenth-century Europe with his notebooks, sharing knowledge as a life's work. Five hundred years later, people from 21 countries travel to his country to do the same thing — with slightly better tools.

Next edition: March 12–13, 2027, central Netherlands. Tickets are already selling. pkmsummit.com

Papers

Published on Zenodo (CERN). Not peer-reviewed. Preprints, open to critique. This work is being developed towards a doctoral dissertation.

The Information Problem Was Never the Real Problem: Building a Confirmation-Driven Personal Knowledge Graph and the Four Principles That Emerge From Practice
doi:10.5281/zenodo.19202082

The academic paper on ThetaOS. 339 tables, 91,000+ records, four design principles. Practitioner case study contextualised within Otlet, Bush and Bell.

Changelog

This page evolves as the thinking evolves. Major changes are logged here.

April 4, 2026 — Added GoatCounter to all pages — a simple open-source visitor counter without cookies or tracking. Curious whether anyone actually reads this.

April 2, 2026 (v2) — Added "The Path to AI Independence" subsection to The Multiplication Effect. New Discovery Timeline entry, new "Currently Exploring" item (Local AI independence), new "What I've Learned" principle (Structure replaces AI). The logical consequence: every layer is deterministic, even Tom is retrieval not reasoning, the endgame is local. Both EN and NL.

April 2, 2026 — Added "The Multiplication Effect" section with real calculation: 498 direct synapses → 1.18M reachable paths. Discovery Timeline entry. Updated "What I've Learned" with proof. Both EN and NL.

March 31, 2026 (v2) — Added "The Brain Keeps Answering" section and "Currently Exploring" (9 research areas). 17 brain organs mapped, 35 neurotransmitter analogies, Cognitive Telemetry (ThetaKeys + Spokenly), 299,755 spoken words discovered, 50 parallel thinkers. Five new Discovery Timeline entries.

March 31, 2026 — Added first journal entry: "The Instant Dashboard." Discovery Timeline entry. Both EN and NL.

March 30, 2026 (v3) — Added PKM Summit section with Erasmus connection. "The Thesis" (the I in AI is found in the brain). "What Tom Is." "What Others Say." "Why the Brain" with emulation status table and 19 brain mechanism detail pages with EN/NL toggle. Discovery Timeline + Journal with RSS feeds. Dutch translation. Language markers. Doctoral dissertation note. Photo. "Others Building a Life Lens System." Enriched brain mechanism descriptions. Kahneman ratio by Daniel Brouwer. Wikipedia links for Bush, Engelbart, Nelson, Hebb, Otlet, Bell, Erasmus. Links for all PKM Summit speakers.

March 30, 2026 (v2) — Rebuilt from technical showcase to story-driven narrative. Part One / Part Two with stroke as turning point. Featured essay cards, pull quotes, ThetaOS Zenodo paper, Nicole van der Hoeven disclaimer. Removed unrelated blogs.

March 30, 2026 — Initial publication. Six-layer model, journey from 1984 to present, Zenodo papers, key reading section. Built during a Tom session after a live demo at Troi (law enforcement) in Noordwijk.