I've been thinking about how we accumulate knowledge as engineers. Not the kind stored in training data or vector databases, but the structured, curated kind — the understanding you build over years of reading papers, working on projects, and connecting ideas across domains.
Andrej Karpathy proposed something interesting in April 2026: the LLM Wiki pattern. Instead of throwing documents at a RAG pipeline and hoping retrieval finds the right chunk, you use an LLM as a compiler. It reads sources, extracts entities and concepts, builds interlinked wiki pages with provenance tracking, and maintains them incrementally. Obsidian becomes your IDE. The wiki is your codebase. The LLM is your compiler.
I've packaged this into an AgentSkill and battle-tested it by ingesting 30 Agentic AI reference sources — foundational papers (ReAct, Toolformer, CoALA), industry guides (Anthropic, OpenAI, Google), surveys, and security analyses. The result: 104 interlinked wiki pages compiled from 56 raw sources, with full provenance tracking, automated linting, and delta-based manifests.
## What's In the Skill
The skill gives you a complete compilation pipeline for an Obsidian vault:
- Vault layout: `raw/` for immutable sources, `wiki/` for compiled pages, `_meta/` for conventions, `.wiki-meta/` for machine state
- Ingestion workflow: read source → extract entities, concepts, claims → create wiki pages with `[[wikilinks]]` and `%%from: source%%` provenance markers → track in delta manifest (SHA-256)
- Maintenance tooling: `wiki-lint.sh` for structural health checks (frontmatter, broken links, orphans, stale pages, tag drift), `fix-wikilinks.py` for Obsidian link resolution, `wiki-manifest.sh` for incremental tracking, `wiki-index.sh` for an auto-generated TOC
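To make the delta-manifest idea concrete, here is a minimal Python sketch of what incremental tracking could look like. The function name, the JSON manifest layout, and the return shape are my own illustration, not the actual `wiki-manifest.sh` implementation; the real script may store and diff state differently.

```python
# Hypothetical sketch of delta-based manifest tracking: hash every file
# under raw/ with SHA-256 and diff against the previous manifest to find
# which sources need (re)ingestion. Not the real wiki-manifest.sh.
import hashlib
import json
from pathlib import Path

def hash_file(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def delta(raw_dir: Path, manifest_path: Path) -> dict:
    # Load the previous manifest, if any: {relative_path: sha256_hex}.
    old = json.loads(manifest_path.read_text()) if manifest_path.exists() else {}
    # Hash the current contents of raw/.
    new = {str(p.relative_to(raw_dir)): hash_file(p)
           for p in sorted(raw_dir.rglob("*")) if p.is_file()}
    changed = [f for f, h in new.items() if old.get(f) != h]
    removed = [f for f in old if f not in new]
    # Persist the new snapshot for the next run.
    manifest_path.write_text(json.dumps(new, indent=2))
    return {"changed": changed, "removed": removed}
```

On the first run everything shows up as changed; on subsequent runs only new or edited sources do, which is what lets ingestion stay incremental instead of re-reading the whole `raw/` tree.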
The agent does the heavy lifting. You drop sources into raw/, ask the agent to ingest them, and browse the compiled wiki in Obsidian's graph view. Hubs show central concepts, orphans reveal gaps, clusters show domain structure.
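Orphan detection is simple enough to sketch: a page is an orphan if no other page links to it. The snippet below is a hypothetical stand-in for the relevant `wiki-lint.sh` check (the function name and regex are mine), but it shows the core idea of scanning `[[wikilinks]]` across the vault.

```python
# Hypothetical sketch of orphan detection: collect every [[wikilink]]
# target across wiki/, then report pages that nothing links to.
# Not the real wiki-lint.sh check.
import re
from pathlib import Path

# Capture the link target, stopping before any "|alias" or "#heading".
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def find_orphans(wiki_dir: Path) -> list[str]:
    pages = {p.stem: p for p in wiki_dir.glob("*.md")}
    linked = set()
    for page in pages.values():
        for target in WIKILINK.findall(page.read_text()):
            linked.add(target.strip())
    return sorted(name for name in pages if name not in linked)
```

In graph-view terms, these are exactly the isolated nodes: pages the compiler produced but nothing else references yet, i.e. the gaps worth filling next.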
Everything runs locally — bash, python3, no credentials, no network calls. Scripts are portable across macOS and Linux. ClawHub's security review rates it Benign (high confidence).
The skill is available on ClawHub and works with OpenClaw or any AI coding agent that supports skills (Claude Code, OpenCode, Codex, etc.) — it's just a SKILL.md with shell and python scripts, nothing runtime-specific.
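For readers unfamiliar with the format: a skill is just a directory whose `SKILL.md` carries YAML frontmatter describing when to load it, plus whatever scripts it ships. The sketch below shows the general shape only; the actual field values and file list of this skill are illustrative guesses, not copied from ClawHub.

```markdown
---
name: llm-wiki
description: Compile raw sources into an interlinked Obsidian wiki with provenance tracking.
---

# LLM Wiki

Instructions the agent reads when the skill activates, plus references
to the bundled tooling (e.g. wiki-lint.sh, fix-wikilinks.py).
```

Because that is the whole contract — a markdown file and some scripts — any agent runtime that understands skills can pick it up.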
## The Bet
The compiled wiki has a property that static knowledge bases don't: it improves with use. Good answers become synthesis pages. Contradictions get flagged. Open questions accumulate. The wiki grows not just from ingestion but from interaction.
The real test will be whether it compounds over months — whether the wiki I'm building today on Agentic AI will still be useful and maintainable six months from now, or whether it'll rot like every other knowledge base I've tried.
I suspect the answer depends less on the tooling and more on whether the compilation step forces enough structure to resist entropy. That's the bet.
