One-sentence summary: Millions of people independently learn the same commands and procedures because knowledge is fragmented across forums and individual experience—a massive violation of the "Don't Repeat Yourself" principle at human scale.
Opening Example
Search "MikroTik monitor bandwidth per connection" and you'll find thousands of forum posts, dozens of blog articles, countless Stack Overflow questions. Each one explains the same sequence of commands with slight variations:
/interface monitor-traffic ether1
/tool torch src-address=192.168.1.0/24
/ip firewall connection print
How many people have independently learned these commands? Tens of thousands? Hundreds of thousands? Each person searching, reading, trying, failing, searching again. Each person discovering that /tool torch gives real-time traffic but /interface monitor-traffic gives accumulated statistics. Each person learning that connection tracking has a limit of 100,000 entries by default. Each person figuring out how to parse the output format.
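To make the contrast concrete, here is a minimal sketch, in Python, of what that same knowledge looks like once it is written down as something executable rather than something each person memorizes. It assumes key-based SSH access as admin to a router at 192.168.88.1 (a common MikroTik default), and it uses one-shot forms of the commands so nothing streams forever over a non-interactive session; the streaming /tool torch is left out. Adjust the address, user, and command variants for your own network and RouterOS version.

```python
#!/usr/bin/env python3
"""Hedged sketch: run the RouterOS commands from the example over SSH.

Assumes key-based SSH access as 'admin' to a router at 192.168.88.1;
adjust the address, user, and command variants for your own setup.
"""
import subprocess

ROUTER = "admin@192.168.88.1"  # assumption: change to match your router

COMMANDS = [
    "/interface monitor-traffic ether1 once",    # 'once' asks for a single snapshot, not a live stream
    "/ip firewall connection print count-only",  # how many connections are currently tracked
]

def run(command: str) -> str:
    """Execute one RouterOS command over SSH and return its raw output."""
    result = subprocess.run(
        ["ssh", ROUTER, command],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    for cmd in COMMANDS:
        print(f"$ {cmd}")
        print(run(cmd))
        print()
```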
The knowledge exists. It's been discovered. It's been documented. But it's not compiled. It's fragmented across thousands of heads and hundreds of web pages, waiting to be independently rediscovered by the next person who needs it.
This is the knowledge duplication crisis: millions of people learning the same things, over and over, because computers can't compile and share operational knowledge the way they compile and share code.
Core Concept
Software development solved knowledge duplication decades ago. When someone figures out how to parse JSON, they write a library. When someone implements OAuth, they package it. When someone solves a common problem, they create a module that others can import. We don't ask every developer to independently discover how to sort an array or make an HTTP request.
But for operational knowledge, how to actually use systems and accomplish tasks, we've never developed an equivalent mechanism. Each user independently learns what buttons to click, what commands to type, what sequences to perform. The knowledge stays in their head or gets documented in prose that other humans must read and translate back into action.
This violates the DRY (Don't Repeat Yourself) principle at planetary scale. We're running the same learning processes millions of times over. We're storing the same operational knowledge in millions of brains. We're duplicating effort on a scale that would be unthinkable in software engineering.
This chapter explores how knowledge duplication became normalized, why traditional solutions don't solve it, what it costs us, and how Literate Technology (LT) offers a fundamentally different approach: systematic knowledge compilation that makes discovery available to everyone instantly.
The Original Sin: Knowledge as Documentation
Here's where we went wrong: we treated operational knowledge as something to document rather than something to compile.
When you figure out how to accomplish a task on a computer, the traditional approach is to write down the steps. Create a tutorial. Post on a forum. Write documentation. This seems helpful, and it is, but it creates a fundamental problem: the knowledge remains in prose form, requiring another human to read it, understand it, and manually translate it back into action.
Consider a typical Stack Overflow answer:
"To monitor bandwidth on a MikroTik router, first SSH in, then run
/interface monitor-traffic ether1. Press Ctrl+C to stop. For per-connection monitoring, use/tool torch src-address=192.168.1.0/24. To see all connections, run/ip firewall connection print."
This answer is helpful. It shares knowledge. But it doesn't compile that knowledge in a way that makes it directly executable. The next person must:
- Find this answer (search, browse, ask)
- Read and understand it
- Adapt it to their specific situation (which interface? which subnet?)
- Type the commands manually
- Interpret the results
- Remember this for next time, or bookmark it, or save it somewhere
Each step introduces friction. Each step requires human translation. The knowledge transfer happens at the speed of reading and comprehension, not at the speed of execution.
And most critically: when the next person needs to monitor MikroTik bandwidth, they go through the same process. The knowledge doesn't accumulate. It doesn't compound. Each transfer starts from zero.
Forums as Symptoms, Not Solutions
Stack Overflow, Reddit, forums, and Q&A sites are incredibly valuable. They've helped millions of people solve problems. But they're symptoms of the knowledge duplication crisis, not solutions to it.
Think about what a forum post represents: someone encountered a problem, couldn't find the solution compiled anywhere, searched until they found fragmented information, synthesized it, tried it, got it working, and then documented what they learned so others could benefit.
This is noble. It's generous. It's community-minded. And it's extraordinarily inefficient.
Because the next person who encounters the same problem will:
- Encounter the same confusion
- Perform a similar search
- Find the forum post (if they're lucky and use the right keywords)
- Read the explanation
- Manually execute the same steps
- Possibly encounter new edge cases and ask follow-up questions
The knowledge has been shared, but it hasn't been compiled. The solution exists in prose form, requiring human interpretation each time it's accessed.
And worse: the solution might be outdated. The software version changed. The API evolved. The recommended approach is now deprecated. But the forum post remains, attracting new readers who implement solutions that no longer represent best practice.
This is knowledge duplication in both directions: duplication of learning (everyone learns independently) and duplication of obsolescence (everyone independently discovers that old solutions no longer work).
The "Click Here, Then Here" Economy
GUI tutorials are even more striking in their inefficiency. YouTube is full of videos showing you exactly what to click:
"First, click on Settings in the upper right. Then click Network. Then click Advanced. Scroll down to Connection Monitoring. Click the checkbox next to Enable. Now click Apply. Then click OK."
Millions of views. Thousands of comments saying "Thank you, this helped!" Each view represents a person manually performing the same sequence you just watched someone else perform.
This is theater. You're watching a play about configuration, then going home to stage your own production of the same play, following the blocking you just witnessed.
The knowledge being shared is purely procedural: do this physical action, then this one, then this one. There's no compilation. There's no abstraction. There's no way to say "do that thing from the video" and have the computer execute it. You must perform the entire sequence manually, every single time.
And when the interface changes? Every tutorial becomes obsolete. Every video needs to be remade. Every person who learned the old sequence needs to learn the new one. The duplication compounds.
DRY at Human Scale
In software development, the DRY principle (Don't Repeat Yourself) is fundamental. When you find yourself writing the same code twice, you extract it into a function. When multiple projects need the same functionality, you create a library. When everyone solves the same problem, the solution gets compiled into the language or framework.
This is knowledge compilation. The discovery happens once, the solution is encoded in a reusable form, and everyone benefits without re-learning.
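In code, the move looks like this. The toy Python example below uses invented file names: the same steps written out twice, then extracted into one reusable function the moment the repetition is noticed.

```python
# Toy illustration of the DRY move described above (file names are invented).
# The same parsing steps written out twice:
with open("a.csv") as f:
    rows_a = [line.split(",") for line in f.read().splitlines() if line]
with open("b.csv") as f:
    rows_b = [line.split(",") for line in f.read().splitlines() if line]

# The repetition is the signal to compile the knowledge once, reuse it everywhere:
def load_csv_rows(path: str) -> list[list[str]]:
    """Read a comma-separated file and return its rows as lists of fields."""
    with open(path) as f:
        return [line.split(",") for line in f.read().splitlines() if line]

rows_a = load_csv_rows("a.csv")
rows_b = load_csv_rows("b.csv")
```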
But we've never applied DRY to operational knowledge at human scale. Every person who uses a MikroTik router independently learns the CLI commands. Every person who sets up a Rails application independently learns the deployment sequence. Every person who monitors a network independently discovers what metrics matter.
GPT-5 Pro's scholarly analysis names this precisely: "DRY at Human Scale." Instead of millions re-learning the same sequences, knowledge should be compiled once and executed everywhere.
This isn't about lazy users who don't want to learn. It's about recognizing that re-discovering the same knowledge millions of times is extraordinarily wasteful. When the hundredth person learns MikroTik CLI syntax, they're not adding to human knowledge. They're duplicating learning that has already happened ninety-nine times before.
This knowledge duplication crisis is crystallized in what we'll call the capacity formula (fully explored in Chapter 6). In traditional computing, your capacity is limited by skills you personally learned and time you invested. In Literate Technology, capacity shifts to how clearly you can articulate intent, multiplied by the AI's compiled knowledge from millions of prior discoveries. Knowledge compounds instead of duplicating.
The MikroTik Example: Thousands Learning Independently
Let's be specific. MikroTik has sold millions of routers worldwide. RouterOS powers network infrastructure everywhere from home offices to ISPs. It's powerful, stable, and notoriously opaque for beginners.
How many people have independently learned:
- That /system resource print shows CPU and memory
- That /interface monitor-traffic monitors bandwidth
- That /tool torch provides real-time packet analysis
- That /ip firewall connection print lists active connections
- That connection limits default to 100,000 and need adjustment for busy networks
- That CPU temperature is in column 3 of resource output, not column 2
- That ping results need parsing differently than resource statistics
Conservatively? Tens of thousands of people. Perhaps hundreds of thousands. Each person discovering these facts independently. Each person reading forum posts, trying commands, making mistakes, learning the quirks.
This knowledge exists. It's been discovered. It's documented in forums, wikis, and tutorials. But it's not compiled in a form that makes it instantly available to the next person without re-learning.
Until November 17, 2025, when an AI with SSH access explored a MikroTik router, discovered its capabilities, and created monitoring scripts in 30 minutes. No MikroTik expertise required. No forum searching. No documentation reading. The compiled knowledge of "how to monitor MikroTik networks" became instantly available through natural language.
"What's the network health?" became sufficient. The AI knew how to SSH in, which commands reveal health metrics, how to parse the output, what thresholds matter. Knowledge that thousands learned independently became compiled and instantly executable.
The Cost of Fragmented Knowledge
Knowledge duplication isn't just inefficient. It has real costs:
1. Time Cost
How many person-hours have been spent learning MikroTik CLI commands? If 50,000 people each spent 10 hours learning the basics, that's 500,000 hours of human time spent duplicating knowledge that could have been compiled once.
For context: 500,000 hours is 57 years of continuous work. We could have sent someone to learn MikroTik CLI comprehensively for an entire career, and still have time left over, if we could somehow transfer that knowledge directly instead of requiring everyone to re-learn it.
2. Error Cost
Fragmented knowledge leads to fragmented understanding. One forum post says to use this command. Another recommends that approach. A third warns about a gotcha. A fourth is outdated but still ranks high in search results.
Each person synthesizes this fragmented information differently. Each person makes slightly different mistakes. Each person encounters slightly different edge cases. The errors don't accumulate into systematic understanding. They stay fragmented, requiring each new person to encounter and solve them independently.
3. Opportunity Cost
Time spent re-learning what others already know is time not spent creating new value. If you spend 10 hours learning MikroTik commands that thousands learned before you, that's 10 hours you're not spending building something new, solving a unique problem, or advancing the state of knowledge.
Multiply this across every system, every interface, every operational task. The opportunity cost of knowledge duplication is staggering.
4. Expertise Barrier
Fragmented knowledge creates expertise gradients that shouldn't exist. The person who spent 100 hours learning MikroTik can accomplish tasks that the person who spent 10 hours cannot, even though both could articulate the same intent.
The expert doesn't have different capabilities. They just have more compiled knowledge in their head. The computer can do the same things for both users. But only the expert knows how to ask.
This creates artificial scarcity. Expertise becomes about memorization rather than judgment. The barrier isn't "understanding what you want to accomplish" but "knowing the incantation to make it happen."
5. Maintenance Cost
When something changes (software updates, API evolution, deprecated features), fragmented knowledge becomes fragmented obsolescence. Every forum post needs updating. Every tutorial needs revision. Every person who learned the old way needs to learn the new way.
But because knowledge is fragmented, updates are incomplete. Old information lingers. People find outdated advice. They try it. It doesn't work. They search more. Find different advice. Try again. The maintenance burden multiplies across millions of independent learners.
When Documentation Fails
Documentation is supposed to solve this. Write down how things work. Maintain official guides. Keep them updated. Make them searchable.
And good documentation does help. But it doesn't solve knowledge duplication because documentation still requires human translation.
Reading documentation means:
- Finding the right section
- Understanding the explanation
- Adapting it to your use case
- Translating prose into actions
- Debugging when it doesn't work as described
- Remembering it for next time
Even with perfect documentation, each person independently:
- Searches for information
- Reads and comprehends it
- Translates it into action
- Learns from failures
- Builds mental models
The knowledge transfer is one-way, human-paced, and requires re-learning with each new person.
Moreover, documentation has inherent limitations:
Completeness is impossible: No documentation can anticipate every use case, every combination of features, every edge case.
Currency is hard: Keeping documentation perfectly updated with software evolution is a constant battle.
Discoverability varies: You need to know what to search for to find the relevant documentation.
Context is missing: Documentation describes features in isolation, not in the context of your specific goals.
Expertise assumptions: Documentation either assumes too much knowledge (frustrating beginners) or too little (boring experts).
Documentation helps, but it doesn't compile knowledge in a form that eliminates duplication.
The Literate Technology Solution: Systematic Knowledge Compilation
Here's what changes with LT: operational knowledge becomes compiled, not just documented.
When an AI with execution access explores a system, it doesn't just read about capabilities. It discovers them empirically, verifies them through testing, and can immediately apply them to new requests.
On November 17, 2025:
- I asked "Can you work with this GL-BE3600 router?"
- The AI SSH'd in, explored the OpenWRT system, and discovered iwinfo commands
- It determined signal strength metrics, channel information, and client connections
- It created monitoring scripts and explained what it found
- 30 minutes later, I had working infrastructure monitoring
The knowledge compilation happened in real-time:
- Discovery: What commands exist? What do they output?
- Verification: Run them, parse results, confirm they work
- Application: Create scripts that use this knowledge
- Availability: That knowledge is now compiled and instantly available for similar requests
When the next person asks "monitor my OpenWRT router," the AI doesn't need to rediscover iwinfo commands. That knowledge is compiled. The discovery happened once. Everyone benefits.
This is DRY at human scale. The learning happens once, gets compiled into the AI's operational knowledge, and becomes instantly executable for everyone through natural language.
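As a toy sketch of that compile-once shape, the fragment below stores the result of one exploration as data keyed by platform and intent, so a later request reuses the discovery instead of repeating it. The command strings and the example host are illustrative assumptions; a real LT system compiles far richer knowledge (parsing, thresholds, error patterns) than a flat lookup table.

```python
# Miniature, hedged sketch of "discover once, execute everywhere": the result
# of one exploration stored as data, so later requests don't repeat it.
import subprocess

COMPILED_KNOWLEDGE = {
    ("routeros", "resource usage"): "/system resource print",
    ("routeros", "active connections"): "/ip firewall connection print",
    ("openwrt", "wireless status"): "iwinfo",
}

def fulfill(platform: str, intent: str, host: str) -> str:
    """Look up the already-discovered command for an intent and run it over SSH."""
    command = COMPILED_KNOWLEDGE[(platform, intent)]
    return subprocess.run(
        ["ssh", host, command], capture_output=True, text=True, timeout=30
    ).stdout

# The hundredth user states intent; nobody re-learns the command syntax.
# print(fulfill("openwrt", "wireless status", "root@192.168.8.1"))  # host is an assumption
```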
From Individual Learning to Collective Compilation
Traditional computing:
- Jeremy learns MikroTik CLI commands (10 hours)
- Sarah learns MikroTik CLI commands (10 hours)
- David learns MikroTik CLI commands (10 hours)
- Total: 30 hours, each person learns independently
Literate Technology:
- AI explores MikroTik capabilities (30 minutes, first time)
- Jeremy asks "monitor my network" (instant)
- Sarah asks "monitor my network" (instant)
- David asks "monitor my network" (instant)
- Total: 30 minutes for all three, knowledge compiled once
The difference compounds. With traditional computing, the hundredth person spends the same 10 hours the first person spent. With LT, the hundredth person benefits from discoveries made by the first ninety-nine, spending only the time needed to articulate their specific intent.
This isn't about replacing human learning. Deep expertise still matters. Understanding networking concepts still matters. Judgment still matters. But the translation layer, the memorization burden, the "which command does this" knowledge, that becomes compiled and universally available.
The Meta-Observation
Here's something profound that emerged on November 17: I was writing about knowledge duplication while simultaneously experiencing its solution.
I don't know MikroTik RouterOS comprehensively. I hadn't memorized iwinfo commands. I couldn't recite the exact SSH flags needed or the parsing logic for temperature output. But I could articulate what I wanted: "Monitor network health and display it in my menu bar."
An AI with execution access and compiled knowledge handled the translation. It knew (or could discover) the commands. It knew how to parse the output. It knew what metrics indicate health. It knew how to format for SwiftBar.
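For the curious, the "format for SwiftBar" step is small once the convention is compiled: SwiftBar (following the BitBar convention) shows the first line a plugin prints in the menu bar and the lines after a "---" separator in the dropdown. The sketch below uses placeholder values; a real plugin would fill them from the router queries sketched earlier.

```python
#!/usr/bin/env python3
# Hedged sketch of the SwiftBar formatting step. Values are placeholders;
# a real plugin would compute them from the router's actual output.
status_icon = "🟢"   # placeholder: derived from health thresholds
cpu_load = "4%"      # placeholder: parsed from '/system resource print'
clients = 12         # placeholder: counted from the wireless client list

print(f"{status_icon} Net")  # first line: shown in the menu bar
print("---")                 # separator: everything below appears in the dropdown
print(f"CPU load: {cpu_load}")
print(f"Wireless clients: {clients}")
```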
The knowledge existed, compiled from thousands of prior explorations, documentations, and interactions. I didn't need to duplicate that learning. I needed to articulate intent clearly, and the compiled knowledge became instantly available.
That's the paradigm shift. From individual learning to collective compilation. From duplicated discovery to instant availability. From memorization to articulation.
What Still Requires Human Learning
LT doesn't eliminate all learning. It eliminates duplication of operational knowledge, the "how to use this system" layer.
You still need to learn:
- Domain concepts: Understanding what network health means, why it matters, what constitutes good vs. bad metrics
- Problem diagnosis: Recognizing when something is wrong and what category of problem it might be
- Goal articulation: Knowing what you want to accomplish and why
- Judgment calls: Deciding whether a solution is appropriate for your specific context
- Trade-off evaluation: Understanding the implications of different approaches
What you don't need to duplicate:
- Command syntax: Which exact characters to type
- Parsing logic: How to extract values from output
- API quirks: The specific idiosyncrasies of each system
- Output formats: How to interpret and transform data
- Error patterns: Common failures and their solutions
The first category is genuinely valuable learning that improves your thinking. The second category is memorization that could be compiled and shared.
Practical Patterns
How to recognize knowledge duplication opportunities:
The search test: If you're searching "how to" followed by a system name and task, you're about to duplicate learning someone already did.
The bookmark test: If you're bookmarking a tutorial because you know you'll need it again but won't remember the steps, that knowledge should be compiled.
The forum test: If you're reading a forum thread where multiple people solved the same problem slightly differently, that's fragmented knowledge waiting for compilation.
The explanation test: If you're writing documentation that says "first do this, then do that," you're documenting steps that could be compiled into executable knowledge.
The teaching test: If you find yourself explaining the same sequence to multiple people, that knowledge is duplicating and should be compiled.
Common Pitfalls
When thinking about knowledge duplication:
Don't confuse learning with duplication: Learning concepts, building understanding, developing judgment—these are valuable and shouldn't be skipped. It's the operational "how to execute" knowledge that shouldn't require duplication.
Don't dismiss documentation: Documentation serves important purposes even with LT. It explains concepts, provides context, helps build mental models. LT doesn't replace documentation; it makes the operational knowledge in documentation executable.
Recognize expertise evolution: Expertise shifts from "knowing how to execute" to "knowing what to request." This is a different skill, not a lesser one.
Avoid elitism about learning: The fact that you spent 100 hours learning a system doesn't mean others should have to. Your time was valuable, but so is theirs. Compilation helps everyone.
Remember context matters: Compiled knowledge needs to adapt to context. "Monitor network" means different things for home routers vs. data center infrastructure. LT systems need to understand and adapt to context, not just replay canned solutions.
Summary
- Operational knowledge has never been systematically compiled, only documented
- Millions of people independently learn the same commands, sequences, and procedures
- Forums and tutorials are symptoms of knowledge duplication, not solutions
- The DRY principle applies to human learning at scale: discover once, compile everywhere
- Traditional capacity formula: skills × time (linear, individual)
- LT capacity formula: intent clarity × AI capability set (exponential, collective)
- The cost of fragmented knowledge includes time, errors, opportunities, and maintenance
- Documentation helps but doesn't eliminate duplication because it requires human translation
- LT compiles operational knowledge through exploration, verification, and systematic availability
- Expertise shifts from memorization to articulation
- Deep learning still matters; what changes is eliminating duplicated operational learning
The knowledge duplication crisis exists because we treated operational knowledge as something to document and share through prose, requiring each person to independently translate it back into action. We ran the same learning processes millions of times, storing the same knowledge in millions of brains, violating DRY at planetary scale.
Literate Technology offers an alternative: systematic knowledge compilation. Discovery happens through exploration and verification. Knowledge becomes instantly executable through natural language. Each person's learning compounds for everyone else. The translation burden shifts from humans to systems.
This isn't about making people lazy or eliminating learning. It's about recognizing that when the hundredth person learns the same command the ninety-ninth person just learned, we're wasting human potential that could be applied to genuinely new problems.
The question isn't "Should people learn?" It's "What should people learn?" And increasingly, the answer is: concepts, judgment, and articulation. The operational details of how to execute on those judgments? That can be compiled, shared, and instantly available to everyone.
We don't ask every programmer to independently discover sorting algorithms. It's time we stop asking every computer user to independently discover operational sequences.