← Articles
perso

Nine Months Ago I Couldn't Code

2026-03-25 12:44:11 · 01960e00-0001-7000-8000-000000000001

The starting point

Nine months ago I couldn't write a line of code. Couldn't use Linux either. My background is philosophy — formal training in logic, ontology, and the kind of structured thinking that's useful everywhere and employable almost nowhere.

Today I maintain a Go monorepo with 20+ packages: vector search, multi-tenant SQLite pools, distributed queues, QUIC transports, prompt injection detection, RAG document pipelines. Production-grade, with tests, benchmarks, and architectural decisions I can defend.

I have never typed a single line of code. Not one. Linux, deployments, migrations, maintenance — all of it generated. And I never will type code, because typing speed is the first bottleneck to look at: I'm not competitive there, so I don't compete on it.

What actually happened

I started by collecting data and structuring it. Not code — knowledge. What is distributed consensus? What is leader election? What does SQLite actually guarantee under concurrent access? What is the difference between a worker pool and a semaphore?

I asked a language model. Hundreds of times. The same questions, from different angles, until the concepts were clear. The model's advantage isn't that it knows more than a senior engineer — it's that it doesn't hang up after question 350.

Most of the issues I encountered came from dataset drift with actual technologies. Documentation is written for a specific version; Stack Overflow answers assume a context that may no longer exist; blog posts describe architectures that were valid three years ago. The LLM had the same problem — but I could ask it to verify, to compare, to re-examine its own claims against current sources.

The discovery process

Here's what the process looked like, concretely.

I needed high-availability multi-instance coordination. I started with Consul. Too much infrastructure. Then Raft — interesting, but requires an external coordinator. Then etcd — same problem. Each one added a dependency I didn't want and solved a problem I didn't exactly have.

So I described what I actually wanted: N instances, each with its own time slot, no contention, no external coordinator. The LLM told me the pattern already existed — Time Division Multiple Access, from radio multiplexing, standardized for GSM in 1987. Same idea: multiple transmitters share a channel by taking turns in fixed time slots, no collision possible.

That became squeueHA's TDMA mode. A single SQLite primitive — a bounded visibility window on a claimed row — covers leader election, work distribution, and elastic overflow, just by calibrating the timeout. No Consul, no Raft, no external dependency.

I didn't invent the concept. I described a problem until the right abstraction surfaced.

The pattern that repeats

This happened over and over:

  • PostgreSQL → "what's the real problem?" → per-tenant SQLite shards eliminate the caching question entirely
  • Chunk-and-embed RAG → "what's the real problem?" → inference parser with atomic claims, entity resolution by registry lookup
  • PDF extraction → "what's the real problem?" → triple indexation (text + vision + visual similarity), treating each page as three parallel data streams
  • Regex-based injection detection → "what's the real problem?" → normalization pipeline that strips obfuscation layers, then simple string matching on clean text

Every time, the answer was the same: do the research before writing the code. Not Google research — conceptual research. What is the shape of the problem? Does this problem have a name? Has someone solved a structurally identical problem in a different field?

The "AI slop" reaction

When I published this work — on Reddit, on Hacker News, on GitHub — the reaction was mostly silence, occasionally hostile. "AI slop." "Is this satire?" "A monorepo is not something to brag about."

The hostility follows a pattern. Someone says "SQLite is only for prototyping" — and then the author of rqlite shows up in the same thread and demonstrates the opposite. Someone says "20+ packages is not a flex" — without looking at what the packages do or how they're structured.

The assumption is that code generated with LLM assistance is inherently inferior. That's a testable claim: the code is public, the tests pass, the architecture is documented. The response is never "I looked at your code and here's what's wrong." The response is "you used an LLM, therefore it's bad."

This is a category error. The tool is irrelevant; the output is what matters. A bad architect with a keyboard produces bad architecture. A good architect with an LLM produces good architecture. The bottleneck was never the typing.

What I actually learned

The LLM didn't teach me to code. It taught me to think about systems. The questions I learned to ask — "what's the real problem?", "does this pattern have a name?", "what assumption am I making that I haven't examined?" — those are philosophy questions applied to engineering.

The LLM is a dialogue partner that doesn't get tired, doesn't judge the question, and doesn't gate-keep the answer behind years of credentials. It also hallucinates, loses context, and confidently provides wrong attributions (I posted a claim that someone built rqlite when they actually built Litestream — the LLM's mistake, my responsibility for not verifying).

The honest account: it's an extraordinarily powerful tool with real failure modes, and the skill is in knowing when to trust and when to verify. That skill is not coding. It's epistemology.


hazyhaar — open research, sovereign infrastructure github.com/hazyhaar · hazyhaar.fr

journeylearningllmsolo-dev