
3 min read

Building This Site with AI Agents: A Meta Journey

You're reading this on the Little Research Lab. And the Little Research Lab was built by AI agents. Which makes this post a bit like a snake eating its own tail, but with better test coverage.

The Setup

I wanted a simple publishing platform. Somewhere to share research, host PDFs, track who's actually reading what - the usual. But I also wanted to see how far I could push AI-assisted development using a squad of specialized agents instead of one generalist doing everything.

The cast of characters:

- Solution Designer - scoped the project before anyone got too excited

- Business Analyst - produced specs, tasklists, and quality gates with the enthusiasm of someone who genuinely loves acceptance criteria

- Coding Agent - wrote the actual code (the star, arguably)

- QA Reviewer - the one who noticed when things were quietly falling apart

The Process (and the Chaos)

The workflow was deceptively simple: Solution Designer sketches the boundaries, BA produces detailed specs with test assertions, Coding Agent implements, QA reviews. Repeat until done.

What actually happened: the agents produced a 377-line spec document before anyone wrote a single function. There's a section called "Unknown-unknowns checklist" with risk controls. The spec includes DST-safe timezone handling for scheduling. For a blog.
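For the curious: "DST-safe" here means scheduling against wall-clock local time rather than a fixed UTC offset, so a post scheduled for 9am still goes out at 9am after the clocks change. A minimal sketch of the idea using Python's standard `zoneinfo` module (illustrative only, not the site's actual scheduling code; the timezone choice is an assumption):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("Europe/London")  # hypothetical publishing timezone

# Two "9am local" slots straddling the 2025 spring DST switch
# (clocks go forward on Sunday 30 March in Europe/London).
before = datetime(2025, 3, 29, 9, 0, tzinfo=tz)
after = datetime(2025, 3, 30, 9, 0, tzinfo=tz)

# Wall-clock hour is preserved even though the UTC offset changed.
assert before.hour == after.hour == 9
assert before.utcoffset() != after.utcoffset()
```

Storing a naive UTC timestamp instead would silently shift the publish time by an hour twice a year, which is presumably what the spec was guarding against.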

The BA agent created quality gates that would block deployment if any draft content could accidentally leak to the public. The spec mandates "zero audience PII" - no cookies, no tracking pixels, no IP logging. These weren't my requirements. The agents just... added them. Because apparently they've read about GDPR.
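A gate like that boils down to a filter that fails closed: nothing reaches the public build unless it is explicitly published. A toy sketch of the pattern (the `Post` class and `status` field are hypothetical, not the repo's actual model):

```python
from dataclasses import dataclass


@dataclass
class Post:
    slug: str
    status: str  # "draft" or "published"


def public_posts(posts: list[Post]) -> list[Post]:
    # Quality gate: only explicitly published posts may reach the
    # public site; anything else (draft, unknown status) is excluded.
    return [p for p in posts if p.status == "published"]


posts = [Post("agents", "published"), Post("wip-notes", "draft")]
assert [p.slug for p in public_posts(posts)] == ["agents"]
```

The point of putting this in a deployment gate rather than a template is that a draft can never leak through a forgotten conditional in one page.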

When Things Got Interesting

Halfway through, the QA agent flagged a problem: the codebase was 36% compliant with the spec's "atomic component" architecture. There were 61 type errors and 58 linting failures. The shell layer was importing from deprecated service locations.

So the agents ran a four-phase remediation:

1. Deprecate the old code (with a polite DEPRECATED.md explaining why)

2. Migrate to the new pattern

3. Fix imports across the codebase

4. Clean up the debris

The evolution log shows this as "EV-0002: AC-TA mismatch / incomplete migration / split-brain architecture." Resolved. Evidence: mypy 0 errors, ruff 0 errors.

The agents kept receipts.

What I Learned

AI agents are excellent at generating bureaucracy. I mean this as a compliment. The spec files, decision logs, and quality gates created a paper trail that made debugging straightforward and decisions reversible.

But the real surprise was how opinionated the agents became. They added security features I hadn't requested, structured the code for maintainability I might never need, and documented everything like they were preparing for a code audit.

Building software with AI agents feels less like pair programming and more like managing a small, extremely thorough consultancy that never sleeps, never complains, and occasionally rewrites your architecture because it found a better pattern.

The repo is public now. Judge for yourself.

https://github.com/nealatnaidoo/little-research-lab

