Public artefact · v1.0 · The methodology behind Domanski.AI

In-Seat AI.

The methodology behind two solo-built AI products — and what every Role Install at your company is a fraction of.

Production AI without an engineering team. The operating system, codified. Published. The credential, not the product.

// FOUR LAYERS · BREAK ONE · BREAK THE SYSTEM
In-Seat AI isn't zero people. It's zero new hires. The methodology doesn't replace your team — it makes each person on it capable of two or three more, by giving them agents they direct themselves. The name describes what stops growing (the org chart), not what disappears (the humans).

§ 01 · Thesis

Why this exists.

Software businesses are built around the assumption that engineers are the labour. Headcount is the lever. Hiring is the bottleneck. Scaling output means scaling team size, which means scaling capital, which means scaling fundraising or bootstrapping pain.

That assumption broke in 2024–2025.

AI agents can now do most of what an engineering team does — write, review, test, deploy, monitor, iterate — when they are orchestrated by one technically literate operator who understands the system. The bottleneck is no longer hiring. It is direction.

In-Seat AI is the methodology that operationalises that shift. It is how one person runs the work of a team without the team. It is also — at a smaller scale, role by role — how one department head runs the work of two or three juniors without the juniors.

The methodology is open. Read it, build on it, take it with you. The methodology is the credential. The product Domanski.AI sells is the install — me sitting next to your operator, building the agents that fit their specific workflow, training them to own it after.

It is not a tool. It is not a stack. It is the operating system.

§ 02 · Four layers

Each one solves a problem you've already hit with AI.

01
// The human + the brief

Direction

Human · 1

The problem of "I don't know how to set this up." Every agent gets a brief — not a prompt. A one-page contract that names the job, the inputs, the outputs, the receiver. The operator (in a Role Install) or the technical director (in a product like MzansiEdge) writes the briefs. We build the first one together. From then on the operator owns it. The director is the only human in the loop.

02
// The context that survives

Memory

Persistent

The problem of "I lose the thread between sessions." Your agent remembers — your voice, your project, your last decision, the feedback you gave it on Tuesday. Context lives in files the operator can read and edit, not buried in a chat thread that resets when you close the tab. Open the laptop tomorrow and the agent picks up exactly where you stopped. This is also where the methodology turns "a clever conversation" into a system that compounds.

03
// The agents that ship the work

Production

Fungible

The problem of "I'm using 15% of what AI can do." Multi-agent stacks — currently Claude Code — actually do the repetitive 60% of the role: outreach lists, first drafts, research compilations, formatting passes, QA, evidence-gathering. Parallel where independent. Sequential where dependent. Your judgment stays on the 25% that compounds — the creative call, the client conversation, the strategic choice. The production layer is fungible: agents improve every six months; the methodology must not depend on which model is current.

04
// The receipt at the bottom

Proof

Mechanical

The problem of "how do I know this is actually working?" Every run produces a receipt — what the agent did, what it used, what it skipped, what it flagged for review. Proof is not "the agent says it's done." Proof is: the test passed, the deploy succeeded, the metric moved, the artefact passed validation, the second model (Codex) signed off. Gates are mechanical, not aspirational. You audit your own AI labour the same way you'd audit a junior. No black box. This layer is what separates production In-Seat AI from "I let Claude write some code."

05
// The cadence

Recurrence

Three modes

Every agent installed under this methodology runs in one of three modes — set at install time, changed only via the memory-update protocol. All three share one loop; a sketch follows the list.

  • Ad-hoc · A live director opens Cowork, the agent opens with its question, the brief is drafted from the brainstorm, the director approves, the agent dispatches. Default for irregular workflows — a deck per prospect, one-off research synthesis.
  • Scheduled · A timer fires the agent. It reads inputs/, applies a frozen brief, produces output, notifies the named operator. Stable contracts; per-run variability lives in the inputs.
  • Triggered · An upstream event (new file, upstream agent report, inbound email, CRM record change) fires the agent. Same loop as scheduled — read, apply, produce, notify. Use for research-to-call-prep chains, signing-to-recap, agent-to-agent handoffs.

All three preserve the methodology's discipline — same gates, same evidence checks, same audit log. Mode just collapses the live-director steps into a notification when no director is in chat at run time.
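A minimal sketch of that shared loop — read, apply, produce, notify — with all three entry points; function and argument names are invented for illustration:

```python
def run(brief: dict, inputs: list[str], notify: str) -> None:
    """The shared loop, regardless of mode: read, apply, produce, notify."""
    context = [f"read:{path}" for path in inputs]     # read inputs
    output = f"apply({brief['job']}) over {context}"  # apply the brief
    print(f"produced {output!r}; notified {notify}")  # notify the human

brief = {"job": "call-prep research"}

# Ad-hoc: a live director drafts and approves the brief, then dispatches.
run(brief, inputs=["brainstorm-notes.md"], notify="director")

# Scheduled: a timer fires the same loop against a frozen brief and inputs/.
run(brief, inputs=["inputs/monday-batch/"], notify="operator")

# Triggered: an upstream event (new file, CRM change) fires it.
run(brief, inputs=["crm/new-record.json"], notify="operator")
```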

§ 03 · Economics

What changes when this works.

A product team of ten ships maybe one significant feature per fortnight. An In-Seat AI operation ships one to three meaningful changes per day.

// CONVENTIONAL · 10-PERSON ENG TEAM
R10–14m /yr
Salaries, benefits, equipment, management overhead — all-in, SA market.
// IN-SEAT AI · MZANSIEDGE SCALE
~R10–80k /mo
Token spend + one director. No coordination tax, no hiring cycle, no holiday cover, no notice periods.
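Annualised, the quoted ranges make the gap explicit. A quick arithmetic check, reading the monthly figure as token spend and leaving the director's time aside:

```python
# Conventional: 10-person engineering team, all-in, per year (R10–14m).
team_low, team_high = 10_000_000, 14_000_000

# In-Seat AI at MzansiEdge scale: ~R10–80k/mo token spend, annualised.
tokens_low, tokens_high = 10_000 * 12, 80_000 * 12  # R120k–960k /yr

print(f"run-cost gap: {team_low // tokens_high}x to {team_high // tokens_low}x")
# -> roughly 10x to 116x, before counting the director's time
```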

Output is not 10× lower because there are no engineers. It is comparable or higher on the surfaces where the methodology is applied.

This is not a marginal efficiency claim. It is structural.

And the team stays perpetually current. New models, frameworks, and APIs ship weekly. Most engineering orgs are 6–12 months behind on integrations because eng cycles are slow. An In-Seat AI operation integrates new capabilities at the speed they ship — investigation brief on Tuesday, integration shipped by Friday. The methodology absorbs the model churn instead of being slowed by it.

§ 04 · Boundary

What this is NOT.

NOT "AI-assisted development." That is engineers using Copilot. In-Seat AI replaces the engineers on the surface where it is applied.
NOT "Low-code / no-code." That is buying a tool. In-Seat AI is operating a system.
NOT A vendor stack. Claude is the dominant labour layer right now; that will change. The methodology survives the change.
NOT Applicable to every product surface. Some workloads still need humans. The methodology applies where the work is iterative, well-bounded, and verifiable.
NOT Training. The director learns by operating, not by being lectured at.
§ 05 · Worked example

MzansiEdge.

The receipt.

Product. Sports-betting intelligence platform. Generates evidence-backed recommendations, narratives, image cards, and publishes to a Telegram bot plus a fanout publisher across 23 social channels.

Headcount. 1 (Paul).

// The four layers in practice

  • Direction · Paul. ~3 hrs/day on the platform.
  • Memory · Brief library in Notion + per-agent context files. ~40 active brief types.
  • Production · Multi-agent Claude Code stack. 6+ specialised agents.
  • Proof · Composite framework (Edge V2, 7-signal). Threshold gates per tier. Codex audits the build.

// What got replaced

  • Writers · 2–3 → narrative-gen + verdict agents
  • Analysts · 1–2 → evidence-pack pipelines
  • Editor / QA · 1 → validator + arbiter loops
  • Publishing · 1 → channel automation
  • Designer · 0.5 → skill-prompted image gen

~5–6 FTEs replaced. Total cost: token spend + one director. This is the proof, not the claim.

§ 06 · Worked example

Mack Brands.

The first Role Install.

Workflow. Distributor pitch decks for a 7-person premium spirits brand house operating across multiple markets and four product lines.

Headcount before. The brand team assembled each distributor deck manually — 4–6 hours per deck, repeated 8–10 times a quarter as new distributor conversations opened.

The install. Built live in the 13 May 2026 Founding Workshop.

// The four layers in practice

  • Direction · The brand team's strategy lead.
  • Memory · Brief template + Mack voice + brand fact-sheets — all filed where the agent can read them. The agent walks in pre-loaded with what it needs to know.
  • Production · Cowork agent (distributor-deck-builder) reads brand fact-sheets, applies the master template, generates a 10–15 slide deck in Mack voice.
  • Proof · Brand-voice auditor + evidence gate. Refuses to ship a deck with unsourced past-blueprint stats or off-brand phrasing. Strategist signs the receipt.

// What changed

  • Headcount after · Same 7 people.
  • Deck time · Hours → minutes per pitch.
  • Ownership · The brand team owns the agent.
  • Continuity · The methodology survives Paul leaving.

The first Role-Install-flavoured build at a client. The methodology, role by role. Read the case study →

§ 07 · Install

Bespoke AI systems for creative production. Installed into your employees. Not in place of them.

Want this installed in your business?

Three engagement shapes — Founding Workshop (half-day), Role Install (per-operator), Continuous Operator (monthly). Entry pricing on the pricing page. Full conversation on the discovery call.

Book a discovery call · See pricing