
Astaire: Structured Context for AI-Driven Development

By Chris Slothouber

ai · software · governance · tooling

AI-driven development is here to stay. Teams are already using AI to write code, debug failures, and refactor old systems. That does not remove human leadership. It changes where human effort matters most.

The low-level typing is easier to delegate now. The hard part is defining what the software is actually supposed to do, how it will be evaluated, and what constraints it has to respect. If those inputs are vague, AI just helps you get the wrong answer faster.

The Real Bottleneck

Reliable AI-assisted development depends on concrete requirements, specs, scenarios, reviews, and verification artifacts. That means the work does not get less structured. In a lot of cases, it gets more structured.

The problem is that large language models do not read for free. Every document they ingest costs time and compute. At small scale that is manageable. At larger scale, especially when a project accumulates hundreds of governance artifacts, search and retrieval start turning into their own tax.

You end up paying for the same friction twice:

  • humans spend time finding the right documents
  • models spend tokens re-reading the same material

What I Built

To reduce that cost, I built Astaire: a structured, persistent knowledge layer that sits between raw documents and LLM reasoning.

It has two complementary subsystems:

  • Claim store for extracted knowledge such as entities, claims, relationships, and contradictions
  • Document registry for fast, indexed storage and retrieval across arbitrary file sets

Astaire is document-type agnostic. Applications register as collections with their own document types, tags, and lifecycle rules, which makes the system useful beyond one narrow governance workflow.
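To make the collection model concrete, here is a minimal sketch of what a document registry with typed collections and a tag index could look like. This is illustrative only: the names (`Collection`, `Document`, `register_doc`, `query_tags`) are my own and are not Astaire's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    doc_type: str
    tags: set[str]
    body: str

@dataclass
class Collection:
    name: str
    allowed_types: set[str]
    _docs: dict[str, Document] = field(default_factory=dict)
    _tag_index: dict[str, set[str]] = field(default_factory=dict)

    def register_doc(self, doc: Document) -> None:
        # Collections enforce their own document types and lifecycle rules.
        if doc.doc_type not in self.allowed_types:
            raise ValueError(f"type {doc.doc_type!r} not allowed in {self.name!r}")
        self._docs[doc.doc_id] = doc
        for tag in doc.tags:
            self._tag_index.setdefault(tag, set()).add(doc.doc_id)

    def query_tags(self, *tags: str) -> list[Document]:
        # Intersect per-tag indexes instead of scanning every file,
        # which is what makes tag queries cheap relative to glob + read.
        ids = set(self._docs)
        for tag in tags:
            ids &= self._tag_index.get(tag, set())
        return [self._docs[i] for i in sorted(ids)]

gov = Collection("governance", allowed_types={"spec", "review"})
gov.register_doc(Document("spec-001", "spec", {"auth", "api"}, "Auth spec..."))
gov.register_doc(Document("rev-007", "review", {"auth"}, "Review notes..."))

print([d.doc_id for d in gov.query_tags("auth")])         # ['rev-007', 'spec-001']
print([d.doc_id for d in gov.query_tags("auth", "api")])  # ['spec-001']
```

The point of the sketch is the shape of the system, not the implementation: documents register into typed collections, and retrieval goes through an index rather than the filesystem.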

Why It Matters

The goal is not to replace source documents. The goal is to stop paying full price every time a model needs enough context to reason well.

That matters because AI development only scales if context handling scales with it. If every useful answer requires loading raw files over and over again, the economics get worse as the project gets more serious.

With a persistent knowledge layer in the middle, the model can work from compressed structure when that is enough, and pull raw documents only when the task actually needs them.
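The "compressed structure first, raw documents only when needed" policy can be sketched as a budget-aware assembly step. Again, this is an assumption-laden illustration, not Astaire's code: the function name is mine, and token counts are crudely approximated by word count.

```python
def token_count(text: str) -> int:
    # Crude stand-in for a real tokenizer: one token per whitespace word.
    return len(text.split())

def assemble_context(summary: str, raw_docs: dict[str, str],
                     needed: list[str], budget: int) -> str:
    # Always start from the compressed summary of the knowledge base.
    parts = [summary]
    used = token_count(summary)
    # Pull raw documents only for the ids the task actually needs,
    # stopping before the token budget would be exceeded.
    for doc_id in needed:
        body = raw_docs[doc_id]
        cost = token_count(body)
        if used + cost > budget:
            break
        parts.append(body)
        used += cost
    return "\n\n".join(parts)

summary = "L0: project ships an auth service with token-based login"
raw = {"spec-001": "full auth spec " * 10, "rev-007": "review notes " * 50}
ctx = assemble_context(summary, raw, ["spec-001", "rev-007"], budget=60)
# The summary and the spec fit the budget; the longer review is skipped.
```

The design choice this illustrates is that the budget is enforced at assembly time, so cost stays bounded even as the number of registered documents grows.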

Outcome

Good AI development depends on better context, not just better prompts. The more serious the project gets, the more that context needs structure, retrieval discipline, and cost control.

What The Benchmarks Looked Like

Using a 72-document governance collection, Astaire produced:

  • 270:1 token compression with an L0 summary capturing the knowledge base in 263 tokens instead of 71,223 raw tokens
  • 44.3% token savings on scoped context assembly with budget enforcement
  • 362x faster tag-based queries compared with filesystem glob plus read
  • 7.3x faster full collection assembly compared with sequential file reads
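The headline compression ratio follows directly from the raw figures above and is easy to sanity-check:

```python
# Reproducing the reported compression ratio from the post's raw figures.
raw_tokens, summary_tokens = 71_223, 263
compression = raw_tokens / summary_tokens  # about 270.8
print(f"{int(compression)}:1")             # -> 270:1, matching the reported figure
```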

Those numbers are the difference between an idea that sounds clever in theory and something that actually changes how an AI-heavy workflow behaves under load.

For the full details and benchmark data, see the Astaire benchmarks.

Why This Belongs Here

I built Astaire for the same reason I keep ending up in cross-layer work in the first place: the interesting problems are rarely isolated to one tool or one layer of the stack.

AI changes software development, but it also changes planning, verification, retrieval, governance, and cost discipline. This is one of those systems that only makes sense when you look at the whole loop instead of one step in isolation.

That is the kind of work this blog is for.

Signal Continues