Build with us

The framework page covers what organizational AI requires; this page covers how we guide your team in building it.

There are three versions of the engagement. In each one your engineers design and write the code that goes into your repo — the skills, agents, specs, and review loops that make up your team’s version of the framework. We coach the thinking and review the work as it lands.

Foundations runs six weeks. Practice and Residency both run twelve. All three build the same framework. What varies is how long we stay engaged after the initial build, how closely we review the work, and how many stakeholders outside engineering we work with alongside the team.

The shape of every engagement

The first four weeks are the same across all three. Two workshops a week, each covering one of the five prerequisites: Coordination, Context, Pipeline, Harness, Compounding. Your engineers build pieces of the framework during the workshops and extend them between sessions. Skills, agents, context files, reviewer patterns.

In week four the engagement turns to architecture. Your engineers have been using individual pieces for a month by then, and this week is when they design the system those pieces should build into for your organization. What the pipeline looks like end to end, how the harness layers on, where stakeholder input belongs, how the compounding loops get fed. We work with them to keep what works, rework what doesn’t, and identify what’s still missing.

After week four is where the three versions diverge.

Foundations

Foundations runs six weeks, with two weeks of support after the single architecture session, during which we help the team refine their thinking and set a solid foundation to build on.

This is the best option when the codebase is small enough for a single architecture session to cover, there aren’t many stakeholders, and your team has time between sessions to practice what they learned.

Right for:

  • Small teams on a contained codebase
  • Teams with room to practice between sessions
  • Organizations with no urgent need to reach full speed

Not right for:

  • Teams under delivery pressure while this runs
  • Codebases too large or varied for a single architecture session to cover
  • Organizations with multiple stakeholders that need to be included

Practice

Practice runs twelve weeks, with the shared build and architecture work in weeks one through four. In weeks five through twelve the team ships through their system on the tickets in their actual backlog. During that time, inevitably, things break, complications show up, stakeholders have feedback. Our goal for those eight weeks is making sure you’re set up to maintain this on your own going forward. Reviews shift to mostly reactive after week six, and office hours continue twice a week. We guide, we support, and we help unblock when something critical happens as your team learns the ropes of thinking in an AI-systems-first way.

Practice includes one working session with adjacent stakeholders (product, editorial, revenue) to bring them into the coordination work and to support your team in building collaboration across the organization.

By week twelve the framework has been through live fire. Some parts may still need tuning, but the follow-up reviews at +4 and +12 weeks are there to keep things on the right path.

Right for:

  • Teams that can’t stop shipping for twelve weeks while this runs
  • Organizations that want the framework tested against real work before their team is on its own
  • Teams ready to adjust the architecture as it meets real pressure

Not right for:

  • Teams that need to be shipping at the new velocity by week twelve, not on a path toward it
  • Organizations where many stakeholders may need change management support to work alongside this

Residency

Twelve weeks on a heavier cadence. Same first four weeks as the others, but office hours and reviews run every day for all twelve weeks. Every skill, agent, and architectural change the team commits gets read, reviewed, and commented on. We flag drift in the architecture as it happens, rather than after the team notices it.

The stakeholder work is broader. Practice has one session with adjacent stakeholders; Residency brings them all in over the course of the engagement: product, editorial, legal, ops, anyone whose inputs or outputs need to coordinate with the pipeline. Your engineers lead most of those sessions with us in the room, and by close every stakeholder knows how to interact with the new system.

By week twelve the team is running a system they designed, with stakeholders who know how to feed it, and they are already planning the next set of improvements.

Right for:

  • Teams under heavy pressure with limited room to work this into their schedule
  • Engagements with multiple stakeholders who need to be included
  • Organizations that need the framework running at full speed as soon as possible

Not right for:

  • Budgets that don’t stretch to this tier
  • Organizations that want us running delivery while the team watches; this is still a coaching engagement

A week inside an engagement

Workshops run twice a week in the first month, each covering a different prerequisite. These aren’t talks or slideshows; we explain and guide while your team builds the artifacts during the session.

Office hours are open working sessions where folks bring us what’s stuck: a skill that isn’t behaving as expected, a reviewer rejecting things it shouldn’t, an agent confidently producing wrong code, trouble coordinating with stakeholders. These happen a few days a week in Foundations and Practice, and every day in Residency.

During the first four weeks of Practice and Residency we also hold dedicated paired sessions with each engineer: an hour apiece to work through their own problems and individual challenges.

PR review depth depends on the tier. In Foundations and Practice, the first four weeks get close review, which tapers to a few hours a week through the end of the engagement. Residency gets thorough, proactive review the whole time, and we raise suggestions as we see the need: new agents, new ways of shaping the pipeline, and so on.

Follow-ups happen at +4 weeks after every engagement, and at +12 weeks after Practice and Residency. The team walks us through what they’ve built since close. We give feedback, suggest additions, and share new ways of working that have come up in the intervening months.

Who this isn’t for

Teams that haven’t adopted any AI yet. Come back when your developers are using Copilot, Cursor, or Claude daily and the individual gains have stopped translating into team gains.

Teams whose codebase is controlled by another function. If you can’t shape your own architecture or release cadence, the framework won’t reach far enough to matter.

Organizations looking for staff augmentation. We coach teams. We don’t fill seats.

Leaders evaluating tools. The framework is nearly tool-agnostic by design. We can help you implement Claude Code or any other product, but this isn’t a turnkey solution; our goal is that your team owns this after we leave.

Questions we get

Who does the work during the engagement?
Your engineers do. We coach, review, and in Residency guide with a heavier hand, but the skills, agents, specs, and PRs your team will carry forward are the ones they built. The practice only sticks when the people who’ll own it are the ones who wrote it.

My team already tried Copilot or Cursor and hit a wall. Is this the right fit?
Yes! That’s the most common starting point. Teams arrive with good, sometimes great, individual tool adoption, but it doesn’t aggregate into team-level results on its own.

Practice or Residency?
Two questions. First: how much pressure is your team already under, and how much room do they have to work this into their schedule? Less room, more Residency.
Second: how many people outside engineering have to change how they work for this to hold? One or two, Practice has a session for that. Four or five, Residency brings all of them in over the course of the engagement.

How is this different from an AI training course?
Training is about individual fluency: prompting, tool use, habits. This engagement is about the infrastructure around that: how inputs reach AI agents, how your codebase provides context, and how the team’s mindset shifts to leverage AI systems.

None of these plans address what I’m looking for, is there any flexibility?
We’re always interested in better understanding what our clients need. Describe your situation, what you need, and how you think we can help in the form below, and let’s have a discussion.

Get in Touch

Start a conversation

The first call is about where your team is now, what you’re trying to get to, and which of the three descriptions above is closest to your situation. We’ll figure out the fit together. If there isn’t one, you’ll still leave the call with a clearer read on what your team needs first.