[How-To] Why Most Architecture Review Boards Suck
And How to Fix or Bypass Them
If you’ve ever tried to ship anything meaningful in a large organization, you’ve probably run into an Architecture Review Board (ARB). And if you’re like most people, your first reaction wasn’t, “Great, I can’t wait to present to the ARB!” It was, “How do I get through this as painlessly as possible?”
I’ve been on both sides of the table, as the person defending a design and as the “architect” being asked to bless or block it. Here’s my honest take: most ARBs suck. They were supposed to make things safer and smarter. Too often, they end up being a logjam, a political arena, or a box-checking ritual that delivers neither speed nor quality.
But the original intent wasn’t wrong. The problem is the process.
1. What ARBs Are Supposed to Do (And Where They Go Wrong)
At their best, ARBs are meant to:
Catch high-impact mistakes before they ship.
Ensure alignment on standards, patterns, and integration points.
Bring in cross-disciplinary perspectives (security, infra, business, etc.).
Create a record of why key decisions were made.
But in practice, ARBs often:
Meet too infrequently (monthly or worse), turning “just in time” review into “just in case” delay.
Attract too many stakeholders, so everyone feels compelled to have an opinion – especially on things outside their expertise.
Focus on documentation and ceremony, not real technical or business risk.
Incentivize teams to “design for the board” instead of designing for users or reliability.
Become a venue for turf wars or for people to avoid accountability (“I told you this would happen in the ARB!”).
The result:
Slow shipping.
Frustrated teams.
Decisions made by committee, often by the people least familiar with the actual problem.
2. Why Do We Keep Doing It This Way?
Mostly, inertia and fear:
Inertia: “This is how we’ve always done it. If we abolish the ARB, what if something goes wrong?”
Fear: “If we don’t have a record of sign-off, who gets blamed if there’s an incident?”
But the world has changed. With modern CI/CD, observability, and AI, you can get most of the value of an ARB – risk management, alignment, documentation – without the logjam.
3. How to Fix (or Bypass) the ARB
Step 1: Move from Calendar to Pipeline
Don’t make teams wait for the next monthly meeting.
Instead, tie reviews to concrete milestones: major design commits, high-risk changes, or integration points.
Use async review whenever possible: RFCs, design docs, or change summaries circulated for comment.
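To make this concrete, here's a minimal sketch of a pipeline-triggered review gate in Python. The path patterns and the "request a review" behavior are illustrative assumptions, not a standard:

```python
# Hypothetical CI step: trigger architecture review from the change itself,
# not from a calendar slot. Path patterns and behavior are illustrative.
import fnmatch

# Paths whose changes historically carry architectural risk (examples only).
HIGH_RISK_PATTERNS = [
    "services/*/api/*",    # public integration points
    "infra/*",             # shared infrastructure
    "db/migrations/*",     # schema changes
]

def needs_architecture_review(changed_files: list[str]) -> bool:
    """Return True if any changed file matches a high-risk pattern."""
    return any(
        fnmatch.fnmatch(path, pattern)
        for path in changed_files
        for pattern in HIGH_RISK_PATTERNS
    )

if __name__ == "__main__":
    diff = ["services/billing/api/invoice.py", "README.md"]
    if needs_architecture_review(diff):
        # In a real pipeline this would open an async RFC thread or request
        # reviewers on the pull request, not block the merge outright.
        print("High-risk change: requesting async architecture review.")
```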
Step 2: Use AI to Pre-Answer Standard Questions
Most ARB meetings waste time on the basics: “Did you consider X? What’s the rollback plan? Who owns this service?”
Use AI to generate a standardized impact analysis and risk summary from the design doc, code diff, and dependencies.
Auto-populate the ARB checklist so humans focus on the real trade-offs.
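Here's a sketch of what that pre-answering might look like. The checklist questions are examples, and `ask_llm` is a placeholder for whatever LLM client your organization actually uses:

```python
# Sketch: pre-answer the standard ARB questions from the design doc.
# The questions are examples; `ask_llm` is a stand-in for whatever
# LLM client your organization uses.

STANDARD_QUESTIONS = [
    "What is the rollback plan?",
    "Which team owns each affected service?",
    "What alternatives were considered, and why were they rejected?",
]

def ask_llm(prompt: str) -> str:
    """Placeholder: wire this to your LLM provider of choice."""
    raise NotImplementedError

def prefill_checklist(design_doc: str) -> dict[str, str]:
    """Draft answers so the meeting can skip straight to trade-offs."""
    answers = {}
    for question in STANDARD_QUESTIONS:
        prompt = (
            "Answer strictly from the design document below. If it does "
            "not address the question, reply exactly 'NOT ADDRESSED'.\n\n"
            f"Question: {question}\n\nDocument:\n{design_doc}"
        )
        answers[question] = ask_llm(prompt)
    return answers
```

Anything that comes back as "NOT ADDRESSED" becomes an automatic flag, which is exactly the kind of surface-level gap you don't want to burn meeting time on.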
How AI Can Help Make ARBs Smarter and Faster
AI isn’t just about automating paperwork. In the context of ARBs, it can:
Generate impact analysis and dependency diagrams from architecture docs and code, so reviewers see the real blast radius at a glance.
Summarize design docs and RFCs for busy stakeholders, highlighting key changes, risks, and open questions.
Flag inconsistencies or missing information (“No rollback plan provided,” “Unclear data owner,” etc.) before a human ever reads the proposal.
Suggest relevant standards, patterns, or previous decisions based on the context of the current review, so teams aren’t reinventing the wheel or missing best practices.
The result:
AI reduces the time spent on repetitive checks and surface-level objections, making the human part of the ARB more focused, substantive, and collaborative.
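Not all of this needs a model, either. A purely rules-based pre-check (the required sections here are illustrative) catches the obvious gaps for free:

```python
# Minimal rules-based pre-check: flag missing sections before a human
# (or a model) reads the proposal. Section names are illustrative.
REQUIRED_SECTIONS = {
    "rollback plan": "No rollback plan provided",
    "data owner": "Unclear data owner",
    "dependencies": "Dependencies not listed",
}

def preflight_flags(design_doc: str) -> list[str]:
    """Return a flag for each required section the doc never mentions."""
    text = design_doc.lower()
    return [flag for section, flag in REQUIRED_SECTIONS.items()
            if section not in text]

# A doc that never mentions rollback gets flagged before review starts.
print(preflight_flags("Data owner: billing team. Dependencies: auth-svc."))
# -> ['No rollback plan provided']
```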
Step 3: Right-Size the Review
Not every change needs the same scrutiny.
Create a risk taxonomy (like in change control):
Standard: No ARB, just automation and logging.
Moderate: Async review, comments required.
High-risk: Synchronous review, but with a clear agenda and pre-read.
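One way to make the taxonomy stick is to encode it as data, so the tier is decided by the change itself rather than by negotiation. The risk signals below are assumptions; use whatever actually predicts incidents where you work:

```python
# Encode the taxonomy as data so the tier is decided by the change
# itself. The risk signals are assumptions; use whatever actually
# predicts incidents in your organization.
from dataclasses import dataclass

@dataclass
class Change:
    touches_schema: bool = False
    adds_external_dependency: bool = False
    crosses_team_boundary: bool = False

def review_tier(change: Change) -> str:
    signals = sum([change.touches_schema,
                   change.adds_external_dependency,
                   change.crosses_team_boundary])
    if signals == 0:
        return "standard"   # no ARB: automation and logging only
    if signals == 1:
        return "moderate"   # async review, comments required
    return "high-risk"      # synchronous review, agenda and pre-read

print(review_tier(Change(touches_schema=True)))  # -> moderate
```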
Step 4: Make the Review Collaborative, Not Adversarial
The best ARBs feel like a design jam, not a tribunal.
Encourage teams to bring in architects early for brainstorming, not just for approval at the end.
Use the review to share lessons, patterns, and reusable solutions – not just to gatekeep.
Step 5: Document Decisions – But Don’t Let Docs Become the Goal
Keep a lightweight, searchable record of:
What was proposed.
What was decided.
Why.
Use tools like RFCs in GitHub, Notion, or Confluence – not 12-page PDFs or endless email threads.
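To show how lightweight this can be, here's a sketch of a decision record as structured data. The fields mirror the list above; storing one JSON file per decision in the repo is an assumption, not a prescription:

```python
# A decision record can be this small and still be searchable.
# The fields mirror the list above; storing one JSON file per
# decision in the repo is an assumption, not a prescription.
import json
from dataclasses import dataclass, asdict, field
from datetime import date

@dataclass
class DecisionRecord:
    title: str
    proposed: str    # what was proposed
    decided: str     # what was decided
    rationale: str   # why
    decided_on: str = field(default_factory=lambda: date.today().isoformat())

record = DecisionRecord(
    title="Adopt event bus for order notifications",
    proposed="Direct service-to-service calls vs. a shared event bus",
    decided="Shared event bus",
    rationale="Decouples producers from consumers; matches existing pattern",
)

# One small JSON file per decision stays grep-able and diff-able.
print(json.dumps(asdict(record), indent=2))
```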
4. Alternatives That Actually Work
Architecture Guilds:
Regular, open sessions where anyone can bring a design for feedback – not approval. Encourages sharing and learning without the pressure.
RFC + Async Review:
Teams submit a short RFC (Request for Comments). Stakeholders comment in-line. Only escalate to a meeting if there’s a real disagreement.
AI-Driven Impact Analysis:
Use LLMs to auto-generate diagrams, dependency maps, and risk summaries so reviewers can focus on substance.
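The dependency map doesn't have to start with an LLM, either. Here's a sketch that derives a module-level import graph (the source layout is hypothetical), which a model or a human can then summarize:

```python
# Sketch: derive a module-level dependency map from Python imports.
# A model (or a human) can then summarize the blast radius from the
# graph. The source layout is hypothetical.
import ast
from pathlib import Path

def import_graph(src_root: str) -> dict[str, set[str]]:
    """Map each .py file under src_root to the top-level modules it imports."""
    graph: dict[str, set[str]] = {}
    for path in Path(src_root).rglob("*.py"):
        deps: set[str] = set()
        for node in ast.walk(ast.parse(path.read_text())):
            if isinstance(node, ast.Import):
                deps.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module.split(".")[0])
        graph[str(path)] = deps
    return graph

# e.g. import_graph("services/billing")
# -> {"services/billing/api.py": {"db", "auth"}, ...}
```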
5. Pitfalls to Avoid
Turning the async review into a slow-motion ARB (“You can’t ship until everyone’s commented”).
Letting AI-generated summaries become a substitute for real understanding.
Failing to get buy-in from leadership or key teams (they’ll route around the new process).
Overcomplicating the tooling – keep it simple and lightweight.
6. Checklist: A Better ARB Process
Reviews are triggered by risk and milestones, not the calendar.
AI or automation handles standard questions and documentation.
High-risk changes get focused, time-boxed review with clear pre-reads.
AI tools generate impact analysis, summarize key changes, and flag missing info before review, so humans can focus on real trade-offs.
If any reviewer or group objects to more than 35% of proposals, they help lead a retrospective to identify root causes and improve the process or standards (a minimal tracker for this metric is sketched after the checklist).
If the objections are consistently valid, that’s a signal your documentation, onboarding, or standards need work, not just that the ARB is a gatekeeper.
Teams can get feedback early, not just at the end.
Decisions and rationale are recorded in a searchable, lightweight format.
The process feels like collaboration, not a gauntlet.
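If you want to act on that 35% threshold, you have to measure it. Here's a minimal sketch of an objection-rate tracker, where the threshold, minimum sample size, and review-log shape are all assumptions to adapt:

```python
# Minimal objection-rate tracker for the checklist item above. The 35%
# threshold, minimum sample size, and log shape are assumptions.
from collections import Counter

THRESHOLD = 0.35
MIN_REVIEWS = 5  # ignore tiny samples

def frequent_objectors(reviews: list[tuple[str, bool]]) -> list[str]:
    """reviews: (reviewer, objected) pairs. Return reviewers over threshold."""
    totals, objections = Counter(), Counter()
    for reviewer, objected in reviews:
        totals[reviewer] += 1
        objections[reviewer] += objected
    return [r for r in totals
            if totals[r] >= MIN_REVIEWS
            and objections[r] / totals[r] > THRESHOLD]

log = [("security", True)] * 4 + [("security", False)] * 6 + [("infra", False)] * 5
print(frequent_objectors(log))  # -> ['security'] (4/10 = 40%)
```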
If you can check most of these, you’re on your way to an ARB that actually helps teams ship better systems, without becoming a logjam.
If you have a war story about an ARB gone wrong (or right), I’d love to hear it. Otherwise, let’s keep raising the bar, and lowering the barriers.