Buyer's Playbook

How to get board approval for AI in 2026

By Ashit Vora · 12 min read

What Matters

  • Boards reject AI proposals for one of three reasons - vague ROI, big-bang budget asks, or no failure plan. Fix all three before you walk in.
  • Cost avoidance beats productivity gains in every CFO conversation. "We won't need to hire 4 more people" is worth more than "our team will save 10 hours a week."
  • A phased 12-week structure turns a risky $400K decision into a low-risk $60K decision with a clear off-ramp.
  • Unit economics close budgets. "$1.50 per ticket vs. $22 per ticket" is verifiable. "$500K annual savings" is debatable.
  • Include the failure scenario - it builds credibility, not doubt. Boards trust people who've thought through what happens if it doesn't work.

You've been living with this AI project for months. You know the use case, you know the vendor, you know the ROI math. And you're about to walk into a 30-minute board meeting where all of that can fall apart in the first question.

Here's what that question usually sounds like: "What happens if it doesn't work?"

Most AI proposals die not because the business case is weak, but because the presenter hasn't prepared for the three questions that every board asks, in every company, every time someone pitches technology spend.

This is a playbook for that meeting. Written for the COO, VP of Operations, or CTO who has to get budget approved and needs to walk in with a case that survives contact with a skeptical CFO.

78% of AI proposals fail board approval.

Not because the idea is wrong - because the framing is. Vague ROI, no failure plan, and big-bang asks kill more AI projects than bad technology.

The three questions every board asks

Before you build your deck, answer these three questions in writing. If you can't answer them in two sentences each, you're not ready.

Question 1: What's the ROI and how will you measure it?

The trap: answering this with productivity claims. "Our team will save 10 hours a week" is not an ROI number. A CFO will ask "so what?" and you'll spend the next 5 minutes trying to turn hours into dollars. Don't set yourself up for that.

The answer that works: unit economics plus a baseline. "Right now it costs us $22 to resolve a support ticket. With this agent, it costs $1.50. We handle 4,000 tickets a month. That's $82,000 in monthly cost avoidance, and we can measure the before and after on day one."
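
That answer is easy to sanity-check before the meeting. Here's a minimal sketch of the arithmetic, using the illustrative figures from the example above - swap in your own baseline:

```python
# Unit economics behind the cost-avoidance answer (illustrative figures from the example above).
cost_per_ticket_today = 22.00  # current fully loaded cost per ticket, in dollars
cost_per_ticket_ai = 1.50      # projected per-ticket cost with the agent
tickets_per_month = 4_000

monthly_cost_avoidance = tickets_per_month * (cost_per_ticket_today - cost_per_ticket_ai)
print(f"Monthly cost avoidance: ${monthly_cost_avoidance:,.0f}")  # -> $82,000
```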

Question 2: What happens if it doesn't work?

This is the question most presenters dread, and none of them should. Boards don't ask it to be obstructionist. They ask because they've approved projects that failed, and they want to know you've thought about it.

The answer: a specific, bounded failure scenario. "If Phase 1 doesn't hit our 60% automation target in 12 weeks, we stop, assess, and present findings before committing to Phase 2. Total exposure: $60K. We revert to the current process, which hasn't changed."

Question 3: Who owns this after launch?

AI projects die in the handoff between the vendor and the internal team. Boards know this. If your answer is "the vendor will handle it," you've already lost the vote.

The answer: name a person, give them a title, describe their involvement. "Sarah from Ops owns this. She's been in every sprint review and will manage the vendor relationship post-launch. We've budgeted 4 hours a week for this role."

What boards approve vs. what they reject

| Dimension | Gets approved | Gets killed | Why |
| --- | --- | --- | --- |
| ROI framing | Cost per task: before vs. after | Annual productivity gain estimate | Unit economics are verifiable - aggregate projections invite debate |
| Budget ask | Phased: $60K for a 12-week pilot | Full program: $400K upfront | A small first ask with a kill switch feels like a controlled bet |
| Failure plan | Phase 1 exit criteria defined upfront | No failure scenario included | Planning for failure signals maturity, not doubt |
| Ownership | Named internal owner + schedule | Vendor-managed post-launch | Boards fund people, not vendors |
| Benchmark | Competitor already doing this | We'll be first movers | "Others have done it" is safer than "we'll pioneer it" |

The difference between approval and rejection is almost never the quality of the idea - it's the quality of the risk framing.

Why productivity gains lose and cost avoidance wins

This is the single most important reframe you can make before the meeting.

"Our team will save 10 hours a week per person" sounds impressive until the CFO does the math. 10 hours x 20 people x $40/hour = $8,000/week. But are you reducing headcount? No. Are you avoiding a hire? Maybe. Are you reallocating those 200 hours to revenue-generating work with a measurable output? Probably not.

So what actually happens? Your team has 10 hours a week to do other things. That is not a budget line item. It does not show up in the P&L. And when the CFO asks "how do we realize this savings?" you don't have an answer.

Cost avoidance is different. "We currently pay $18,000/month to a BPO for invoice processing. This agent replaces that contract." That's a line item the CFO can cut. It shows up in the next month's financials. There's no ambiguity about whether the savings are real.
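
The contrast is easy to put in numbers. A small sketch using the figures from the two examples above - note that only the second one maps to a line the CFO can actually cut:

```python
# Productivity framing: real arithmetic, but not a budget line item.
hours_saved = 10          # hours saved per person per week
team_size = 20
loaded_rate = 40.00       # fully loaded hourly cost, in dollars
paper_savings = hours_saved * team_size * loaded_rate
print(f"Paper savings: ${paper_savings:,.0f}/week")  # $8,000/week - never shows up in the P&L

# Cost-avoidance framing: a contract that disappears from next month's financials.
bpo_contract = 18_000.00  # monthly BPO spend the agent replaces
print(f"Realizable savings: ${bpo_contract:,.0f}/month")
```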

Three forms of cost avoidance that boards fund:

  • Eliminating a vendor contract or outsourcing spend
  • Avoiding a hire you'd otherwise have to make (headcount avoidance)
  • Reducing error-driven rework costs (every error has a dollar figure attached)

Productivity gains only count if:

  • They result in headcount reduction (you're reducing spend)
  • They allow the team to take on measurably more revenue-generating work (with targets attached)
  • They prevent a hire you can document would have been needed

If your case doesn't fit one of these three conditions, reframe it as cost avoidance or don't present it as financial ROI.

The structure that gets approved: phase-gated investment

The single biggest reason AI proposals fail is the ask size. A $400K commitment for an unproven project requires a full capital expenditure review, board-level sign-off in most companies, and months of due diligence.

A $60K pilot with a 12-week timeline and a defined outcome requires a budget owner's approval in most companies. Same project - different entry point.

Phase-gated AI investment structure

1. Phase 1: Prove it (12 weeks) - get in the door. $40K-$80K. One workflow. One measurable outcome. Defined success criteria before kickoff. Kill switch if targets aren't met.

2. Decision point - earn the next dollar. Review Phase 1 data. Did it hit the automation rate and unit economics targets? Yes: proceed. No: stop, present findings, reassess.

3. Phase 2: Scale it (12 weeks) - prove it at scale. $60K-$120K. Full production deployment. Monitoring, edge case handling, and integration hardening. 90-day ROI measurement period.

4. Phase 3: Expand (ongoing) - compound the returns. $100K-$200K+. Additional workflows built on proven infrastructure. Each new use case costs 30-50% less than the first.

Phase 1 is the only thing you're asking for in the meeting. The board approves Phase 1 because:

  • The dollar amount is within a single approver's authority at most companies
  • The timeline is short enough to see results before the next quarterly review
  • The defined success criteria mean there's an objective outcome, not a judgment call
  • The kill switch limits total downside exposure

You mention Phase 2 and 3 in the deck - one slide, two bullets. You are not asking for that money today. You're showing the board what winning looks like so they understand what they're actually funding.

Building the deck: five slides that close

Keep it short. A board deck for a $60K pilot should not be longer than 8 slides. Here's the structure that works:

Slide 1: The problem (with numbers) Not "our support team is overwhelmed." Specific: "We handle 4,000 support tickets per month at $22 per ticket. Volume has grown 18% year over year. Without intervention, we add 3 headcount by Q3 at $75K each."

Slide 2: The solution (one workflow, one outcome) Not "AI transformation." Specific: "An AI agent that handles Tier 1 tickets - order status, returns, password resets. 60-70% automation rate, reducing per-ticket cost from $22 to $1.50-$2.00."

Slide 3: The business case (unit economics) One table. Before and after. Monthly cost today vs. monthly cost with AI. Break-even timeline - there's a worked sketch after this slide list. Don't show a 3-year NPV model - it looks like you're papering over weak unit economics.

Slide 4: The plan (phase-gated) Phase 1 ask with timeline, exit criteria, and cost. Mention Phase 2 and 3 briefly. Show the decision point between phases.

Slide 5: Risk and failure scenario What happens if Phase 1 doesn't hit targets. Total exposure. Reversion plan. Who owns the decision to proceed or stop.

Then a final slide with a comparable - a company in your industry that has done this, with results. "Shopify's support AI handles 65% of tickets without human involvement" lands better than any ROI projection.
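
The break-even timeline on Slide 3 is one division, but it's worth scripting so you can rerun it as the numbers firm up. A sketch, assuming the pilot cost and unit economics used as examples in this article:

```python
# Break-even timeline for the Slide 3 table (illustrative figures from this article).
phase_1_cost = 60_000.00
monthly_savings = 4_000 * (22.00 - 1.50)  # $82,000/month in cost avoidance

break_even_months = phase_1_cost / monthly_savings
print(f"Break-even: {break_even_months:.1f} months after go-live")  # -> 0.7 months
```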

The benchmark play: who's already doing this

Boards fund things that other successful companies are doing. Not because they're followers - because it validates the risk. "We'd be the first to try this" is a red flag. "These three companies in our industry are already doing this, and here are their results" is a green light.

Before the meeting, collect three examples. They don't have to be direct competitors. They can be companies in adjacent industries, similar company sizes, or well-known names that a board member would recognize.

What to include for each example:

  • Company name and industry
  • What they automated
  • Measurable outcome (resolution rate, cost reduction, time saved that translated to headcount avoided)
  • Source (published case study, press release, or credible third-party report)

If you can't find three examples with real numbers, you can use category data. "Enterprise companies deploying AI agents in support report average 60-70% automation rates and $15-$20 per ticket savings, per [McKinsey's 2025 State of AI report]." Published benchmarks from credible sources work when specific company examples aren't available.

Framing your benchmark examples

Direct competitor - the strongest framing. A company your board knows - ideally a named competitor - is already using AI for this. The implication: if we don't move, they widen the gap.

  • Best for: competitive industries where the board tracks competitor moves closely.
  • Watch for: don't use a competitor who has publicly struggled with AI - it cuts both ways.

Adjacent industry leader - a safe framing. A company in a related industry (similar size, similar process) has deployed and published results. This validates the approach without the competitive pressure angle.

  • Best for: companies where direct competitor data isn't available or relevant.
  • Watch for: make sure the process is analogous - "Amazon does AI" doesn't validate your specific use case.

Category benchmark - the baseline framing. Published research from McKinsey, Gartner, or Forrester showing average results for companies in your use case category. Less specific, but credible.

  • Best for: novel use cases where company-level examples are scarce.
  • Watch for: lead with specific numbers from the research - don't just name-drop the source.

What to say when they ask "why now?"

You'll get this question. The honest answer is also the right one: the cost of waiting is real and quantifiable.

"Every month we don't do this, we spend $22 per ticket on a process that costs $1.50 with AI. At 4,000 tickets per month, that's $82,000 in avoidable cost per month. We've already missed $246,000 in the past quarter. Phase 1 costs $60K. We recover that in the first month after go-live."

That math does two things. It makes "not now" feel expensive. And it makes the $60K ask feel small relative to the opportunity. Boards approve things when the cost of inaction is higher than the cost of action.
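
Bring that cost-of-waiting math pre-computed for a few horizons. A minimal sketch with the same illustrative figures:

```python
# Cost of inaction: avoidable spend accumulates for every month the decision waits.
monthly_avoidable = 4_000 * (22.00 - 1.50)  # $82,000/month
phase_1_ask = 60_000.00

for months in (1, 3, 6):
    missed = months * monthly_avoidable
    print(f"{months} month(s) of delay: ${missed:,.0f} missed "
          f"({missed / phase_1_ask:.1f}x the Phase 1 ask)")
```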

A second angle: the team. If you have 2-3 people manually doing the work the AI would handle, say it plainly. "We have two analysts spending 60% of their time on this. They were hired to do higher-value work. This frees them to do that. If we keep growing, we'd need to hire a third analyst in six months. This prevents that hire."

After the meeting: the follow-up that closes

If the board approves Phase 1 in the meeting, great. If they ask for more information or table the decision, you have one job: remove the objection they named, and nothing else.

The mistake most people make is over-responding. They go back with 20 more slides trying to anticipate every possible question. That creates more questions.

Instead: name the specific objection, address it with one slide or one document, and re-request the decision. "You asked about what happens to the support team if automation hits 70%. Here's the redeployment plan for those two roles - both move to account management where we're currently understaffed. Can we get approval to start Phase 1?"

Short, specific, asks for a decision.

At 1Raft, every project we start has already been through this process with the client's internal team. We help build the business case before we write a line of code - not because it's good for sales, but because an AI project without board-level alignment almost always stalls at launch. The 12-week sprint structure was designed specifically to fit a phase-gated approval process. Phase 1 is the pilot. Phase 2 is the rollout. The board never has to approve more than one phase at a time.

If you're building your AI business case right now, here's the full ROI framework we use - including the TCO model that vendors don't show you and the unit economics structure that actually closes budgets.
