By Gosai Digital · January 2026 · 10 min read

Scoping AI Projects: The Framework That Kills Pilot Graveyards

95% of AI pilots fail to deliver measurable ROI. The difference between the 5% that succeed and the graveyard isn't technology; it's scoping. Here's the framework we use to ensure AI projects actually ship.


The Pilot Graveyard is Real

Recent research paints a grim picture of enterprise AI adoption:

  • 95% of AI pilots fail to deliver ROI
  • 88% of AI pilots never reach production
  • 42% of companies abandoned AI initiatives in 2025
  • 9 months: average time to scale (vs. 90 days for mid-market)

Why AI Pilots Fail

None of the core failure modes are technical. Not one.

AI does not fail at scale because the model stops working. It fails because the environment it enters is fundamentally different from the one in which the pilot succeeded.

- Enterprise AI Implementation Research

The Framework

The 5-Question Scoping Framework

Before writing a single line of code or evaluating any vendor, answer these five questions. If you can't answer them clearly, you're not ready to start.

1. What decision are we improving?

AI doesn't exist in a vacuum. Every successful AI project improves a specific decision that humans currently make. Not a process. Not a workflow. A decision.

Too vague: "We want to use AI for customer service"

Decision-focused: "We want AI to decide which support tickets can be auto-resolved vs. need human review"

The decision framing forces clarity. It identifies who currently makes this decision, how often, and what data they use. This becomes the foundation for everything else.

2. What does "good" look like?

Before building anything, define what success means in concrete, measurable terms. Not "better customer experience"; actual numbers.

If stakeholders can't agree on what "good" looks like before you start, they definitely won't agree after you ship. This alignment is non-negotiable.

3. What's the human-in-the-loop shape?

AI scales because it's trusted. Trust comes from the right human oversight model - not too much (defeats the purpose), not too little (creates risk).

  • Human approves: AI recommends, human decides. Good for high-stakes or early-stage trust building.
  • Exception-based: AI acts autonomously, escalates edge cases. Best for high-volume, clear-cut decisions.
  • Audit-based: AI acts fully autonomously; humans review samples periodically. For mature, proven systems.

Document who reviews, what triggers escalation, and what the response SLA is. This isn't bureaucracy; it's what separates pilots from production systems.
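The three shapes can be sketched as a single routing policy. This is a minimal illustration, not a prescribed implementation; the `Decision` record, the 0.9 confidence floor, and the mode names are assumptions of the sketch.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """One AI-produced decision. Fields are illustrative assumptions."""
    confidence: float   # model confidence, 0.0-1.0
    is_edge_case: bool  # flagged by business rules

def route(decision: Decision, mode: str) -> str:
    """Return who acts on the decision under each oversight shape."""
    if mode == "human_approves":
        # AI recommends, human decides: everything goes to review.
        return "human_review"
    if mode == "exception_based":
        # AI acts autonomously, escalating edge cases and low confidence.
        if decision.is_edge_case or decision.confidence < 0.9:
            return "human_review"
        return "auto_resolve"
    if mode == "audit_based":
        # AI acts fully autonomously; sample review happens offline.
        return "auto_resolve"
    raise ValueError(f"unknown oversight mode: {mode}")
```

Whatever shape you choose, the escalation trigger (here, the confidence floor and the edge-case flag) should come straight out of the scoping document, not be invented during implementation.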


4. What are the integration requirements?

Technical limitations cause 43% of AI project failures. An AI project is 70% a data project. Map your integration reality before you commit.

Many pilots work in controlled environments but fail at scale because integration was an afterthought. If you need data team availability, book it now.

5. What are the kill criteria?

The 5% that succeed have something the 95% don't: the discipline to kill projects that aren't working. The kill criteria conversation is the hardest one you'll have; it's also the most valuable. Define your off-ramps before you start.

  • Accuracy floor: "If accuracy drops below 85% after 2 weeks, we pause and reassess."
  • Budget cap: "Total pilot investment capped at $50K. No budget extensions without go/no-go review."
  • Time box: "Decision to scale or kill by Week 8. No exceptions."

Sunk cost fallacy kills more AI projects than bad technology. Pre-commit to your exit criteria when emotions aren't involved.
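Pre-commitment is easier when the off-ramps live in a config rather than in someone's head. A minimal sketch, with thresholds taken from the examples above; the field names are purely illustrative:

```python
KILL_CRITERIA = {
    "accuracy_floor": 0.85,    # pause and reassess below this
    "budget_cap_usd": 50_000,  # no extensions without go/no-go review
    "time_box_weeks": 8,       # scale-or-kill decision deadline
}

def breached(status: dict, criteria: dict = KILL_CRITERIA) -> list[str]:
    """Return the list of kill criteria the pilot has breached."""
    hits = []
    if status["accuracy"] < criteria["accuracy_floor"]:
        hits.append("accuracy_floor")
    if status["spend_usd"] > criteria["budget_cap_usd"]:
        hits.append("budget_cap_usd")
    if status["week"] >= criteria["time_box_weeks"]:
        hits.append("time_box_weeks")
    return hits
```

Run it in the weekly review; a non-empty list forces the go/no-go conversation that sunk-cost thinking would otherwise postpone.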

Red Flags During Scoping

If you hear any of these during scoping conversations, pump the brakes. These are early warning signs of a project headed for the graveyard.

Planning Red Flags

  • "We need this to be fully autonomous from day one"
  • "Legal will review it once it's built"
  • "We don't have the data yet, but we'll get it"

Ownership Red Flags

  • "Everyone's excited about this" (but no clear owner)
  • "The board wants us doing something with AI"
  • "We'll scale it after we prove the concept"
  • "The team doing it today will adopt it" (without asking them)

Excitement without accountability is how projects drift. Ask: "Who loses their bonus if this fails?" If no one raises their hand, no one owns it.

The Minimum Viable Scope

Cut. Then cut again. The 90-day scalers are ruthless about scope.

Over-scoped Pilot

  • Handle all customer inquiries
  • Multi-language support
  • Integration with 5 systems
  • Sentiment analysis + escalation
  • Full analytics dashboard

Minimum Viable Pilot

  • Handle password reset requests only
  • English only
  • CRM integration only
  • Manual escalation to human
  • Simple success/fail tracking

AI Project Readiness Scorecard

Before greenlighting any AI project, run through this checklist. Score each item 0-2 (0 = missing, 1 = partial, 2 = complete). A score below 14 means you're not ready.

The Bottom Line

AI projects don't fail because of technology. They fail because of scope. The 5% that succeed invest heavily in scoping before they write a single line of code or evaluate a single vendor.

The framework above isn't bureaucracy; it's discipline. It forces the hard conversations early, when changing course is cheap. It surfaces the integration challenges before they become budget overruns. It builds the trust infrastructure that lets AI actually scale.

If you can't answer the five questions clearly, you're not ready to start. And that's okay; better to know now than after burning $500K on a pilot that was never going to work.

Need Help Scoping Your AI Project?

We run structured scoping workshops that answer these five questions in 2-3 weeks. No commitment to build - just clarity on whether and how to proceed.

The Core Failure Modes

No Clear Business Objective

This is the killer: pilots driven by tech teams without clear business outcomes. "Let's try AI" is not a strategy.

Case in point:

A $2B retailer spent 18 months and $1.2M on an "AI-powered customer insights platform." When asked what decision it would improve, the answer was "we'll figure that out once we see what the AI finds." The project was killed 3 months before launch; no one could explain what it was supposed to do.

No Success Metrics

Nearly one-third of CIOs had no clear metrics for their AI POCs. If you can't measure success, you can't prove value.

Treated as Add-On

AI pilots fail because agents are treated like add-ons instead of being embedded into existing workflows.

Governance Gaps

Most pilots work in controlled environments, but at scale, legal shuts them down because there's no framework for compliance.

No Cross-Functional Ownership

Pilots grow into products only when cross-functional ownership is built from day one.

"Users don't reject AI because it's occasionally wrong. They reject it because they can't tell when or why it might be wrong."

- On Trust & Explainability


Success Metric Template

  • Primary metric: the one number that defines success (e.g., "Reduce ticket resolution time from 24 hours to 4 hours")
  • Guardrail metrics: things that must NOT get worse (e.g., "Customer satisfaction stays above 4.2")
  • Timeframe: when will we evaluate? (e.g., "After 30 days in production with 1,000+ decisions")
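The template drops naturally into a small data structure that a pilot review can evaluate. A sketch, assuming a lower-is-better primary metric; the class and field names are illustrative, not a required schema:

```python
from dataclasses import dataclass

@dataclass
class SuccessSpec:
    """Success metric template from the scoping doc (illustrative fields)."""
    primary_metric: str           # e.g. "resolution_hours"
    target: float                 # e.g. 4.0, down from 24.0 (lower is better here)
    guardrails: dict[str, float]  # metric name -> minimum acceptable value
    eval_after_days: int = 30
    min_decisions: int = 1000

def passes(spec: SuccessSpec, observed: dict[str, float]) -> bool:
    """Success = primary target hit AND no guardrail violated."""
    if observed[spec.primary_metric] > spec.target:
        return False
    return all(observed[m] >= floor for m, floor in spec.guardrails.items())
```

The point of the guardrail check is that hitting the primary number while customer satisfaction slides still counts as a failure.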

Integration Checklist

  • What data sources does AI need to access?
  • What are the GDPR/privacy constraints?
  • What systems need to trigger AI actions?
  • What systems need to receive AI outputs?
  • Who owns each integration point?
  • What's the latency requirement?
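Captured as data, the checklist can flag blockers automatically. A sketch; the systems, fields, and gap rules below are assumptions for illustration, not a required format:

```python
# Each integration point from the checklist, captured as data.
integrations = [
    {"system": "CRM", "direction": "source", "owner": "data-team",
     "latency_ms": 500, "pii": True},
    {"system": "Ticketing", "direction": "sink", "owner": None,
     "latency_ms": 2000, "pii": False},
]

def scoping_gaps(points: list[dict]) -> list[str]:
    """Flag checklist items that would block a go decision."""
    gaps = []
    for p in points:
        if p["owner"] is None:
            gaps.append(f"{p['system']}: no integration owner")
        if p["pii"] and "privacy_review" not in p:
            gaps.append(f"{p['system']}: GDPR/privacy review not recorded")
    return gaps
```

An empty gap list is the "integration reality is mapped" evidence question 4 asks for.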

"We'll figure out the metrics later"

Translation: We don't know what success looks like, so we'll call anything that ships a win. Six months later, someone will ask "what did we actually get?" and no one will have an answer.

The Scope Cutting Exercise

1. List every feature/capability you want
2. For each, ask: "Can we prove value without this?"
3. If yes, move it to Phase 2
4. Repeat until you can't cut anymore
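The exercise above is a simple partition. In this sketch, the `cuttable` flag stands in for an honest answer to step 2; the flag and the example wishlist are assumptions of the illustration:

```python
def cut_scope(features: list[dict]) -> tuple[list[str], list[str]]:
    """Split a wishlist into a minimum viable pilot and Phase 2.

    A feature is cuttable if you can prove value without it.
    """
    pilot, phase2 = [], []
    for f in features:
        (phase2 if f["cuttable"] else pilot).append(f["name"])
    return pilot, phase2

wishlist = [
    {"name": "password reset handling", "cuttable": False},
    {"name": "multi-language support", "cuttable": True},
    {"name": "full analytics dashboard", "cuttable": True},
]
```

The hard part isn't the loop; it's answering "can we prove value without this?" honestly for every row.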

Scorecard criteria (score each 0-2):

  • Decision being improved is clearly defined
  • Primary success metric is measurable
  • Guardrail metrics are established
  • Human-in-the-loop model is documented
  • Data sources are accessible
  • Integration owners are identified
  • Legal/compliance has reviewed
  • Kill criteria are pre-defined
  • Executive sponsor is committed
  • End users have been consulted

Total: /20 (14+ to proceed)
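Totaling the card is trivial to automate so the 14-point bar can't quietly slip. A sketch with shorthand criterion names standing in for the rows above (the names themselves are assumptions):

```python
CRITERIA = [
    "decision_defined", "primary_metric_measurable", "guardrails_set",
    "hitl_documented", "data_accessible", "integration_owners",
    "legal_reviewed", "kill_criteria_set", "sponsor_committed",
    "users_consulted",
]

def readiness(scores: dict[str, int]) -> tuple[int, bool]:
    """Total the 0-2 scores; 14+ out of 20 means proceed."""
    for name in CRITERIA:
        if not 0 <= scores[name] <= 2:
            raise ValueError(f"{name}: scores must be 0, 1, or 2")
    total = sum(scores[name] for name in CRITERIA)
    return total, total >= 14
```

Note that scoring every item "partial" (1) yields 10/20: a project that is halfway ready on everything is still not ready.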