Field Deployments

What an AI Opportunity Matrix Looks Like in Practice

A real output from a discovery workshop with a 200-location franchise network: what got prioritized, what didn't make the cut, and why the matrix is harder to build than it looks.


Most AI opportunity frameworks look the same: a 2x2 quadrant, "high value" on the Y axis, "low effort" on the X axis, and a handful of use cases scattered across the cells. Tidy, generic, and not particularly useful.

What follows is closer to what actually comes out of a real discovery workshop with a 200-location franchise network: a home services brand with operations spread across 14 regional clusters, 3 corporate-owned locations, and 197 franchisees at varying stages of technological maturity. The names and identifying details have been changed. The structure, the trade-offs, and the uncomfortable decisions are real.

What the workshop surfaces that a survey won't

The franchisor arrived at the workshop already anchored, expecting to prioritize four or five AI tools. By day two, the team had mapped 23 candidate opportunities, from predictive parts ordering at the technician level to AI-assisted compliance tracking for the franchisor's field support team.

Every stakeholder in the room had a different version of what "the problem" actually was.

Operations focused on scheduling inefficiency: technicians rerouted mid-day, double-booked, or dispatched without the right parts. Marketing focused on lead conversion: inbound calls during peak hours going to voicemail at 60% of locations. Finance worried about franchisor visibility: no reliable way to see which locations were underperforming on customer satisfaction until the review scores were already public.

A survey would have collected all three. The workshop forced the conversation about which problem, if solved, would make the other two easier.

Most organizations today are excited about AI but paralyzed by choice. Every department has ideas. Leadership wants a unified strategy, but the picture is chaotic: disconnected ideas, unclear ROI, mismatched priorities, and no shared criteria for deciding which opportunities matter.
— Fractional CAIO, AI Opportunity Matrix methodology

How the matrix got built

The team used two axes: operational impact (effect on revenue, customer experience, or franchisor visibility at the network level) and deployment feasibility (data readiness, integration complexity, franchisee adoption likelihood). A third factor, franchisee resistance risk, was tracked separately as a modifier that could shift any use case's priority regardless of where it landed on the quadrant.
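The two-axes-plus-modifier logic can be sketched as a simple tiering function. This is an illustration, not the workshop's actual scoring rubric; the thresholds, scales, and names are all hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Opportunity:
    name: str
    impact: int        # operational impact, 1-5 (revenue, CX, visibility)
    feasibility: int   # deployment feasibility, 1-5 (data, integration, adoption)
    resistance: int    # franchisee resistance risk modifier, 0-2


def tier(op: Opportunity) -> int:
    """Bucket an opportunity into tiers 1-4.

    High impact plus high feasibility lands in Tier 1; resistance risk
    can push a use case down a tier regardless of its quadrant.
    """
    if op.impact >= 4 and op.feasibility >= 4:
        base = 1
    elif op.impact >= 4:
        base = 2
    elif op.feasibility >= 3:
        base = 3
    else:
        base = 4
    return min(base + op.resistance, 4)


calls = Opportunity("AI call handling", impact=5, feasibility=5, resistance=0)
scheduling = Opportunity("Predictive scheduling", impact=5, feasibility=3, resistance=0)
print(tier(calls))       # 1
print(tier(scheduling))  # 2: high impact, but data readiness drags feasibility
```

The point of encoding it at all is the modifier: resistance risk is applied after the quadrant placement, which is why a use case can sit in the "build now" quadrant and still not be built now.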

After two days, 23 opportunities collapsed into four tiers.

Tier 1: Build now. Two use cases survived every filter: AI call handling for inbound leads, and automated compliance reporting for the field support team. Both had clear data pipelines, minimal new infrastructure requirements, and strong franchisor control over rollout. Combined, they addressed the lead conversion gap and the visibility problem simultaneously.

Tier 2: Build next. Predictive scheduling and dispatch optimization landed here. High operational impact, but dependent on clean location-level data that roughly 40% of the network didn't yet have. Starting here would mean building on an unstable foundation.

Tier 3: Evaluate in 12 months. Parts forecasting and inventory optimization. Data existed at some locations, not network-wide. Valuable at scale, but the dependency on franchisee buy-in and supplier integrations made it a 12-to-18-month conversation, not a 90-day one.

Tier 4: Don't build. Six of the original 23 opportunities were cut entirely. Not because they weren't interesting, but because the underlying problems they were solving were either exaggerated (the data didn't support the pain point), dependent on capabilities the network wouldn't have for years, or solving something that could be addressed more cheaply without AI at all.

Insight

Tier 4 is where most AI roadmaps fail their organizations. Listing 23 opportunities without a forcing function to eliminate some of them produces a backlog, not a strategy. The value of the workshop is as much in what gets cut as in what gets prioritized.

What didn't make the cut, and why

The most contested cut was an AI-powered training and onboarding tool for new franchisees. It was championed by the franchise development team, had genuine ROI potential, and generated real enthusiasm in the room.

It didn't make Tier 1 or Tier 2 for three reasons. First, the development team didn't have the content infrastructure to feed the system: the training materials existed as PDFs and recorded webinars, not structured data. Second, franchisee adoption would be voluntary during onboarding, which meant inconsistent coverage from day one. Third, the opportunity wasn't upstream of the network's current pain: locations that were struggling with customer satisfaction weren't struggling because of onboarding quality; they were struggling because of how inbound demand was being handled.

The same logic applied to a proposal for AI-generated local marketing content. Strong idea for a single-unit operator. For a 200-location network with brand standards and regional variation in customer demographics, the approval workflow alone would have consumed the efficiency gains.

78%

of franchisors plan AI expansion to all locations by 2026

Gitnux Franchise Industry Report 2026

Why multi-location networks face a different kind of matrix

A single-unit business building an AI opportunity matrix is making a technology decision. A 200-location franchise network is making an organizational decision that happens to involve technology.

Every use case in the matrix has to answer a question that doesn't exist for a solo operator: who owns this across the network? If the franchisor mandates the tool, there's a rollout cost and a change management problem. If it's franchisee-optional, there's a data fragmentation problem within six months. If it requires the franchisee to change their daily workflow without a direct and visible benefit to them, the adoption rate predicts itself.

Both opportunities that made Tier 1 for this network passed what the team started calling the "mandate test": could the franchisor deploy this as part of the operating system, the same way they enforce brand standards, without triggering franchisee resistance significant enough to undermine the results?

AI call handling passed. Franchisees with the highest call abandonment rates had the most to gain, and the franchisor controlled the phone infrastructure. Automated compliance reporting passed because it reduced work for field support teams, not franchisees; nobody was being asked to do something new.
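The mandate test reduces to a small predicate. This sketch is illustrative only; the parameter names are assumptions, not terms the workshop team used:

```python
def passes_mandate_test(franchisor_controls_rollout: bool,
                        adds_franchisee_work: bool,
                        visible_franchisee_benefit: bool) -> bool:
    """Can the franchisor deploy this as part of the operating system
    without triggering resistance that undermines the results?"""
    if not franchisor_controls_rollout:
        return False
    # New work for franchisees is tolerable only with a direct, visible benefit
    return (not adds_franchisee_work) or visible_franchisee_benefit


# AI call handling: franchisor owns the phone infrastructure, no new franchisee work
print(passes_mandate_test(True, False, False))   # True
# Predictive scheduling at workshop time: changes the technician assignment
# workflow, and the franchisor doesn't control that rollout
print(passes_mandate_test(False, True, False))   # False
```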

Predictive scheduling didn't pass the mandate test at the time of the workshop. It required franchisees to change how they assigned technicians, and the network didn't yet have the trust-building groundwork in place to make that transition smooth. It landed in Tier 2 rather than Tier 1 because the organization wasn't ready, even though the technology was.

Example

A 47-location regional service franchise moved AI-powered document intelligence into Tier 1 after mapping their Monday morning workflow: field coordinators were spending the first two hours of the week manually processing supplier invoices, maintenance logs, and compliance forms from the weekend. Once that bottleneck appeared on the matrix, the ROI calculation was straightforward.

What the output actually looks like

The matrix isn't a static document. The workshop produced a working artifact: a ranked list of opportunities with explicit rationale for each placement, the assumptions that would need to change to move a Tier 2 opportunity to Tier 1, and a 90-day action plan covering the two Tier 1 builds.

What proved most useful was the assumptions log: a record of what the team believed to be true at the time of prioritization that could prove wrong. For predictive scheduling, the key assumption was that 60% of the network would have clean location-level scheduling data within 18 months. If that number didn't materialize, the opportunity would stay in Tier 2 regardless of the business case.

That kind of explicit documentation is rare in AI planning exercises. It's also what turns a matrix from a slide deck that ages poorly into a living decision framework.
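An assumptions log doesn't need special tooling; what matters is that each entry names the belief, a review date, and the tier move it gates. A minimal sketch (the 60% belief is from the workshop; the structure and dates are hypothetical):

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Assumption:
    opportunity: str
    belief: str        # what the team believed at prioritization time
    review_by: date    # when to re-check it
    unlocks_tier: int  # tier the opportunity moves to if the belief holds


log = [
    Assumption(
        opportunity="Predictive scheduling",
        belief="60% of locations will have clean scheduling data within 18 months",
        review_by=date(2026, 6, 1),
        unlocks_tier=1,
    ),
]


def due_for_review(log: list[Assumption], today: date) -> list[Assumption]:
    """Assumptions whose review date has passed and must be re-validated."""
    return [a for a in log if a.review_by <= today]


print([a.opportunity for a in due_for_review(log, date(2026, 7, 1))])
```

A log shaped like this is what lets a Tier 2 opportunity be promoted (or quietly retired) on a schedule, instead of resurfacing every quarter as a fresh debate.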

Key takeaways

  • The most valuable output of a discovery workshop is often what gets cut, not what gets prioritized
  • Multi-location complexity means every AI opportunity needs to pass an organizational test, not just a technology test
  • The "mandate test" (can the franchisor deploy this as part of the operating system?) is a reliable filter for franchise network AI decisions
  • A matrix without an assumptions log is a snapshot; with one, it becomes a decision framework that evolves with the network

Get Started

Ready to find the AI opportunities in your franchise network?

We'll help you identify where AI can drive real operational impact, and deploy it.