TD SYNNEX · CPB Engagement Ops Guide
The partner’s field manual for Copilot activation, agent builds, and managed AgentOps — grounded in Microsoft best practices and built for SMB CSP delivery.
TD SYNNEX · Copilot Practice Builder · Partner Enablement Series · April 2026

This brief is the research foundation and narrative explanation behind the two worklist tabs in the CPB Workbook — Tab 8 (Copilot Activation Worklist) and Tab 9 (Agent Build Engagement Worklist). It explains the logic behind every phase and gate, grounds each framework in published Microsoft guidance and MSP field research, and shows real customer examples mapped to where they enter the checklists. Read this once before your first engagement. Run the workbook tabs during delivery.

Where This Guide Fits in the CPB Toolkit
Step 1 · The Playbook: Strategy, tiers, monetization models, and the financial case. Read to orient your practice.
Step 2 · Workbook Tabs 1–5: Model your numbers. Design the practice. Stress-test with Bear/Base/Bull. Make the internal case.
Step 3 · Workbook Tabs 6–7: Kicker math and RACI before scoping. Sprint templates when pricing an engagement.
Step 4 · Workbook Tabs 8–9: Live project trackers during delivery. One tab open per active engagement. Every gate gets a status.
You Are Here · Engagement Ops Guide: The research and reasoning behind Tabs 6–9. Read once before your first engagement.
Workbook Tabs Referenced in This Guide — Have These Open While You Read
Tab 6 — Engagement Mechanics
Kicker Math + RACI
Covered in Section 3 (Pre-Engagement) and throughout Tab 8 gate notes
Open before scoping any outcome-based engagement
Tab 7 — Sprint Templates
Service Menu + Sprint Shapes
Referenced in Section 3 (Gates 2–3 adoption intervention) and Section 5 real examples
Open when pricing a Tier 1 service or scoping a sprint
Tab 8 — Activation Worklist
Pre-Gate through Gate 5
The subject of Section 3 in full — every gate explained, with the four common failure modes
Open as live tracker during every Activation Sprint
Tab 9 — Agent Build Worklist
Phases 1–5 (Discover through AgentOps)
The subject of Section 4 in full — every phase explained, including the 90% gate and AgentOps runbook
Open as live tracker during every agent build engagement
Contents
  Before You Open Either Tab: Three Things to Get Right First
  1. Where This Framework Comes From
  2. Why Engagement Structure Determines Revenue Outcome
  3. Tab 8: The Copilot Activation Worklist — Phase by Phase
  4. Tab 9: The Agent Build Engagement Worklist — Phase by Phase
  5. Real Examples: What These Engagements Look Like in Practice
  6. RACI — Accountability Reference for New Partners
  7. What Executing Tabs 8 and 9 Builds Toward Financially
  8. Reference List — 13 Clickable Sources

Before You Open Either Tab: Three Things to Get Right First

The research behind Tabs 8 and 9 consistently surfaces three priorities that are disproportionately important relative to everything else in the checklists. If you only internalize three things from this document, make it these.

1 · Use Copilot Internally First
The credibility foundation everything else rests on

Partners who use Copilot in their own operations close Copilot deals at 3× the rate of partners who don’t (Microsoft FY26 Partner Data). The mechanism is direct: your most compelling demo is always the workflow you built for yourself. When you show a customer your actual Copilot-assisted operations, they stop evaluating and start imagining. That imagination is the close.

Minimum bar: Your own internal Copilot deployment running for at least 60 days before your first customer engagement. Build one agent for your own workflow — a proposal generator, a ticket summarizer, an internal knowledge base agent. You will encounter the same governance issues your customer will face. Your demo will be more credible than any slide deck.
2 · Run the SharePoint Oversharing Report First
The risk prevention step that protects every engagement

The SharePoint Data Access Governance (DAG) oversharing report is a 15-minute task. It prevents the most common Copilot deployment crisis in the SMB market: a user prompting Copilot and surfacing a confidential document they were technically allowed to access but were never intended to find. Microsoft DAG Blueprint · Syskit · Orchestry

The non-negotiable: Run the Site Permissions report (SharePoint Admin Center → Reports → Data Access Governance) before any Copilot license is assigned. Remediate “Everyone except external users” and “Anyone with the link” on sensitive sites (HR, Finance, Legal) before Gate 1 closes. SharePoint Advanced Management (SAM) — which covers this — is included with an active M365 Copilot license at no additional cost (verify current entitlement terms).
3 · Name the Business Champion Before You Provision the First License
The adoption foundation that makes every gate achievable

The single most reliable predictor of hitting the 30% active-user threshold at Gate 3 is whether a named Champion was identified before Day 1. Champions are not IT staff — they are business users in roles where Copilot genuinely compresses daily friction: the office manager who lives in Outlook, the sales rep who writes proposals from scratch, the operations lead running five recurring meetings a week. Microsoft M365 Copilot Adoption Planning Checklist

These are the people whose early wins become the internal proof of concept that sells the rest of the organization. Identify them before Day 1. Brief them separately from the broader rollout. Give them the prompt library first. Make sure their results are shared in the community channel by Week 4.

How to find them: A 30-minute conversation with the business owner — not IT — asking: “Who in your team is most frustrated by repetitive document or meeting work?” That person is your Day 1 Champion. This conversation happens before the SOW is signed. It is the first act of a structured engagement.
↑ Back to Contents

1 · Where This Framework Comes From

The frameworks in Tabs 8 and 9 are a synthesis of Microsoft’s own published deployment guidance, documented MSP partner field experience, and the CPB Playbook’s engagement model. They are not invented. Where a claim in this document is drawn from a specific source, a citation badge appears inline. Clickable links to every source are in the Reference List at the end.

Microsoft Official Guidance
  • M365 Copilot Adoption Planning Checklist — four-phase deployment model and Champion program structure
  • M365 Copilot for SMB — SMB Success Kit, 30% threshold framework, flight crew model
  • M365 Agents Deployment Checklist — admin roles, Application Lifecycle Management (ALM) environment strategy, agent governance
  • Copilot Studio Agent Development Lifecycle — the five-phase model Tab 9 is built on
  • Copilot Studio Agent Evaluation Checklist — golden prompt methodology and >90% pass rate standard
  • SharePoint Advanced Management guidance — DAG reports, Site Access Reviews, RAC policy
  • Secure & Governed Data Foundation Blueprint — phased oversharing remediation framework
Partner & MSP Field Research
  • Syskit Copilot Readiness Assessment Framework — five-pillar permissions audit model
  • Orchestry M365 Copilot Readiness Checklist — ROT content risk; storage estate as skipped readiness dimension
  • AvePoint MSP Copilot Technical Readiness Checklist — four-step MSP AI preparation model
  • inforcer MSP Copilot Readiness Assessment Guide — M365 utilization audit approach
  • E2E Agentic Bridge Copilot Readiness Assessment — five-pillar permission hygiene framework
  • Adoptify.ai Enterprise Copilot Deployment Checklist — AdaptOps phased rollout model
CPB Playbook & Case Studies
  • CPB Playbook v9 (cpb.html) — five-gate sprint structure, RACI, four non-negotiables, tier sprint map
  • Microsoft Inside Track — Deploying M365 Copilot in Five Chapters — hero scenario framework and maturity model
  • Microsoft FY26 Partner Data — 3× close rate for internally-deployed Copilot partners
  • Newman’s Own, Mike Morse Law Firm, Morula Health — Microsoft publicly documented customer stories used in Section 5
↑ Back to Contents

2 · Why Engagement Structure Determines Revenue Outcome

The single biggest predictor of whether a Copilot engagement converts into recurring managed services revenue is not the technology — it is whether the partner ran a structured engagement or an unstructured one.

The Unstructured Path
What most new partners default to
  • Provision licenses on Day 1, hope users figure it out
  • No baseline KPI documented — nothing to measure ROI against
  • No Champion identified — IT is the only counterpart
  • No MBR cadence — customer goes quiet after 30 days
  • Renewal conversation at month 11 with no supporting evidence
  • Result: license churn, no retainer, no agent build conversation
The Structured Path (Tabs 8–9)
What the playbook prescribes
  • Champion identified and briefed before first license goes live
  • Day 0 baseline KPI captured before any Copilot access is granted
  • Five gate checkpoints create shared accountability with the customer
  • Usage analytics dashboard running by Week 6
  • MRR conversion conversation at Day 85 — before urgency fades
  • Result: retainer signed at sprint close, agent build scoped at Gate 3
The mechanism: Structure creates evidence. Evidence creates trust. Trust creates the retainer. Without the five-gate checkpoint structure, the partner has no recurring reason to be in front of the customer, and the customer has no reason to believe the renewal is worth paying for. With it, both sides can see exactly what is being delivered at every stage — and what comes next.
— CPB Playbook, Section 10 (Outcome-Based Primer)
3×
faster Copilot deal close rate
Partners who deploy Copilot internally · Microsoft FY26 Partner Data
40–60%
more Phase 3 rework
When the Design phase is skipped before Build · CPB Playbook field data
>90%
golden prompt pass rate required
Before Production deploy · Microsoft Copilot Studio Agent Evaluation Checklist
↑ Back to Contents

3 · Tab 8: The Copilot Activation Worklist — Phase by Phase

Tab 8 is the operational checklist for a Tier 1 Activation Sprint — the 90-day structured engagement that converts a Copilot licensing conversation into a monthly managed retainer. It is a live project tracker, not a reference document. One person owns it. Every gate item gets a status. Nothing advances until the gate is green.

Pre-Gate · Week 0: Four non-negotiables confirmed. SOW signed. Day 0 KPI baseline captured.
Gate 1 · Week 2: Readiness Lock. Oversharing remediated. Champions licensed. Tenant validated.
Gate 2 · Week 6: Activation confirmed. Full rollout complete. Analytics dashboard live.
Gate 3 · Day 45: Adoption evaluation. Active-user % vs. 30% threshold. Agent signal assessed.
Gate 4 · Day 75: Impact evidence. KPI measured vs. Day 0 baseline. ROI Report drafted.
Gate 5 · Day 90: Kicker validated. Tier 1 MRR retainer signed. 12-month roadmap set.

Pre-Engagement: The Four Steps Partners Consistently Skip

These are structural preconditions that make every downstream gate achievable. CPB Playbook Tab 6 · Microsoft Adoption Checklist

⚠ Skip #1 — No baseline KPI before SOW signature

Without a timestamped baseline KPI before SOW signature, you have nothing to measure against at Gate 5. The kicker cannot be earned or validated, and you will spend Gate 5 arguing over what the number used to be.

The fix: Document the KPI (date, method, responsible validator) in the shared audit folder before any license is provisioned. A 30-minute task that protects a $1,500–$5,000 kicker payment.

⚠ Skip #2 — No oversharing remediation before go-live

Copilot does not create new permissions — it makes existing permissions instantly discoverable. A document shared to “Everyone except external users” years ago becomes surfaceable by any licensed user the moment Copilot is turned on. Microsoft DAG Blueprint · Syskit

The fix: Run the Site Permissions report in SharePoint Admin Center before Gate 1. Remediate high-risk sites. SharePoint Advanced Management (SAM) — included with an active M365 Copilot license — is the tool for this.

⚠ Skip #3 — Piloting exclusively with IT

IT staff don’t represent the use cases that drive adoption. Piloting with IT produces weak use cases, no internal advocates, and no social proof. The pilot ends. The rest of the company never hears about it. Microsoft Adoption Planning Checklist

The fix: Identify 2–3 Champion profiles before Day 1 — the office manager, the sales rep, the operations lead. Their wins become the internal proof of concept.

⚠ Skip #4 — No counter-metric named

Without a counter-metric for every KPI, outcome-based engagements are gameable. A team that knows cycle time is being measured can cut corners and still “hit” the KPI. The customer’s Finance team will raise this at Gate 5. CPB Playbook Tab 6 Non-Negotiables

The fix: Name one counter-metric per KPI in the SOW appendix. If cycle time is the KPI, error rate is the counter-metric. Both confirmed in writing at Gate 5.

Gate 1 — Readiness Lock: The Oversharing Remediation Toolkit

SharePoint Advanced Management (SAM) — included at no extra cost with an active Microsoft 365 Copilot license per Microsoft’s November 2024 announcement — provides four specific tools for pre-deployment remediation. Microsoft SAM Guidance

Entitlement requires an active M365 Copilot license on the tenant. Verify current licensing terms with your TD SYNNEX rep before scoping customer engagements, as Microsoft bundling terms are subject to change.
Timing note on sensitivity labels: Labels are a governance layer, not a technical prerequisite for Copilot. For regulated SMB verticals (healthcare, financial services, legal), plan Gate 1 to include label configuration. For unregulated SMBs, at minimum apply default labels to HR, Finance, and Legal libraries before go-live.

Gates 2–3 — Adoption: The 30% Threshold and What to Do When You Miss It

Microsoft’s Copilot adoption framework establishes a concrete intervention trigger (Microsoft M365 Copilot SMB Success Kit). Tab 8 Gate 3 formalizes it as a named product response, not a vague instruction.

Below 30% active users within the first 30 days is a documented adoption risk pattern. The named response is a Low-Adoption Intervention Sprint ($2,000–$3,500 from the Tier 1 Service Menu in Tab 7) — a 30-day targeted re-engagement program for departments below threshold. This is billable scope, not goodwill. The customer gets a named response to a named problem.

Signal | Benchmark | Response If Below Benchmark | Tab 8 Reference
Active user % | >30% by Day 30 | Low-Adoption Intervention Sprint (Tab 7 Service Menu, $2,000–$3,500) | Gate 3
Prompt volume | Rising week-over-week | Prompt library refresh — add 10–15 new use-case prompts for lagging roles | Ongoing
Champion engagement | ≥1 share/week in community channel | 1:1 re-briefing; add champion recognition incentive | Gate 2
Dept. spread | <20pp variation across departments | Role-specific re-training session for any department >20pp below average | Gate 3
Sentiment pulse | ≥3.5 / 5.0 from Gate 2 survey | 1:1 office hours with low-sentiment users; escalate blockers to Champion | Gate 2
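The signal table above is threshold logic, and it can be sketched in a few lines. Only the 30% active-user threshold and the 20pp departmental spread limit come from the guide; the function shape, department names, and seat counts below are illustrative assumptions, not workbook logic.

```python
# Illustrative sketch of the Gate 3 adoption checks described above.
# Only the 30% threshold and 20pp spread limit are stated in the guide.

INTERVENTION_THRESHOLD = 30.0  # % active users by Day 30
SPREAD_LIMIT_PP = 20.0         # departmental variation limit, percentage points

def active_user_pct(active: int, licensed: int) -> float:
    """Active-user percentage for a department or the whole tenant."""
    return round(100.0 * active / licensed, 1)

def gate3_actions(departments: dict[str, tuple[int, int]]) -> list[str]:
    """Return the named responses the signal table prescribes."""
    rates = {d: active_user_pct(a, l) for d, (a, l) in departments.items()}
    overall = active_user_pct(
        sum(a for a, _ in departments.values()),
        sum(l for _, l in departments.values()),
    )
    actions = []
    if overall < INTERVENTION_THRESHOLD:
        actions.append("Low-Adoption Intervention Sprint (Tab 7 Service Menu)")
    average = sum(rates.values()) / len(rates)
    for dept, rate in rates.items():
        if average - rate > SPREAD_LIMIT_PP:
            actions.append(f"Role-specific re-training: {dept}")
    return actions

# Hypothetical 50-seat tenant: (active users, licensed users) per department
print(gate3_actions({"Sales": (9, 20), "Operations": (8, 20), "Legal": (1, 10)}))
```

In this sketch the tenant clears the 30% bar overall (36%), but Legal sits more than 20pp below the departmental average, which triggers the role-specific re-training row.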

Gate 4 — Building the ROI Report That Makes Renewal Automatic

The Gate 4 Copilot ROI Report ($1,500–$2,500, Tier 1 Service Menu) transforms the renewal conversation from a pricing discussion into an ROI discussion. It is also the evidence base required for MCI reimbursement applications submitted at Gate 5.

What goes in the ROI Report

What the ROI Report does commercially

Gate 5 — MRR Conversion: What to Say and When to Say It

Gate 5 Talk Track · MRR Conversion Language · Tier 1 Retainer · Day 85–90

“Here is what the last 90 days delivered. [ROI Report summary.] Here is what continues every month for $12/user/month: your usage analytics dashboard, a monthly 45-minute review where we look at what’s working and what to improve, your prompt library kept current as you find new use cases, and a monthly audit log review so nothing goes sideways with your data governance. You’ve already seen what that looks like — it’s what we’ve been doing for three months. The question is whether you want to keep it going.”

Critical timing: Have this conversation at Day 85, not Day 90. At Day 90 the customer expects to be asked. At Day 85 you are still in evidence-delivery mode — a far stronger position when the retainer topic comes up naturally.
↑ Back to Contents

4 · Tab 9: The Agent Build Engagement Worklist — Phase by Phase

Tab 9 is the operational checklist for a Copilot Studio agent build engagement — from the Agent Discovery Workshop through Design, Build & Test, Deploy & Train, and into ongoing AgentOps. The five-phase structure mirrors Microsoft’s official agent development lifecycle (Microsoft Copilot Studio Agent Development Lifecycle). It applies to both simple declarative agents ($3,500–$8,000) and complex agents with actions and connectors ($12,000–$28,000).

Phase-name mapping · CPB uses operational names that align to Microsoft’s lifecycle phases as follows: Phase 1 Discover → Microsoft Discover; Phase 2 Design → Microsoft Experimentation (prompt design, test-set authoring, solution architecture); Phase 3 Build & Test → Microsoft Build; Phase 4 Deploy & Train → Microsoft Deploy; Phase 5 AgentOps → Microsoft Steady State. When cross-referencing Microsoft documentation, use the official names.
Phase 1 — Discover
Agent Discovery Workshop ($3,500–$6,000) · Weeks 1–2 · Output: prioritized candidate list + signed SOW

The most important thing about Phase 1: it is a billable engagement, not a pre-sales activity. Partners who treat discovery as free work train customers to expect free work. CPB Playbook Section 10

What the Discovery Workshop Produces

Prioritized agent candidate list (3–5 candidates, ranked by ROI, data availability, build complexity)
Build the wrong agent first and you lose customer confidence and the AgentOps retainer.
Workflow map for the top candidate (10–15 steps tagged for data sensitivity, decision points, applicability)
This becomes the Phase 2 design spec. Skip it here and Phase 2 takes twice as long.
RBAC and data source inventory (every SharePoint site, OneDrive folder, Teams channel, or external system the agent needs)
A missing data source discovered in Phase 3 is a scope change that costs everyone.
Complexity classification (Agent Builder / simple vs. Copilot Studio / complex) and initial fee range
Misclassifying complexity means under-pricing the SOW and absorbing the overrun.

Check Templates Before Building Microsoft Agent Store

Common SMB use cases are already templated in the Microsoft Agent Store and Copilot Studio. Check before scoping any custom build:

  • HR onboarding agent
  • IT help desk / employee self-service agent
  • Customer service assistant
  • SharePoint site knowledge agent
  • Meeting summary and follow-up agent
Template advantage: Starting from a template reduces Phase 3 build time by 30–50%. The savings go to your EBITDA, not the customer’s invoice.
Outcome-based agents — Phase 1 non-negotiable: If this engagement includes a Workflow Outcome Sprint (kicker on cycle time or deflection %), the process baseline must be documented before any agent code is written. Tab 6 applies: (1) capture baseline, (2) define counterfactual, (3) name counter-metric, (4) cap kicker at 10%/25% of base fee. None of these can be added retroactively.
Phase 2 — Design
Included in build fee · Weeks 2–4 · 8–12 partner hours · Output: customer-signed design specification

Partners who skip Design and go straight to Build report 40–60% more rework in the test phase (CPB Playbook field data). Customer sign-off on the design specification is the scope fence that protects you when scope creep surfaces in Phase 3.

Agent Builder vs. Copilot Studio Microsoft Agent Builder Docs

Use Agent Builder if… | Use Copilot Studio if…
Knowledge-only, no external actions | Agent must take actions (create records, update CRM, send email)
SharePoint/OneDrive as sole data source | External connectors required (Salesforce, ServiceNow, etc.)
Users already hold M365 Copilot license | Autonomous capabilities needed beyond conversation
2–4 week build · $3,500–$8,000 | 6–10 week build · $12,000–$28,000

The Golden Prompt Test Set (Write in Phase 2) Microsoft Agent Evaluation Checklist

Write 15–20 representative user questions with expected answers before building anything. These become:

  • The Phase 3 pass/fail acceptance criteria (target: >90% pass rate)
  • The monthly Phase 5 regression test suite in AgentOps
  • The suggested starter prompts on the agent’s welcome screen
The discipline: If you cannot write 15 representative questions the agent should answer correctly, you don’t yet understand the use case well enough to build it.
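A golden prompt test set needs no tooling to start — a structured list is enough. The format below is a minimal sketch; the field names are illustrative assumptions, not a Copilot Studio schema. Only the 15–20 size rule and the expectation that edge-case prompts produce a graceful decline come from the guide.

```python
# Minimal sketch of a golden prompt test set as plain data. Field names are
# hypothetical; the 15-20 size discipline is from the guide.

golden_prompts = [
    {
        "id": "GP-01",
        "question": "What is the first-day checklist for a new hire?",
        "expected": "Summarizes the onboarding checklist from the HR site",
        "category": "in-scope",
    },
    {
        "id": "GP-02",
        "question": "Show me salary details for the leadership team.",
        "expected": "Graceful decline; out-of-scope request",
        "category": "edge-case",
    },
    # ...13-18 more, one per intent named in the design specification
]

def validate_test_set(prompts: list[dict]) -> None:
    """Phase 2 discipline check: 15-20 prompts, every field filled in."""
    assert 15 <= len(prompts) <= 20, "Write 15-20 representative questions"
    for p in prompts:
        for field in ("id", "question", "expected", "category"):
            assert p.get(field), f"{p.get('id', '?')}: missing {field}"
```

The same list doubles as the Phase 3 acceptance criteria, the Phase 5 regression suite, and the source for the agent's starter prompts — write it once, reuse it three times.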

ALM Environment Strategy Microsoft Power Platform ALM

Minimum two environments: Development and Production. For regulated verticals, add a Test environment. This is the mechanism that lets you roll back a bad deployment without disrupting the customer’s operations.

Development
Build and test here. Makers work in unmanaged solution.
Test (optional)
Structured UAT with customer. Managed solution import from Dev.
Production
Live environment. Managed solution. All gates must be green first.
Phase 3 — Build & Test
Weeks 4–8 (simple) / 4–14 (complex) · Gated by >90% pass rate before Production deploy

The agent does not leave Development until the golden prompt test set achieves a >90% pass rate. Below 90% means returning to Phase 2 instructions — not building more features. Microsoft Copilot Studio Agent Evaluation Checklist

Test Types to Run

Golden prompt test set — pass rate = (passed ÷ total) × 100. Document false positives and true negatives separately.
Edge case testing — graceful decline on out-of-scope requests is a pass. Hallucination is a fail requiring instruction tuning.
RBAC validation — confirm the agent cannot surface content the calling user doesn’t have permission to access. Test explicitly, never assume.
DLP policy validation — confirm Power Platform DLP policies block disallowed connectors before Production.
Regression evaluation — re-run the full golden prompt set after every significant change to instructions or knowledge sources.

How to Interpret Pass Rate Results

>90% — Proceed to Phase 4. Document version and date as the Phase 5 regression baseline.
75–90% — Review false positives first. Usually an instruction tuning issue, not a knowledge problem. Adjust instructions before adding content.
<75% — Return to Phase 2. The design specification has gaps. Building more features does not fix a design problem.
Hold the gate: Customers will pressure you to deploy early. “It’s good enough” at 70% becomes a support crisis and a damaged retainer within 60 days. The gate is binary.
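The pass-rate arithmetic and the three bands above reduce to a small gate function. The thresholds are from the text; the function itself is an illustrative sketch, not workbook code.

```python
# Sketch of the Phase 3 gate. The thresholds (>90%, 75-90%, <75%) are from
# the guide; the function is illustrative.

def pass_rate(passed: int, total: int) -> float:
    """Pass rate = (passed / total) x 100 over the golden prompt test set."""
    return round(100.0 * passed / total, 1)

def phase3_gate(passed: int, total: int) -> str:
    rate = pass_rate(passed, total)
    if rate > 90.0:
        return "Proceed to Phase 4: record version and date as regression baseline"
    if rate >= 75.0:
        return "Review false positives: tune instructions before adding content"
    return "Return to Phase 2: the design specification has gaps"

print(phase3_gate(19, 20))  # 95.0% -> proceed
print(phase3_gate(14, 20))  # 70.0% -> back to Phase 2
```

Note the gate is strictly greater-than: 18 of 20 (exactly 90.0%) still lands in the review band, which is the binary discipline the text describes.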
Phase 4 — Deploy & Train
Included in build fee · Weeks 8–10 (simple) · Output: agent live in Production + AgentOps retainer signed

Phase 4 has one commercial objective: get the AgentOps retainer signed before the build fee is paid. This is the moment the customer is most engaged and most willing to commit. Waiting until the agent has been live for 30 days is too late — the urgency fades.

The AgentOps Runbook — The Deliverable That Justifies the Monthly Retainer

The runbook is what separates a partner who builds agents from a partner who manages agents. It makes AgentCare ($350/agent/month at Tier 2) tangible to a customer who might otherwise think of agent management as “checking that it still works.”

Agent version history — every significant change logged with date, description, reason
Known limitations — documented scope boundaries; what the agent will decline and why
Escalation path — L1/L2/L3 triage protocol; who gets notified when the agent fails
Monthly evaluation cadence — who runs the golden prompt set; threshold that triggers a tuning sprint
Knowledge source refresh schedule — which SharePoint libraries are reviewed monthly; how stale content is removed
Connector health monitoring — how Power Automate flow failures surface; who owns the alert
Rollback procedure — step-by-step reversal to prior managed solution version
Model update response protocol — when Microsoft updates Copilot Studio’s underlying model, who runs regression evaluation within what SLA

Choose the AgentOps Pricing Model at Phase 4 Handoff

Stacked Model
Default recommendation
Per-user base + AgentCare ($350/agent/mo) + Credit Wrap (30% Azure markup). ~$1,725/mo per 50-user Tier 2 customer. Transparent — best for customers who want to understand the invoice.
Flat Model
Simpler sales motion
$20–$25/user/month all-in at Tier 2. One invoice line. Best for partners who prefer traditional MSP pricing and customers who want one number.
QBIC Overlay
Outcome-based
Quarterly Business Impact Contract. One board-level KPI with a kicker floor/ceiling. Best for Tier 3 customers where the AI program has executive visibility.
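One way to sanity-check the Stacked Model's ~$1,725/mo figure is to decompose it. Only the $350/agent AgentCare fee and the 30% Credit Wrap markup are stated in the guide; the per-user base rate, agent count, and Azure consumption below are assumptions chosen for illustration, not published pricing.

```python
# One plausible decomposition of the ~$1,725/mo stacked-model figure.
# Stated in the guide: $350/agent AgentCare, 30% Credit Wrap markup.
# Assumed for illustration: per-user base, agent count, Azure spend.

USERS = 50
PER_USER_BASE = 12.00       # $/user/month (assumed)
AGENTS = 3                  # agents under AgentCare (assumed)
AGENTCARE_FEE = 350.00      # $/agent/month (stated)
AZURE_CONSUMPTION = 250.00  # monthly Azure agent spend, $ (assumed)
CREDIT_WRAP_MARKUP = 0.30   # markup on Azure consumption (stated)

base = USERS * PER_USER_BASE                          # 600.00
agentcare = AGENTS * AGENTCARE_FEE                    # 1,050.00
credit_wrap = AZURE_CONSUMPTION * CREDIT_WRAP_MARKUP  # 75.00
monthly_total = base + agentcare + credit_wrap
print(monthly_total)  # 1725.0
```

Swap in your own base rate and agent count when quoting; the point is that the stacked invoice is transparent line by line, which is exactly why it suits customers who want to understand what they are paying for.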
Phase 5 — AgentOps (Steady State)
$350–$400/agent/month AgentCare · ~55% gross margin · Where the practice compounds

Every Phase 1–4 investment is the cost of acquiring an AgentCare retainer customer. At $350/agent/month with 55% gross margin, a partner with 10 customers each running 3 agents generates $126,000/year in AgentCare ARR at approximately $69,300 gross profit — before per-user base, Credit Wrap, or any project work.
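The ARR claim above is straightforward arithmetic, worth re-running whenever you change the assumptions for your own book of business:

```python
# Arithmetic behind the AgentCare ARR figure quoted above; change the
# inputs to model your own customer base.

customers = 10
agents_per_customer = 3
agentcare_fee = 350      # $/agent/month
gross_margin = 0.55      # AgentCare gross margin

arr = customers * agents_per_customer * agentcare_fee * 12
gross_profit = round(arr * gross_margin, 2)
print(arr, gross_profit)  # 126000 69300.0
```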

The compounding logic: Each agent in steady state generates its own expansion revenue. The monthly evaluation run surfaces intent clusters the agent cannot currently handle — each one is an enhancement sprint opportunity. Agents drift as business processes and Microsoft’s underlying models evolve. That drift is the structural justification for the retainer. Microsoft Copilot Studio Agent Evaluation Checklist
Cadence | Activity | Output | What It Signals to the Customer
Monthly | Golden prompt evaluation run | Pass rate vs. baseline; tuning tickets opened on a ≥5pp drop | “We catch drift before it affects your users”
Monthly | Knowledge source refresh | Stale content removed; new content indexed; connector configs verified | “The agent’s answers stay current as your business changes”
Monthly | Analytics MBR (Copilot Studio dashboard) | Session count, resolution rate, top intents, escalation rate, satisfaction | “You can see exactly what the agent is doing”
Monthly | AgentOps MBR with Business Champion | 45-min review; enhancement backlog maintained | “You always know what comes next”
Quarterly | QBIC kicker validation (if applicable) | KPI delta vs. prior quarter; Executive Sponsor sign-off; kicker payment | “Your AI investment has a measured return”
Quarterly | Capability expansion review | New feature assessment; enhancement proposals scoped and priced | “You are always at the leading edge”
Annually | Architecture review | Platform fit reassessment; ALM audit; DLP/RBAC check; next-agent identification | “Your AI infrastructure stays current and secure”
↑ Back to Contents

5 · Real Examples: What These Engagements Look Like in Practice

The following three case studies are publicly documented by Microsoft. Each is mapped to where it enters the Tab 8 and Tab 9 checklists, showing the specific engagement structure a partner would apply and where the revenue flows.

Legal Services · ~50 Attorneys
Mike Morse Law Firm
Tier 1 → Tier 2 progression
LMS built in-house
Zero
outside vendors for LMS build

Michigan’s largest personal injury practice uses Copilot across its full operation — meeting summaries, document drafting, case law research, email correspondence, Excel analysis. Their most notable engagement: building an entire Learning Management System from scratch using Copilot, generating slides, graphics, training scripts, and process documentation with zero outside vendors.

This is the Tier 1 → Tier 2 progression in its most recognizable form. Broad Copilot adoption across the firm (Tab 8) surfaces one specific workflow that is repetitive, document-heavy, and benefits from a structured knowledge base: LMS content creation. That workflow becomes a Tab 9 agent build.

“Digital transformation is getting the mundane out of the way of human work.”
John Georgatos, CIO — Mike Morse Law Firm · Microsoft Customer Story
Checklist Mapping
Where this customer enters Tabs 8 and 9, and what the partner does at each stage
Tab 8 · Gate 3
The LMS workflow surfaces as an agent build candidate at Day 45. The Delivery Lead asks: “Is there a repetitive task you wish Copilot would just handle automatically?” The CLO and training lead point to LMS content creation. Document the signal in the Gate 3 audit folder and carry it into the Gate 4 agent build proposal.
Tab 8 · Gate 4
The agent build proposal is presented alongside the Gate 4 ROI Report at Day 75. “Here is what Copilot did for your team. Here is what a dedicated LMS content agent could do — and here is what the scoping engagement looks like ($3,500–$6,000 Agent Discovery Workshop).” The ROI Report makes the upsell conversation feel natural, not transactional.
Tab 9 · Phase 1
Agent Discovery Workshop scopes the LMS agent. The workflow map covers: how training content is currently created, which SharePoint libraries store materials, which roles generate content, and how often it needs updating. Output: SOW for a declarative agent build using Agent Builder, with SharePoint as the sole knowledge source.
Revenue
Agent Builder tier: $3,500–$8,000 build fee → $350/agent/month AgentCare. The AgentCare retainer is straightforward to justify: every time the firm updates its training curriculum, the agent’s knowledge source needs refreshing. The monthly evaluation run confirms the agent is producing accurate outputs for new hires. Both activities are documented in the AgentOps runbook.
Food & Consumer Goods · 50 Employees
Newman’s Own
Tier 1 Activation Sprint
Three-department Champion model
3×
marketing campaigns per month

50 employees competing against multinational brands. Newman’s Own adopted Copilot to run leaner across three departments simultaneously. The marketing team triples monthly campaign output — briefs that took three hours now complete in 30–60 minutes. The logistics team cut daily publication review from a full morning to 30 minutes. The Chief Legal Officer uses Copilot for research, citations, and contract cleanup.

This is a textbook Tab 8 Activation Sprint with one nuance new partners often miss: three distinct Champion profiles across three departments, each with a different use case and a different ROI story. The Gate 4 ROI Report here is unusually compelling — it quantifies productivity gains across three functions, making the Tier 1 retainer renewal nearly self-evident.

Checklist Mapping
Where this customer enters Tabs 8 and 9, and what the partner does at each stage
Tab 8 · Pre-Gate
Three Champions identified before Day 1 — one per department. The Pre-Gate Champion conversation with the CEO identifies three distinct profiles: Social Media Manager (Marketing), Operations Lead (Logistics), and CLO (Legal). Each gets a role-specific prompt library at Gate 1 kick-off. This multi-department setup is what makes the Gate 4 ROI Report quantifiable across three distinct win stories, not just one.
Tab 8 · Gate 4
Three-function ROI Report makes renewal self-evident. The Gate 4 report quantifies: (1) campaign brief time: 3 hrs → 30–60 min × campaign volume; (2) publication review: full morning → 30 min daily; (3) legal research: multi-hour tasks cut to a fraction of the time. When the Executive Sponsor sees three independent productivity wins across three departments, the Tier 1 retainer renewal at Gate 5 is a formality.
Tab 8 · Gate 3
The CLO’s research workflow surfaces as an agent build candidate. “Research, citations, contract cleanup” is the pattern that becomes a Tab 9 engagement. A declarative agent with the firm’s contract templates, legal references, and regulatory documents as knowledge sources — scoped at Gate 3, proposed at Gate 4, and workshopped at Gate 5 as the follow-on engagement.
Revenue
Tier 1 retainer ($12/user/month) → Legal Research agent build → AgentCare. The follow-on Tab 9 engagement is a Legal Research agent: Agent Builder tier, $3,500–$8,000 build, $350/agent/month AgentCare for the monthly knowledge refresh as case law, contract templates, and regulatory references evolve.
Life Sciences · Regulated Vertical
Morula Health
Workflow Outcome Sprint candidate
Regulated content · Cycle time KPI
Weeks → Days
content creation cycle time

A small UK life sciences agency producing scientific and regulatory content under strict accuracy requirements. By using Copilot in Word to summarize complex scientific data tables, the team cut content creation time from weeks to days while maintaining accuracy standards. Employees shifted from formatting and summarization to analysis and quality review.

This is why regulated verticals are often the most compelling Workflow Outcome Sprint candidates: the counter-metric (accuracy and regulatory compliance) is inherently measurable, the regulatory environment controls for external variables so the counterfactual is clean, and the KPI compression (weeks → days) is dramatic enough to justify a kicker conversation at Gate 5.

Checklist Mapping
Where this customer enters Tabs 8 and 9, and what the partner does at each stage
Tab 8 · Pre-Gate
Regulated vertical: Gate 1 must include compliance sign-off and sensitivity label configuration. Before provisioning any licenses, the Pre-Gate checklist requires: (1) cycle-time KPI captured with timestamp and measurement method; (2) counter-metric named — in this case, accuracy rate or regulatory rejection rate; (3) compliance reviewed and sign-off obtained; (4) sensitivity labels configured for scientific data libraries. All four items in the audit folder before Gate 1 begins.
Tab 8 · Gate 3–4
KPI delta measured at Gate 4 against the Day 0 baseline. If the weeks-to-days compression is confirmed and the accuracy counter-metric has not degraded, the Workflow Outcome Sprint kicker is earnable at Gate 5. The Tab 6 kicker calculator applies: base fee × 25% ceiling = maximum kicker payment. Both the KPI delta and the counter-metric must be validated in writing by the Executive Sponsor, never accepted on self-reported data alone.
Tab 8 · Gate 5
Gate 5 scopes the next 60-day Workflow Outcome Sprint. The sprint (from Tab 7) targets further cycle-time reduction with a fresh kicker: $7,500 base fee + floor ($750) / ceiling ($1,875). The sprint is signed at Gate 5 close-out and converts into Tier 1 MRR on Day 91. This is the Tab 7 Workflow Sprint model applied exactly as designed to a regulated content workflow.
Revenue
Activation Sprint → Workflow Outcome Sprint → Tier 1 MRR. Three-stage progression: Tab 8 Activation Sprint ($20,000 base, kicker at Gate 5) → Tab 7 Workflow Outcome Sprint ($7,500 base + kicker) → Tier 1 MRR retainer ($12/user/month). Each stage closes into the next with the MRR retainer already agreed before the current sprint ends. No cold start at any transition.
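The kicker arithmetic in the gate notes above can be sketched in a few lines. The floor (10% of base) and ceiling (25% of base) match the $750 / $1,875 figures quoted for the $7,500 Workflow Outcome Sprint; the linear scaling between floor and ceiling by KPI attainment is an illustrative assumption only — the actual payout schedule comes from the Tab 6 kicker calculator.

```python
# Hedged sketch of the Tab 6 kicker math referenced above.
# The linear interpolation by KPI attainment is an assumption for
# illustration; use the live Tab 6 calculator terms in a real SOW.

def kicker_payment(base_fee: float, kpi_attainment: float) -> float:
    """kpi_attainment: 0.0 = KPI target missed, 1.0 = fully met."""
    floor = 0.10 * base_fee       # minimum kicker once the KPI is earned
    ceiling = 0.25 * base_fee     # Tab 6 maximum: base fee x 25%
    if kpi_attainment <= 0.0:
        return 0.0                # no validated KPI delta, no kicker
    return floor + min(kpi_attainment, 1.0) * (ceiling - floor)

# Workflow Outcome Sprint from the Gate 5 note: $7,500 base
print(kicker_payment(7_500, 1.0))   # 1875.0 (ceiling earned)
print(kicker_payment(7_500, 0.0))   # 0.0 (KPI or counter-metric failed)
```

The point of the floor/ceiling shape is that the customer never pays more than 25% of base even on an outsized KPI result, and the partner never earns less than 10% once the Executive Sponsor validates the delta.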
↑ Back to Contents

6 · RACI — Accountability Reference for New Partners

RACI stands for Responsible, Accountable, Consulted, and Informed — a standard project management framework for defining who does what in a multi-role engagement. In the CPB context, it functions as a deal-prep accountability map: for each role, it defines what they do, when they are needed, and what the absence of that role means for the engagement. Before any engagement starts, every named role in this table should have a specific person’s name next to it. If a role is unfilled and the no-go signal applies, do not start the engagement until it is resolved.

Role | Tab 8 — Activation Sprint | Tab 9 — Agent Build | No-Go Signal If Unfilled
Executive Sponsor (Customer side) | Signs outcome-based SOW. Validates KPI at Gate 5. Final authority on kicker payment. | Signs agent build SOW. Validates Phase 4 AgentOps model. QBIC kicker authority. | Missing = do not price outcome-based. Project-only pricing is still viable without a sponsor.
Business Champion (Customer side) | Day-to-day counterpart. Clears internal friction. Present at every gate review. Seeds the community channel with wins. | Primary UAT participant in Phase 3. Monthly AgentOps MBR contact. Identifies next-agent opportunities. | No Champion = no valid gate reviews = no reliable adoption data. Block the engagement until named.
IT / Security (Customer side) | Tenant access. Purview policies. RBAC. Conditional Access. Gate 1 sign-off required. | Phase 2 RBAC and DLP design. Phase 3 guardrail validation. Phase 4 Production deploy approval. | Unresolved policy blocks = delay SOW. Regulated verticals: security must approve before Gate 1 closes.
Delivery Lead (Partner side) | Runs all five gates. Owns the shared audit folder. Kicker math. MRR conversion conversation at Gate 5. | Runs all five phases. SOW scope fence. Phase 2 design sign-off. AgentOps runbook owner. | No named Delivery Lead = do not start. A single accountable person is non-negotiable.
AI Coach (Partner side) | Adoption clinics. Prompt library builds and updates. Community channel seeding. Low-adoption intervention. MBR delivery. | Suggested starter prompt design. Golden prompt test authoring. End-user training in Phase 4. Monthly MBR. | Required for Tier 2/3. For Tier 1 only, the Delivery Lead can cover this role.
AI Specialist / L3 (Partner side) | Technical readiness. SharePoint DAG reports. Sensitivity label configuration. Gate 1 tenant validation. | Phase 2 architecture. Phase 3 build and test. ALM management. Phase 5 evaluation cadence. | Required for Tier 2/3 engagements. For Tier 1 only, ServiceSolv can co-deliver this role.
Data / Security Eng (Partner side) | Gate 1 oversharing remediation. Purview DLP policies. Sensitivity label configuration at scale. | Phase 2 RBAC and DLP design. Phase 3 guardrail validation. Phase 5 connector health monitoring. | Required when agents touch sensitive data: HIPAA-regulated, financial records, legal documents.
vCIO / AE (Partner side) | Executive Sponsor relationship. Gate 5 kicker math and MRR conversion conversation. MCI application. | Phase 1 SOW scoping. Phase 4 AgentOps model presentation. QBIC kicker conversations. | Must be present for any kicker conversation. The vCIO is the sponsor’s peer — not the technician’s.
TD SYNNEX ServiceSolv (Distribution bench) | Co-delivery of L1/L2 support during Tier 1 retainer until partner builds or hires an AI Specialist. | Co-delivery of L1/L2 AgentOps support for any role the partner cannot staff internally. | Not a no-go — ServiceSolv is the bridge until you build in-house AI delivery capacity.
↑ Back to Contents

7 · What Executing Tabs 8 and 9 Builds Toward Financially

The following is the base-case Tab 2 model from the CPB Workbook — not a projection, but the designed output of running structured engagements consistently across a 12-month period.

Revenue Stream | Tab 2 Inputs (Base Case) | Annual ARR | Gross Margin | Gross Profit
Tier 1 Retainer | 15 customers · 50 users · $12/user/mo | $108,000 | 48% | $51,840
Tier 2 Retainer | 5 customers · 50 users · $12/user/mo base | $36,000 | 45% | $16,200
AgentCare — Tier 2 | 5 customers · 3 agents · $350/agent/mo | $63,000 | 55% | $34,650
Tier 3 Retainer | 1 customer · 75 users · $15/user/mo | $13,500 | 41% | $5,535
AgentCare — Tier 3 | 1 customer · 6 agents · $400/agent/mo | $28,800 | 55% | $15,840
Agent 365 Overlay | 1 customer · 75 users · $10/user/mo | $9,000 | 50% | $4,500
Credit Wrap (T2+T3) | 6 customers · $250–500 Azure/mo · 30% markup | $6,300 | 85% | $5,355
Total Recurring Retainer ARR | | $264,600 | ~50% | $133,920
Copilot Readiness Assessments | 12/year · $3,500 avg | $42,000 | 60% | $25,200
Agent Discovery Workshops | 8/year · $4,500 avg | $36,000 | 60% | $21,600
Agent Builds (simple) | 12/year · $6,500 avg | $78,000 | 35% | $27,300
Agent Builds (complex) | 3/year · $18,000 avg | $54,000 | 28% | $15,120
MCI Reimbursements | Conservative estimate · submit at Gate 5 | $30,000 | 100% | $30,000
Total Project Revenue | | $240,000 | ~49% | $119,220
Total Copilot Practice Revenue | | $504,600 | ~50% | $253,140
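The base-case rows can be reproduced with a short script. The stream labels and the annualization helper below are illustrative, not part of the workbook; the inputs are taken directly from the Tab 2 base case above.

```python
# Sketch of the Tab 2 base-case model. Each ARR figure is
# monthly rate x units x customers x 12; gross profit is ARR x margin.

def annual(monthly_per_unit: float, units: int, customers: int) -> float:
    """Annualize a per-unit monthly fee across units and customers."""
    return monthly_per_unit * units * customers * 12

recurring = [
    # (stream, annual ARR, gross margin)
    ("Tier 1 Retainer",   annual(12, 50, 15), 0.48),
    ("Tier 2 Retainer",   annual(12, 50, 5),  0.45),
    ("AgentCare - T2",    annual(350, 3, 5),  0.55),
    ("Tier 3 Retainer",   annual(15, 75, 1),  0.41),
    ("AgentCare - T3",    annual(400, 6, 1),  0.55),
    ("Agent 365 Overlay", annual(10, 75, 1),  0.50),
    ("Credit Wrap",       6_300,              0.85),  # 30% markup on Azure spend
]

project = [
    ("Readiness Assessments",  12 * 3_500,  0.60),
    ("Discovery Workshops",     8 * 4_500,  0.60),
    ("Agent Builds (simple)",  12 * 6_500,  0.35),
    ("Agent Builds (complex)",  3 * 18_000, 0.28),
    ("MCI Reimbursements",     30_000,      1.00),
]

arr_recurring = sum(arr for _, arr, _ in recurring)   # 264,600
arr_project   = sum(arr for _, arr, _ in project)     # 240,000
gross_profit  = sum(arr * m for _, arr, m in recurring + project)
```

Running the model this way makes the sensitivity obvious: the Tier 1 Retainer line alone (15 customers from Tab 8 Activation Sprints) is over 40% of recurring ARR, which is why activation discipline precedes everything else in the practice.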
The structural point: Every dollar in the retainer ARR column flows directly from executing Tabs 8 and 9 with discipline. Tab 8 Activation Sprint → Tier 1 MRR. Tab 9 Agent Build → AgentCare. Tab 8 Gate 3 agent signal → Agent Discovery Workshop. The project revenue (bottom half) funds retainer growth. Neither half works without the other — and neither works without the engagement structure.
MCI reimbursement figures reflect the Microsoft Commerce Incentives program as structured at time of writing. Program rates, eligibility criteria, and reimbursable activities change annually. Verify current terms with your TD SYNNEX rep or the Microsoft Commerce Incentives guide before modeling against live customer engagements.
↑ Back to Contents

Reference List

All sources cited in this document. Click any link to access the primary source directly.

↑ Back to Contents