TD SYNNEX CPB Engagement Ops Guide
The partner’s field manual for Copilot activation, agent builds, and managed AgentOps — grounded in Microsoft best practices and built for SMB CSP delivery.
TD SYNNEX · Copilot Practice Builder · Partner Enablement Series · April 2026
How to Use This Document
This brief is the research foundation and narrative explanation behind the two worklist tabs in the CPB Workbook — Tab 8 (Copilot Activation Worklist) and Tab 9 (Agent Build Engagement Worklist). It explains the logic behind every phase and gate, grounds each framework in published Microsoft guidance and MSP field research, and shows real customer examples mapped to where they enter the checklists. Read this once before your first engagement. Run the workbook tabs during delivery.
Where This Guide Fits in the CPB Toolkit
Step 1
The Playbook
Strategy, tiers, monetization models, and the financial case. Read to orient your practice.
Step 2
Workbook Tabs 1–5
Model your numbers. Design the practice. Stress-test with Bear/Base/Bull. Make the internal case.
Step 3
Workbook Tabs 6–7
Kicker math and RACI before scoping. Sprint templates when pricing an engagement.
Step 4
Workbook Tabs 8–9
Live project trackers during delivery. One tab open per active engagement. Every gate gets a status.
You Are Here
Engagement Ops Guide
The research and reasoning behind Tabs 6–9. Read once before your first engagement.
✓ This Document
Workbook Tabs Referenced in This Guide — Have These Open While You Read
Tab 6 — Engagement Mechanics
Kicker Math + RACI
Covered in Section 3 (Pre-Engagement) and throughout Tab 8 gate notes
Open before scoping any outcome-based engagement
Tab 7 — Sprint Templates
Service Menu + Sprint Shapes
Referenced in Section 3 (Gates 2–3 adoption intervention) and Section 5 real examples
Open when pricing a Tier 1 service or scoping a sprint
Tab 8 — Activation Worklist
Pre-Gate through Gate 5
The subject of Section 3 in full — every gate explained, with the four common failure modes
Open as live tracker during every Activation Sprint
Tab 9 — Agent Build Worklist
Phases 1–5 (Discover through AgentOps)
The subject of Section 4 in full — every phase explained, including the 90% gate and AgentOps runbook
Open as live tracker during every agent build engagement
Before You Open Either Tab: Three Things to Get Right First
The research behind Tabs 8 and 9 consistently surfaces three priorities that are disproportionately important relative to everything else in the checklists. If you internalize only three things from this document, make them these three.
Partners who use Copilot in their own operations close Copilot deals at 3× the rate of partners who don’t (Microsoft FY26 Partner Data). The mechanism is direct: your most compelling demo is always the workflow you built for yourself. When you show a customer your actual Copilot-assisted operations, they stop evaluating and start imagining. That imagination is the close.
Minimum bar: Your own internal Copilot deployment running for at least 60 days before your first customer engagement. Build one agent for your own workflow — a proposal generator, a ticket summarizer, an internal knowledge base agent. You will encounter the same governance issues your customer will face. Your demo will be more credible than any slide deck.
The SharePoint Data Access Governance (DAG) oversharing report is a 15-minute task. It prevents the most common Copilot deployment crisis in the SMB market: a user prompting Copilot and surfacing a confidential document they were technically allowed to access but were never intended to find. Microsoft DAG Blueprint · Syskit · Orchestry
The non-negotiable: Run the Site Permissions report (SharePoint Admin Center → Reports → Data Access Governance) before any Copilot license is assigned. Remediate “Everyone except external users” and “Anyone with the link” on sensitive sites (HR, Finance, Legal) before Gate 1 closes. SharePoint Advanced Management (SAM) — which covers this — is included with an active M365 Copilot license at no additional cost (verify current entitlement terms).
The single most reliable predictor of hitting the 30% active-user threshold at Gate 3 is whether a named Champion was identified before Day 1. Champions are not IT staff — they are business users in roles where Copilot genuinely compresses daily friction: the office manager who lives in Outlook, the sales rep who writes proposals from scratch, the operations lead running five recurring meetings a week. Microsoft M365 Copilot Adoption Planning Checklist
These are the people whose early wins become the internal proof of concept that sells the rest of the organization. Identify them before Day 1. Brief them separately from the broader rollout. Give them the prompt library first. Make sure their results are shared in the community channel by Week 4.
How to find them: A 30-minute conversation with the business owner — not IT — asking: “Who in your team is most frustrated by repetitive document or meeting work?” That person is your Day 1 Champion. This conversation happens before the SOW is signed. It is the first act of a structured engagement.
1 · Where This Framework Comes From
The frameworks in Tabs 8 and 9 are a synthesis of Microsoft’s own published deployment guidance, documented MSP partner field experience, and the CPB Playbook’s engagement model. They are not invented. Where a claim in this document is drawn from a specific source, a citation badge appears inline. Clickable links to every source are in the Reference List at the end.
Microsoft Official Guidance
- M365 Copilot Adoption Planning Checklist — four-phase deployment model and Champion program structure
- M365 Copilot for SMB — SMB Success Kit, 30% threshold framework, flight crew model
- M365 Agents Deployment Checklist — admin roles, Application Lifecycle Management (ALM) environment strategy, agent governance
- Copilot Studio Agent Development Lifecycle — the five-phase model Tab 9 is built on
- Copilot Studio Agent Evaluation Checklist — golden prompt methodology and >90% pass rate standard
- SharePoint Advanced Management guidance — DAG reports, Site Access Reviews, RAC policy
- Secure & Governed Data Foundation Blueprint — phased oversharing remediation framework
Partner & MSP Field Research
- Syskit Copilot Readiness Assessment Framework — five-pillar permissions audit model
- Orchestry M365 Copilot Readiness Checklist — ROT content risk; storage estate as skipped readiness dimension
- AvePoint MSP Copilot Technical Readiness Checklist — four-step MSP AI preparation model
- inforcer MSP Copilot Readiness Assessment Guide — M365 utilization audit approach
- E2E Agentic Bridge Copilot Readiness Assessment — five-pillar permission hygiene framework
- Adoptify.ai Enterprise Copilot Deployment Checklist — AdaptOps phased rollout model
CPB Playbook & Case Studies
- CPB Playbook v9 (cpb.html) — five-gate sprint structure, RACI, four non-negotiables, tier sprint map
- Microsoft Inside Track — Deploying M365 Copilot in Five Chapters — hero scenario framework and maturity model
- Microsoft FY26 Partner Data — 3× close rate for internally-deployed Copilot partners
- Newman’s Own, Mike Morse Law Firm, Morula Health — Microsoft publicly documented customer stories used in Section 5
2 · Why Engagement Structure Determines Revenue Outcome
The single biggest predictor of whether a Copilot engagement converts into recurring managed services revenue is not the technology — it is whether the partner ran a structured engagement or an unstructured one.
The unstructured engagement:
- Provision licenses on Day 1, hope users figure it out
- No baseline KPI documented — nothing to measure ROI against
- No Champion identified — IT is the only counterpart
- No MBR cadence — customer goes quiet after 30 days
- Renewal conversation at month 11 with no supporting evidence
- Result: license churn, no retainer, no agent build conversation
The structured engagement:
- Champion identified and briefed before first license goes live
- Day 0 baseline KPI captured before any Copilot access is granted
- Five gate checkpoints create shared accountability with the customer
- Usage analytics dashboard running by Week 6
- MRR conversion conversation at Day 85 — before urgency fades
- Result: retainer signed at sprint close, agent build scoped at Gate 3
The mechanism: Structure creates evidence. Evidence creates trust. Trust creates the retainer. Without the five-gate checkpoint structure, the partner has no recurring reason to be in front of the customer, and the customer has no reason to believe the renewal is worth paying for. With it, both sides can see exactly what is being delivered at every stage — and what comes next.
— CPB Playbook, Section 10 (Outcome-Based Primer)
3×
higher Copilot deal close rate
Partners who deploy Copilot internally · Microsoft FY26 Partner Data
40–60%
more Phase 3 rework
When the Design phase is skipped before Build · CPB Playbook field data
>90%
golden prompt pass rate required
Before Production deploy · Microsoft Copilot Studio Agent Evaluation Checklist
3 · Tab 8: The Copilot Activation Worklist — Phase by Phase
What Tab 8 Is
Tab 8 is the operational checklist for a Tier 1 Activation Sprint — the 90-day structured engagement that converts a Copilot licensing conversation into a monthly managed retainer. It is a live project tracker, not a reference document. One person owns it. Every gate item gets a status. Nothing advances until the gate is green.
Pre-Gate
Week 0
Four non-negotiables confirmed. SOW signed. Day 0 KPI baseline captured.
Gate 1
Week 2
Readiness Lock. Oversharing remediated. Champions licensed. Tenant validated.
Gate 2
Week 6
Activation confirmed. Full rollout complete. Analytics dashboard live.
Gate 3
Day 45
Adoption evaluation. Active-user % vs. 30% threshold. Agent signal assessed.
Gate 4
Day 75
Impact evidence. KPI measured vs. Day 0 baseline. ROI Report drafted.
Gate 5
Day 90
Kicker validated. Tier 1 MRR retainer signed. 12-month roadmap set.
Pre-Engagement: The Four Steps Partners Consistently Skip
These are structural preconditions that make every downstream gate achievable. CPB Playbook Tab 6 · Microsoft Adoption Checklist
⚠ Skip #1 — No baseline KPI before SOW signature
Without a timestamped baseline KPI before SOW signature, you have nothing to measure against at Gate 5. The kicker cannot be earned or validated, and you will spend Gate 5 arguing over what the number used to be.
The fix: Document the KPI (date, method, responsible validator) in the shared audit folder before any license is provisioned. A 30-minute task that protects a $1,500–$5,000 kicker payment.
⚠ Skip #2 — No oversharing remediation before go-live
Copilot does not create new permissions — it makes existing permissions instantly discoverable. A document shared to “Everyone except external users” years ago becomes surfaceable by any licensed user the moment Copilot is turned on. Microsoft DAG Blueprint · Syskit
The fix: Run the Site Permissions report in SharePoint Admin Center before Gate 1. Remediate high-risk sites. SharePoint Advanced Management (SAM) — included with an active M365 Copilot license — is the tool for this.
⚠ Skip #3 — Piloting exclusively with IT
IT staff don’t represent the use cases that drive adoption. Piloting with IT produces weak use cases, no internal advocates, and no social proof. The pilot ends. The rest of the company never hears about it. Microsoft Adoption Planning Checklist
The fix: Identify 2–3 Champion profiles before Day 1 — the office manager, the sales rep, the operations lead. Their wins become the internal proof of concept.
⚠ Skip #4 — No counter-metric named
Without a counter-metric for every KPI, outcome-based engagements are gameable. A team that knows cycle time is being measured can cut corners and still “hit” the KPI. The customer’s Finance team will raise this at Gate 5. CPB Playbook Tab 6 Non-Negotiables
The fix: Name one counter-metric per KPI in the SOW appendix. If cycle time is the KPI, error rate is the counter-metric. Both confirmed in writing at Gate 5.
Gate 1 — Readiness Lock: The Oversharing Remediation Toolkit
SharePoint Advanced Management (SAM) — included at no extra cost with an active Microsoft 365 Copilot license per Microsoft’s November 2024 announcement — provides four specific tools for pre-deployment remediation.† Microsoft SAM Guidance
† Entitlement requires an active M365 Copilot license on the tenant. Verify current licensing terms with your TD SYNNEX rep before scoping customer engagements, as Microsoft bundling terms are subject to change.
- Site permissions across your organization report — tenant-wide view of all sites with broad-access permissions. Run this first at every new customer.
- Permissioned User Report — for your Copilot pilot cohort, shows exactly which sites each user can access. Run on every Champion before their license goes live.
- Site Access Reviews — delegates remediation to the customer’s site owners at scale, without the partner touching every document.
- Restricted Access Control (RAC) policy — for high-risk sites (HR, Finance, Legal): blocks Copilot from surfacing content even to users who technically have permission, while governance cleanup is in progress.
Timing note on sensitivity labels: Labels are a governance layer, not a technical prerequisite for Copilot. For regulated SMB verticals (healthcare, financial services, legal), plan Gate 1 to include label configuration. For unregulated SMBs, at minimum apply default labels to HR, Finance, and Legal libraries before go-live.
Gates 2–3 — Adoption: The 30% Threshold and What to Do When You Miss It
Microsoft’s Copilot adoption framework establishes a concrete intervention trigger (Microsoft M365 Copilot SMB Success Kit). Tab 8 Gate 3 formalizes it as a named product response, not a vague instruction.
The 30% Threshold
Below 30% active users within the first 30 days is a documented adoption risk pattern. The named response is a Low-Adoption Intervention Sprint ($2,000–$3,500 from the Tier 1 Service Menu in Tab 7) — a 30-day targeted re-engagement program for departments below threshold. This is billable scope, not goodwill. The customer gets a named response to a named problem.
| Signal | Benchmark | Response If Below Benchmark | Tab 8 Reference |
| Active user % | >30% by Day 30 | Low-Adoption Intervention Sprint (Tab 7 Service Menu, $2,000–$3,500) | Gate 3 |
| Prompt volume | Rising week-over-week | Prompt library refresh — add 10–15 new use-case prompts for lagging roles | Ongoing |
| Champion engagement | ≥1 share/week in community channel | 1:1 re-briefing; add champion recognition incentive | Gate 2 |
| Dept. spread | <20pp variation across departments | Role-specific re-training session for any department >20pp below average | Gate 3 |
| Sentiment pulse | ≥3.5 / 5.0 from Gate 2 survey | 1:1 office hours with low-sentiment users; escalate blockers to Champion | Gate 2 |
Gate 4 — Building the ROI Report That Makes Renewal Automatic
The Gate 4 Copilot ROI Report ($1,500–$2,500, Tier 1 Service Menu) transforms the renewal conversation from a pricing discussion into an ROI discussion. It is also the evidence base required for MCI reimbursement applications submitted at Gate 5.
What goes in the ROI Report
- Active user rate — current vs. Day 0 baseline
- Time savings per user per week — Microsoft benchmark: 3–4 hrs/user/wk for professional services Forrester TEI
- Dollar value of time saved — loaded labor rate × hrs saved × active users × 52
- Top 3 use cases with results — named Champions, before/after examples
- Satisfaction score — from Gate 3 pulse survey
- KPI delta — primary KPI vs. Day 0 baseline (outcome-based only)
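The dollar-value line in the list above can be made concrete with a short worked example. Only the formula itself (loaded labor rate × hrs saved × active users × 52) comes from the checklist; the rate, hours, and user counts below are illustrative assumptions, not figures from this guide.

```python
# Worked example of the Gate 4 dollar-value formula from the checklist above:
# loaded labor rate x hrs saved per user per week x active users x 52 weeks.

def annual_time_savings_value(loaded_rate_per_hr: float,
                              hrs_saved_per_user_wk: float,
                              active_users: int,
                              weeks_per_year: int = 52) -> float:
    """Dollar value of time saved, as reported in the Gate 4 ROI Report."""
    return loaded_rate_per_hr * hrs_saved_per_user_wk * active_users * weeks_per_year

# Illustrative inputs only: $65/hr loaded rate, 3 hrs/user/wk (low end of the
# Forrester TEI benchmark cited above), 18 active users in a 50-seat tenant.
print(f"${annual_time_savings_value(65, 3, 18):,.0f}/year")  # $182,520/year
```

Showing the arithmetic in the ROI Report itself, rather than just the result, is what lets the customer’s Finance team validate the number at Gate 5.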
What the ROI Report does commercially
- Reframes renewal from “do we keep paying?” to “look at what we built together”
- Gives the Executive Sponsor a board-ready artifact justifying their original decision
- Creates the natural bridge to the Tier 2 agent build: “Here is what Copilot did. Here is what a dedicated agent could do.”
- Positions you as a strategic advisor, not a license reseller
- Required evidence for MCI reimbursement application at Gate 5
Gate 5 — MRR Conversion: What to Say and When to Say It
Gate 5 Talk Track
MRR Conversion Language
Tier 1 Retainer
Day 85–90
“Here is what the last 90 days delivered. [ROI Report summary.] Here is what continues every month for $12/user/month: your usage analytics dashboard, a monthly 45-minute review where we look at what’s working and what to improve, your prompt library kept current as you find new use cases, and a monthly audit log review so nothing goes sideways with your data governance. You’ve already seen what that looks like — it’s what we’ve been doing for three months. The question is whether you want to keep it going.”
Critical timing: Have this conversation at Day 85, not Day 90. At Day 90 the customer expects to be asked. At Day 85 you are still in evidence-delivery mode — a far stronger position when the retainer topic comes up naturally.
4 · Tab 9: The Agent Build Engagement Worklist — Phase by Phase
What Tab 9 Is
Tab 9 is the operational checklist for a Copilot Studio agent build engagement — from the Agent Discovery Workshop through Design, Build & Test, Deploy & Train, and into ongoing AgentOps. The five-phase structure mirrors Microsoft’s official Copilot Studio agent development lifecycle. Microsoft Copilot Studio Agent Development Lifecycle It applies to both simple declarative agents ($3,500–$8,000) and complex agents with actions and connectors ($12,000–$28,000).
Phase-name mapping · CPB uses operational names that align to Microsoft’s lifecycle phases as follows: Phase 1 Discover → Microsoft Discover; Phase 2 Design → Microsoft Experimentation (prompt design, test-set authoring, solution architecture); Phase 3 Build & Test → Microsoft Build; Phase 4 Deploy & Train → Microsoft Deploy; Phase 5 AgentOps → Microsoft Steady State. When cross-referencing Microsoft documentation, use the official names.
The most important thing about Phase 1: it is a billable engagement, not a pre-sales activity. Partners who treat discovery as free work train customers to expect free work. CPB Playbook Section 10
What the Discovery Workshop Produces
Prioritized agent candidate list (3–5 candidates, ranked by ROI, data availability, build complexity)
Build the wrong agent first and you lose customer confidence and the AgentOps retainer.
Workflow map for the top candidate (10–15 steps tagged for data sensitivity, decision points, applicability)
This becomes the Phase 2 design spec. Skip it here and Phase 2 takes twice as long.
RBAC and data source inventory (every SharePoint site, OneDrive folder, Teams channel, or external system the agent needs)
A missing data source discovered in Phase 3 is a scope change that costs everyone.
Complexity classification (Agent Builder / simple vs. Copilot Studio / complex) and initial fee range
Misclassifying complexity means under-pricing the SOW and absorbing the overrun.
Check Templates Before Building Microsoft Agent Store
Common SMB use cases are already templated in the Microsoft Agent Store and Copilot Studio. Check before scoping any custom build:
- HR onboarding agent
- IT help desk / employee self-service agent
- Customer service assistant
- SharePoint site knowledge agent
- Meeting summary and follow-up agent
Template advantage: Starting from a template reduces Phase 3 build time by 30–50%. The savings go to your EBITDA, not the customer’s invoice.
Outcome-based agents — Phase 1 non-negotiable: If this engagement includes a Workflow Outcome Sprint (kicker on cycle time or deflection %), the process baseline must be documented before any agent code is written. Tab 6 applies: (1) capture baseline, (2) define counterfactual, (3) name counter-metric, (4) cap kicker at 10%/25% of base fee. None of these can be added retroactively.
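The Tab 6 kicker cap in item (4) above can be sketched as a small calculator. The 10%/25% floor-and-ceiling percentages are taken from this guide; the function name is ours, and the $7,500 example base fee is the Workflow Outcome Sprint figure used in Section 5.

```python
# Sketch of the Tab 6 kicker cap from item (4) above: floor at 10% and
# ceiling at 25% of the base fee. Percentages are from this guide.

def kicker_bounds(base_fee: float) -> tuple[float, float]:
    """Return the (floor, ceiling) kicker payments for an outcome-based SOW."""
    return base_fee * 0.10, base_fee * 0.25

floor, ceiling = kicker_bounds(7_500)  # the $7,500 sprint base fee in Section 5
print(floor, ceiling)  # 750.0 1875.0
```

Writing the floor and ceiling into the SOW appendix as computed dollar figures, not percentages, is what makes the kicker validation at Gate 5 a lookup rather than a negotiation.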
Partners who skip Design and go straight to Build report 40–60% more rework in the test phase (CPB Playbook field data). Customer sign-off on the design specification is the scope fence that protects you when scope creep surfaces in Phase 3.
Agent Builder vs. Copilot Studio Microsoft Agent Builder Docs
| Use Agent Builder if… | Use Copilot Studio if… |
| Knowledge-only, no external actions | Agent must take actions (create records, update CRM, send email) |
| SharePoint/OneDrive as sole data source | External connectors required (Salesforce, ServiceNow, etc.) |
| Users already hold M365 Copilot license | Autonomous capabilities needed beyond conversation |
| 2–4 week build · $3,500–$8,000 | 6–10 week build · $12,000–$28,000 |
The Golden Prompt Test Set (Write in Phase 2) Microsoft Agent Evaluation Checklist
Write 15–20 representative user questions with expected answers before building anything. These become:
- The Phase 3 pass/fail acceptance criteria (target: >90% pass rate)
- The monthly Phase 5 regression test suite in AgentOps
- The suggested starter prompts on the agent’s welcome screen
The discipline: If you cannot write 15 representative questions the agent should answer correctly, you don’t yet understand the use case well enough to build it.
ALM Environment Strategy Microsoft Power Platform ALM
Minimum two environments: Development and Production. For regulated verticals, add a Test environment. This is the mechanism that lets you roll back a bad deployment without disrupting the customer’s operations.
Development
Build and test here. Makers work in an unmanaged solution.
Test (optional)
Structured UAT with customer. Managed solution import from Dev.
Production
Live environment. Managed solution. All gates must be green first.
The agent does not leave Development until the golden prompt test set achieves a >90% pass rate. Below 90% means returning to the Phase 2 instructions and design — not building more features. Microsoft Copilot Studio Agent Evaluation Checklist
Test Types to Run
Golden prompt test set — pass rate = (passed ÷ total) × 100. Document false positives and false negatives separately.
Edge case testing — graceful decline on out-of-scope requests is a pass. Hallucination is a fail requiring instruction tuning.
RBAC validation — confirm the agent cannot surface content the calling user doesn’t have permission to access. Test explicitly, never assume.
DLP policy validation — confirm Power Platform DLP policies block disallowed connectors before Production.
Regression evaluation — re-run the full golden prompt set after every significant change to instructions or knowledge sources.
How to Interpret Pass Rate Results
>90% — Proceed to Phase 4. Document version and date as the Phase 5 regression baseline.
75–90% — Review false positives first. Usually an instruction tuning issue, not a knowledge problem. Adjust instructions before adding content.
<75% — Return to Phase 2. The design specification has gaps. Building more features does not fix a design problem.
Hold the gate: Customers will pressure you to deploy early. “It’s good enough” at 70% becomes a support crisis and a damaged retainer within 60 days. The gate is binary.
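The pass-rate formula and the three interpretation bands above can be expressed as a minimal gate helper. The 90% and 75% thresholds come from this guide; the function names and the 17-of-20 sample run are ours.

```python
# Minimal sketch of the Phase 3 gate decision described above. The 90% / 75%
# thresholds are from this guide; helper names and the sample run are ours.

def pass_rate(passed: int, total: int) -> float:
    """Golden prompt pass rate: (passed / total) x 100."""
    return passed / total * 100

def gate_decision(rate: float) -> str:
    if rate > 90:
        return "Proceed to Phase 4; record version and date as regression baseline"
    if rate >= 75:
        return "Tune instructions; review false positives before adding content"
    return "Return to Phase 2; the design specification has gaps"

rate = pass_rate(17, 20)          # a 20-prompt golden set with 17 passes
print(rate, gate_decision(rate))  # 85.0 -> tune instructions
```

Because the gate is binary, encoding it this way (or in the workbook itself) removes discretion at exactly the moment the customer is pressuring you to deploy early.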
Phase 4 has one commercial objective: get the AgentOps retainer signed before the build fee is paid. This is the moment the customer is most engaged and most willing to commit. Waiting until the agent has been live for 30 days is too late — the urgency fades.
The AgentOps Runbook — The Deliverable That Justifies the Monthly Retainer
The runbook is what separates a partner who builds agents from a partner who manages agents. It makes AgentCare ($350/agent/month at Tier 2) tangible to a customer who might otherwise think of agent management as “checking that it still works.”
Agent version history — every significant change logged with date, description, reason
Known limitations — documented scope boundaries; what the agent will decline and why
Escalation path — L1/L2/L3 triage protocol; who gets notified when the agent fails
Monthly evaluation cadence — who runs the golden prompt set; threshold that triggers a tuning sprint
Knowledge source refresh schedule — which SharePoint libraries are reviewed monthly; how stale content is removed
Connector health monitoring — how Power Automate flow failures surface; who owns the alert
Rollback procedure — step-by-step reversal to prior managed solution version
Model update response protocol — when Microsoft updates Copilot Studio’s underlying model, who runs regression evaluation within what SLA
Choose the AgentOps Pricing Model at Phase 4 Handoff
Stacked Model
Default recommendation
Per-user base + AgentCare ($350/agent/mo) + Credit Wrap (30% Azure markup). ~$1,725/mo per 50-user Tier 2 customer. Transparent — best for customers who want to understand the invoice.
Flat Model
Simpler sales motion
$20–$25/user/month all-in at Tier 2. One invoice line. Best for partners who prefer traditional MSP pricing and customers who want one number.
QBIC Overlay
Outcome-based
Quarterly Business Impact Contract. One board-level KPI with a kicker floor/ceiling. Best for Tier 3 customers where the AI program has executive visibility.
Every Phase 1–4 investment is the cost of acquiring an AgentCare retainer customer. At $350/agent/month with 55% gross margin, a partner with 10 customers each running 3 agents generates $126,000/year in AgentCare ARR at approximately $69,300 gross profit — before per-user base, Credit Wrap, or any project work.
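The ARR arithmetic in the paragraph above reproduces as follows; every input ($350/agent/month, 10 customers × 3 agents, 55% gross margin) is taken from the guide itself.

```python
# Reproducing the AgentCare ARR arithmetic in the paragraph above.

AGENTCARE_PER_AGENT_MO = 350          # $/agent/month, Tier 2
customers, agents_per_customer = 10, 3
gross_margin = 0.55

arr = AGENTCARE_PER_AGENT_MO * agents_per_customer * customers * 12
gross_profit = arr * gross_margin
print(arr, round(gross_profit))  # 126000 69300
```

Note this is AgentCare alone; per-user base, Credit Wrap, and project work stack on top of it.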
The compounding logic: Each agent in steady state generates its own expansion revenue. The monthly evaluation run surfaces intent clusters the agent cannot currently handle — each one is an enhancement sprint opportunity. Agents drift as business processes and Microsoft’s underlying models evolve. That drift is the structural justification for the retainer. Microsoft Copilot Studio Agent Evaluation Checklist
| Cadence | Activity | Output | What It Signals to the Customer |
| Monthly | Golden prompt evaluation run | Pass rate vs. baseline; tuning tickets if pass rate drops ≥5pp below baseline | “We catch drift before it affects your users” |
| Monthly | Knowledge source refresh | Stale content removed; new content indexed; connector configs verified | “The agent’s answers stay current as your business changes” |
| Monthly | Analytics MBR (Copilot Studio dashboard) | Session count, resolution rate, top intents, escalation rate, satisfaction | “You can see exactly what the agent is doing” |
| Monthly | AgentOps MBR with Business Champion | 45-min review; enhancement backlog maintained | “You always know what comes next” |
| Quarterly | QBIC kicker validation (if applicable) | KPI delta vs. prior quarter; Executive Sponsor sign-off; kicker payment | “Your AI investment has a measured return” |
| Quarterly | Capability expansion review | New feature assessment; enhancement proposals scoped and priced | “You are always at the leading edge” |
| Annually | Architecture review | Platform fit reassessment; ALM audit; DLP/RBAC check; next-agent identification | “Your AI infrastructure stays current and secure” |
5 · Real Examples: What These Engagements Look Like in Practice
The following three case studies are publicly documented by Microsoft. Each is mapped to where it enters the Tab 8 and Tab 9 checklists, showing the specific engagement structure a partner would apply and where the revenue flows.
Legal Services · ~50 Attorneys
Mike Morse Law Firm
Tier 1 → Tier 2 progression
LMS built in-house
Zero
outside vendors for LMS build
Michigan’s largest personal injury practice uses Copilot across its full operation — meeting summaries, document drafting, case law research, email correspondence, Excel analysis. Their most notable engagement: building an entire Learning Management System from scratch using Copilot, generating slides, graphics, training scripts, and process documentation with zero outside vendors.
This is the Tier 1 → Tier 2 progression in its most recognizable form. Broad Copilot adoption across the firm (Tab 8) surfaces one specific workflow that is repetitive, document-heavy, and benefits from a structured knowledge base: LMS content creation. That workflow becomes a Tab 9 agent build.
“Digital transformation is getting the mundane out of the way of human work.”
John Georgatos, CIO — Mike Morse Law Firm · Microsoft Customer Story
Checklist Mapping
Where this customer enters Tabs 8 and 9, and what the partner does at each stage
Tab 8 · Gate 3: The LMS workflow surfaces as an agent build candidate at Day 45. The Delivery Lead asks: “Is there a repetitive task you wish Copilot would just handle automatically?” The CLO and training lead point to LMS content creation. Document the signal in the Gate 3 audit folder and carry it into the Gate 4 agent build proposal.
Tab 8 · Gate 4: The agent build proposal is presented alongside the Gate 4 ROI Report at Day 75. “Here is what Copilot did for your team. Here is what a dedicated LMS content agent could do — and here is what the scoping engagement looks like ($3,500–$6,000 Agent Discovery Workshop).” The ROI Report makes the upsell conversation feel natural, not transactional.
Tab 9 · Phase 1: Agent Discovery Workshop scopes the LMS agent. The workflow map covers: how training content is currently created, which SharePoint libraries store materials, which roles generate content, and how often it needs updating. Output: SOW for a declarative agent build using Agent Builder, with SharePoint as the sole knowledge source.
Revenue: Agent Builder tier, $3,500–$8,000 build fee → $350/agent/month AgentCare. The AgentCare retainer is straightforward to justify: every time the firm updates its training curriculum, the agent’s knowledge source needs refreshing. The monthly evaluation run confirms the agent is producing accurate outputs for new hires. Both activities are documented in the AgentOps runbook.
Food & Consumer Goods · 50 Employees
Newman’s Own
Tier 1 Activation Sprint
Three-department Champion model
3×
marketing campaigns per month
50 employees competing against multinational brands. Newman’s Own adopted Copilot to run leaner across three departments simultaneously. The marketing team triples monthly campaign output — briefs that took three hours now complete in 30–60 minutes. The logistics team cut daily publication review from a full morning to 30 minutes. The Chief Legal Officer uses Copilot for research, citations, and contract cleanup.
This is a textbook Tab 8 Activation Sprint with one nuance new partners often miss: three distinct Champion profiles across three departments, each with a different use case and a different ROI story. The Gate 4 ROI Report here is unusually compelling — it quantifies productivity gains across three functions, making the Tier 1 retainer renewal nearly self-evident.
Checklist Mapping
Where this customer enters Tabs 8 and 9, and what the partner does at each stage
Tab 8 · Pre-Gate: Three Champions identified before Day 1 — one per department. The Pre-Gate Champion conversation with the CEO identifies three distinct profiles: Social Media Manager (Marketing), Operations Lead (Logistics), and CLO (Legal). Each gets a role-specific prompt library at Gate 1 kick-off. This multi-department setup is what makes the Gate 4 ROI Report quantifiable across three distinct win stories, not just one.
Tab 8 · Gate 4: The three-function ROI Report makes renewal self-evident. The Gate 4 report quantifies: (1) campaign brief time: 3 hrs → 30–60 min × campaign volume; (2) publication review: full morning → 30 min daily; (3) legal research: multi-hour → fraction. When the Executive Sponsor sees three independent productivity wins across three departments, the Tier 1 retainer renewal at Gate 5 is a formality.
Tab 8 · Gate 3: The CLO’s research workflow surfaces as an agent build candidate. “Research, citations, contract cleanup” is the pattern that becomes a Tab 9 engagement. A declarative agent with the company’s contract templates, legal references, and regulatory documents as knowledge sources — scoped at Gate 3, proposed at Gate 4, and workshopped at Gate 5 as the follow-on engagement.
Revenue: Tier 1 retainer ($12/user/month) → Legal Research agent build → AgentCare. The follow-on Tab 9 engagement is a Legal Research agent: Agent Builder tier, $3,500–$8,000 build, $350/agent/month AgentCare for the monthly knowledge refresh as case law, contract templates, and regulatory references evolve.
Life Sciences · Regulated Vertical
Morula Health
Workflow Outcome Sprint candidate
Regulated content · Cycle time KPI
Weeks → Days
content creation cycle time
A small UK life sciences agency producing scientific and regulatory content under strict accuracy requirements. Using Copilot in Word to summarize complex scientific data tables, content creation time dropped from weeks to days while accuracy standards were maintained. Employees shifted from formatting and summarization to analysis and quality review.
This is why regulated verticals are often the most compelling Workflow Outcome Sprint candidates: the counter-metric (accuracy and regulatory compliance) is inherently measurable, the regulatory environment controls for external variables, which keeps the counterfactual clean, and the KPI compression (weeks → days) is dramatic enough to justify a kicker conversation at Gate 5.
Checklist Mapping
Where this customer enters Tabs 8 and 9, and what the partner does at each stage
Tab 8 · Pre-Gate: Regulated vertical — Gate 1 must include compliance sign-off and sensitivity label configuration. Before provisioning any licenses, the Pre-Gate checklist requires: (1) cycle-time KPI captured with timestamp and measurement method; (2) counter-metric named — in this case, accuracy rate or regulatory rejection rate; (3) compliance reviewed and sign-off obtained; (4) sensitivity labels configured for scientific data libraries. All four items in the audit folder before Gate 1 begins.
Tab 8 · Gate 3–4: KPI delta measured at Gate 4 against the Day 0 baseline. If weeks-to-days compression is confirmed and the accuracy counter-metric has not degraded, the Workflow Outcome Sprint kicker is earnable at Gate 5. The Tab 6 kicker calculator applies: base fee × 25% ceiling = maximum kicker payment. Both the KPI delta and the counter-metric must be validated in writing by the Executive Sponsor, never on self-reported data alone.
Tab 8 · Gate 5: Scopes the next 60-day Workflow Outcome Sprint. The sprint (from Tab 7) targets further cycle-time reduction with a fresh kicker: $7,500 base fee + floor ($750) / ceiling ($1,875). The sprint is signed at Gate 5 close-out and converts into Tier 1 MRR on Day 91. This is the Tab 7 Workflow Sprint model applied exactly as designed to a regulated content workflow.
Revenue: Activation Sprint → Workflow Outcome Sprint → Tier 1 MRR. Three-stage progression: Tab 8 Activation Sprint ($20,000 base, kicker at Gate 5) → Tab 7 Workflow Outcome Sprint ($7,500 base + kicker) → Tier 1 MRR retainer ($12/user/month). Each stage closes into the next with the MRR retainer already agreed before the current sprint ends. No cold start at any transition.
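The kicker arithmetic above reduces to two percentages of the base fee. A minimal sketch: the 25% ceiling is stated in the Tab 6 rule (base fee × 25% = maximum kicker), while the 10% floor is inferred here from the $750 floor on the $7,500 sprint — treat the workbook as the authoritative source:

```python
def kicker_bounds(base_fee: float,
                  floor_pct: float = 0.10,
                  ceiling_pct: float = 0.25) -> tuple[float, float]:
    """Return (floor, ceiling) kicker payments for an outcome-based sprint.

    ceiling_pct = 0.25 matches the Tab 6 "base fee x 25% ceiling" rule;
    floor_pct = 0.10 is inferred from the $750 floor on a $7,500 base.
    """
    return base_fee * floor_pct, base_fee * ceiling_pct

# Workflow Outcome Sprint from this example: $7,500 base
floor, ceiling = kicker_bounds(7500)
print(floor, ceiling)  # 750.0 1875.0

# Activation Sprint from this guide: $20,000 base
print(kicker_bounds(20000))  # (2000.0, 5000.0)
```

The same two percentages price any kicker conversation, so the Delivery Lead can quote floor and ceiling before the SOW is drafted.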
6 · RACI — Accountability Reference for New Partners
What RACI Means
RACI stands for Responsible, Accountable, Consulted, and Informed — a standard project management framework for defining who does what in a multi-role engagement. In the CPB context, it functions as a deal-prep accountability map: for each role, it defines what they do, when they are needed, and what the absence of that role means for the engagement. Before any engagement starts, every named role in this table should have a specific person’s name next to it. If a role is unfilled and the no-go signal applies, do not start the engagement until it is resolved.
| Role | Tab 8 — Activation Sprint | Tab 9 — Agent Build | No-Go Signal If Unfilled |
| --- | --- | --- | --- |
| Executive Sponsor (Customer side) | Signs outcome-based SOW. Validates KPI at Gate 5. Final authority on kicker payment. | Signs agent build SOW. Validates Phase 4 AgentOps model. QBIC kicker authority. | Missing = do not price outcome-based. Project-only pricing is still viable without a sponsor. |
| Business Champion (Customer side) | Day-to-day counterpart. Clears internal friction. Present at every gate review. Seeds the community channel with wins. | Primary UAT participant in Phase 3. Monthly AgentOps MBR contact. Identifies next-agent opportunities. | No Champion = no valid gate reviews = no reliable adoption data. Block the engagement until named. |
| IT / Security (Customer side) | Tenant access. Purview policies. RBAC. Conditional Access. Gate 1 sign-off required. | Phase 2 RBAC and DLP design. Phase 3 guardrail validation. Phase 4 production deploy approval. | Unresolved policy blocks = delay SOW. Regulated verticals: security must approve before Gate 1 closes. |
| Delivery Lead (Partner side) | Runs all five gates. Owns the shared audit folder. Kicker math. MRR conversion conversation at Gate 5. | Runs all five phases. SOW scope fence. Phase 2 design sign-off. AgentOps runbook owner. | No named Delivery Lead = do not start. A single accountable person is non-negotiable. |
| AI Coach (Partner side) | Adoption clinics. Prompt library builds and updates. Community channel seeding. Low-adoption intervention. MBR delivery. | Suggested starter prompt design. Golden prompt test authoring. End-user training in Phase 4. Monthly MBR. | Required for Tier 2/3. For Tier 1 only, the Delivery Lead can cover this role. |
| AI Specialist / L3 (Partner side) | Technical readiness. SharePoint DAG reports. Sensitivity label configuration. Gate 1 tenant validation. | Phase 2 architecture. Phase 3 build and test. ALM management. Phase 5 evaluation cadence. | Required for Tier 2/3 engagements. For Tier 1 only, ServiceSolv can co-deliver this role. |
| Data / Security Eng (Partner side) | Gate 1 oversharing remediation. Purview DLP policies. Sensitivity label configuration at scale. | Phase 2 RBAC and DLP design. Phase 3 guardrail validation. Phase 5 connector health monitoring. | Required when agents touch sensitive data: HIPAA-regulated, financial records, legal documents. |
| vCIO / AE (Partner side) | Executive Sponsor relationship. Gate 5 kicker math and MRR conversion conversation. MCI application. | Phase 1 SOW scoping. Phase 4 AgentOps model presentation. QBIC kicker conversations. | Must be present for any kicker conversation. The vCIO is the sponsor’s peer — not the technician’s. |
| TD SYNNEX ServiceSolv (Distribution bench) | Co-delivery of L1/L2 support during Tier 1 retainer until partner builds or hires an AI Specialist. | Co-delivery of L1/L2 AgentOps support for any role the partner cannot staff internally. | Not a no-go — ServiceSolv is the bridge until you build in-house AI delivery capacity. |
7 · What Executing Tabs 8 and 9 Builds Toward Financially
The following is the base-case Tab 2 model from the CPB Workbook — not a projection, but the designed output of running structured engagements consistently across a 12-month period.
| Revenue Stream | Tab 2 Inputs (Base Case) | Annual ARR | Gross Margin | Gross Profit |
| --- | --- | --- | --- | --- |
| Tier 1 Retainer | 15 customers · 50 users · $12/user/mo | $108,000 | 48% | $51,840 |
| Tier 2 Retainer | 5 customers · 50 users · $12/user/mo base | $36,000 | 45% | $16,200 |
| AgentCare — Tier 2 | 5 customers · 3 agents · $350/agent/mo | $63,000 | 55% | $34,650 |
| Tier 3 Retainer | 1 customer · 75 users · $15/user/mo | $13,500 | 41% | $5,535 |
| AgentCare — Tier 3 | 1 customer · 6 agents · $400/agent/mo | $28,800 | 55% | $15,840 |
| Agent 365 Overlay | 1 customer · 75 users · $10/user/mo | $9,000 | 50% | $4,500 |
| Credit Wrap (T2+T3) | 6 customers · $250–500 Azure/mo · 30% markup | $6,300 | 85% | $5,355 |
| Total Recurring Retainer ARR | | $264,600 | ~50% | $133,920 |
| Copilot Readiness Assessments | 12/year · $3,500 avg | $42,000 | 60% | $25,200 |
| Agent Discovery Workshops | 8/year · $4,500 avg | $36,000 | 60% | $21,600 |
| Agent Builds (simple) | 12/year · $6,500 avg | $78,000 | 35% | $27,300 |
| Agent Builds (complex) | 3/year · $18,000 avg | $54,000 | 28% | $15,120 |
| MCI Reimbursements‡ | Conservative estimate · submit at Gate 5 | $30,000 | 100% | $30,000 |
| Total Project Revenue | | $240,000 | ~49% | $119,220 |
| Total Copilot Practice Revenue | | $504,600 | ~50% | $253,140 |
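The recurring half of the model can be recomputed directly from its own inputs as a sanity check before you adapt the Tab 2 base case to your practice. A sketch using only the figures stated in the table (the gross-profit subtotal of the seven recurring rows sums to $133,920):

```python
# (customers, billable units, $/unit/month, gross margin) per recurring stream
streams = {
    "Tier 1 Retainer":    (15, 50, 12, 0.48),
    "Tier 2 Retainer":    (5, 50, 12, 0.45),
    "AgentCare - Tier 2": (5, 3, 350, 0.55),
    "Tier 3 Retainer":    (1, 75, 15, 0.41),
    "AgentCare - Tier 3": (1, 6, 400, 0.55),
    "Agent 365 Overlay":  (1, 75, 10, 0.50),
}
arr = {name: cust * units * rate * 12
       for name, (cust, units, rate, _) in streams.items()}
margins = {name: m for name, (_, _, _, m) in streams.items()}

# Credit Wrap is a flat figure in the table (variable Azure spend), not a per-unit rate
arr["Credit Wrap (T2+T3)"] = 6300
margins["Credit Wrap (T2+T3)"] = 0.85

total_arr = sum(arr.values())
total_gp = sum(arr[k] * margins[k] for k in arr)
print(total_arr)          # 264600
print(round(total_gp))    # 133920
```

Swapping in your own customer counts, seat counts, and margins turns the same loop into a live version of the Tab 2 base case.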
The structural point: Every dollar in the retainer ARR column flows directly from executing Tabs 8 and 9 with discipline. Tab 8 Activation Sprint → Tier 1 MRR. Tab 9 Agent Build → AgentCare. Tab 8 Gate 3 agent signal → Agent Discovery Workshop. The project revenue (bottom half) funds retainer growth. Neither half works without the other — and neither works without the engagement structure.
‡ MCI reimbursement figures reflect the Microsoft Commerce Incentives program as structured at time of writing. Program rates, eligibility criteria, and reimbursable activities change annually. Verify current terms with your TD SYNNEX rep or the
Microsoft Commerce Incentives guide before modeling against live customer engagements.
Reference List
All sources cited in this document. Click any link to access the primary source directly.
- Microsoft: M365 Copilot Adoption Planning Checklist — Four-phase deployment model, Champion program, 30% threshold framework
adoption.microsoft.com/en-us/copilot/essential-guide/plan/
- Microsoft: M365 Copilot for SMB — SMB Success Kit, flight crew framework, adoption resources
adoption.microsoft.com/en-us/copilot/smb/
- Microsoft: M365 Agents Deployment Checklist — Admin roles, agent policies, ALM environment strategy, governance framework
learn.microsoft.com/en-us/copilot/microsoft-365/agent-essentials/m365-agents-checklist
- Microsoft: Copilot Studio Agent Development Lifecycle — The five-phase model (Discover → Experimentation → Build → Deploy → Steady State) that Tab 9 is built on
learn.microsoft.com/en-us/microsoft-copilot-studio/guidance/architecture/deployment-lifecycle
- Microsoft: Copilot Studio Agent Evaluation Checklist — Golden prompt methodology, >90% pass rate standard, continuous evaluation framework
learn.microsoft.com/en-us/microsoft-copilot-studio/guidance/evaluation-checklist
- Microsoft: SharePoint Advanced Management — Copilot Readiness — DAG reports, Site Access Reviews, RAC policy, oversharing remediation tools
learn.microsoft.com/en-us/sharepoint/get-ready-copilot-sharepoint-advanced-management
- Microsoft: Secure & Governed Data Foundation Deployment Blueprint — Phased oversharing remediation framework and Purview governance pre-flight
learn.microsoft.com/en-us/microsoft-365/copilot/secure-govern-copilot-foundational-deployment-guidance
- Microsoft: Deploying M365 Copilot in Five Chapters (Inside Track) — Hero scenario framework, maturity model, champion engagement approach
microsoft.com/insidetrack/blog/deploying-microsoft-365-copilot-in-five-chapters/
- Syskit: Copilot Readiness Assessment Framework — Five-pillar permissions audit; oversharing as the highest pre-deployment risk
syskit.com/blog/copilot-readiness-assessment/
- Orchestry: M365 Copilot Readiness Checklist — ROT content risk framework; storage estate as the most-skipped readiness dimension
orchestry.com/insight/microsoft-365-copilot-readiness-checklist
- AvePoint: MSP Copilot Technical Readiness Checklist — Four-step MSP AI preparation model for SMB clients
avepoint.com/ebooks/msp-copilot-readiness-checklist
- inforcer: How to Assess Microsoft Copilot Readiness as an MSP — M365 utilization audit approach; security posture assessment framework
inforcer.com/insights/how-to-assess-copilot-readiness
- E2E Agentic Bridge: Copilot Readiness Assessment — The Complete Checklist — Five-pillar permission hygiene framework before deployment
e2eagenticbridge.com/blog/copilot-readiness-assessment-checklist