TD SYNNEX  The Copilot Cowork Power User Guide
From Chat to Delegation: Your A-to-Z Field Manual for Microsoft 365 Copilot Cowork
Prepared by: Ken Lince — Sr. Director, Cloud Engineering, TD SYNNEX  ·  Last updated: April 22, 2026

Copilot Cowork doesn’t chat with you — it goes off and does the work while you’re in a meeting, in a car, or asleep.

But only if you use it like a colleague, not like a search box. This guide tells you how.

1. What Is Copilot Cowork?

Copilot Cowork is a new Microsoft 365 Copilot surface, announced on March 9, 2026 by Charles Lamanna, that lets you delegate multi-step work to an AI agent instead of chatting with it turn-by-turn. It runs server-side inside your Microsoft 365 tenant, reads and writes across Outlook, Teams, Word, Excel, PowerPoint, SharePoint, and OneDrive through Microsoft Graph, and is explicitly powered by Anthropic’s Claude models as a subprocessor.

It is not a rename of Copilot Chat. Chat answers one question at a time. Cowork takes a task like “review last week, clean up my calendar, draft regrets for conflicts, and block 3 hours of focus time Tuesday” and actually does it — with an approval preview before anything hits your inbox or calendar.

13
Built-in skills (Email, Calendar, Teams, Word, Excel, PowerPoint, SharePoint, OneDrive, Deep Research, and more)
Source: Microsoft Learn
20
Max custom SKILL.md files per user in /Documents/Cowork/Skills/
Source: Microsoft Learn
250k
Characters per message input limit — attach whole folders of PDFs and contracts in a single turn
Source: Microsoft Learn
$99
Per user/month list price for M365 E7, the broad-availability SKU (Frontier preview today; GA May 1, 2026)
Source: GeekWire, Mar 9 2026

What makes it distinct from Copilot Chat

Where to find it

Cowork lives at m365.cloud.microsoft and in the Microsoft 365 desktop apps for Windows and Mac. It is not available on mobile today. Your admin must be enrolled in the Microsoft Frontier program for Cowork to appear.

↑ Back to Table of Contents

2. The Big Idea: Cowork Delegates, Chat Answers

The single most important mindset shift for a new Cowork user is to stop thinking of it as a chatbot.

Chat (the old model)

You ask → it answers → you iterate.

You stay in the loop on every turn. The conversation is the work. When you close the window, the work stops.

Best for quick questions, drafting a paragraph, summarizing a single file, or explaining a concept.

Cowork (the new model)

You delegate → it works → you review.

You describe the outcome. Cowork plans, executes, asks for clarification only when it genuinely can’t proceed, and lands finished artifacts in OneDrive or your inbox.

Best for multi-step tasks: meeting prep, weekly reports, customer briefings, calendar cleanup, recurring briefings, research-then-draft jobs.

Practitioner perspective

Cowork runs server-side — “it keeps working even when I close my laptop” — is the single biggest practical change vs chat-bound Copilot.

— Pascal Brunner-Nikolla, Microsoft MVP, Copilot Cowork first look

Practically, this means the unit of work is a Task, not a chat turn. You’ll get the best results by writing prompts the way you’d brief a colleague: state the goal, the constraints, the data sources, the deliverable, and where you want it saved.

↑ Back to Table of Contents

3. How It Works — The Mechanic That Changes Everything

If you learn only one thing from this guide, make it this. It explains 90% of the “my AI got dumb” experiences users report.

The one-sentence truth

Every time you press Enter, Cowork (and every Claude-backed product) re-reads your entire conversation from turn one before answering your new message. The model has no memory between turns. What you perceive as “remembering” is actually the client app re-sending the whole transcript every single time.

“The Messages API is stateless, which means that you always send the full conversational history to the API.” — Anthropic, Using the Messages API (platform.claude.com)

Four consequences that follow from that one fact

~5×
Cost per turn at 150k tokens vs 30k tokens, same prompt, on Sonnet 4.6
SitePoint · Mickiewicz, Mar 2026
39%
Average drop in task performance from single-turn to multi-turn across top LLMs
Laban et al., Microsoft Research + Salesforce, 2025
O(n²)
How attention cost scales with conversation length — doubling a chat ~4× the attention work
Vaswani et al., NeurIPS 2017
~75%
Context fill point at which Claude Code auto-compacts (the inferred default Cowork uses too)
Anthropic, Apr 15 2026; Morph LLM
  1. Cost grows super-linearly with length. Every new message re-prices every previous message.
  2. Latency tracks cost. The model re-processes the whole prefix each call. Prompt caching cuts the re-processing bill by 90% on cached prefixes, but the tokens still ride along.
  3. Quality degrades — “context rot.” Anthropic’s own term: “model performance degrades as context grows because attention gets spread across more tokens, and older, irrelevant content starts to distract from the current task.”
  4. Longer threads are worse on every axis. Cost, speed, and output quality — despite the intuition that “keeping everything in one chat helps the agent remember.”

The cost curve, made concrete

A realistic 20-turn Sonnet 4.6 conversation (500-token system prompt, ~600 tokens of user input per turn, ~1,200 tokens of assistant output per turn, no caching):

TurnCumulative input tokensInput cost (no cache)With 90% cache read after turn 1Savings
11,100$0.0033$0.0033
510,100$0.0303$0.004485%
1022,600$0.0678$0.008388%
1535,100$0.1053$0.012288%
2047,600$0.1428$0.016289%

Scaled to a marathon session: a single turn at 150k tokens costs roughly $0.45 in input alone versus $0.09 at 30k tokens — a 5× difference for the same one-line prompt (SitePoint, Mar 2026).

What this means for you as a Cowork user

Every minute you spend piling more context into a single Task raises the cost, slows the response, and lowers the quality of what comes back. The fix is counter-intuitive but universal: new objective → new Task. Stable rules go in SKILL.md. Heavy data goes in an attached OneDrive file. Intermediate exploration goes to Deep Research. You don’t lose memory by starting fresh — there was no memory to lose.

↑ Back to Table of Contents

4. The Models Under the Hood

Microsoft and Anthropic have both confirmed on the record that Anthropic Claude models power Cowork and the broader Microsoft 365 Copilot experience. You don’t have to pick a model — Microsoft routes to the right one for the task — but it helps to know what’s running.

Model familyBest forContext windowTradeoff
Claude Opus 4.5 / 4.6 / 4.7Hardest reasoning — deep research, multi-document synthesis, strategic drafts200,000 tokensSlowest and most expensive; overkill for quick tasks
Claude Sonnet 4.5 / 4.6The workhorse — drafting, rewrites, code review, most Cowork tasks200,000 tokensThe right default for most delegation
Claude Haiku 4.5Fast, cheap, routine classification / summarization / lookups200,000 tokensLess depth — don’t hand it strategy work
OpenAI GPT-series & Microsoft MAIAvailable in M365 Copilot; Microsoft chooses per taskVariesModel routing is opaque — you don’t always see which model answered
Regional note

In EU / EFTA / UK tenants, the Anthropic subprocessor is off by default and must be opted in by your admin. Cowork is not available in GCC, GCC High, DoD, or sovereign clouds. Source: IDM Magazine, Mar 26 2026; Microsoft Learn FAQ.

The honest critique

“Will it continue to use lower-end models or older models without telling you the way Copilot does?”

— Ethan Mollick (Wharton), reacting to the Cowork launch. Reported by GeekWire, Mar 9 2026.

Mollick’s question is unresolved. Microsoft has not published a model-routing telemetry view. Treat Cowork as “Claude-family powered, with Microsoft’s discretion to downshift.” If absolute model transparency matters for a use case (regulated drafting, legal review), document the output and the date and revalidate periodically.

↑ Back to Table of Contents

5. The Power User Habit Loop

Every power-user pattern across Claude Code, Claude.ai, and Copilot Cowork reduces to six moves. Internalize these and Cowork starts feeling like a colleague instead of a quirky chatbot.

1
New objective → new Task

If the goal changes, the Task changes. Don’t reuse yesterday’s Task for today’s question. Anthropic’s own rule for Claude Code applies exactly: “when you start a new task, you should also start a new session.”

2
Stable rules → SKILL.md, not chat

Tone, structure, never-do rules, section templates, output destinations — all belong in a SKILL.md file that Cowork auto-discovers. Instructions repeated in chat get lost during compaction; instructions in SKILL.md are re-loaded every session.

3
Heavy data → OneDrive file, attached once

The 250k-character message limit is plenty for a pointer to state. Attach the spec/report/dashboard at the start of the Task — it sits in the stable prefix and caches well. Drip-feeding file content across messages scatters it across the growing messages tail and is the worst of both worlds.

4
Intermediate exploration → Deep Research (the sub-agent)

If you just need a conclusion, not a transcript, delegate it. Deep Research runs in its own fresh context and returns a summary — the intermediate tool noise never inflates the parent Task’s context.

5
Confidence drops → start fresh

When Cowork repeats itself, forgets a constraint you set, hallucinates a file path, or slows down noticeably — stop and restart. Paste a short hand-written summary of what you’d decided into the new Task. This is the single highest-leverage habit in the guide.

6
Approvals → batch with “Don’t ask again”

Every re-confirmation adds a turn to the conversation. If Cowork is going to send 11 similar emails, approve the first, choose “don’t ask again for this conversation,” and let the rest flow. You retain oversight at the Task level.

↑ Back to Table of Contents

6. Your First Week: A Starter Plan

This plan is aimed at the Microsoft 365 professional who has “used Copilot a few times.” Do these in order; each builds the muscle for the next.

Day 1 — Your morning briefing

Open Cowork and say (in plain English):

“Every morning at 8:30 AM, send me a daily briefing covering (1) unread email from the last 16 hours that looks important or from anyone who reports to me, (2) my first three meetings with a one-line context per meeting pulled from prior emails, and (3) anything marked follow-up from yesterday. Save the output to my Cowork conversations and send a Teams message to myself with the highlights.”

Check Tasks → Scheduled. You now have a durable recurring agent. This alone is worth the license.

Day 2 — Meeting prep

Pick tomorrow’s most important meeting. Tell Cowork:

“For my [time] meeting with [attendees], put together a briefing packet: last six months of email with these people, any open opportunities from CRM if you have access, prior meeting notes, and three suggested talking points. Save as a Word doc in OneDrive at /Meetings/[date]-[name].docx.”

Day 3 — Calendar cleanup

Try this on a messy week:

“Review my next 10 business days. Find any meetings I can decline or delegate without impact, protect at least three 90-minute focus blocks, and draft regrets (polite, in my voice) for meetings I’m declining. Show me the full plan as a preview before making any changes.”

Day 4 — Your first SKILL.md

Pick one thing you write the same way every time (a weekly status, a customer brief, a deal review). Write it as a skill (see Section 12 for the template). From that day forward, any Task that needs that format just works.

Day 5 — Deep Research pass

Pick a company, topic, or competitor you’d normally spend an afternoon researching. Delegate it:

“Use Deep Research to produce a 3-page brief on [topic]: key facts, recent developments, three strategic implications for us. Save to OneDrive at /Research/[topic].docx. Don’t ask me follow-ups unless you hit a genuine ambiguity.”
The expected outcome of your first week

By Friday: one durable daily agent, one briefing template you trust, one decluttered calendar, one SKILL.md doing invisible work, and one research artifact you didn’t have to write. That’s the baseline. Everything else in this guide compounds from there.

↑ Back to Table of Contents

7. Cowork vs the Claude Family — What You Actually Get

The comparison most people have in their head — “Cowork vs Claude.ai” — is already out of date. As of April 2026, Anthropic ships five distinct products bundled into one desktop install, and Microsoft 365 Copilot Cowork is a sixth, separately licensed surface. Before you can decide which one to use, you have to know they all exist.

The Anthropic product family, April 2026

A single Claude desktop download gives you Claude Chat, Claude Code, Claude Cowork (desktop), Claude Design, and the Office add-ins (Excel, PowerPoint, Word, Chrome, Slack) — all on one subscription. Copilot Cowork lives outside this bundle, inside your Microsoft 365 tenant, under an E7 seat license.

The six-way landscape

Product What it’s for Where it runs Licensing
Claude Chat
claude.ai · web, desktop, iOS, Android
One-question-at-a-time assistant. Artifacts for interactive HTML/SVG/code. Projects, Memory, Research, Skills. Anthropic cloud Free · Pro $17/mo · Max $100–$200/mo · Team · Enterprise
Claude Code
CLI · VS Code · JetBrains · desktop · web · iOS
Agentic coding tool: reads/writes files, runs shells, edits git repos, spawns subagents, runs plan-mode loops, schedules Routines. Your machine + Anthropic cloud (Routines are cloud-run) Included on Pro, Max, Team, Enterprise (as of Apr 22 2026; a pricing test for new Pro users is in flight)
Claude Cowork desktop
macOS · Windows (GA Feb 10, 2026)
Non-developer agentic desktop app for knowledge work: file organization, document prep, research synthesis, data extraction. “Claude Code power for knowledge work.” Local machine; plan-and-approve gates; 38+ connectors; Computer Use in preview Included on all paid Claude plans
Claude Design
NEW · launched April 17, 2026
Research preview, Opus 4.7–powered. Prototypes, slides, one-pagers from a text prompt. Builds team design systems from your codebase + design files. Exports PDF, URL, PPTX, or push to Canva. Anthropic cloud (desktop UI) Pro, Max, Team, Enterprise (off by default for Enterprise; admin-enabled)
Claude Office add-ins
Excel · PowerPoint · Word · Chrome · Slack
Embed Claude directly in the Office file you’re editing. Claude for Excel writes formulas with cell-level citations and builds financial models. Inside the Office app you already use Included on Pro, Max, Team, Enterprise
Microsoft 365 Copilot Cowork
m365.cloud.microsoft · M365 desktop apps
Agentic execution layer inside your M365 tenant. 13 built-in skills for Email, Calendar, Teams, Word, Excel, PowerPoint, PDF, SharePoint, Deep Research, and more. Scheduled prompts. SKILL.md customization. Server-side in your Microsoft 365 tenant (Graph + Purview) M365 Frontier preview today · E7 at $99/user/mo · GA May 1, 2026

Sources: Anthropic — Download Claude; Anthropic — Claude Cowork product page; TechCrunch — Anthropic launches Claude Design, Apr 17 2026; 9to5Mac — Claude Code Routines, Apr 14 2026; Anthropic — Claude for Excel; Microsoft Learn — Cowork FAQ.

What changed in the last 90 days (and why it matters)

Source: Microsoft Tech Community — Opus 4.7 in M365 Copilot, Apr 16 2026.

The critical capability comparison

Dimension Copilot Cowork Claude Chat Claude Code Claude Cowork desktop Claude Design
Reads your tenant email/calendar/SharePointYes via GraphNoOnly via MCP connectorsOnly via connectorsNo
Respects Microsoft 365 ACLsYes by designNoNoNoNo
Reads your local filesNoFile upload onlyYes full FSYes authorized foldersFile upload only
Runs shell commandsNoNoYes BashVia Computer Use (preview)No
Native Word/Excel/PPT/Teams actionsYes first-classVia Artifacts (sandbox)Via Skills (PPTX/DOCX/XLSX)Writes real filesExports PPTX/PDF
Scheduled recurring tasksYes nativeNoYes Routines (Apr 2026)No (practitioner cron)No
Custom skills / playbooks20 × SKILL.md in OneDriveAnthropic SkillsFile-based skills + subagentsSKILL.mdDesign systems
Who picks the modelMicrosoft routesYou pickYou pick per sessionAnthropic routes; Extended Thinking opt-inFixed: Opus 4.7
Enterprise audit / PurviewYesNoNoNoNo
MobileNoYesiOS companionDesktop onlyDesktop only
Delete files / foldersBlocked by policyN/AYes via BashYes with approvalN/A

Pick the right tool — A through E

A
Pick Copilot Cowork when…

The task reads or writes data inside your M365 tenant; enterprise audit is mandatory; the output has to respect existing SharePoint/OneDrive/Outlook ACLs; or you need a scheduled agent that your admin and Purview can see. This is the only product in the landscape that treats your tenant as a first-class operating environment.

B
Pick Claude Code when…

You’re editing a git repository, running tests, debugging, refactoring, spawning subagents on a codebase, or need any form of shell access. Bash and a full local filesystem are what Claude Code has and nothing else in this table does.

C
Pick Claude Cowork desktop when…

The work is on your personal machine — local folders, Dropbox, personal Gmail, research PDFs, a side project — and you want agentic plan-and-approve execution without shipping any of it through your employer’s tenant.

D
Pick Claude Chat / Claude Design when…

You need a single-turn answer, a quick draft, an explanation, an interactive Artifact (HTML/SVG prototype), or a visual mock (Claude Design, if you have access). Works well on mobile. Not designed for multi-step agentic execution.

E
Pick Copilot Chat when…

You want a fast in-tenant answer, no write actions, no scheduling, no cross-app orchestration — the question fits in one turn and you don’t need Cowork’s approval-gate overhead. This is the Work IQ–grounded chat you already know.

The honest framing

There is no “winner” in this table. Each product wins the slot it was designed for. The mistake to avoid: trying to do tenant work in Claude Code (no Graph, no ACLs, no audit) or trying to do code work in Copilot Cowork (no shell, no local files, no git). That’s the next two sections.

↑ Back to Table of Contents

8. Same Prompt, Different Answer — Why

If you paste the same message into Claude Chat, Claude Code, Claude Cowork desktop, and Microsoft 365 Copilot Cowork, you will get four different answers. Not slightly different. Structurally different. This confuses new users — “aren’t they all Claude?” — and it shapes what each tool is actually good for.

The divergence is not a bug, and it is not randomness. Each product assembles a different prompt package around your text before the model sees it. Seven mechanisms stack up.

The mental model that matters

You don’t send a prompt to Claude. You send a prompt to a product, and the product assembles (system prompt) + (tool definitions) + (injected context) + (your message) before sending it to Claude. The four products assemble very different packages. That’s the whole ballgame.

The seven mechanisms of divergence

1
A different system prompt is prepended

Anthropic publishes claude.ai’s system prompt verbatim — it sets identity, formatting rules, the current date, and safety behaviors. Claude Code’s preset is documented in structure (“tool usage, code style, working directory, git status”) but not in full. Copilot Cowork’s is Microsoft-authored, establishes a Copilot identity (not Claude), enumerates the 13 skills, and installs the approval-gate and no-delete policies. Source: Anthropic — System Prompts release notes; Anthropic — Modifying system prompts.

2
A different tool set is available

Claude.ai has Artifacts, web search, and file upload — no shell. Claude Code has Read, Write, Edit, Bash, Grep, Glob, WebSearch, subagents. Copilot Cowork has exactly 13 Microsoft-authored skills — no shell, no local filesystem, no arbitrary code execution. A prompt that requires a tool the product doesn’t have will either be refused, worked around with a script, or redirected to a native skill. That rewiring changes the output entirely.

3
Different context is silently injected

Claude.ai injects the current date and not much else — it doesn’t know who you work for. Claude Code injects cwd, OS, git branch, platform, auto-memory paths. Copilot Cowork injects your name, email, title, department, manager, direct reports, company, timezone, current local time, and Work IQ signal over recent email/meetings/Teams — before the model sees a single character of your message. Same message, four very different effective context windows.

4
A different model routing decision is made

In Claude.ai and Claude Code you pick the model (Opus, Sonnet, Haiku). In Copilot Cowork Microsoft routes — and has confirmed that routing can pull from Claude, OpenAI, or Microsoft’s own MAI family depending on task, load, and cost. Opus 4.7 is in Cowork’s selector as of April 16, 2026, but which model handled any given turn is not user-visible.

5
Different safety and guardrail layers are stacked

Claude.ai runs Anthropic’s usage policy. Claude Code adds code-safety (don’t commit secrets, confirm destructive shell commands). Copilot Cowork stacks a Microsoft guardrail layer on top: no delete ever, per-action approval gates with risk indicators, no encrypted-file reads, no local filesystem, ACL inheritance, Purview audit, and refusal categories (employee evaluation, profiling by protected characteristics) that claude.ai doesn’t share. Ask all four “help me decide whether to terminate Alex” — you’ll get four different answers, and Cowork is the most likely to decline.

6
Different temperature and sampling defaults

Anthropic’s API docs: “even with temperature of 0.0, the results will not be fully deterministic.” Each product picks its own default, and none publish it. Empirically Claude Code runs lower temperature (more consistent code), Cowork variants prioritize tool-call precision, and Claude.ai runs with enough variation that two retries on the same prompt aren’t identical.

7
The effective prompt is therefore not the same prompt

Stack mechanisms 1 through 6 together and “the same prompt” reaches four very different runtime assemblies. Saying “Claude said X” without naming the product is incomplete. The product is the whole ballgame.

A worked example: “Write me a Python script that reads my inbox and summarizes this morning’s unread emails.”

Same 17 words. Four radically different deliverables.

Product What you actually get What you have to do next Time to value
Claude.ai A complete .py file in an Artifact. Uses Microsoft Graph SDK or imaplib. Placeholder auth — tenant_id, client_id, client_secret as env vars. TODO comments where you configure OAuth. Register an OAuth app, install deps, run the script locally, iterate on auth errors. Hours
Claude Code Claude Code writes main.py + requirements.txt + .env.example to disk, installs deps via Bash, prompts you once for an access token (via az login or device-code flow), then runs python main.py and prints the summary to your terminal. A working script on disk plus a live result. Paste a token once. Review output. Minutes
Claude Cowork desktop Proposes a plan: “(1) connect via Gmail/Outlook connector, (2) filter today’s unread, (3) produce inbox-summary-2026-04-22.docx in your folder.” Waits for approval. Executes. No Python involved — the connector path delivers the intent faster than a script would. Approve the plan. Read the doc. Seconds
Copilot Cowork Doesn’t write the script at all. Uses the Email skill directly — reads your inbox via Microsoft Graph (already authenticated as you), filters unread from this morning, produces the summary in chat or a Word doc in OneDrive. If you insist on a standalone script, it will likely redirect: “I can do this directly — would you like me to, or do you specifically need a standalone program?” Read the summary. Seconds
Three practical consequences

1. Don’t generalize benchmarks. A prompt that shines in Claude.ai may flop in Copilot Cowork because the Email skill short-circuits the reasoning path you intended.

2. Match the tool to the task. Tenant data → Copilot Cowork. Local code and terminals → Claude Code. Creative/reasoning with no specific environment → Claude Chat. Local-but-not-tenant knowledge work → Claude Cowork desktop.

3. Read the system prompt where you can. Anthropic publishes claude.ai’s at docs.claude.com/en/release-notes/system-prompts. Microsoft does not publish Copilot Cowork’s. That asymmetry is itself a signal.

Sources: Anthropic — System Prompts; Anthropic — Modifying system prompts; Anthropic — Prompt caching; Anthropic Messages API (temperature); Microsoft Learn — Cowork FAQ; Microsoft Tech Community — Opus 4.7, Apr 16 2026; GeekWire — Mollick on silent downshifts, Mar 9 2026.

↑ Back to Table of Contents

9. Cowork and Code — An Honest Look

“Copilot writes code today, so does that mean Cowork is a coding tool?” is the single most common question from developers evaluating the product. The short answer is no, and Microsoft doesn’t claim it is. The long answer has useful nuance.

Cowork is not a coding product. None of its 13 built-in skills is developer-facing. There is no terminal, no git integration, no local filesystem, and no MVP has published a Cowork coding demo.

What Cowork can do around a developer’s work is interesting — and that’s where the fit is.

What Cowork actually does with code

CapabilityStatusEvidence
Reads code files you attach with syntax highlighting Yes Supports .py .js .ts .java .c .cpp .go .rb .rs, .ipynb, and the full config-file family. Source: Microsoft Learn — Use Cowork.
Emits code snippets as chat text Informally Cowork is Claude-backed and will produce code if asked. But Microsoft has not certified this, not listed supported languages, and given it no place in the product’s marketed skills.
Writes a .py or .ps1 file into OneDrive Plausible, not documented Cowork saves outputs to OneDrive/Cowork. No primary source confirms code-file generation as a first-class destination.
Authors a Python-in-Excel =PY() cell No Python in Excel cell authoring is a Copilot in Excel feature, not a Cowork feature. Source: Microsoft Tech Community.
Builds a Power Automate flow No Power Automate has its own natural-language Copilot. Cowork does not invoke it. Source: Copilot in Power Automate overview.
Generates Office Scripts (TypeScript) for Excel Web No Office Scripts has its own editor and Copilot integration; no Cowork surface documented.
Authors VBA macros into .xlsm No Not documented. Secondary press occasionally conflates “Microsoft 365 Copilot” with Cowork; the actual VBA-authoring path lives inside Copilot in Excel.
Edits a local git repository No No local file access, no shell. Use Claude Code or GitHub Copilot.
Runs scripts server-side Undefined One opaque Microsoft Learn sentence: “Some background operations, like script execution, run without displaying individual steps.” Microsoft has not elaborated. Treat as an internal implementation detail, not a user feature. Source: Use Cowork.

The Microsoft dev-tool landscape — where Cowork sits

ToolPrimary audienceWhat it’s for
GitHub CopilotDevelopersIDE-embedded code completion, chat, agents, PR reviews. This is the dev tool.
Copilot in ExcelAnalysts, business usersAuthors formulas, Python cells, charts inside a workbook.
Copilot in Power AutomatePower usersGenerates cloud flows from natural-language descriptions.
Copilot StudioMakers, ITBuilds custom Copilot agents and has a Python code interpreter in a sandbox.
Copilot CoworkKnowledge workersAgentic execution across Office + Graph. Not a code tool, by design.

Same brand family, disjoint product surfaces. MVP Christian Buckley frames Cowork as “Claude Code for knowledge workers” — the analogy, not the substitute. The analogy lands because both are long-running, plan-and-execute agents; it misleads only if you read it as “Cowork replaces Claude Code.” It does not.

Can Cowork help a developer at all? Yes — just not by writing code.

The honest fit for Cowork in a developer’s workflow is all the non-code work around a project. That’s where it genuinely subsidizes what Claude Code and GitHub Copilot already do.

Conceptual demo: a developer’s week, split across tools

Claude Code handles (the code)

Repo work on your laptop. Writing, testing, refactoring, running CI, generating migrations, reviewing PRs, spawning subagents for large refactors, scheduling Routines to run nightly code quality checks.

Copilot Cowork handles (everything else)

Sprint-review deck from the week’s commits. Weekly status to your skip. Stakeholder update email. Meeting prep for the architecture review. SharePoint write-ups of the design you already coded. Bug-triage digest from Teams and email. Scheduled Monday digest of last week’s releases.

The “engineering manager’s Monday morning” demo script

This is a Cowork workflow a developer or engineering manager can run today, with no code written, that demonstrates the actual fit.

Every Monday at 8:00 AM, pull together my engineering team’s week-ahead brief:

1. READ last week’s releases by scanning my team’s Teams channel and the
   closed GitHub issues referenced in email notifications.
2. LIST open bugs assigned to my direct reports with priority labels
   from the last 14 days of email.
3. SUMMARIZE three blockers mentioned in my last 7 days of 1:1 meeting notes
   (saved in OneDrive at /1-1s).
4. DRAFT a short Teams post for my #eng-weekly channel with the top three
   priorities for this week. Show me a preview before posting; don’t send.
5. SAVE the full brief as /Reports/Weekly/eng-brief-YYYY-MM-DD.docx.

Constraints:
- Never include anything from private 1:1 meetings by name.
- Use past tense for last week, future tense for this week.
- If you can’t find data for a section, say so — don’t invent.

What makes this a good Cowork demo: zero lines of code, four Graph data sources, one scheduled recurrence, one approval gate (the Teams post), one durable artifact (the Word doc). A developer gets back 30–60 minutes every Monday and stays in Claude Code for the actual engineering work.

The one-line verdict for developers

Keep your paid Claude for the code. Use Cowork for the meetings, decks, status updates, and stakeholder communication that surround the code. The two don’t compete — they stack.

Sources: Microsoft Learn — Cowork FAQ; Microsoft Learn — Use Cowork; buckleyPLANET — Christian Buckley (MVP), Apr 2026; GitHub — GitHub Copilot docs; Microsoft Learn — Copilot Studio code interpreter FAQ.

↑ Back to Table of Contents

10. For the Existing Claude Power User

If you already pay for Claude Pro or Max — you’ve used Projects, you live in Artifacts, you run Claude Code daily, maybe you’ve tried Claude Design this week — this section is the straight talk on what to expect moving some of your workflow into Copilot Cowork.

Portability is low. Don’t expect your prompts to transfer.

Section 8 is the mechanics. The shorthand: the system prompt, tools, context injection, model routing, and safety stack are all different in Cowork. A Project you love in Claude.ai will produce different output in Cowork, and not necessarily better or worse — just different-shaped. Plan to re-tune, not copy-paste.

What you gain by moving specific workflows into Cowork

GainWhy it matters
Tenant grounding via Microsoft GraphCowork already knows your calendar, email, direct reports, SharePoint files, and Teams activity. No connectors, no OAuth dance, no copy-paste. Claude Cowork desktop requires a connector per data source and still can’t enforce your employer’s ACLs.
ACL inheritanceEvery read and write runs with your M365 permissions. This is not a feature Claude has an equivalent for — it’s an architectural property of Cowork living inside the tenant.
Purview audit trailEvery Cowork action is logged for enterprise audit. Required for regulated work; unavailable in Claude’s consumer products.
Native Office actions under the approval gateCowork sends real emails, posts real Teams messages, creates real calendar events — with a preview dialog. Claude Code can do this only through MCP connectors you wire yourself; Claude.ai can’t.
Scheduled Prompts as a native surfaceClaude Code just added Routines (Apr 14, 2026) — and it’s great — but Routines run inside Claude Code. Cowork’s Scheduled Prompts run across email, calendar, Teams, and SharePoint without leaving the tenant.
Durable Task stateThe Tasks view (List, Board, Scheduled) survives disconnects, laptop closes, and days off. Claude.ai conversations don’t, and Claude Code sessions rely on your machine being on unless you’re in Routines.

What you lose — be honest about it

LossWhy you’ll notice
ArtifactsNo interactive HTML/SVG sandbox pane. Cowork renders Adaptive Cards or produces real Office files — useful, but not the same as Claude.ai’s side-panel prototypes.
Picking your modelMicrosoft routes. You may get Claude Opus 4.7, Sonnet, an OpenAI model, or MAI — and you can’t always tell which. This is Mollick’s public concern.
Anthropic’s release cadenceAnthropic shipped Claude Design five days ago. Microsoft’s cadence on Cowork is TBD — M365 updates tend to arrive in larger but slower waves.
Computer UseClaude Cowork desktop and Claude Code both have Computer Use in research preview. Copilot Cowork does not expose this to you.
Extended Thinking togglesIn Claude.ai and Claude Cowork desktop you can dial up thinking budget. Cowork decides for you.
Shell and local filesystemObvious but worth naming. If your current Claude Code workflow depends on Bash and a repo, that workflow stays in Claude Code.
MobileCowork is browser + desktop only. Claude.ai has mobile apps; Claude Code has an iOS companion.

The “both” pattern — the right answer for most hardcore Claude users

You don’t pick one. You split the workload along the tenant boundary.

Keep in paid Claude (personal subscription)

Code. Local files. Side projects. Anything outside your work tenant. Artifacts and prototypes. Deep, open-ended reasoning sessions where you want to pick the model. Claude Design for visuals. Routines for scheduled developer automation.

Move to Cowork (your E7 seat)

Anything that reads or writes your tenant’s email, calendar, Teams, SharePoint, or OneDrive. Anything that needs the Purview audit trail. Scheduled recurring tasks that depend on Graph data. Meeting prep, customer briefings, stakeholder updates. SKILL.md encoding of your team’s playbooks.

Price check — apples to oranges

Claude Max 20x: $200/month, personal subscription, one seat, you pay. M365 E7: $99/user/month list, seat license your employer pays, covers all of Cowork plus the rest of E5 plus E7. These aren’t substitutes. For a hardcore coder the math usually favors keeping Claude Max personally and using Cowork at work — because the employer pays the Cowork seat, and Claude Max buys you Claude Code + Cowork desktop + Chat + Design on your own machine.

Migration checklist for a Claude power user’s first Cowork week

  1. Don’t port your Projects. Re-express the three Projects you use most as SKILL.md files in OneDrive (Section 12). System prompts don’t transfer; the intent does.
  2. Identify your top three “only in the tenant” use cases. Meeting prep, weekly status from Graph data, customer briefings pulling from CRM + email — these are Cowork’s native advantage. Start there.
  3. Replicate one Claude Code Routine as a Cowork Scheduled Prompt. See which UX you prefer for recurring work. Routines are developer-oriented; Scheduled Prompts are business-oriented.
  4. Accept that the output will feel different. Different system prompt, different context, different refusal boundaries. You’ll re-tune your prompts. Budget a couple of hours.
  5. Don’t give up your Claude Code subscription. Cowork will not replace it. If anything, owning both makes you faster in both.
Practitioner perspective

“[Anthropic’s Cowork] was built in a couple of weeks using Claude Code and is being updated and evolving quickly. [Microsoft] has a tendency to launch a leading product and then let it sit for awhile. I’m curious about whether their pacing will change.”

— Ethan Mollick (Wharton), reacting to the Cowork launch. Reported by GeekWire, Mar 9 2026.

Mollick’s concern is real and unresolved. For a heavy Claude user, that’s another argument for the “both” pattern: Anthropic ships fast, Microsoft ships enterprise — and until the pacing question resolves, holding both keeps you on the frontier of each.

↑ Back to Table of Contents

11. Six Signs You Should Restart

These are all documented symptoms of “context rot” — the measured quality decay of long-running conversations. When you see two or more, stop iterating. Summarize. Open a new Task.

  1. It repeats itself. Same suggestions, same phrasing, across different turns.
  2. It forgets a constraint you set earlier. Classic example: you said “formal tone” on turn 3; by turn 18 it’s drafting like LinkedIn influencer copy.
  3. It re-reads files it already read. You’ll see it say “let me check [doc]” again after you watched it read the same doc five minutes ago.
  4. It hallucinates a file path, a name, or a link. Confidently wrong citations are a late-session tell.
  5. It contradicts a decision from earlier in the session. The summary at turn 20 quietly disagrees with the decision you both made at turn 5.
  6. The responses feel sluggish. Per-turn latency scales with conversation length; a slow turn is a crowded turn.

The summary-and-restart template

Before you hit New Task, paste this into the current Task:

Write a handoff brief for the next Cowork Task:
  GOAL: one sentence — what I’m trying to accomplish
  DECISIONS MADE SO FAR: bullet list of what we’ve locked in
  CURRENT DRAFT / STATE: the latest good version (link to OneDrive file if large)
  CONSTRAINTS: audience, tone, length, format, deadline
  OPEN QUESTIONS: what we still need to decide
  NEXT STEP: one specific thing to do next
Return it as plain markdown so I can paste it into a new Task.

Open a new Task. Paste the brief as the first message. You now have a fresh, clean context with all the signal and none of the noise.

Why this works, mechanically

(a) It puts the stable decisions at the start of the new context, where primacy bias helps you. (b) It discards all the abandoned drafts the model was quietly using as sycophantic signal about “what the user wants.” (c) It resets the 39% multi-turn degradation curve back to turn one.

↑ Back to Table of Contents

12. SKILL.md — Your Durable Memory

A SKILL.md is a small markdown file with YAML frontmatter that lives in OneDrive at /Documents/Cowork/Skills/<skill-name>/SKILL.md. Cowork auto-discovers skills at the start of every conversation and applies them when relevant.

Think of it as a durable, cheap context layer. Unlike instructions in chat, skills are re-loaded from disk after every compaction and never lost to context rot.

A starter skill: weekly status report

---
name: weekly-status
description: Produces my weekly status update in the exact format my team expects.
triggers:
  - "weekly status"
  - "weekly update"
  - "weekly report"
---

# Weekly Status Report Skill

## What this skill does
When I say “make my weekly status,” produce a Word document at
`/Reports/Weekly/YYYY-MM-DD-status.docx` following the rules below.

## Data sources (in priority order)
1. My calendar for the last 7 days (accepted meetings only)
2. Email I sent or replied to in the last 7 days (skip noise/newsletters)
3. Teams messages in channels I own
4. My /Projects folder in OneDrive for anything updated this week

## Required sections (in this order)
1. **Wins** — 3 bullets max, concrete outcomes only
2. **In Flight** — what I’m actively working on, with status
3. **Blockers** — what’s stuck and who I need from
4. **Next Week** — top 3 priorities
5. **Metrics** — any numeric KPIs I touched this week

## Tone and formatting rules
- Crisp, professional, no marketing language
- Numbers and dates, not vibes
- Never use “synergy,” “leverage,” or “circle back”
- Past tense for Wins, present tense for In Flight, future tense for Next Week

## Never do
- Don’t invent metrics you can’t source from a file or message
- Don’t include anything from private 1:1 meetings
- Don’t send it — just save the file and tell me where it is
A supply-chain warning

Microsoft does not validate SKILL.md files. If someone shares a skill with you, read it before dropping it in your OneDrive. A malicious or buggy skill can instruct Cowork to send emails or post to Teams in your name — approval gates help but are not a substitute for reviewing the skill itself.

Good candidates for your first three skills: weekly-status, customer-brief, deal-review. Once those exist, you stop re-explaining structure every time — and Cowork starts feeling like a team member who knows how you work.

↑ Back to Table of Contents

13. Scheduled Prompts — Recurring Work on Autopilot

Scheduled prompts are Cowork’s answer to Power Automate for any recurring job that can be described in a paragraph. You say it in plain English; Cowork parses the schedule, registers the task, and runs it forever (or until you pause it).

What they’re great for

The cardinal rule: each run is stateless

Treat every scheduled run as independent

A scheduled prompt does not remember what happened last run. Each invocation gets a fresh context. Write the prompt as if Cowork has never seen it before: include the instruction, the data source, the output location, and any constraints. Don’t say “continue from yesterday” — say “compare to the last 7 days of email and meetings.”

How to make the output diff-able

Because every run produces an artifact, you’ll want to compare runs week-over-week. Ask for:

Then week 12 is meaningfully comparable to week 1 at a glance. Narrative scheduled outputs are hard to scan; structured scheduled outputs are a dashboard.

Editing and pausing

Tasks → Scheduled shows everything. You can edit, pause, resume, or delete any scheduled prompt. Microsoft does not publish a maximum number of concurrent scheduled prompts per user, but a reasonable guideline: fewer, higher-quality prompts beat a cluttered list.

↑ Back to Table of Contents

14. Deep Research — Your Built-In Sub-Agent

Deep Research is one of Cowork’s 13 built-in skills and it is structurally identical to a Claude Code subagent: it runs in its own fresh context, consumes its own tool noise privately, and returns a final summary to the parent Task.

This matters more than it sounds. When you delegate something to Deep Research, the intermediate web fetches, document reads, note-taking, and verification passes never enter your main Task’s context. The parent Task gets only the conclusion.

The single most useful question to ask before delegating: “Do I need this tool output again — or just the conclusion?”

If just the conclusion, send it to Deep Research. That's the rule.

Good jobs for Deep Research

Include everything the sub-agent needs in the prompt

Because the sub-agent doesn’t see the parent conversation, your Deep Research prompt has to be self-contained. File paths, constraints, prior decisions, tone rules, output destination — all go into the delegation prompt itself.

Use Deep Research to produce a 2-page executive brief on [topic].

Sources:
- My /Research folder in OneDrive
- /Clients/Acme folder in SharePoint (the last 18 months only)
- Public web, restricted to sources dated in the last 9 months

Constraints:
- Cite every factual claim with a URL
- Separate CONFIRMED from UNCERTAIN with a label
- Do not speculate beyond the sources

Deliverable: save as /Research/acme-competitive-brief.docx
Do not email or post it anywhere. Just save the file.
↑ Back to Table of Contents

15. Approvals and Trust

Cowork will never silently send an email, post to Teams, create a calendar event, or modify a file without showing you a preview first. That’s the approval gate, and it’s the reason Cowork is safe to deploy to large populations.

The approval dialog, in one picture

  1. Cowork shows you exactly what it’s about to do: recipient, subject, body, channel, calendar change — whatever the action is.
  2. You approve, edit, or reject.
  3. A dropdown lets you say “don’t ask again for this conversation” — useful when Cowork is about to do 11 similar actions and you’ve already blessed the first.
The single most underused control

“Don’t ask again” is not a safety downgrade — it’s a context saver. Every re-approval adds turns to the conversation, which inflates tokens, which increases the chance of context rot. Batch similar approvals and move on.

Things you should always approve carefully

Things where “Don’t ask again” is usually fine

↑ Back to Table of Contents

16. What Cowork Can’t Do (Yet)

An honest list of today’s limits, all confirmed by Microsoft Learn or independent reporting. Know these going in and you’ll avoid the “why won’t it…” frustration.

Product constraints (by design)

No local file access. Cowork cannot read or write files on your laptop. Everything must go through OneDrive or SharePoint.

No delete. Cowork cannot delete files or folders in OneDrive or SharePoint. If you want something gone, you delete it yourself. This is intentional, for safety.

No encrypted file reads, even if you have the rights.

Attached files must be under 200 MB each.

No mobile. Browser and Windows/Mac desktop only, today.

Custom skills are not validated by Microsoft. The SKILL.md files you create or accept are executed as you wrote them. Review any skill you didn’t author personally.

Access and geography

Things Microsoft hasn’t documented

↑ Back to Table of Contents

17. The 22-Tactic Reference Card

Condensed from Claude Code best practices, named practitioners, and the Cowork docs. Use these as a checklist when you want to improve a workflow.

Context hygiene

#TacticWhy it works
T1New objective → new TaskStarts a fresh context; no stale prefix to reprocess
T2Watch for the six signs (Section 11)Objective signals of context rot — stop iterating
T3Hand off with a written summary templatePrimacy bias works for you on turn 1 of a fresh Task
T4Correct via a new Task, not another turnEvery correction adds two more turns to the history
T5Act at ~75% context, not at 95%The summarizer is at its least smart when under pressure
T6Prefer a hand-written brief over relying on auto-compactTwo-minute investment; you choose what survives

Stable-context architecture

#TacticWhy it works
T7Put stable rules in SKILL.md, not chatRe-loaded every session; never lost to compaction
T8Tier your skills (always / on-demand / never-loaded)Skill sprawl inflates the context budget
T9Don’t enable connectors you won’t use this sessionDeferred tool definitions routinely eat 40%+ of the budget

Caching-aware prompting

#TacticWhy it works
T10Structure every prompt as stable-prefix + dynamic-tailAutomatic caching hits the 90%-discount path
T11Mind the cache-prefix minimums (4k tokens on Opus)Prefixes under the minimum are never cached
T12Don’t toggle thinking/reasoning modes mid-conversationInvalidates the message cache and re-prices everything

Sub-agent delegation

#TacticWhy it works
T13Use Deep Research when you need a conclusion, not a transcriptIntermediate tool noise stays out of your parent Task
T14Include everything the sub-agent needs in the prompt stringThe sub-agent can’t see the parent conversation

Approvals & tool discipline

#TacticWhy it works
T15Batch approvals with “Don’t ask again”Fewer approval turns = fewer tokens = more quality
T16Ask Cowork to plan first, act after you confirmPrevents wasted context on wrong approaches

Externalize state

#TacticWhy it works
T17Put heavy state in OneDrive files, attach by referenceThe file is a stable prefix; chat messages are a growing tail
T18Attach files at the start of the Task, not drip-fedKeeps attachments in the cacheable prefix region
T19Break one long job into multiple Cowork TasksResearch → draft → review, each with fresh context
T20Kill and restart without guiltThe cost of starting fresh is small; the cost of a 150k-token session is large

Scheduled-prompt discipline

#TacticWhy it works
T21Treat scheduled prompts as stateless fresh-context runsThey are — don’t write “continue from yesterday”
T22Structure scheduled output for diff-ability, not narrativeFixed sections + fixed length = a dashboard over time
↑ Back to Table of Contents

18. Troubleshooting — “It Got Weird” Field Guide

Symptom → likely cause → fix. Print this page and tape it to your monitor.

Symptom you observeWhat’s actually happeningFix
Responses are getting slower Context length has grown; per-turn attention work is quadratic Summary-and-restart in a new Task (Section 11 template)
Forgot the tone/format you set earlier Rule was given in chat; auto-compact dropped it Move the rule into a SKILL.md (Section 12) and restart
Keeps re-reading the same file Earlier read fell out of the compacted summary Attach the file fresh at the start of a new Task
Hallucinated a file path, name, or link Late-session context rot; primacy/recency bias breaking mid-context recall Restart; ask it to verify every path/name/link against Graph before citing
Agreed with your pushback when it was actually right Sycophancy — documented behavior in RLHF-trained models In the restart, ask for reasoning before a verdict; resist saying “try again”
Task keeps asking clarifying questions The brief was underspecified or the sub-agent didn’t get the full context Rewrite the prompt with the goal, data sources, constraints, and output destination — see Section 14 template
Approval dialog fired 11 times in a row You didn’t use “Don’t ask again” On approval #1, select “don’t ask again for this conversation”
Scheduled prompt stopped producing useful output Data sources drifted, or the prompt assumed state from prior runs Edit the scheduled prompt; make it self-contained; add fixed section structure (Section 13)
Cowork said it couldn’t access a file you clearly own File is encrypted, or sits in a SharePoint library with ACLs that don’t cascade to Cowork Move to an un-encrypted location; verify the ACL; re-try

The universal reset, in order

  1. Ask for a handoff brief (Section 11 template)
  2. Open a new Task
  3. Paste the brief as the first message
  4. Attach any files the brief references
  5. Proceed
The workshop one-liner

Cowork has no memory between turns. Every message you press Enter on re-sends the whole conversation. Longer thread = more cost AND worse output. Start fresh often.

↑ Back to Table of Contents

19. Further Reading & Sources

Every claim in this guide traces to one of the sources below. Grouped by evidence tier.

Microsoft & Anthropic primary

  1. Microsoft 365 Blog — Copilot Cowork: a new way of getting work done (Lamanna, Mar 9 2026)
  2. Microsoft 365 Blog — Copilot Cowork now available in Frontier (Spataro, Mar 30 2026)
  3. Microsoft Learn — Cowork common questions (Frontier) FAQ
  4. Microsoft Learn — Manage Scheduled Prompts
  5. Microsoft 365 Blog — Expanding model choice in Microsoft 365 Copilot (Lamanna, Sep 24 2025)
  6. Microsoft Tech Community — Claude Opus 4.7 in M365 Copilot (Apr 16 2026)
  7. Anthropic — Claude now available in Microsoft 365 Copilot (Sep 24 2025)
  8. Anthropic — Using Claude Code: session management and 1M context (Apr 15 2026)
  9. Anthropic — Using the Messages API (statelessness)
  10. Anthropic — Prompt caching
  11. Anthropic — Claude Code subagents
  12. Anthropic — System Prompts release notes
  13. Anthropic — Modifying system prompts (Claude Code Agent SDK)
  14. Anthropic — Messages API (temperature defaults)
  15. Anthropic — Download Claude (desktop bundle)
  16. Anthropic — Claude Cowork product page
  17. Anthropic — Cowork: Claude Code power for knowledge work
  18. Anthropic — Claude for Excel
  19. Microsoft Learn — Use Cowork
  20. Microsoft Learn — Microsoft 365 Copilot architecture

Claude product family & April 2026 launches

  1. TechCrunch — Anthropic launches Claude Design (Apr 17 2026)
  2. 9to5Mac — Claude Code Routines (Apr 14 2026)
  3. VentureBeat — Claude Cowork lands on Windows (Feb 11 2026)
  4. VentureBeat — Redesigned Claude Code desktop & Routines (Apr 15 2026)
  5. GitHub — GitHub Copilot docs
  6. Microsoft Learn — Copilot Studio code interpreter FAQ
  7. Microsoft Learn — Copilot in Power Automate overview
  8. Microsoft Tech Community — Copilot support for Python in Excel

Peer-reviewed research (context rot, multi-turn degradation)

  1. Vaswani et al., “Attention Is All You Need,” NeurIPS 2017
  2. Dao et al., “FlashAttention,” NeurIPS 2022
  3. Liu et al., “Lost in the Middle,” TACL 2024
  4. Modarressi et al., “NoLiMa: Long-Context Evaluation Beyond Literal Matching,” ICML 2025
  5. Laban et al., “LLMs Get Lost In Multi-Turn Conversation,” 2025 (Microsoft Research + Salesforce)
  6. Sharma et al., “Towards Understanding Sycophancy in Language Models,” ICLR 2024 (Anthropic)
  7. Hong et al., “Context Rot,” Chroma Research 2025

Named practitioners

  1. GeekWire — Microsoft’s new Copilot Cowork integrates Anthropic’s Claude (Bishop, Mar 9 2026)
  2. IDM Magazine — Claude now inside your M365 tenant: mind the data-residency gap (Mar 26 2026)
  3. Pascal Brunner-Nikolla (MVP) — Copilot Cowork: Your Day 107
  4. Ami Diamond (MVP) — Cowork: 2 use cases for daily scheduled tasks
  5. Christian Buckley (MVP) — 5 things you need to know about Copilot Cowork
  6. Forbes — Janakiram MSV on the digital coworker
  7. SitePoint — Context Management for Long-Running Claude Code Sessions (Mickiewicz, Mar 20 2026)
  8. Morph LLM — Claude Code Compact (Mar 13 2026)
  9. Medium: Mubashar — Clear vs Compact (Feb 23 2026)
  10. bruniaux.com — M04 Compact vs Clear cheat sheet
  11. The Science Talk — Claude Code Context Window Explained (Apr 15 2026)
  12. Jpranav (Medium) — Stop Wasting Tokens (Nov 2025)
  13. CostLens.dev — Anthropic’s Prompt Caching (Nov 3 2025)
  14. Simon Willison — LLMs are stateless functions
  15. James Howard — Context Degradation Syndrome
  16. Cameron Fuller (Quisitive) — Scheduling Copilot Prompts: The Good, the Bad, the Ugly

Glossary

The final habit loop — memorize this

New objective → new Task.
Stable rules → SKILL.md.
Heavy data → OneDrive file, attached once.
Intermediate reasoning → Deep Research.
Confidence drops → summary-and-restart.
Approvals → batch with “Don’t ask again.”

↑ Back to Table of Contents

Prepared by Ken Lince — Sr. Director, Cloud Engineering, TD SYNNEX  ·  ken.lince@tdsynnex.com

Companion to the Give Me an Hour Today: Copilot Cowork workshop. Re-verify model IDs, pricing, and product gates before re-delivering — this space moves quickly.