Copilot Cowork doesn’t chat with you — it goes off and does the work while you’re in a meeting, in a car, or asleep.
Copilot Cowork is a new Microsoft 365 Copilot surface, announced on March 9, 2026 by Charles Lamanna, that lets you delegate multi-step work to an AI agent instead of chatting with it turn-by-turn. It runs server-side inside your Microsoft 365 tenant, reads and writes across Outlook, Teams, Word, Excel, PowerPoint, SharePoint, and OneDrive through Microsoft Graph, and is explicitly powered by Anthropic’s Claude models as a subprocessor.
It is not a rename of Copilot Chat. Chat answers one question at a time. Cowork takes a task like “review last week, clean up my calendar, draft regrets for conflicts, and block 3 hours of focus time Tuesday” and actually does it — with an approval preview before anything hits your inbox or calendar.
Per user, you can drop SKILL.md files into OneDrive under /Documents/Cowork/Skills/ to codify your team’s playbooks; Cowork auto-discovers them at the start of each conversation. Cowork lives at m365.cloud.microsoft and in the Microsoft 365 desktop apps for Windows and Mac. It is not available on mobile today. Your admin must be enrolled in the Microsoft Frontier program for Cowork to appear.
The single most important mindset shift for a new Cowork user is to stop thinking of it as a chatbot.
Chat: you ask → it answers → you iterate.
You stay in the loop on every turn. The conversation is the work. When you close the window, the work stops.
Best for quick questions, drafting a paragraph, summarizing a single file, or explaining a concept.
Cowork: you delegate → it works → you review.
You describe the outcome. Cowork plans, executes, asks for clarification only when it genuinely can’t proceed, and lands finished artifacts in OneDrive or your inbox.
Best for multi-step tasks: meeting prep, weekly reports, customer briefings, calendar cleanup, recurring briefings, research-then-draft jobs.
That Cowork runs server-side (“it keeps working even when I close my laptop”) is the single biggest practical change versus chat-bound Copilot.
— Pascal Brunner-Nikolla, Microsoft MVP, Copilot Cowork first look
Practically, this means the unit of work is a Task, not a chat turn. You’ll get the best results by writing prompts the way you’d brief a colleague: state the goal, the constraints, the data sources, the deliverable, and where you want it saved.
If you learn only one thing from this guide, make it this. It explains 90% of the “my AI got dumb” experiences users report.
Every time you press Enter, Cowork (and every Claude-backed product) re-reads your entire conversation from turn one before answering your new message. The model has no memory between turns. What you perceive as “remembering” is actually the client app re-sending the whole transcript every single time.
“The Messages API is stateless, which means that you always send the full conversational history to the API.” — Anthropic, Using the Messages API (platform.claude.com)
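The mechanics can be made concrete with a short sketch. This mirrors the shape of a stateless chat API but fakes the model call, so the growth of the payload is visible without an API key; `send_turn` and `fake_reply` are illustrative stand-ins, not real endpoints.

```python
# Sketch of a stateless chat loop: the client, not the model, holds the memory.
# Every turn re-sends the entire transcript, so request size grows monotonically.

history = []  # client-side transcript; the model never stores anything between turns

def send_turn(system_prompt, user_message, fake_reply):
    """Build the payload a Claude-backed client would send for this turn."""
    history.append({"role": "user", "content": user_message})
    payload = {
        "system": system_prompt,
        "messages": list(history),  # the full history, from turn one, every single time
    }
    history.append({"role": "assistant", "content": fake_reply})
    return payload

p1 = send_turn("You are concise.", "Summarize Q3.", "Q3 was flat.")
p2 = send_turn("You are concise.", "And Q4?", "Q4 grew 8%.")
print(len(p1["messages"]), len(p2["messages"]))  # the second request already carries 3 messages
```

The second request carries the first question, the first answer, and the new question; by turn 20 the payload is almost entirely history.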
A realistic 20-turn Sonnet 4.6 conversation (500-token system prompt, ~600 tokens of user input per turn, ~1,200 tokens of assistant output per turn, no caching):
| Turn | Cumulative input tokens | Input cost (no cache) | With 90% cache read after turn 1 | Savings |
|---|---|---|---|---|
| 1 | 1,100 | $0.0033 | $0.0033 | — |
| 5 | 10,100 | $0.0303 | $0.0044 | 85% |
| 10 | 22,600 | $0.0678 | $0.0083 | 88% |
| 15 | 35,100 | $0.1053 | $0.0122 | 88% |
| 20 | 47,600 | $0.1428 | $0.0162 | 89% |
Scaled to a marathon session: a single turn at 150k tokens costs roughly $0.45 in input alone versus $0.09 at 30k tokens — a 5× difference for the same one-line prompt (SitePoint, Mar 2026).
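The arithmetic is easy to sketch. The snippet below assumes Sonnet-class pricing of $3 per million input tokens and a cache-read rate of one tenth of that (both assumptions; check current price sheets), plus the per-turn sizes stated above. The table above uses its own accounting and rounding, so expect the same curve shape rather than identical figures.

```python
PRICE_PER_TOKEN = 3.00 / 1_000_000           # assumed $3 / M input tokens (Sonnet-class)
CACHE_READ_PER_TOKEN = PRICE_PER_TOKEN / 10  # cached-prefix reads assumed to bill at 0.1x
SYSTEM, USER, ASSISTANT = 500, 600, 1_200    # tokens per turn, from the scenario above

def request_input_tokens(turn):
    """Input tokens for the request sent at `turn` (1-indexed): the system prompt,
    every prior user/assistant pair, and the current user message."""
    return SYSTEM + USER * turn + ASSISTANT * (turn - 1)

def cost_no_cache(turn):
    return request_input_tokens(turn) * PRICE_PER_TOKEN

def cost_with_cache(turn, hit_rate=0.9):
    """Assume `hit_rate` of the prompt is a cache hit billed at the read rate."""
    n = request_input_tokens(turn)
    return n * hit_rate * CACHE_READ_PER_TOKEN + n * (1 - hit_rate) * PRICE_PER_TOKEN

for turn in (1, 5, 10, 20):
    print(turn, request_input_tokens(turn),
          round(cost_no_cache(turn), 4), round(cost_with_cache(turn), 4))
```

The point survives any pricing tweak: input grows linearly with turn count, so per-turn cost grows linearly too, and caching only discounts the prefix, it does not shrink it.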
Every minute you spend piling more context into a single Task raises the cost, slows the response, and lowers the quality of what comes back. The fix is counter-intuitive but universal: new objective → new Task. Stable rules go in SKILL.md. Heavy data goes in an attached OneDrive file. Intermediate exploration goes to Deep Research. You don’t lose memory by starting fresh — there was no memory to lose.
Microsoft and Anthropic have both confirmed on the record that Anthropic Claude models power Cowork and the broader Microsoft 365 Copilot experience. You don’t have to pick a model — Microsoft routes to the right one for the task — but it helps to know what’s running.
| Model family | Best for | Context window | Tradeoff |
|---|---|---|---|
| Claude Opus 4.5 / 4.6 / 4.7 | Hardest reasoning — deep research, multi-document synthesis, strategic drafts | 200,000 tokens | Slowest and most expensive; overkill for quick tasks |
| Claude Sonnet 4.5 / 4.6 | The workhorse — drafting, rewrites, code review, most Cowork tasks | 200,000 tokens | The right default for most delegation |
| Claude Haiku 4.5 | Fast, cheap, routine classification / summarization / lookups | 200,000 tokens | Less depth — don’t hand it strategy work |
| OpenAI GPT-series & Microsoft MAI | Available in M365 Copilot; Microsoft chooses per task | Varies | Model routing is opaque — you don’t always see which model answered |
In EU / EFTA / UK tenants, the Anthropic subprocessor is off by default and must be opted in by your admin. Cowork is not available in GCC, GCC High, DoD, or sovereign clouds. Source: IDM Magazine, Mar 26 2026; Microsoft Learn FAQ.
“Will it continue to use lower-end models or older models without telling you the way Copilot does?”
— Ethan Mollick (Wharton), reacting to the Cowork launch. Reported by GeekWire, Mar 9 2026.
Mollick’s question is unresolved. Microsoft has not published a model-routing telemetry view. Treat Cowork as “Claude-family powered, with Microsoft’s discretion to downshift.” If absolute model transparency matters for a use case (regulated drafting, legal review), document the output and the date and revalidate periodically.
Every power-user pattern across Claude Code, Claude.ai, and Copilot Cowork reduces to six moves. Internalize these and Cowork starts feeling like a colleague instead of a quirky chatbot.
If the goal changes, the Task changes. Don’t reuse yesterday’s Task for today’s question. Anthropic’s own rule for Claude Code applies exactly: “when you start a new task, you should also start a new session.”
SKILL.md, not chat. Tone, structure, never-do rules, section templates, output destinations — all belong in a SKILL.md file that Cowork auto-discovers. Instructions repeated in chat get lost during compaction; instructions in SKILL.md are re-loaded every session.
The 250k-character message limit is plenty for a pointer to state. Attach the spec/report/dashboard at the start of the Task — it sits in the stable prefix and caches well. Drip-feeding file content across messages scatters it across the growing messages tail and is the worst of both worlds.
If you just need a conclusion, not a transcript, delegate it. Deep Research runs in its own fresh context and returns a summary — the intermediate tool noise never inflates the parent Task’s context.
When Cowork repeats itself, forgets a constraint you set, hallucinates a file path, or slows down noticeably — stop and restart. Paste a short hand-written summary of what you’d decided into the new Task. This is the single highest-leverage habit in the guide.
Every re-confirmation adds a turn to the conversation. If Cowork is going to send 11 similar emails, approve the first, choose “don’t ask again for this conversation,” and let the rest flow. You retain oversight at the Task level.
This plan is aimed at the Microsoft 365 professional who has “used Copilot a few times.” Do these in order; each builds the muscle for the next.
Open Cowork and say (in plain English):
“Every morning at 8:30 AM, send me a daily briefing covering (1) unread email from the last 16 hours that looks important or from anyone who reports to me, (2) my first three meetings with a one-line context per meeting pulled from prior emails, and (3) anything marked follow-up from yesterday. Save the output to my Cowork conversations and send a Teams message to myself with the highlights.”
Check Tasks → Scheduled. You now have a durable recurring agent. This alone is worth the license.
Pick tomorrow’s most important meeting. Tell Cowork:
“For my [time] meeting with [attendees], put together a briefing packet: last six months of email with these people, any open opportunities from CRM if you have access, prior meeting notes, and three suggested talking points. Save as a Word doc in OneDrive at /Meetings/[date]-[name].docx.”
Try this on a messy week:
“Review my next 10 business days. Find any meetings I can decline or delegate without impact, protect at least three 90-minute focus blocks, and draft regrets (polite, in my voice) for meetings I’m declining. Show me the full plan as a preview before making any changes.”
Create a SKILL.md. Pick one thing you write the same way every time (a weekly status, a customer brief, a deal review). Write it as a skill (see Section 12 for the template). From that day forward, any Task that needs that format just works.
Pick a company, topic, or competitor you’d normally spend an afternoon researching. Delegate it:
“Use Deep Research to produce a 3-page brief on [topic]: key facts, recent developments, three strategic implications for us. Save to OneDrive at /Research/[topic].docx. Don’t ask me follow-ups unless you hit a genuine ambiguity.”
By Friday: one durable daily agent, one briefing template you trust, one decluttered calendar, one SKILL.md doing invisible work, and one research artifact you didn’t have to write. That’s the baseline. Everything else in this guide compounds from there.
The comparison most people have in their head — “Cowork vs Claude.ai” — is already out of date. As of April 2026, Anthropic ships five distinct products bundled into one desktop install, and Microsoft 365 Copilot Cowork is a sixth, separately licensed surface. Before you can decide which one to use, you have to know they all exist.
A single Claude desktop download gives you Claude Chat, Claude Code, Claude Cowork (desktop), Claude Design, and the Office add-ins (Excel, PowerPoint, Word, Chrome, Slack) — all on one subscription. Copilot Cowork lives outside this bundle, inside your Microsoft 365 tenant, under an E7 seat license.
| Product | What it’s for | Where it runs | Licensing |
|---|---|---|---|
| Claude Chat (claude.ai · web, desktop, iOS, Android) | One-question-at-a-time assistant. Artifacts for interactive HTML/SVG/code. Projects, Memory, Research, Skills. | Anthropic cloud | Free · Pro $17/mo · Max $100–$200/mo · Team · Enterprise |
| Claude Code (CLI · VS Code · JetBrains · desktop · web · iOS) | Agentic coding tool: reads/writes files, runs shells, edits git repos, spawns subagents, runs plan-mode loops, schedules Routines. | Your machine + Anthropic cloud (Routines are cloud-run) | Included on Pro, Max, Team, Enterprise (as of Apr 22 2026; a pricing test for new Pro users is in flight) |
| Claude Cowork desktop (macOS · Windows, GA Feb 10, 2026) | Non-developer agentic desktop app for knowledge work: file organization, document prep, research synthesis, data extraction. “Claude Code power for knowledge work.” | Local machine; plan-and-approve gates; 38+ connectors; Computer Use in preview | Included on all paid Claude plans |
| Claude Design (launched April 17, 2026) | Research preview, Opus 4.7–powered. Prototypes, slides, one-pagers from a text prompt. Builds team design systems from your codebase + design files. Exports PDF, URL, PPTX, or push to Canva. | Anthropic cloud (desktop UI) | Pro, Max, Team, Enterprise (off by default for Enterprise; admin-enabled) |
| Claude Office add-ins (Excel · PowerPoint · Word · Chrome · Slack) | Embed Claude directly in the Office file you’re editing. Claude for Excel writes formulas with cell-level citations and builds financial models. | Inside the Office app you already use | Included on Pro, Max, Team, Enterprise |
| Microsoft 365 Copilot Cowork (m365.cloud.microsoft · M365 desktop apps) | Agentic execution layer inside your M365 tenant. 13 built-in skills for Email, Calendar, Teams, Word, Excel, PowerPoint, PDF, SharePoint, Deep Research, and more. Scheduled prompts. SKILL.md customization. | Server-side in your Microsoft 365 tenant (Graph + Purview) | M365 Frontier preview today · E7 at $99/user/mo · GA May 1, 2026 |
Sources: Anthropic — Download Claude; Anthropic — Claude Cowork product page; TechCrunch — Anthropic launches Claude Design, Apr 17 2026; 9to5Mac — Claude Code Routines, Apr 14 2026; Anthropic — Claude for Excel; Microsoft Learn — Cowork FAQ.
Source: Microsoft Tech Community — Opus 4.7 in M365 Copilot, Apr 16 2026.
| Dimension | Copilot Cowork | Claude Chat | Claude Code | Claude Cowork desktop | Claude Design |
|---|---|---|---|---|---|
| Reads your tenant email/calendar/SharePoint | Yes via Graph | No | Only via MCP connectors | Only via connectors | No |
| Respects Microsoft 365 ACLs | Yes by design | No | No | No | No |
| Reads your local files | No | File upload only | Yes full FS | Yes authorized folders | File upload only |
| Runs shell commands | No | No | Yes Bash | Via Computer Use (preview) | No |
| Native Word/Excel/PPT/Teams actions | Yes first-class | Via Artifacts (sandbox) | Via Skills (PPTX/DOCX/XLSX) | Writes real files | Exports PPTX/PDF |
| Scheduled recurring tasks | Yes native | No | Yes Routines (Apr 2026) | No (practitioner cron) | No |
| Custom skills / playbooks | 20 × SKILL.md in OneDrive | Anthropic Skills | File-based skills + subagents | SKILL.md | Design systems |
| Who picks the model | Microsoft routes | You pick | You pick per session | Anthropic routes; Extended Thinking opt-in | Fixed: Opus 4.7 |
| Enterprise audit / Purview | Yes | No | No | No | No |
| Mobile | No | Yes | iOS companion | Desktop only | Desktop only |
| Delete files / folders | Blocked by policy | N/A | Yes via Bash | Yes with approval | N/A |
Choose Copilot Cowork when the task reads or writes data inside your M365 tenant, enterprise audit is mandatory, the output has to respect existing SharePoint/OneDrive/Outlook ACLs, or you need a scheduled agent that your admin and Purview can see. This is the only product in the landscape that treats your tenant as a first-class operating environment.
Choose Claude Code when you’re editing a git repository, running tests, debugging, refactoring, spawning subagents on a codebase, or need any form of shell access. Bash and a full local filesystem are what Claude Code has and nothing else in this table does.
Choose Claude Cowork desktop when the work is on your personal machine — local folders, Dropbox, personal Gmail, research PDFs, a side project — and you want agentic plan-and-approve execution without shipping any of it through your employer’s tenant.
Choose Claude Chat when you need a single-turn answer, a quick draft, an explanation, an interactive Artifact (HTML/SVG prototype), or a visual mock (Claude Design, if you have access). It works well on mobile but is not designed for multi-step agentic execution.
Choose Copilot Chat when you want a fast in-tenant answer with no write actions, no scheduling, and no cross-app orchestration — when the question fits in one turn and you don’t need Cowork’s approval-gate overhead. This is the Work IQ–grounded chat you already know.
There is no “winner” in this table. Each product wins the slot it was designed for. The mistake to avoid: trying to do tenant work in Claude Code (no Graph, no ACLs, no audit) or trying to do code work in Copilot Cowork (no shell, no local files, no git). That’s the next two sections.
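The slot-per-product guidance above can be condensed into a toy routing function. The flag names are my own shorthand, not product terminology; treat it as a mnemonic for the table, not an exhaustive policy.

```python
def pick_tool(in_tenant=False, needs_shell=False, needs_audit=False,
              multi_step=False, local_files=False):
    """Toy router condensing the 'which product wins which slot' guidance."""
    if needs_shell:
        return "Claude Code"            # Bash + local git repos live here only
    if local_files:
        return "Claude Cowork desktop"  # personal-machine knowledge work
    if in_tenant or needs_audit:
        # Tenant data: multi-step delegation vs a one-turn grounded answer
        return "Copilot Cowork" if multi_step else "Copilot Chat"
    return "Claude Chat"                # quick drafts, Artifacts, open reasoning

print(pick_tool(in_tenant=True, multi_step=True))  # tenant + multi-step delegation
print(pick_tool(needs_shell=True))                 # repo work
print(pick_tool())                                 # quick single-turn question
```

The ordering matters: shell access and local files are hard constraints, so they are checked before the tenant boundary.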
If you paste the same message into Claude Chat, Claude Code, Claude Cowork desktop, and Microsoft 365 Copilot Cowork, you will get four different answers. Not slightly different. Structurally different. This confuses new users — “aren’t they all Claude?” — and it shapes what each tool is actually good for.
The divergence is not a bug, and it is not randomness. Each product assembles a different prompt package around your text before the model sees it. Six mechanisms stack up.
You don’t send a prompt to Claude. You send a prompt to a product, and the product assembles (system prompt) + (tool definitions) + (injected context) + (your message) before sending it to Claude. The four products assemble very different packages. That’s the whole ballgame.
Mechanism 1, the system prompt: Anthropic publishes claude.ai’s system prompt verbatim — it sets identity, formatting rules, the current date, and safety behaviors. Claude Code’s preset is documented in structure (“tool usage, code style, working directory, git status”) but not in full. Copilot Cowork’s is Microsoft-authored, establishes a Copilot identity (not Claude), enumerates the 13 skills, and installs the approval-gate and no-delete policies. Source: Anthropic — System Prompts release notes; Anthropic — Modifying system prompts.
Mechanism 2, tool definitions: Claude.ai has Artifacts, web search, and file upload — no shell. Claude Code has Read, Write, Edit, Bash, Grep, Glob, WebSearch, subagents. Copilot Cowork has exactly 13 Microsoft-authored skills — no shell, no local filesystem, no arbitrary code execution. A prompt that requires a tool the product doesn’t have will either be refused, worked around with a script, or redirected to a native skill. That rewiring changes the output entirely.
Mechanism 3, injected context: Claude.ai injects the current date and not much else — it doesn’t know who you work for. Claude Code injects cwd, OS, git branch, platform, auto-memory paths. Copilot Cowork injects your name, email, title, department, manager, direct reports, company, timezone, current local time, and Work IQ signal over recent email/meetings/Teams — before the model sees a single character of your message. Same message, four very different effective context windows.
Mechanism 4, model routing: In Claude.ai and Claude Code you pick the model (Opus, Sonnet, Haiku). In Copilot Cowork Microsoft routes — and has confirmed that routing can pull from Claude, OpenAI, or Microsoft’s own MAI family depending on task, load, and cost. Opus 4.7 is in Cowork’s selector as of April 16, 2026, but which model handled any given turn is not user-visible.
Mechanism 5, the safety stack: Claude.ai runs Anthropic’s usage policy. Claude Code adds code-safety (don’t commit secrets, confirm destructive shell commands). Copilot Cowork stacks a Microsoft guardrail layer on top: no delete ever, per-action approval gates with risk indicators, no encrypted-file reads, no local filesystem, ACL inheritance, Purview audit, and refusal categories (employee evaluation, profiling by protected characteristics) that claude.ai doesn’t share. Ask all four “help me decide whether to terminate Alex” — you’ll get four different answers, and Cowork is the most likely to decline.
Mechanism 6, sampling temperature: Anthropic’s API docs warn that “even with temperature of 0.0, the results will not be fully deterministic.” Each product picks its own default, and none publish it. Empirically Claude Code runs lower temperature (more consistent code), Cowork variants prioritize tool-call precision, and Claude.ai runs with enough variation that two retries on the same prompt aren’t identical.
Stack mechanisms 1 through 6 together and “the same prompt” reaches four very different runtime assemblies. Saying “Claude said X” without naming the product is incomplete. The product is the whole ballgame.
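The "prompt package" idea can be sketched in a few lines. The product entries below are illustrative caricatures of the differences described above, not the real system prompts, tool lists, or injected fields.

```python
# Each product wraps the same user text in a different package before the model sees it.
# All strings here are made-up stand-ins for the real (mostly unpublished) assemblies.
PRODUCTS = {
    "claude.ai":      {"system": "You are Claude...",
                       "tools": ["artifacts", "web_search", "file_upload"],
                       "inject": lambda: {"date": "2026-04-22"}},
    "claude-code":    {"system": "You are a coding agent...",
                       "tools": ["bash", "read", "edit", "grep"],
                       "inject": lambda: {"cwd": "/repo", "git_branch": "main"}},
    "copilot-cowork": {"system": "You are Microsoft 365 Copilot...",
                       "tools": ["email", "calendar", "teams", "deep_research"],
                       "inject": lambda: {"user": "Avery", "manager": "Sam", "tz": "PST"}},
}

def assemble(product, user_message):
    """Build the full request a product sends: system + tools + context + message."""
    p = PRODUCTS[product]
    return {"system": p["system"], "tools": p["tools"], "context": p["inject"](),
            "messages": [{"role": "user", "content": user_message}]}

a = assemble("claude.ai", "Summarize my unread email.")
b = assemble("copilot-cowork", "Summarize my unread email.")
print(a["tools"], b["tools"])  # identical user message, disjoint toolsets
```

The user message is the only part the payloads share; everything else diverges, which is why the answers do too.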
Same 17 words. Four radically different deliverables.
| Product | What you actually get | What you have to do next | Time to value |
|---|---|---|---|
| Claude.ai | A complete .py file in an Artifact. Uses Microsoft Graph SDK or imaplib. Placeholder auth — tenant_id, client_id, client_secret as env vars. TODO comments where you configure OAuth. | Register an OAuth app, install deps, run the script locally, iterate on auth errors. | Hours |
| Claude Code | Writes main.py + requirements.txt + .env.example to disk, installs deps via Bash, prompts you once for an access token (via az login or device-code flow), then runs python main.py and prints the summary to your terminal. A working script on disk plus a live result. | Paste a token once. Review output. | Minutes |
| Claude Cowork desktop | Proposes a plan: “(1) connect via Gmail/Outlook connector, (2) filter today’s unread, (3) produce inbox-summary-2026-04-22.docx in your folder.” Waits for approval. Executes. No Python involved — the connector path delivers the intent faster than a script would. | Approve the plan. Read the doc. | Seconds |
| Copilot Cowork | Doesn’t write the script at all. Uses the Email skill directly — reads your inbox via Microsoft Graph (already authenticated as you), filters unread from this morning, produces the summary in chat or a Word doc in OneDrive. If you insist on a standalone script, it will likely redirect: “I can do this directly — would you like me to, or do you specifically need a standalone program?” | Read the summary. | Seconds |
1. Don’t generalize benchmarks. A prompt that shines in Claude.ai may flop in Copilot Cowork because the Email skill short-circuits the reasoning path you intended.
2. Match the tool to the task. Tenant data → Copilot Cowork. Local code and terminals → Claude Code. Creative/reasoning with no specific environment → Claude Chat. Local-but-not-tenant knowledge work → Claude Cowork desktop.
3. Read the system prompt where you can. Anthropic publishes claude.ai’s at docs.claude.com/en/release-notes/system-prompts. Microsoft does not publish Copilot Cowork’s. That asymmetry is itself a signal.
Sources: Anthropic — System Prompts; Anthropic — Modifying system prompts; Anthropic — Prompt caching; Anthropic Messages API (temperature); Microsoft Learn — Cowork FAQ; Microsoft Tech Community — Opus 4.7, Apr 16 2026; GeekWire — Mollick on silent downshifts, Mar 9 2026.
“Copilot writes code today, so does that mean Cowork is a coding tool?” is the single most common question from developers evaluating the product. The short answer is no, and Microsoft doesn’t claim it is. The long answer has useful nuance.
Cowork is not a coding product. None of its 13 built-in skills is developer-facing. There is no terminal, no git integration, no local filesystem, and no MVP has published a Cowork coding demo.
| Capability | Status | Evidence |
|---|---|---|
| Reads code files you attach with syntax highlighting | Yes | Supports .py .js .ts .java .c .cpp .go .rb .rs, .ipynb, and the full config-file family. Source: Microsoft Learn — Use Cowork. |
| Emits code snippets as chat text | Informally | Cowork is Claude-backed and will produce code if asked. But Microsoft has not certified this, not listed supported languages, and given it no place in the product’s marketed skills. |
| Writes a .py or .ps1 file into OneDrive | Plausible, not documented | Cowork saves outputs to OneDrive/Cowork. No primary source confirms code-file generation as a first-class destination. |
| Authors a Python-in-Excel =PY() cell | No | Python in Excel cell authoring is a Copilot in Excel feature, not a Cowork feature. Source: Microsoft Tech Community. |
| Builds a Power Automate flow | No | Power Automate has its own natural-language Copilot. Cowork does not invoke it. Source: Copilot in Power Automate overview. |
| Generates Office Scripts (TypeScript) for Excel Web | No | Office Scripts has its own editor and Copilot integration; no Cowork surface documented. |
| Authors VBA macros into .xlsm | No | Not documented. Secondary press occasionally conflates “Microsoft 365 Copilot” with Cowork; the actual VBA-authoring path lives inside Copilot in Excel. |
| Edits a local git repository | No | No local file access, no shell. Use Claude Code or GitHub Copilot. |
| Runs scripts server-side | Undefined | One opaque Microsoft Learn sentence: “Some background operations, like script execution, run without displaying individual steps.” Microsoft has not elaborated. Treat as an internal implementation detail, not a user feature. Source: Use Cowork. |
| Tool | Primary audience | What it’s for |
|---|---|---|
| GitHub Copilot | Developers | IDE-embedded code completion, chat, agents, PR reviews. This is the dev tool. |
| Copilot in Excel | Analysts, business users | Authors formulas, Python cells, charts inside a workbook. |
| Copilot in Power Automate | Power users | Generates cloud flows from natural-language descriptions. |
| Copilot Studio | Makers, IT | Builds custom Copilot agents and has a Python code interpreter in a sandbox. |
| Copilot Cowork | Knowledge workers | Agentic execution across Office + Graph. Not a code tool, by design. |
Same brand family, disjoint product surfaces. MVP Christian Buckley frames Cowork as “Claude Code for knowledge workers” — the analogy, not the substitute. The analogy lands because both are long-running, plan-and-execute agents; it misleads only if you read it as “Cowork replaces Claude Code.” It does not.
The honest fit for Cowork in a developer’s workflow is all the non-code work around a project. That’s where it genuinely complements what Claude Code and GitHub Copilot already do.
Keep in Claude Code or GitHub Copilot: repo work on your laptop. Writing, testing, refactoring, running CI, generating migrations, reviewing PRs, spawning subagents for large refactors, scheduling Routines to run nightly code-quality checks.
Hand to Cowork: the sprint-review deck from the week’s commits. Weekly status to your skip. Stakeholder update email. Meeting prep for the architecture review. SharePoint write-ups of the design you already coded. Bug-triage digest from Teams and email. Scheduled Monday digest of last week’s releases.
This is a Cowork workflow a developer or engineering manager can run today, with no code written, that demonstrates the actual fit.
```
Every Monday at 8:00 AM, pull together my engineering team’s week-ahead brief:

1. READ last week’s releases by scanning my team’s Teams channel and the closed
   GitHub issues referenced in email notifications.
2. LIST open bugs assigned to my direct reports with priority labels from the
   last 14 days of email.
3. SUMMARIZE three blockers mentioned in my last 7 days of 1:1 meeting notes
   (saved in OneDrive at /1-1s).
4. DRAFT a short Teams post for my #eng-weekly channel with the top three
   priorities for this week. Show me a preview before posting; don’t send.
5. SAVE the full brief as /Reports/Weekly/eng-brief-YYYY-MM-DD.docx.

Constraints:
- Never include anything from private 1:1 meetings by name.
- Use past tense for last week, future tense for this week.
- If you can’t find data for a section, say so — don’t invent.
```
What makes this a good Cowork demo: zero lines of code, four Graph data sources, one scheduled recurrence, one approval gate (the Teams post), one durable artifact (the Word doc). A developer gets back 30–60 minutes every Monday and stays in Claude Code for the actual engineering work.
Keep your paid Claude for the code. Use Cowork for the meetings, decks, status updates, and stakeholder communication that surround the code. The two don’t compete — they stack.
Sources: Microsoft Learn — Cowork FAQ; Microsoft Learn — Use Cowork; buckleyPLANET — Christian Buckley (MVP), Apr 2026; GitHub — GitHub Copilot docs; Microsoft Learn — Copilot Studio code interpreter FAQ.
If you already pay for Claude Pro or Max — you’ve used Projects, you live in Artifacts, you run Claude Code daily, maybe you’ve tried Claude Design this week — this section is the straight talk on what to expect moving some of your workflow into Copilot Cowork.
Section 8 is the mechanics. The shorthand: the system prompt, tools, context injection, model routing, and safety stack are all different in Cowork. A Project you love in Claude.ai will produce different output in Cowork, and not necessarily better or worse — just different-shaped. Plan to re-tune, not copy-paste.
| Gain | Why it matters |
|---|---|
| Tenant grounding via Microsoft Graph | Cowork already knows your calendar, email, direct reports, SharePoint files, and Teams activity. No connectors, no OAuth dance, no copy-paste. Claude Cowork desktop requires a connector per data source and still can’t enforce your employer’s ACLs. |
| ACL inheritance | Every read and write runs with your M365 permissions. This is not a feature Claude has an equivalent for — it’s an architectural property of Cowork living inside the tenant. |
| Purview audit trail | Every Cowork action is logged for enterprise audit. Required for regulated work; unavailable in Claude’s consumer products. |
| Native Office actions under the approval gate | Cowork sends real emails, posts real Teams messages, creates real calendar events — with a preview dialog. Claude Code can do this only through MCP connectors you wire yourself; Claude.ai can’t. |
| Scheduled Prompts as a native surface | Claude Code just added Routines (Apr 14, 2026) — and it’s great — but Routines run inside Claude Code. Cowork’s Scheduled Prompts run across email, calendar, Teams, and SharePoint without leaving the tenant. |
| Durable Task state | The Tasks view (List, Board, Scheduled) survives disconnects, laptop closes, and days off. Claude.ai conversations don’t, and Claude Code sessions rely on your machine being on unless you’re in Routines. |
| Loss | Why you’ll notice |
|---|---|
| Artifacts | No interactive HTML/SVG sandbox pane. Cowork renders Adaptive Cards or produces real Office files — useful, but not the same as Claude.ai’s side-panel prototypes. |
| Picking your model | Microsoft routes. You may get Claude Opus 4.7, Sonnet, an OpenAI model, or MAI — and you can’t always tell which. This is Mollick’s public concern. |
| Anthropic’s release cadence | Anthropic shipped Claude Design five days ago. Microsoft’s cadence on Cowork is TBD — M365 updates tend to arrive in larger but slower waves. |
| Computer Use | Claude Cowork desktop and Claude Code both have Computer Use in research preview. Copilot Cowork does not expose this to you. |
| Extended Thinking toggles | In Claude.ai and Claude Cowork desktop you can dial up thinking budget. Cowork decides for you. |
| Shell and local filesystem | Obvious but worth naming. If your current Claude Code workflow depends on Bash and a repo, that workflow stays in Claude Code. |
| Mobile | Cowork is browser + desktop only. Claude.ai has mobile apps; Claude Code has an iOS companion. |
You don’t pick one. You split the workload along the tenant boundary.
Code. Local files. Side projects. Anything outside your work tenant. Artifacts and prototypes. Deep, open-ended reasoning sessions where you want to pick the model. Claude Design for visuals. Routines for scheduled developer automation.
Anything that reads or writes your tenant’s email, calendar, Teams, SharePoint, or OneDrive. Anything that needs the Purview audit trail. Scheduled recurring tasks that depend on Graph data. Meeting prep, customer briefings, stakeholder updates. SKILL.md encoding of your team’s playbooks.
Claude Max 20x: $200/month, a personal subscription, one seat, you pay. M365 E7: $99/user/month list, a seat license your employer pays, covering Cowork plus the rest of the E7 bundle (a superset of E5). These aren’t substitutes. For a hardcore coder the math usually favors keeping Claude Max personally and using Cowork at work: the employer pays the Cowork seat, and Claude Max buys you Claude Code + Cowork desktop + Chat + Design on your own machine.
Recreate your Claude Projects’ custom instructions as SKILL.md files in OneDrive (Section 12). System prompts don’t transfer; the intent does.
“[Anthropic’s Cowork] was built in a couple of weeks using Claude Code and is being updated and evolving quickly. [Microsoft] has a tendency to launch a leading product and then let it sit for awhile. I’m curious about whether their pacing will change.”
— Ethan Mollick (Wharton), reacting to the Cowork launch. Reported by GeekWire, Mar 9 2026.
Mollick’s concern is real and unresolved. For a heavy Claude user, that’s another argument for the “both” pattern: Anthropic ships fast, Microsoft ships enterprise — and until the pacing question resolves, holding both keeps you on the frontier of each.
These are all documented symptoms of “context rot” — the measured quality decay of long-running conversations. When you see two or more, stop iterating. Summarize. Open a new Task.
Before you hit New Task, paste this into the current Task:
```
Write a handoff brief for the next Cowork Task:

GOAL: one sentence — what I’m trying to accomplish
DECISIONS MADE SO FAR: bullet list of what we’ve locked in
CURRENT DRAFT / STATE: the latest good version (link to OneDrive file if large)
CONSTRAINTS: audience, tone, length, format, deadline
OPEN QUESTIONS: what we still need to decide
NEXT STEP: one specific thing to do next

Return it as plain markdown so I can paste it into a new Task.
```
Open a new Task. Paste the brief as the first message. You now have a fresh, clean context with all the signal and none of the noise.
(a) It puts the stable decisions at the start of the new context, where primacy bias helps you. (b) It discards all the abandoned drafts the model was quietly using as sycophantic signal about “what the user wants.” (c) It resets the 39% multi-turn degradation curve back to turn one.
SKILL.md — Your Durable Memory

A SKILL.md is a small markdown file with YAML frontmatter that lives in OneDrive at /Documents/Cowork/Skills/<skill-name>/SKILL.md. Cowork auto-discovers skills at the start of every conversation and applies them when relevant.
Think of it as a durable, cheap context layer. Unlike instructions in chat, skills are re-loaded from disk after every compaction and never lost to context rot.
```markdown
---
name: weekly-status
description: Produces my weekly status update in the exact format my team expects.
triggers:
  - "weekly status"
  - "weekly update"
  - "weekly report"
---

# Weekly Status Report Skill

## What this skill does
When I say “make my weekly status,” produce a Word document at
`/Reports/Weekly/YYYY-MM-DD-status.docx` following the rules below.

## Data sources (in priority order)
1. My calendar for the last 7 days (accepted meetings only)
2. Email I sent or replied to in the last 7 days (skip noise/newsletters)
3. Teams messages in channels I own
4. My /Projects folder in OneDrive for anything updated this week

## Required sections (in this order)
1. **Wins** — 3 bullets max, concrete outcomes only
2. **In Flight** — what I’m actively working on, with status
3. **Blockers** — what’s stuck and who I need from
4. **Next Week** — top 3 priorities
5. **Metrics** — any numeric KPIs I touched this week

## Tone and formatting rules
- Crisp, professional, no marketing language
- Numbers and dates, not vibes
- Never use “synergy,” “leverage,” or “circle back”
- Past tense for Wins, present tense for In Flight, future tense for Next Week

## Never do
- Don’t invent metrics you can’t source from a file or message
- Don’t include anything from private 1:1 meetings
- Don’t send it — just save the file and tell me where it is
```
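Microsoft doesn’t document the discovery mechanics, but a useful mental model is: a loader splits each SKILL.md into frontmatter and body, then matches the `triggers` list against your prompt. A minimal, hypothetical Python sketch (the parser and matcher below are illustrative, not Cowork’s actual implementation):

```python
import re

def parse_skill(text):
    """Split a SKILL.md file into frontmatter metadata and markdown body.

    Returns (meta, body); triggers become a list of lowercase strings.
    Hypothetical sketch: handles only the simple key/list shapes used
    in the example above, not full YAML.
    """
    m = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.DOTALL)
    if not m:
        return {}, text
    meta, body, key = {}, m.group(2), None
    for line in m.group(1).splitlines():
        line = line.strip()
        if line.startswith("- ") and key:
            # List item under the last key that had no scalar value.
            meta.setdefault(key, []).append(line[2:].strip().strip('"').lower())
        elif ":" in line:
            key, _, val = line.partition(":")
            key, val = key.strip(), val.strip()
            if val:
                meta[key] = val
                key = None  # scalar value consumed; not a list header
    return meta, body

def matching_skills(prompt, skills):
    """Return the names of skills whose triggers appear in the prompt."""
    p = prompt.lower()
    return [name for name, (meta, _) in skills.items()
            if any(t in p for t in meta.get("triggers", []))]
```

The practical takeaway from this model: triggers are cheap substring cues, so pick trigger phrases you would actually type, and keep them distinct across skills so two skills don’t fire on the same prompt.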
Microsoft does not validate SKILL.md files. If someone shares a skill with you, read it before dropping it in your OneDrive. A malicious or buggy skill can instruct Cowork to send emails or post to Teams in your name — approval gates help but are not a substitute for reviewing the skill itself.
Good candidates for your first three skills: weekly-status, customer-brief, deal-review. Once those exist, you stop re-explaining structure every time — and Cowork starts feeling like a team member who knows how you work.
Scheduled prompts are Cowork’s answer to Power Automate for any recurring job that can be described in a paragraph. You say it in plain English; Cowork parses the schedule, registers the task, and runs it forever (or until you pause it).
A scheduled prompt does not remember what happened last run. Each invocation gets a fresh context. Write the prompt as if Cowork has never seen it before: include the instruction, the data source, the output location, and any constraints. Don’t say “continue from yesterday” — say “compare to the last 7 days of email and meetings.”
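The statelessness rule can be made concrete with a small sketch: a hypothetical helper that renders a scheduled prompt carrying its own explicit date window on every run, so no run depends on memory of the last one (the `/Reports/Daily/` path is illustrative, not a Cowork convention):

```python
from datetime import date, timedelta

def daily_brief_prompt(today=None):
    """Render a self-contained scheduled prompt.

    Every invocation states its own 7-day window explicitly,
    instead of saying "continue from yesterday".
    """
    today = today or date.today()
    start = today - timedelta(days=7)
    return (
        f"Summarize my sent email and accepted meetings from {start.isoformat()} "
        f"to {today.isoformat()}. Save the summary as "
        f"/Reports/Daily/{today.isoformat()}-brief.docx. Do not send anything."
    )
```

In Cowork itself you express the same idea in plain English ("the last 7 days of email and meetings"), but the test is identical: could this prompt run correctly on a machine that has never seen a previous run?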
Because every run produces an artifact, you’ll want to compare runs week-over-week. Ask for fixed section headings in a fixed order, a fixed bullet count per section, and a consistent file-naming pattern (e.g. the date in the filename).
Then week 12 is meaningfully comparable to week 1 at a glance. Narrative scheduled outputs are hard to scan; structured scheduled outputs are a dashboard.
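To illustrate why fixed sections make runs diff-able, here is a small sketch (assuming reports use `## ` headings, as in the SKILL.md example) that compares two runs section-by-section instead of line-by-line:

```python
def split_sections(report):
    """Split a markdown status report into {heading: body} using '## ' headings."""
    sections, current = {}, None
    for line in report.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {h: "\n".join(b).strip() for h, b in sections.items()}

def week_over_week(old, new):
    """Mark each fixed section of the new run as 'same' or 'changed' vs the old run."""
    a, b = split_sections(old), split_sections(new)
    return {h: ("changed" if a.get(h) != b.get(h) else "same") for h in b}
```

With a narrative output, none of this is possible; with fixed sections, a one-glance "Blockers changed, Wins didn’t" comparison falls out for free.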
Tasks → Scheduled shows everything. You can edit, pause, resume, or delete any scheduled prompt. Microsoft does not publish a maximum number of concurrent scheduled prompts per user, but a reasonable guideline: fewer, higher-quality prompts beat a cluttered list.
Deep Research is one of Cowork’s 13 built-in skills and it is structurally identical to a Claude Code subagent: it runs in its own fresh context, consumes its own tool noise privately, and returns a final summary to the parent Task.
This matters more than it sounds. When you delegate something to Deep Research, the intermediate web fetches, document reads, note-taking, and verification passes never enter your main Task’s context. The parent Task gets only the conclusion.
The single most useful question to ask before delegating: “Do I need this tool output again — or just the conclusion?”
Because the sub-agent doesn’t see the parent conversation, your Deep Research prompt has to be self-contained. File paths, constraints, prior decisions, tone rules, output destination — all go into the delegation prompt itself.
```
Use Deep Research to produce a 2-page executive brief on [topic].

Sources:
- My /Research folder in OneDrive
- /Clients/Acme folder in SharePoint (the last 18 months only)
- Public web, restricted to sources dated in the last 9 months

Constraints:
- Cite every factual claim with a URL
- Separate CONFIRMED from UNCERTAIN with a label
- Do not speculate beyond the sources

Deliverable: save as /Research/acme-competitive-brief.docx
Do not email or post it anywhere. Just save the file.
```
Cowork will never silently send an email, post to Teams, create a calendar event, or modify a file without showing you a preview first. That’s the approval gate, and it’s the reason Cowork is safe to deploy to large populations.
“Don’t ask again” is not a safety downgrade — it’s a context saver. Every re-approval adds turns to the conversation, which inflates tokens, which increases the chance of context rot. Batch similar approvals and move on.
An honest list of today’s limits, all confirmed by Microsoft Learn or independent reporting. Know these going in and you’ll avoid the “why won’t it…” frustration.
No local file access. Cowork cannot read or write files on your laptop. Everything must go through OneDrive or SharePoint.
No delete. Cowork cannot delete files or folders in OneDrive or SharePoint. If you want something gone, you delete it yourself. This is intentional, for safety.
No encrypted file reads, even if you have the rights.
Attached files must be under 200 MB each.
No mobile. Browser and Windows/Mac desktop only, today.
Custom skills are not validated by Microsoft. The SKILL.md files you create or accept are executed as you wrote them. Review any skill you didn’t author personally.
Condensed from Claude Code best practices, named practitioners, and the Cowork docs. Use these as a checklist when you want to improve a workflow.
| # | Tactic | Why it works |
|---|---|---|
| T1 | New objective → new Task | Starts a fresh context; no stale prefix to reprocess |
| T2 | Watch for the six signs (Section 11) | Objective signals of context rot — stop iterating |
| T3 | Hand off with a written summary template | Primacy bias works for you on turn 1 of a fresh Task |
| T4 | Correct via a new Task, not another turn | Every correction adds two more turns to the history |
| T5 | Act at ~75% context, not at 95% | The summarizer is at its least smart when under pressure |
| T6 | Prefer a hand-written brief over relying on auto-compact | Two-minute investment; you choose what survives |
| # | Tactic | Why it works |
|---|---|---|
| T7 | Put stable rules in SKILL.md, not chat | Re-loaded every session; never lost to compaction |
| T8 | Tier your skills (always / on-demand / never-loaded) | Skill sprawl inflates the context budget |
| T9 | Don’t enable connectors you won’t use this session | Deferred tool definitions routinely eat 40%+ of the budget |
| # | Tactic | Why it works |
|---|---|---|
| T10 | Structure every prompt as stable-prefix + dynamic-tail | Automatic caching hits the 90%-discount path |
| T11 | Mind the cache-prefix minimums (4k tokens on Opus) | Prefixes under the minimum are never cached |
| T12 | Don’t toggle thinking/reasoning modes mid-conversation | Invalidates the message cache and re-prices everything |
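T10’s stable-prefix + dynamic-tail shape can be sketched in a few lines. The message-dict structure below is a generic chat-API convention, not the Cowork API; the point is only the ordering discipline:

```python
def build_messages(system_rules, attachments, history, new_turn):
    """Assemble a request so stable material precedes the growing tail.

    Keeping the prefix byte-identical across turns is what lets prefix
    caching fire; anything that changes between turns goes at the end.
    """
    # Stable prefix: rules and attachments, fixed once per Task.
    prefix = [{"role": "system", "content": system_rules}]
    prefix += [{"role": "user", "content": a} for a in attachments]
    # Dynamic tail: conversation history plus the newest message.
    tail = list(history) + [{"role": "user", "content": new_turn}]
    return prefix + tail
```

This is also why T17 and T18 (heavy state in files, attached once at the start) pay off twice: the same bytes that make the context stable also make it cacheable.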
| # | Tactic | Why it works |
|---|---|---|
| T13 | Use Deep Research when you need a conclusion, not a transcript | Intermediate tool noise stays out of your parent Task |
| T14 | Include everything the sub-agent needs in the prompt string | The sub-agent can’t see the parent conversation |
| # | Tactic | Why it works |
|---|---|---|
| T15 | Batch approvals with “Don’t ask again” | Fewer approval turns = fewer tokens = more quality |
| T16 | Ask Cowork to plan first, act after you confirm | Prevents wasted context on wrong approaches |
| # | Tactic | Why it works |
|---|---|---|
| T17 | Put heavy state in OneDrive files, attach by reference | The file is a stable prefix; chat messages are a growing tail |
| T18 | Attach files at the start of the Task, not drip-fed | Keeps attachments in the cacheable prefix region |
| T19 | Break one long job into multiple Cowork Tasks | Research → draft → review, each with fresh context |
| T20 | Kill and restart without guilt | The cost of starting fresh is small; the cost of a 150k-token session is large |
| # | Tactic | Why it works |
|---|---|---|
| T21 | Treat scheduled prompts as stateless fresh-context runs | They are — don’t write “continue from yesterday” |
| T22 | Structure scheduled output for diff-ability, not narrative | Fixed sections + fixed length = a dashboard over time |
Symptom → likely cause → fix. Print this page and tape it to your monitor.
| Symptom you observe | What’s actually happening | Fix |
|---|---|---|
| Responses are getting slower | Context length has grown; per-turn attention work is quadratic | Summary-and-restart in a new Task (Section 11 template) |
| Forgot the tone/format you set earlier | Rule was given in chat; auto-compact dropped it | Move the rule into a SKILL.md (Section 12) and restart |
| Keeps re-reading the same file | Earlier read fell out of the compacted summary | Attach the file fresh at the start of a new Task |
| Hallucinated a file path, name, or link | Late-session context rot; primacy/recency bias breaking mid-context recall | Restart; ask it to verify every path/name/link against Graph before citing |
| Agreed with your pushback when it was actually right | Sycophancy — documented behavior in RLHF-trained models | In the restart, ask for reasoning before a verdict; resist saying “try again” |
| Task keeps asking clarifying questions | The brief was underspecified or the sub-agent didn’t get the full context | Rewrite the prompt with the goal, data sources, constraints, and output destination — see Section 14 template |
| Approval dialog fired 11 times in a row | You didn’t use “Don’t ask again” | On approval #1, select “don’t ask again for this conversation” |
| Scheduled prompt stopped producing useful output | Data sources drifted, or the prompt assumed state from prior runs | Edit the scheduled prompt; make it self-contained; add fixed section structure (Section 13) |
| Cowork said it couldn’t access a file you clearly own | File is encrypted, or sits in a SharePoint library with ACLs that don’t cascade to Cowork | Move it to an unencrypted location; verify the ACL; retry |
Cowork has no memory between turns. Every message you press Enter on re-sends the whole conversation. Longer thread = more cost AND worse output. Start fresh often.
Every claim in this guide traces to one of the sources below. Grouped by evidence tier.
New objective → new Task.
Stable rules → SKILL.md.
Heavy data → OneDrive file, attached once.
Intermediate reasoning → Deep Research.
Confidence drops → summary-and-restart.
Approvals → batch with “Don’t ask again.”
Prepared by Ken Lince — Sr. Director, Cloud Engineering, TD SYNNEX · ken.lince@tdsynnex.com
Companion to the Give Me an Hour Today: Copilot Cowork workshop. Re-verify model IDs, pricing, and product gates before re-delivering — this space moves quickly.