Copilot Summarized Confidential Emails Without Consent

Microsoft confirmed a Copilot bug, present since January, that let the AI read and summarize confidential emails, bypassing DLP rules and Confidential labels. A patch rollout began in February, but details on affected customers remain scarce.

Imagine opening your inbox and finding that an AI has already skimmed your most private drafts. Unsettling? Yes. Real? Microsoft confirmed it.

The news outlet Bleeping Computer first flagged the flaw: a bug in Copilot’s chat feature allowed the AI to read and summarize draft and sent emails labeled as confidential. This wasn’t a one-off glitch. The issue dates back to January and, alarmingly, sidestepped customer protections designed to keep sensitive material out of large language models.

Copilot chat is part of paid Microsoft 365 packages, letting users query documents and get AI-powered help inside Word, Excel and PowerPoint. But the feature’s convenience came at a cost when the chat component began processing content it shouldn’t have accessed. Microsoft tracks the problem under the internal code CW1226324 and says messages marked with a Confidential label were mistakenly handled by Copilot.

How did that happen? The short answer: a software path that should have been closed stayed open. Even enterprises using Data Loss Prevention (DLP) rules — the guardrails many companies deploy to stop sensitive information from leaving their systems — found those rules weren’t enough to keep Copilot from ingesting and summarizing protected emails.

Microsoft says it started pushing a patch in early February. But some questions remain unanswered. The company has not published a tally of affected customers, and spokespeople declined to provide additional comment when asked about the scope of the exposure.

Concerns have rippled beyond individual inboxes. This week, the IT department of the European Parliament told lawmakers it has blocked internal AI tools from work devices, citing fears that such systems might upload potentially confidential correspondence to cloud services. The move is a vivid example of how one software flaw can reshape organizational policy overnight.

Microsoft says the bug has been addressed, but a patch alone doesn’t close the books: organizations should still review their policies and audit logs to confirm no sensitive data was exposed.
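
As a starting point for that log review, here is a minimal sketch in Python. It assumes the Microsoft 365 unified audit log has been exported to CSV; the column names (Operation, SensitivityLabel, UserId) and the CopilotInteraction operation name are assumptions for illustration and may differ from what your tenant actually emits, so adapt them to your export.

```python
import csv
from collections import Counter

# Assumed export settings; adjust to match your actual audit-log export.
AUDIT_LOG = "unified_audit_log.csv"
COPILOT_OPS = {"CopilotInteraction"}  # operation name may vary by tenant/export
CONFIDENTIAL_LABEL = "Confidential"

def find_copilot_hits(path: str) -> list[dict]:
    """Return audit rows where a Copilot operation touched a Confidential item."""
    hits = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if (row.get("Operation") in COPILOT_OPS
                    and row.get("SensitivityLabel") == CONFIDENTIAL_LABEL):
                hits.append(row)
    return hits

if __name__ == "__main__":
    hits = find_copilot_hits(AUDIT_LOG)
    print(f"{len(hits)} Copilot interactions with Confidential-labeled items")
    # Surface the most-affected mailboxes first for triage.
    by_user = Counter(h.get("UserId", "unknown") for h in hits)
    for user, count in by_user.most_common(10):
        print(f"  {user}: {count}")
```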

So what should IT leaders do now? Start by treating AI features like any other data integration: audit what systems connect to your mail flow, verify DLP rules against those integration points, and require explicit opt-ins for AI processing. Short-term, that means stricter controls and more monitoring. Long-term, it means demanding clearer transparency from vendors about how AI components interact with user data.
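
To make the opt-in requirement concrete, here is an illustrative sketch of a pre-processing gate: before a message reaches any AI component, it is checked against both its sensitivity label and the sender’s recorded consent. The Message type, label names, and opt-in registry are hypothetical, not any vendor’s API; in practice the consent record would live in a directory attribute or policy service.

```python
from dataclasses import dataclass

# Hypothetical labels that should never reach an AI component.
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

# Hypothetical per-user opt-in registry (in practice: a directory
# attribute or policy service, not an in-memory set).
ai_opt_ins: set[str] = set()

@dataclass
class Message:
    sender: str
    subject: str
    body: str
    sensitivity_label: str | None = None

def may_process_with_ai(msg: Message) -> bool:
    """Gate AI processing: labels are checked first, then explicit consent."""
    if msg.sensitivity_label in BLOCKED_LABELS:
        return False  # a label block overrides any opt-in
    return msg.sender in ai_opt_ins  # default-deny: no opt-in, no AI

def summarize(msg: Message) -> str:
    if not may_process_with_ai(msg):
        raise PermissionError("AI processing blocked by label or missing opt-in")
    return f"(summary of: {msg.subject})"  # stand-in for the actual AI call

# Usage: an opted-in user passes with unlabeled mail; Confidential never does.
ai_opt_ins.add("alice@example.com")
print(may_process_with_ai(Message("alice@example.com", "notes", "...")))                # True
print(may_process_with_ai(Message("alice@example.com", "plan", "...", "Confidential"))) # False
```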

There’s a lesson here for everyone who clicks “enable” without reading the fine print: convenience can be contagious, but so can risk. Keep an eye on your settings — and ask the hard questions before letting AI touch your most sensitive work.
