Check Your Settings: Anthropic’s Updated Terms

A few weeks back, Anthropic quietly rolled out an update to their consumer terms and conditions. Buried in the usual legal boilerplate was a line that’s actually pretty important if you’re using Claude for coding, writing, or just day-to-day problem-solving.

By default, your chats and coding sessions are now opted into being shared with Anthropic to help train and improve their AI models. This change comes into effect on Sunday 28th September 2025.

For many people, that might not matter. In fact, if you see AI tools as just another product, a bit like Gmail or Google Docs, then data being used in aggregate to improve the service is par for the course. It’s how the web has operated for years.

But this is slightly different. This is the text of your conversations, your code snippets, your prompts, your back-and-forth with a model. Depending on what you’re doing, that could be fairly innocuous, or it could be material that you wouldn’t want leaving your laptop at all.

And that’s why it’s worth slowing down, taking a proper look, and making sure your settings are what you want them to be, rather than just what Anthropic (or any other AI provider) decides they should be.

On paper, the change is fairly simple: unless you opt out, your interactions with Claude can be used as training data.

That means if you’re:

  • experimenting with bits of code,

  • pasting in client or work material to summarise,

  • or just asking everyday personal questions,

…it all potentially gets fed back into Anthropic’s training pipelines.

Now, to be clear, companies like Anthropic are not (or should not be) trawling through individual chats looking for secrets. There are supposed to be safeguards, filtering layers, and anonymisation processes. But that’s not really the point.

The point is: your data, by default, is assumed to be fair game for product improvement. And that’s a big shift in expectation compared to what many users thought they’d signed up for.

In an enterprise context, it’s usually the opposite—data is excluded from training unless an organisation explicitly opts in. That makes sense: businesses need confidentiality, compliance, and clear lines around data usage.

But for individuals? The assumption is flipped. The default is “we’ll use your stuff.”

It’s not just Anthropic. OpenAI, Google, and Microsoft are all juggling the same tension:

  • Training data is the lifeblood of improving models.

  • But user trust hinges on respecting privacy and control.

These companies also have to manage a patchwork of regulations: GDPR in Europe, CCPA in California, and a dozen others either in place or coming soon. Some regulators are already eyeing AI training data practices with suspicion, and defaults like this only make the conversation louder.

There’s also a cultural aspect. We’re only just beginning to understand what it means for “our conversations” to fuel AI development. For some, it’s exciting to be part of the frontier. For others, it feels intrusive, or even exploitative.

The good news: you can opt out. But you have to do it yourself; it won’t happen automatically.

Here’s how (as of writing):

  1. Open your Claude account settings.

  2. Find the section on Data Sharing or Training Data Usage.

  3. Toggle the option to disable sharing your conversations for training.

  4. Save/confirm the setting.

Simple enough, but easy to miss. And given how many people rarely touch settings menus after signing up, it’s safe to assume most users are currently opted in without realising it.

Would you be happy sharing your coding projects with a stranger at the pub? Probably not. Yet if you’re pasting them into Claude without opting out, you’re effectively doing the digital equivalent.

And even if the content itself isn’t sensitive, the pattern of your usage might be:

  • How often you’re coding.

  • What kinds of problems you’re working on.

  • The tone or style of your prompts.

All of that data paints a picture. And once it’s part of a training corpus, you don’t really get to unpaint it.

So whether you’re using Claude, ChatGPT, Gemini, or any other tool, take five minutes to go through the settings. Ask yourself:

  • Am I comfortable with this being shared?

  • Does my work involve data that belongs to someone else?

  • Do I actually want to contribute to training, or not?

And then set the toggle the way you want it, not the way a company assumes you do.

Because in the rush to build the future of AI, it’s very easy to forget that you still get a say in how your present is handled.

Halt and Catch Fire