Anthropic is revising its Consumer Terms and Privacy Policy and plans to start using user conversations to train its AI assistant Claude.
The company announced that all users must make a choice by September 28: either allow their chats to be used for training or opt out.
New users will see the option when signing up, while existing users will get a pop-up titled “Updates to Consumer Terms and Policies” with a toggle reading “You can help improve Claude.” Leaving it on will let Anthropic use the data; turning it off will exclude conversations from training.
Previously, Anthropic deleted all prompts and outputs from its consumer products within 30 days, unless it was legally required to retain them longer or the data was flagged for policy violations, in which case it could be stored for up to two years. Under the new rules, users who consent will have their chats stored for up to five years. Anthropic said this will help it “deliver even more capable, useful AI models” while also strengthening safeguards against scams, abuse, and disinformation.
The policy applies to Anthropic's consumer plans, including Claude Free, Pro, and Max.
However, it does not extend to business products such as Claude for Work, Claude Gov, Claude for Education, or API access through Amazon Bedrock and Google Vertex AI.
Explaining the shift, the company said that by not opting out, users will “help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations.” Anthropic added that “users will also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users.”