A U.S. court has ruled that OpenAI must retain all user conversations on ChatGPT — including those deleted or marked as temporary — raising serious concerns among digital privacy advocates.
The decision stems from an ongoing copyright infringement case, in which the court deemed user chats potential evidence. In practice, this means no conversation with ChatGPT will be permanently erased — even if the user deletes it manually.
The ruling has sparked backlash. Users and digital rights groups like the EFF warn that forced, indefinite storage of conversations could set a troubling precedent, undermining both ethical standards and constitutional protections of privacy. The court dismissed these concerns, arguing that in the context of copyright, privacy does not take precedence.
What’s particularly alarming is that this ruling involves not just technical logs, but full-text conversations where users often disclose personal, medical, financial, or legal information. As a result, AI chats could effectively become a sensitive database vulnerable to both corporate oversight and external breaches.
OpenAI has yet to explain how it will comply, though it previously stated its intention to challenge the order. The lack of transparency has only deepened mistrust.
In an era of intensifying global regulation around AI and digital platforms, this case could become a critical precedent — legally and socially. Human rights advocates are calling for clear standards: user notification about data retention, the ability to opt out, and accountability around third-party access. Without such measures, they argue, AI chats may shift from a private tool to a surveillance risk.