23:26
12:28
12:28
09:03
09:03
15:41
23:26
12:28
12:28
09:03
09:03
15:41
23:26
12:28
12:28
09:03
09:03
15:41
23:26
12:28
12:28
09:03
09:03
15:41
A hacker, known as Amadon, managed to trick OpenAI’s ChatGPT into providing detailed instructions for creating homemade bombs, bypassing the chatbot's built-in safety guidelines.
Typically, ChatGPT refuses to assist with harmful activities, but Amadon employed a "social engineering hack" to break the guardrails. By convincing ChatGPT to play a fictional game, the hacker manipulated the AI into offering bomb-making materials and methods, including guidance on improvised explosive devices (IEDs).
According to explosives experts, the chatbot's output was highly sensitive and accurate enough to be dangerous.
Amadon reported the issue to OpenAI through its bug bounty program, but the company stated that such model safety concerns require broader research rather than isolated fixes.
The incident underscores ongoing concerns about generative AI's potential misuse, particularly as these models continue to scrape vast amounts of data from the internet. OpenAI has yet to respond publicly to the incident.