
13 Sep, 2024

A hacker known as Amadon tricked OpenAI’s ChatGPT into providing detailed instructions for making homemade bombs, bypassing the chatbot's built-in safety guidelines.

ChatGPT typically refuses to assist with harmful activities, but Amadon used a "social engineering hack" to break its guardrails. By convincing ChatGPT to play a fictional game, the hacker manipulated the AI into describing bomb-making materials and methods, including guidance on improvised explosive devices (IEDs).

According to explosives experts, the chatbot's output was detailed and accurate enough to be dangerous.

Amadon reported the issue to OpenAI through its bug bounty program, but the company responded that such model safety concerns require broader research rather than isolated fixes.

The incident underscores ongoing concerns about generative AI's potential misuse, particularly as these models continue to scrape vast amounts of data from the internet. OpenAI has yet to respond publicly to the incident.