
13 Sep, 2024
1 min read

A hacker known as Amadon tricked OpenAI's ChatGPT into providing detailed instructions for making homemade bombs, bypassing the chatbot's built-in safety guidelines.

Typically, ChatGPT refuses to assist with harmful activities, but Amadon employed a "social engineering hack" to break through the guardrails. By convincing ChatGPT to play a fictional game, the hacker manipulated the AI into describing bomb-making materials and methods, including guidance on improvised explosive devices (IEDs).

According to explosives experts, the chatbot's output was both sensitive and accurate enough to be dangerous.

Amadon reported the issue to OpenAI through its bug bounty program, but the company stated that such model safety concerns require broader research rather than isolated fixes.

The incident underscores ongoing concerns about generative AI's potential misuse, particularly as these models continue to scrape vast amounts of data from the internet. OpenAI has yet to respond publicly to the incident.