
6 Dec, 2022

AI-generated answers, many of them wrong, flooded the site.

The ban on publishing AI-generated responses came after the site was flooded with answers produced by ChatGPT. Moderators explained that the AI very often gives plausible-looking but incorrect answers. The ban is temporary; a final ruling will be made later, after consultation with the community.

“The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. As such, we need the volume of these posts to reduce [...] So, for now, the use of ChatGPT to create posts here on Stack Overflow is not permitted. If a user is believed to have used ChatGPT after this temporary policy is posted, sanctions will be imposed to prevent users from continuing to post such content, even if the posts would otherwise be acceptable,” wrote Stack Overflow moderators.

A Hacker News user also complained about the bot's coding output: according to him, the text looked convincing but contained serious errors. Some users turned the question of AI moderation over to ChatGPT itself, asking the bot to generate arguments for and against its ban on Stack Overflow. The bot replied that it was “a complex decision that would need to be carefully considered by the community.”

The problem is likely widespread and not limited to Stack Overflow. Experts are already discussing the potential harms of large language model (LLM) systems. Yann LeCun, chief AI scientist at Facebook parent Meta, believes that LLMs can misinform users and cause harm.

OpenAI released the ChatGPT bot last week. It is built on the GPT-3.5 language model. As it turned out, the bot can give logical answers to simple questions and even write a script in C#. However, when a question becomes more complicated, the service starts adding false information to its answers.
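The danger the moderators describe is that such answers are not obviously wrong. A hypothetical illustration (not an actual ChatGPT transcript) of the kind of plausible-looking but subtly incorrect code the article describes:

```python
# Hypothetical example of "plausible but wrong" generated code: the function
# reads naturally and passes a quick glance, yet omits the century rule.

def is_leap_year(year):
    """Looks reasonable, but incomplete."""
    return year % 4 == 0  # wrong: 1900 % 4 == 0, yet 1900 was not a leap year


def is_leap_year_correct(year):
    """Full Gregorian rule: divisible by 4, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


print(is_leap_year(1900))          # True  -- plausible but wrong
print(is_leap_year_correct(1900))  # False -- correct
```

An answer like the first function would be easy to generate in bulk and hard to catch in review, which is exactly the volume-versus-quality problem the moderators cite.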