20 Apr 2023
2 min read

Google rushed its chatbot Bard to market to keep up with Microsoft and OpenAI, despite employee warnings against launching it.

According to an eye-opening report from Bloomberg, 18 current and former Google workers have criticized the company's chatbot, Bard, in internal messages. These workers labeled Bard "a pathological liar" and urged the company not to launch it, citing concerns about the dangerous advice it gave users. The report also includes screenshots of internal messages in which one employee noted that Bard frequently provided users with unsafe guidance on topics such as landing a plane or scuba diving. Another employee described Bard as "worse than useless" and pleaded with the company not to release it.

Despite these concerns, Google provided early access to the "experimental" bot in March 2023, apparently in a bid to keep up with rivals like Microsoft and OpenAI. Bloomberg reports that the company even overruled a risk evaluation submitted by an internal safety team, which stated that the system was not ready for general use.

This report highlights how Google has apparently prioritized business over ethical concerns in its race to compete with rivals. While the company often touts its safety and ethics work in AI, it has long faced criticism for placing business interests above all else.

In late 2020 and early 2021, the company fired two researchers, Timnit Gebru and Margaret Mitchell, after they authored a research paper exposing flaws in the same AI language systems that underpin chatbots like Bard. Bloomberg suggests that the company's decision to release Bard despite concerns from its own employees is further evidence of its focus on business over safety.

However, others argue that public testing is necessary to develop and safeguard these systems and that the harm caused by chatbots is minimal. While chatbots can produce toxic text and offer misleading information, so can countless other sources on the web. Nevertheless, directing a user to a bad source of information is different from giving them that information directly with all the authority of an AI system.

In response to Bloomberg's report, a Google spokesperson stated that AI ethics remained a top priority for the company and that it was continuing to invest in the teams that work on applying its AI Principles to its technology.