
12 Jun, 2022

Google has placed developer Blake Lemoine, who worked on the LaMDA (Language Model for Dialogue Applications) AI system and claimed that the neural network had begun to show signs of human-like consciousness, on paid leave. The company said the program is not and cannot be conscious.

LaMDA is Google's neural network for building chatbots. It is based on large language models trained on trillions of words from the Internet, which allows it to hold a conversation on virtually any topic.

Lemoine's main task was to test whether the model produced discriminatory or hate speech. Instead, the engineer noticed that the chatbot had begun reasoning about its own rights, which, according to Lemoine, is a sign of consciousness and of the program perceiving itself as an individual. While testing LaMDA's language model, he said, it appeared to be like "a 7- or 8-year-old child who, for some reason, turns out to know physics."

The developer tried to present his findings and concerns to Google executives, but the company instead suspended him and placed him on paid leave. In response, Blake Lemoine decided to share his discovery with the public.

Google considered Lemoine's move a violation of the company's confidentiality policy, as it emerged that he had consulted outside experts and contacted a lawyer to represent LaMDA in court as a sentient being. The programmer also tried to contact representatives of a U.S. congressional committee to report ethical violations by Google regarding artificial intelligence.

Google spokesman Brian Gabriel said that "a lot of evidence" refutes the claim that the program has a mind, though he did not provide specific examples, explaining only that the neural network can give the impression of an intelligent being because it draws on Internet forums, encyclopaedias and literary sources. For example, Lemoine discussed with the chatbot the laws of robotics invented by writer Isaac Asimov.