19 Nov, 2022

Meta has withdrawn a public demonstration of its "scientific knowledge" artificial intelligence model because it generates incorrect and misleading information.

Meta released Galactica last week. The company claims the model "can store, combine and reason about scientific knowledge", but scientists quickly discovered that its summaries contain misinformation, including citations that attribute non-existent scientific papers to real authors.

"In all cases, it was wrong or biased but sounded right and authoritative. I think it's dangerous,"

wrote Michael Black, director of the Max Planck Institute for Intelligent Systems, in a thread on Twitter after using the tool.

Black gives examples of Galactica generating scientific texts that are misleading or simply wrong. While some of them look plausible, they are not backed by real scientific research. In some cases, the citations even name real authors but point to non-existent GitHub repositories and scientific papers.

In addition, researchers noted that Galactica does not generate texts on some topics at all, possibly due to automatic AI filters.

Meta decided to remove the demo version of Galactica. Motherboard reached out to the company for comment and received the following response, published by Papers With Code, the project responsible for the system:

"We appreciate the feedback we have received so far from the community, and have paused the demo for now. Our models are available for researchers who want to learn more about the work and reproduce results in the paper."

Galactica is a large language model: a system that generates plausible text resembling human writing. While the output of such models is often impressive, they frequently fail to understand the content of what they write.
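For context, Meta's statement says the model checkpoints remain available to researchers, and they can be queried like any other causal language model. Below is a minimal sketch, assuming the Hugging Face transformers library and the publicly listed facebook/galactica-125m checkpoint; the prompt and generation settings are illustrative, not taken from the article.

```python
# Minimal sketch: loading a released Galactica checkpoint and sampling text.
# Assumes the Hugging Face "transformers" library and the public
# "facebook/galactica-125m" checkpoint; prompt and settings are illustrative.
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m")

# Ask the model to continue a scientific-sounding prompt.
prompt = "The benefits of deep learning for protein folding are"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding keeps the example deterministic; the output still
# needs human verification, which is exactly the problem critics raised.
outputs = model.generate(inputs.input_ids, max_new_tokens=60)
print(tokenizer.decode(outputs[0]))
```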

Facebook has released AI before for which the company had to apologise. In August, for example, it released a demo of a chatbot called BlenderBot that made "offensive and untrue" statements while engaging in strange, unnatural conversations.