More than 1,100 people, including Elon Musk, Steve Wozniak, and Tristan Harris of the Center for Humane Technology, have signed an open letter calling for a pause on the development and deployment of AI systems more powerful than GPT-4.
The letter argues that AI labs are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control." The signatories are calling for a six-month pause that should be "public and verifiable, and include all key actors." If such a pause "cannot be enacted quickly, governments should step in and institute a moratorium," the letter says.
While some AI experts have signed the letter, OpenAI, the company behind the large language model GPT-4, has not. OpenAI CEO Sam Altman said the company has long prioritized safety in its development work and spent more than six months on safety testing of GPT-4 before its launch. Altman also noted that OpenAI has not started training GPT-5.
The open letter states the following:
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.