
31 Mar, 2023
2 min read

300M jobs at risk of being replaced by AI. Musk and 1,100 others call for 6-month AI development pause.

A recent report from Goldman Sachs has warned that the increasing use of Artificial Intelligence (AI) could lead to the displacement of the equivalent of 300 million full-time jobs. The report highlights that while AI technology could increase productivity and boost global GDP by up to seven percent over time, it will also cause significant disruption in the labour market, with up to two-thirds of jobs being automated to some degree in the US and Europe.

According to the report, the impact of AI will vary significantly across sectors. The occupations most exposed to automation include administration (46 percent of tasks), legal professions (44 percent), architecture and engineering (37 percent), the life, physical and social sciences (36 percent), and business and financial operations (36 percent). At the other end of the scale, the three least-exposed categories are building and grounds cleaning and maintenance, with only one percent of tasks affected, followed by installation, maintenance and repair at four percent, and construction at six percent.

The report also states that generative AI could reduce employment in the near term. At the same time, it notes that 60 percent of workers today are in occupations that did not exist in 1940, suggesting that AI technology could also create new jobs in the future.

In response to these concerns, Elon Musk, the chief executive of SpaceX and Tesla, along with over 1,100 other signatories, including AI experts and industry executives, has called for a six-month pause in the development of any AI systems more powerful than OpenAI's latest model, GPT-4. In an open letter issued by the Future of Life Institute, which is funded by the Musk Foundation, the signatories argue that "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable." The letter raises concerns about the potential risks that AI could pose to society and calls for more transparency and democratic control over its development.

While some experts have welcomed the letter, others have criticised it for not providing enough detail on how the risks of AI can be mitigated. Gary Marcus, a professor at New York University, commented that "the big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialise."

Similarly, Suresh Venkatasubramanian, a professor at Brown University and former assistant director in the White House Office of Science and Technology Policy, noted that "a lot of the power to develop these systems has been constantly in the hands of a few companies that have the resources to do it." He called for greater efforts to democratise AI technology so that its benefits can be shared more widely across society.