
6 Jul, 2023
1 min read

OpenAI has announced the formation of a dedicated team tasked with managing the risks posed by superintelligent artificial intelligence.

The move comes at a time when governments around the world are trying to solve the problem of regulating artificial intelligence technology.

Superintelligent AI refers to a hypothetical system that surpasses human intelligence across virtually all domains. OpenAI believes such a system could become a reality before the end of the decade. While acknowledging the potential benefits of superintelligence in solving critical global problems, OpenAI also recognizes the enormous risks it poses, including the potential disempowerment or even extinction of humanity.

The newly created team will be co-led by OpenAI Chief Scientist Ilya Sutskever and Jan Leike, head of alignment research. OpenAI has committed 20 percent of its computing power to the initiative, with the goal of building an automated alignment researcher. This system is intended to help OpenAI ensure that superintelligent AI remains safe and aligned with human values.

OpenAI acknowledges the ambitious nature of this goal and the challenges ahead, but remains optimistic that a focused effort can address these risks. The organization notes that promising ideas and metrics for measuring progress have already been identified, and it plans to share a roadmap in the future.