
6 Jul, 2023

OpenAI has announced the formation of a dedicated team tasked with managing the risks associated with superintelligent artificial intelligence.

The move comes as governments around the world grapple with how to regulate artificial intelligence.

Superintelligent AI refers to a hypothetical system that surpasses human intelligence across a wide range of domains. OpenAI believes such a system could become a reality before the end of the decade. While acknowledging the potential of superintelligence to help solve critical global problems, OpenAI also recognizes the enormous risks it poses, up to and including the disempowerment or extinction of humanity.

The newly created team will be co-led by OpenAI Chief Scientist Ilya Sutskever and Jan Leike, head of alignment research. OpenAI has committed 20 percent of its computing power to the initiative, with the goal of building an automated alignment researcher. This system is intended to help OpenAI ensure that superintelligent AI is safe and aligned with human values.

OpenAI acknowledges the ambitious nature of the goal and the challenges ahead, but remains optimistic that a focused effort can address these risks. The organization says it has already identified promising ideas and metrics for measuring progress, and plans to share a roadmap in the future.