OpenAI, the company behind ChatGPT, said on Wednesday that it plans to dedicate significant resources and create a new research team to ensure its artificial intelligence remains safe for humans, eventually using AI to supervise itself.
"The vast power of superintelligence could... lead to humanity's disempowerment or even extinction," stated OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike in a blog post. "At the moment, there is no solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue."
The blog post's authors predicted that superintelligent AI (systems smarter than humans) could arrive this decade. Controlling such systems, they wrote, will require better techniques than exist today, hence the need for breakthroughs in "alignment research," which focuses on ensuring AI remains beneficial to humans.
They noted that OpenAI, which is backed by Microsoft, is dedicating 20% of the compute it has secured over the next four years to solving this problem. The company is also forming a new group, called the Superalignment team, to organize the effort.
The team's goal is to build a "human-level" AI alignment researcher and then scale it up using vast amounts of compute. OpenAI says the approach is to train AI systems using human feedback, then train AI systems to assist human evaluation, and finally train AI systems to conduct alignment research themselves.
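The pipeline described above begins with training on human feedback. As a rough illustration of that first step only, here is a toy sketch of learning a scalar "reward" from pairwise human preferences under a Bradley-Terry model; this is not OpenAI's code, and every name and data point in it is hypothetical:

```python
import math

def score(weights, features):
    """Linear reward model: dot product of weights and response features."""
    return sum(w * f for w, f in zip(weights, features))

def train_reward_model(preferences, dim, lr=0.1, epochs=200):
    """Fit weights so human-preferred responses score higher than rejected ones.

    preferences: list of (preferred_features, rejected_features) pairs,
    each a length-`dim` list of floats standing in for a labelled response.
    """
    weights = [0.0] * dim
    for _ in range(epochs):
        for preferred, rejected in preferences:
            # Probability the model assigns to the human's choice under
            # the Bradley-Terry model: sigmoid of the score difference.
            diff = score(weights, preferred) - score(weights, rejected)
            p = 1.0 / (1.0 + math.exp(-diff))
            # Gradient ascent on the log-likelihood of the observed preference.
            for i in range(dim):
                weights[i] += lr * (1.0 - p) * (preferred[i] - rejected[i])
    return weights

# Hypothetical labels: the human consistently prefers responses
# with more of feature 0 and less of feature 1.
data = [([1.0, 0.2], [0.1, 0.9]), ([0.8, 0.1], [0.2, 0.5])]
w = train_reward_model(data, dim=2)
```

After training, `score(w, ...)` ranks the preferred examples above the rejected ones; in a real system, that learned reward would then steer further training of the AI itself.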
Connor Leahy, an AI safety advocate, said the plan was fundamentally flawed because an initial human-level AI could run amok and wreak havoc before it could be compelled to solve AI safety problems.
"You have to solve alignment before you build human-level intelligence, or you won't control it by default," he stated in an interview. "I personally do not think this is a particularly good or safe plan."
Both AI experts and the general public have raised concerns about the technology's potential dangers. In April, a number of AI industry leaders and experts signed an open letter calling for a six-month pause on developing systems more powerful than OpenAI's GPT-4, citing potential risks to society. A May Reuters/Ipsos poll found that more than two-thirds of Americans are worried about AI's possible negative effects, with 61% believing it could threaten civilization.