OpenAI disbands team dedicated to addressing AI dangers


Artificial intelligence (AI) firm OpenAI has disbanded a team devoted to mitigating the long-term dangers of so-called artificial general intelligence (AGI).

The San Francisco-based OpenAI confirmed the end of its superalignment group on May 17. Members of the group, the dissolution of which began weeks ago, were integrated into other projects and research endeavors.

“The dismantling of an OpenAI team focused on keeping sophisticated artificial intelligence under control comes as such technology faces increased scrutiny from regulators and fears mount regarding its dangers,” Yahoo News reported.

Following the disbandment of the superalignment team, OpenAI co-founder Ilya Sutskever and team co-leader Jan Leike announced their departures from the technology firm. In a post on X, Sutskever said he was leaving the company after almost a decade. He praised OpenAI in the same post, describing it as a firm whose “trajectory has been nothing short of miraculous.”

“I’m confident that OpenAI will build AGI that is both safe and beneficial,” added Sutskever, referring to AI systems intended to match or surpass human cognition. Incidentally, Sutskever was a member of the board that voted to remove founder and CEO Sam Altman last November. However, Altman was reinstated a few days later after staff and investors rebelled.

Leike’s post on X about his departure also touched on AGI. He urged all OpenAI employees to “act with the gravitas” warranted by what they are building. Leike stressed: “OpenAI must become a safety-first AGI company.” (Related: What are the risks posed by artificial general intelligence?)

Altman responded to Leike’s post by thanking him for his work at the company and expressing sadness over his departure. “He’s right – we have a lot more to do. We are committed to doing it,” the OpenAI CEO continued.

Disbandment and departures come amid advanced version of ChatGPT

The disbandment of the superalignment team, alongside the departures of Sutskever and Leike, came as OpenAI released an advanced version of its signature ChatGPT chatbot. This new version, which boasts higher performance and even more human-like interactions, was made free to all users.

“It feels like AI from the movies,” Altman said in a blog post. The OpenAI CEO has previously pointed to the 2013 film “Her” as an inspiration for where he would like AI interactions to go. “Her” centers on the character Theodore Twombly (played by Joaquin Phoenix) developing a relationship with an AI named Samantha (voiced by Scarlett Johansson).

Sutskever, meanwhile, said during a talk at a TED AI summit in San Francisco late last year that “AGI will have a dramatic impact on every area of life.” He added that the day will come when “digital brains will become as good and even better” than those of humans.

According to WIRED magazine, research on the risks associated with more powerful AI models will now be led by John Schulman after the superalignment team’s dissolution. Schulman co-leads the team responsible for fine-tuning AI models after training.

OpenAI declined to comment on the departures of Sutskever, Leike or other members of the now-disbanded superalignment team. It also refused to comment on the future of its work on long-term AI risks.

“There is no indication that the recent departures have anything to do with OpenAI’s efforts to develop more human-like AI or to ship products,” the magazine noted. “But the latest advances do raise ethical questions around privacy, emotional manipulation and cybersecurity risks.”

Head over to Robots.news for more stories about AI and its dangers.

Watch Glenn Beck explaining the issue involving Scarlett Johansson’s accusation that OpenAI stole her voice to use in the latest version of ChatGPT.

This video is from the High Hopes channel on Brighteon.com.

More related stories:

AI pioneer warns humanity’s remaining timeline is only a few more years thanks to the risk that emerging AI tech could destroy the human race.

AI now considered an “existential risk” to the planet and humankind as atomic scientists reset Doomsday Clock at 90 seconds to midnight.

Is AI going to kill everyone? Top experts say yes, warning about “risk of extinction” similar to nuclear weapons, pandemics.

DeepLearning.AI founder warns against the dangers of AI during annual meeting of globalist WEF.

OpenAI researchers warn board that rapidly advancing AI technology threatens humanity.

Sources include:

Yahoo.com

WIRED.com

Brighteon.com

