OpenAI Co-founder Ilya Sutskever Launches New Startup to Pioneer ‘Safe Superintelligence’
Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon (@gadgetry@techhub.social)
Ilya Sutskever, former chief scientist at OpenAI, has revealed his next major project after departing the AI research company he co-founded in May.
Together with fellow OpenAI alumnus Daniel Levy and former Apple AI lead Daniel Gross, Sutskever has formed Safe Superintelligence Inc. (SSI), a startup focused solely on building safe superintelligent systems.
Announcing the venture on X, Sutskever wrote: “I am starting a new company: https://t.co/BG3K3SI3A1”
The formation of SSI follows the brief November 2023 ousting of OpenAI’s CEO Sam Altman, in which Sutskever played a central role before later expressing regret over the situation.
In a message on SSI’s website, the founders state:
“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
Sutskever’s work at SSI represents a continuation of his efforts at OpenAI, where he was part of the superalignment team tasked with designing control methods for powerful new AI systems. However, that group was disbanded following Sutskever’s high-profile departure.
According to SSI, it will pursue safe superintelligence in “a straight shot, with one focus, one goal, and one product.” This singular focus stands in contrast to the diversification seen at major AI labs like OpenAI, DeepMind, and Anthropic over recent years.
Only time will tell if Sutskever’s team can make substantive progress toward their lofty goal of safe superintelligent AI. Critics argue the challenge represents a matter of philosophy as much as engineering. However, the pedigree of SSI’s founders means their efforts will be followed with great interest.
In the meantime, expect to see a resurgence of the “What did Ilya see?” meme:
What did Ilya see? pic.twitter.com/2Vhn2OXBMt