In a bold and visionary move, Ilya Sutskever, the co-founder and former chief scientist of OpenAI, has embarked on a new venture aimed at creating a safe and powerful AI system. Announced on Wednesday, Sutskever’s new company, Safe Superintelligence Inc. (SSI), is dedicated to one clear mission: advancing AI capabilities while prioritising safety.
The formation of SSI comes amidst growing concerns about the balance between AI advancement and safety. Sutskever emphasised that SSI’s approach integrates safety and capabilities, enabling rapid development without compromising safety. This strategy stands in stark contrast to the pressures faced by AI teams at major tech firms like OpenAI, Google, and Microsoft, which must juggle management demands and product cycles.
“By insulating our business model from short-term commercial pressures, we ensure that safety, security, and progress are not just priorities but the very foundation of our operations,” stated Sutskever. This singular focus on building a safe superintelligence is intended to eliminate the distractions that typically plague other AI projects.
SSI boasts an impressive founding team: Daniel Gross, a former AI lead at Apple, and Daniel Levy, previously a member of technical staff at OpenAI. Together they bring deep experience in AI development and safety to drive SSI’s vision forward.
Sutskever’s departure from OpenAI in May followed a tumultuous period in which he led a push to oust CEO Sam Altman. The internal conflict highlighted differing priorities within OpenAI, particularly over the emphasis placed on safety. Shortly after Sutskever’s exit, AI researcher Jan Leike and policy researcher Gretchen Krueger also left the company, both citing concerns that safety processes were being overshadowed by the pursuit of flashy products.
In an interview with Bloomberg, Sutskever underscored SSI’s unwavering commitment to its mission. The company’s first and only product for the foreseeable future will be a safe superintelligence. This focused strategy means that, unlike OpenAI, with its burgeoning partnerships with Apple and Microsoft, SSI will steer clear of diversifying its efforts until it has achieved its foundational goal.
Sutskever’s initiative signals a significant shift in the AI landscape, one that underscores the critical importance of aligning technological progress with robust safety measures. As SSI embarks on this journey, the tech world watches with keen interest, recognising the potential for this new venture to set a precedent for future AI developments.