OpenAI Co-Founder Ilya Sutskever Starts New AI Venture
The co-founder of OpenAI, who departed from the leading artificial intelligence startup last month, has revealed his new initiative: a company focused on creating safe and powerful AI that might challenge his former employer. Ilya Sutskever shared the news about his new enterprise, Safe Superintelligence Inc. (SSI), in a post on X on Wednesday. Sutskever left his position as OpenAI’s chief scientist last month to pursue a personally meaningful project, coinciding with other notable departures from the company.
According to a statement on the company’s website, SSI’s mission and product roadmap are entirely centered on safety. The company’s team, investors, and business strategy are unified in their aim to advance AI capabilities rapidly while ensuring safety remains a priority, enabling scalable and secure progress.
This announcement arrives amidst growing apprehension in the tech industry regarding the swift pace of AI advancements compared to the development of safety and regulatory measures. Current regulations are insufficient, leaving tech companies largely self-regulated in terms of safety guidelines.
Sutskever, a key figure in the AI revolution, started his career under Geoffrey Hinton, the “Godfather of AI,” working on an AI startup later acquired by Google. After contributing to Google’s AI research team, he co-founded OpenAI, the creator of ChatGPT.
However, Sutskever’s time at OpenAI became tumultuous when he played a role in an attempt to remove CEO Sam Altman, which led to a dramatic leadership shuffle. This incident saw Altman dismissed and then reinstated, with significant changes to the board occurring within a week. Sutskever later expressed deep regret for his involvement and signed a letter calling for the board’s resignation and Altman’s return.
The operational and financial strategies of Safe Superintelligence remain unclear, including how it will monetize a safer AI model and the specifics of its technology. The company’s definition of “safety” in the context of advanced AI also lacks clarity.
Nonetheless, the launch reflects a belief within the tech community that highly intelligent AI systems are an imminent reality rather than distant science fiction. Sutskever has compared the company's approach to safety with nuclear safety, rather than with conventional "trust and safety" measures.
Recently, some ex-OpenAI employees criticized the company for emphasizing commercial growth over long-term safety. Jan Leike, a former employee, highlighted concerns about OpenAI's decision to disband its "superalignment" team, which focused on training AI to align with human needs and priorities. OpenAI said it redistributed the superalignment team's members across the company to better meet its safety objectives.
The launch announcement from SSI emphasizes the company's commitment to an undistracted focus on safety, security, and progress, free from short-term commercial pressures. Sutskever is joined in the new venture by Daniel Levy, an OpenAI alumnus, and Daniel Gross, a former Y Combinator partner and Apple machine learning expert. The company will operate from offices in Palo Alto, California, and Tel Aviv, Israel.
Copyright 2024 The CEOWORLD magazine. All rights reserved. This material (and any extract from it) must not be copied, redistributed or placed on any website without CEOWORLD magazine's prior written consent. For media queries, please contact: info@ceoworld.biz