[BY]
Dmytro Kremeznyi
[Category]
AI
[DATE]
Jun 21, 2024
Ilya Sutskever, former chief scientist at OpenAI, has founded Safe Superintelligence Inc., a company dedicated to developing a powerful AI system.
Ilya Sutskever, co-founder and former chief scientist of OpenAI, has launched a new venture, Safe Superintelligence Inc. (SSI), announced last Wednesday. The company's mission is to develop a powerful AI system that prioritizes safety, marking Sutskever's first major move since his departure from OpenAI.
SSI differentiates itself by treating safety and capabilities as a single engineering problem, advancing both in tandem so the company can move quickly without letting safety fall behind. Sutskever emphasized that SSI will focus solely on building a safe AI system, free of the distractions of management overhead and frequent product cycles.
SSI's founding team includes Daniel Gross, who formerly led AI efforts at Apple, and Daniel Levy, a former colleague of Sutskever's at OpenAI. Together, they aim to achieve the company's singular objective of building a safe superintelligence.
While major players like Apple and Microsoft are broadening their AI partnerships, SSI is choosing a focused approach. In a recent interview with Bloomberg, Sutskever stated that SSI’s primary goal is to develop a safe superintelligence system, without branching out into other projects until this is accomplished.