What Did OpenAI Do This Week? - 09/07/2023 [+20 LINKS]
DID OPENAI JUST SAVE US…OR ITSELF?
OpenAI was very busy this week getting its house in order after formally recognising that Sam Altman et al.'s world trip ('conversations across 22 countries') "clarified [the company's] sense of what it takes for our products to be accessible and useful for users and developers around the world." The company also pulled the plug on the ChatGPT/Bing integration after people used it to bypass paywalls (who could ever have foreseen bad actors?!). OpenAI also made the GPT-4 API with 8K context available to all existing, paying API customers, and announced that Code Interpreter will roll out to all ChatGPT Plus users over the next week.
OpenAI execs also expressed intent to "double down" on efforts to secure our future from AI destruction, investing significant resources and announcing the creation of a new team dubbed 'the Superintelligence team' in a short blog post. The new team will be led by Ilya Sutskever (Co-founder and Chief Scientist of OpenAI) and Jan Leike (Head of Alignment). Most interesting? The company believes Superintelligence (way beyond what we have today) is super dangerous. Groundbreaking.
“Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.
While superintelligence* seems far off now, we believe it could arrive this decade.

*Here we focus on superintelligence rather than AGI to stress a much higher capability level. We have a lot of uncertainty over the speed of development of the technology over the next few years, so we choose to aim for the more difficult target to align a much more capable system.”
The team will be formed of top machine learning researchers and engineers drawn from the previous alignment team, from researchers across the company, and from a new recruitment drive (promoted in the blog post). OpenAI will work to unlock 'scientific and technical breakthroughs to steer and control AI systems much smarter than us', systems the company states will be with us within the decade.
Their aim is to align Superintelligence within 4 years, dedicating 20% of the compute OpenAI has secured to date to the problem. How? The company intends to create a "human-level" AI alignment researcher, and then scale it with vast amounts of compute. OpenAI says that means they will train AI systems using human feedback, train AI systems to assist human evaluation, and finally train AI systems to actually do the alignment research. "Alignment research" refers to ensuring AI systems achieve desired outcomes without going "rogue".
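To make the first step, "train AI systems using human feedback", concrete, here is a deliberately toy sketch (my illustration, not OpenAI's actual method or code): candidate answers get scalar scores, and each human preference judgment nudges the preferred answer's score up and the rejected one's down. Real systems learn a neural reward model from such comparisons, but the feedback loop has this shape.

```python
# Toy illustration of learning from human preference feedback.
# This is an assumption-level sketch, NOT OpenAI's implementation:
# the "model" is just a dict of scalar scores per candidate answer.

def update_scores(scores, preferred, rejected, lr=0.1):
    """Nudge scores toward the human's choice: preferred up, rejected down."""
    scores = dict(scores)  # keep the update functional
    scores[preferred] = scores.get(preferred, 0.0) + lr
    scores[rejected] = scores.get(rejected, 0.0) - lr
    return scores

# Simulated human labels: each pair is (preferred answer, rejected answer).
feedback = [("a", "b"), ("a", "c"), ("a", "b")]

scores = {}
for preferred, rejected in feedback:
    scores = update_scores(scores, preferred, rejected)

# Rank candidates by learned score; "a" ends up on top here.
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[0])  # → a
```

Steps two and three in OpenAI's plan then bootstrap on this: once a model can imitate the human judgments feeding such updates, it can assist (and eventually replace) the human evaluator.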
Responses to OpenAI’s new research (rescue) mission have been mixed to say the least. Some commentators have (again) reminded us that hyping the risks of future Superintelligent AI (again) conveniently drives attention away from the concrete and present-day issues which AI companies are currently exacerbating. Others simply expressed thanks that they’re living in a time where someone is doing this…
Want to know what this means for you and +20 news links this week? Subscribe now… Need more reasons? Forbes recommends WDODTW for one and in a short time WDODTW is already in the top 100 AI Substacks:
SO WHAT?