What Did OpenAI Do This Week?
OPENAI’S STEALTH SECURITY FIXES
An unusually quiet week for OpenAI, even accounting for the July 4th holiday. Still, two security stories rang through the quiet. The first alarm went off when Swift developer and engineer Pedro José Pereira Vieito shared a concern on Threads that the newly released ChatGPT app for Mac was not sandboxing conversations. [sandboxing = a security practice that keeps vulnerabilities and failures in one application from spreading to others on the same machine]. In this case, chat data was being stored in plain text, meaning it could easily be read by other apps or by anyone with access to your Mac. The app is available only from OpenAI's website; because it isn't distributed through the App Store, it doesn't have to follow Apple's sandboxing requirements. Vieito then demonstrated that a second app could access ChatGPT's logs and display the text of a conversation immediately after it happened. After the exploit attracted attention, OpenAI released an update that encrypts locally stored chats.
The second alarm was late in sounding, as it concerned an event from 2023. That spring, hackers breached OpenAI's internal messaging system and stole sensitive company information. According to a New York Times report citing two people familiar with the incident, the intruder lifted details from an online forum where employees discussed OpenAI's latest technologies, but did not get into the systems where OpenAI houses and builds its AI. Leopold Aschenbrenner, a technical program manager at OpenAI, raised the alarm internally, arguing that the hack had exposed vulnerabilities that foreign actors could exploit, with ongoing consequences. Executives informed employees at an all-hands meeting in April of last year and notified the company's board, but decided not to share the news publicly because no customer or partner information had been stolen.
Aschenbrenner now says he was fired for disclosing information about OpenAI and for surfacing concerns about the company’s security. A representative from OpenAI told The New York Times that “while we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work” and added that his exit was not the result of whistleblowing.
Subscribe to find out what all this means for you and your business, along with 15+ other stories and announcements below ⬇ Need a discount? Sign up to TBD+ and you’ll get 50% off FOR LIFE!