What Did OpenAI Do This Week? - 16/04/2023 [+20 LINKS]
OPENAI COMPLETES TICKBOX SECURITY EXERCISE AS REGULATORS CIRCLE WAGONS
Amid a week of deep-pocketed competitor announcements, six days after outlining its approach to safety and eight years after OpenAI's founding, the company launched its bug-finding program. The average deployment of bug programs is 2.7 years, though it's important to note that deployment does vary.
According to Greg Brockman (OpenAI's President/Co-Founder, who is talking at TED this week), the program takes one step toward resolving some security safety concerns. While somewhat expected (Brockman tweeted about the program earlier in the year), the announcement comes as calls to regulate OpenAI's products (and AI in general) in the US and Europe are getting louder and louder. Per the OpenAI blog post, "… with any complex technology, we understand that vulnerabilities and flaws can emerge. We believe that transparency and collaboration are crucial to addressing this reality. That's why we are inviting the global community of security researchers, ethical hackers, and technology enthusiasts to help us identify and address vulnerabilities in our systems."
Find a bug, get cold, hard cash; "rewards range from $200 for low-severity findings to up to $20,000 for exceptional discoveries". OpenAI has partnered with Bugcrowd, a leading bug bounty platform, to manage the submission and reward process using the Bugcrowd Vulnerability Rating Taxonomy. OpenAI promises 'acknowledgement and credit contributions' if you are the first to report a unique vulnerability that leads to a code or configuration change. Engagement in the program comes with a chunky set of warnings, guidelines, and scope directives. At the time of writing, 29 vulnerabilities have been rewarded with cash.
Global AI expert Dr Lance B. Eliot notes on Forbes that the scheme isn't meant to tackle the kind of behaviour bugs that could wipe out all of humankind; it's the everyday kind that is being hunted down. Overall, this is a cute, semi-serious response to the current climate and indicates that OpenAI knows there are issues, people are looking, and the competition is increasing.
SO WHAT?
The program launched the same week as the Biden Administration called for public comments on potential accountability measures for artificial intelligence (AI) systems and leading AI firms met to discuss best practices. The issue here is less the timing and more the lack of openness within a seemingly very open move. The move feels more like an eerie badging exercise for a future Congress appearance than a move to make things happening right now better. True to OpenAI's modus operandi as a pseudo 'open source' company, the bug bounty program is framed as an invitation to everyone to share in the responsibility to fix OpenAI's problems; just don't jailbreak ChatGPT or cause it to generate malicious code or text. OpenAI states, "Issues related to the content of model prompts and responses are strictly out of scope, and will not get rewarded," and "…model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed." A way of saying, don't show us up; we can fix all the embarrassing things behind closed doors #kfanksbai. Ultimately, any bounty program is just icing when governments tell you what your cake can be made of and how it needs to be made. When this will happen exactly is currently unclear. The smart money is on sooner rather than later, especially with the hype around AutoGPT, and while fear is greater than understanding in certain influential circles.