What Did OpenAI Do This Week? - 09/04/2023
OPENAI PLAYS HOT POTATO IN SAFETY BLOG POST [+20 LINKS]
OpenAI posted a not-so-whopping 1,187-word blog post outlining its approach to safety. Reactions ranged from journalists labelling the post a ‘whack-a-mole response to multiple controversies’ to AI experts and political figures asking whether the company even acknowledges the existential risk of what it is building.
The company reaffirmed its commitment to keeping powerful AI “safe and broadly beneficial” and to building safety “into our system at all levels” because of the “real risks” that come with tools like ChatGPT. The key word here is ‘broad’. OpenAI listed efforts to protect children, minimise harmful content, improve factual accuracy, and remove personal information from its training data “where feasible” (which appears to mean ‘on request’), all prefaced with an explanation of its ‘rigorous safety evaluations’: rigorous testing, expert feedback, reinforcement learning from human feedback, and broad safety and monitoring systems (there’s that word ‘broad’ again).
The privacy paragraph was a great opportunity for OpenAI to explain how it actually does this and to offer some data on its success to date. Instead, an incredibly important area remains fluffy: it reads as though the company will do its best and offer a ‘we tried’ should anything go badly. OpenAI has yet to publish a privacy impact assessment.
A key paragraph suggests OpenAI considers the responsibility for what is developed and deployed to lie not with the company making the technology, but with those outside it. A lovely buck-passing statement that again speaks volumes about where the company is heading and how it plans to get there.
Also contained in the post, and news to many, is that under-18s are not, per OpenAI, allowed to use ChatGPT — something that, if you’ve been on TikTok recently, is clearly not happening. OpenAI says it is looking into (note: not currently working on) verification options. Amid various issues bubbling to the surface, including technological sexism, OpenAI would do well to get ahead of the press on these arguments and issues while it can.
SO WHAT?
Released after US President Joe Biden's remarks about the risks of AI systems, OpenAI's blog post outlines positive steps towards responsible AI development but does little to acknowledge potential risks and limitations. Partly a response to the Italy ban, the data investigation in Canada, and impending lawsuits over false accusations generated by ChatGPT, the post was something OpenAI needed to put out fast to make it look like safety and security are front and centre. To be clear, the post does little to stop that bleeding and is a missed opportunity to score some proper PR points, but it will buy the company some time. It also shows that OpenAI is keen to pass responsibility for the higher-order risks on to others rather than lead the debate and be open and upfront, for the sake of market share and subscription dollars.
As OpenAI says, there are limits to what you can learn in a lab, but with so much seemingly being ignored, the company is beginning to look blasé or, worse, reckless. The development of AI needs to be much more transparent if we are to create responsible systems and not just move fast and break things (Stanford agrees). We know how that story goes, and we’re all still paying the price.
WANT TO INNOVATE LIKE OPENAI? Order your copy of the second volume of ‘Disruptive Technologies’ now!
OpenAI published a blog post on its approach to AI safety. /OpenAI Blog
OpenAI pledged to be more transparent about user data and age verification. /Reuters
OpenAI plans to present measures to remedy Italy’s ban on Thursday. /Reuters
GPT-4 passed the US medical licensing exam with flying colours. /BI
ChatGPT falsely accused a prominent lawyer of sexual harassment allegations, even making up a source. /BI
ChatGPT was found to be making up fake Guardian articles. /Guardian
ChatGPT falsely claimed an Australian Mayor was convicted of bribery. /Yahoo!
DALL·E 2 is now built into the Microsoft Edge browser. /Petapixel
OpenAI’s plagiarism tool was found to be 3x less effective than competitors. /FT
Sam Altman (OpenAI CEO) was interviewed about his learnings from GPT-4 and ‘Impromptu’, the first book written with GPT-4. /Greylock
OpenAI’s President, Greg Brockman, was profiled. /The Information
OpenAI was served its first defamation lawsuit over ChatGPT content. /Reuters
ChatGPT is under investigation by the Canadian privacy commissioner. /Betakit
Former OpenAI VP of Research, Dario Amodei, wants to raise $5B to compete with OpenAI. /TechCrunch
What OpenAI is doing that Google isn’t. /The Information
Bloomberg launched ‘BloombergGPT’, a large-scale generative AI model built to outperform the competition. /Nieman Lab
Samsung employees unwittingly leaked confidential data to ChatGPT. /Gizmodo
Universities in Japan are limiting the usage of ChatGPT. /JapanNews
DALL·E is being used by nursing facility residents to express themselves. /Patch
Alibaba’s ChatGPT competitor is called ‘Tongyi Qianwen’ and is currently invite-only. /Reuters
Amazon launched a global generative AI accelerator for start-ups. /Amazon
Microsoft and Google are choosing speed over caution in AI race. /NYTimes