What Did OpenAI Do This Week? - 4/06/2023 [+25 LINKS]
OPENAI - IN THE THICK OF IT
Sam Altman’s deliberate schmoozing of regional EU heads of government – many of whom have influence over the final shape of the EU’s AI rulebook via the European Council – peaked this week when he met European Commission President Ursula von der Leyen. After the meeting, von der Leyen’s tweet echoed EU industry chief Thierry Breton’s stance: “…to match the speed of tech development, AI firms need to play their part”.
The real question is whether Sam’s ‘productive weeks of conversation in Europe’ have softened the ‘over-regulation’ he saw coming down the pipe last week. He won’t have long to find out: he will meet Breton in San Francisco later this month to workshop details of an ‘interim AI Pact’ – a draft voluntary AI code intended to provide safeguards while new laws are developed. It’s a code that EU tech chief Margrethe Vestager said was likely to be drawn up ‘within weeks’, and one that, until now, only Google had publicly agreed to work on with the EU.
So why did Altman (along with hundreds of AI scientists, academics, tech CEOs and public figures) sign a statement asking for the risk of extinction from AI to be made a global priority? Firstly, the AI Pact (which is expected to set global standards) is not regulatory, and it will deal only with risks that already exist, not AI risks that don’t exist yet. The statement (hosted on the website of a San Francisco-based, privately funded non-profit called the Center for AI Safety (CAIS)) equates AI risk with the existential harms posed by nuclear apocalypse and calls for policymakers to focus their attention on mitigating a claimed ‘doomsday’, extinction-level AI risk. Here’s the (intentionally brief) statement in full:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The statement is said to be intentionally short so that the extinction-level message isn’t drowned out by messaging around the “important and urgent risks from AI” (that is, today’s problems). But you have to ask: why are OpenAI and others piling this on now, given how hard it would be to regulate against risks that don’t yet exist?
The Center for AI Safety says this “opens the discussion”, although Andrew Griffin (The Independent) rightly points out that “it reads a little like being hectored by a burglar about your house’s locks not being good enough”: every dangerous AI is the product of intentional choices by its developers, many of whom just signed this statement. These are the same companies and individuals who signed the controversial open letter promoted by Elon Musk, which pointed directly at OpenAI and proposed a six-month pause on the development of language models more powerful than GPT-4. Has AI responsibility shifted more widely? Yes. But enquiring minds may suspect that the statement is ‘more than a little helpful to the companies that signed it, in making those risks seem inevitable and naturally occurring’ – as though they’re not profiting from AI and can’t influence it. Equally, it conveniently acts as a smokescreen around the real issues that need regulating right now. The argument isn’t going away: the EU is driving accountability, and the world is watching.