What Did OpenAI Do This Week?
OPENAI BREATHES LIFE INTO ADVANCED VOICE MODE
OpenAI began rolling out Advanced Voice mode in ChatGPT this week, but only to a small group of Plus users, with a promise that all Plus users will have access by the end of the year. After a month-long delay, users got their hands on the alpha of GPT-4o Advanced Voice mode. To activate the new capabilities, update your app; selected users receive an email with instructions and an in-app message. OpenAI was set to launch in late June but postponed to sometime in July ‘to improve its safety measures’, and probably to check with lawyers over the whole Scarlett Johansson ‘Sky’ voice debacle.
OpenAI leaned heavily on safety and copyright protections: ‘We tested GPT-4o's voice capabilities with 100+ external red teamers across 45 languages. To protect people's privacy, we've trained the model to only speak in the four preset voices, and we built systems to block outputs that differ from those voices. We've also implemented guardrails to block requests for violent or copyrighted content.’
Users have been busy tyre-kicking. The biggest surprise was its uncanny ability to breathe.
SO WHAT?
Subscribe now to find out what all this means for you and your business, along with 25+ other stories and announcements below ⬇
Need a discount? Sign up to TBD+ and you’ll get 50% off FOR LIFE!