OpenAI’s livestreamed GPT announcement event happened at 10 a.m. PT Monday, but you can still catch up on the reveals.
The company billed the event as a chance to demonstrate some of the latest updates to ChatGPT and GPT-4. CEO Sam Altman teased the event, saying: “While it’s not GPT-5 or a search engine, we have been diligently working on some exciting new features that we believe will be well-received. It truly feels like magic to me.”
During the event, OpenAI introduced a new model called GPT-4o, with the “o” standing for “omni.” The new model responds faster to voice commands and has improved vision capabilities.
Speaking at a keynote at OpenAI’s San Francisco headquarters, CTO Mira Murati said GPT-4o’s ability to reason across voice, text, and vision will be central to the future of human-machine interaction.
Following the event, OpenAI posted a series of demos of GPT-4o’s capabilities to its YouTube channel, including improving visual accessibility through its partnership with Be My Eyes, harmonizing with itself in song, and expanded translation capabilities.