OpenAI has introduced GPT-4o, its newest flagship artificial intelligence model, designed to change how people interact with its AI-powered chatbot, ChatGPT, and its other products.
The "o" stands for "omni": the model combines text, speech, and vision capabilities in a single system, promising a more seamless and immersive user experience.
GPT-4o, an advancement over its predecessor GPT-4, is more capable across every modality it supports. It handles voice natively, letting users speak to ChatGPT conversationally, interrupt its responses mid-sentence, and receive nuanced replies in real time. It also upgrades ChatGPT's vision abilities, so the chatbot can analyze images and answer questions about them quickly and accurately.
This multimodal model not only improves the user experience but also strengthens multilingual support, with improved performance in roughly 50 languages. GPT-4o also offers faster processing and higher rate limits than GPT-4 Turbo, making it more efficient and cost-effective.
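To give a sense of what the multimodal capability looks like for developers, here is a minimal sketch of calling the model through OpenAI's Python SDK, sending a text question and an image in a single request. The image URL and the prompt are placeholders for illustration, and the snippet assumes the v1.x SDK is installed with an API key set in the environment.

```python
from openai import OpenAI

# Assumes the openai Python SDK (v1.x) is installed and
# OPENAI_API_KEY is set in the environment.
client = OpenAI()

# One request that mixes text and an image; the URL is a placeholder.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The same model name handles text-only requests as well; the image entry in the message content is simply omitted.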
OpenAI has also unveiled a refreshed ChatGPT UI, providing a more conversational interface, along with a desktop version for macOS users and plans for a Windows version in the future.
Furthermore, OpenAI has opened the GPT Store, its catalog of third-party chatbots built with its GPT creation tools, to users on ChatGPT's free tier. Free users also gain access to previously paid-only features, including memory, file and photo uploads, and web search from within ChatGPT.