
GPT-4o: What the latest ChatGPT update can do and when you can get it

[Image: OpenAI developer using GPT-4o. Credit: OpenAI]

GPT-4o is the latest and greatest large language model (LLM) AI released by OpenAI, and it brings with it heaps of new features for free and paid users alike. It’s a multimodal AI and enhances ChatGPT with faster responses, greater comprehension, and a number of new abilities that will continue to roll out in the weeks to come.

With increasing competition from Meta’s Llama 3 and Google’s Gemini, OpenAI’s latest release is looking to stay ahead of the game. Here’s why it’s so exciting.

Availability and price

If you’ve been using the free version of ChatGPT for a while and jealously eyeing the features that ChatGPT Plus users have been enjoying, there’s great news! You too can now play around with image detection, upload files, find custom GPTs in the GPT Store, use Memory to retain context as you chat so that you don’t need to repeat yourself, and analyze data and perform complicated calculations.

That’s all alongside intelligence on par with the standard GPT-4 model, even though GPT-4o was trained from the ground up as a multimodal AI. This is possible because GPT-4o is computationally far cheaper to run than GPT-4, which makes it viable for OpenAI to offer it to a much wider user base.

However, free users will have a limited number of messages they can send to GPT-4o per day. When that threshold is reached, you’ll be bumped over to the GPT-3.5 model.
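For developers, GPT-4o is also exposed through the OpenAI API under the model name `gpt-4o`. As a rough sketch (the helper function and placeholder URL below are illustrative, not part of OpenAI's own examples; check OpenAI's documentation for current client details), a multimodal request mixing text and an image looks like this:

```python
# Sketch of a multimodal GPT-4o request payload for the OpenAI Chat
# Completions API. Building the payload needs no network access; the
# actual call (commented out) requires the `openai` package and an API key.

def build_gpt4o_request(prompt: str, image_url: str) -> dict:
    """Assemble a chat completion request that mixes text and an image."""
    return {
        "model": "gpt-4o",  # GPT-4o's identifier in the API
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_gpt4o_request(
    "What is shown in this picture?",
    "https://example.com/photo.jpg",  # placeholder URL for illustration
)

# To actually send the request (requires OPENAI_API_KEY to be set):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**request)
# print(response.choices[0].message.content)
```

Because GPT-4o was trained as a single multimodal model, the text and image parts go into one message rather than through separate pipelines.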

It’s way faster

[Image: OpenAI's Mira Murati introduces GPT-4o. Credit: OpenAI]

GPT-4 was distinct from GPT-3.5 in a number of ways, and speed was one of them: GPT-4 was just way, way slower, even with its improvements in recent months and the introduction of GPT-4 Turbo. GPT-4o, by contrast, is almost instantaneous. That makes its text responses far swifter and more actionable, with voice conversations occurring closer to real time.

While response speed feels like more of a nice-to-have feature than a game-changing one, the fact that you can get responses in near real time makes GPT-4o a much more viable tool for tasks like translation and conversational help.

Advanced voice support

Although at its initial debut GPT-4o only works with text and images, it’s been built from the ground up to take voice commands and interact with users through audio. Where GPT-4 had to convert speech into text, respond to that text, and then convert its text response back into speech, GPT-4o can hear a voice and respond in kind. Combined with its improved speed, it can respond far more conversationally, and it can understand unique aspects of voice like tone, pace, and mood.

GPT-4o can laugh, be sarcastic, catch itself when making a mistake, and adjust midstream, and you can interrupt it conversationally without that derailing its response. It can also understand different languages and translate on the fly, making it usable as a real-time translation tool. It can sing — or even duet with itself.

[Video: Two GPT-4os interacting and singing]

This could be used for interview prep, singing coaching, running role-playing NPCs, telling dramatic bedtime stories with different voices and characters, creating voiced dialogue for a game project, telling jokes (and laughing in response to yours), and so much more.

Improved comprehension

GPT-4o understands you much better than its predecessors did, especially if you speak to it. It can read tone and intention far better, and if you want it to be relaxed and friendly, it’ll joke with you in an attempt to keep the conversation light.

When it’s analyzing code or text, it’ll take your intentions into consideration far more, making it better at giving you the response you want and requiring less-specific prompting. It’s better at reading video and images, making it capable of understanding the world around it.

[Video: Live demo of GPT-4o's vision capabilities]

In several demos, OpenAI showed users filming the room they were in, with GPT-4o then describing it. In one video, the AI even described the room to another instance of itself, which then responded based on that description.

Native macOS desktop app

[Image: The ChatGPT desktop app open in a window next to some code. Credit: OpenAI]

Native AI in Windows is still restricted to the very limited Copilot (for now), but macOS users will soon be able to make full use of ChatGPT and its new GPT-4o model right from the desktop. With a new native desktop app, ChatGPT will be more readily available — and with a new user interface to boot — making it easier to use than ever before.

The app will be available for most ChatGPT Plus users in the coming days, and will be rolled out to free users in the coming weeks. A Windows version is promised for later this year.

It’s not all quite ready yet

At the time of writing, the only aspects of GPT-4o that are available to the public are the text and image modes. There’s no advanced voice support, no real-time video comprehension, and the macOS desktop app won’t be available to everyone for a few more days at least.

But it is all coming. These changes and other exciting upgrades for ChatGPT are just around the corner.

Jon Martindale
Jon Martindale is the Evergreen Coordinator for Computing, overseeing a team of writers addressing all the latest how to…