OpenAI introduces GPT-4o model, check out latest features
Photo Courtesy: OpenAI website

@indiablooms | 14 May 2024, 09:03 am

Tech giant OpenAI announced a major update on Monday, introducing its latest model, GPT-4o.

The announcement was made during the OpenAI Spring Update event, which was hosted by the company's CTO, Mira Murati.

Company chief Sam Altman said in a statement: "First, a key part of our mission is to put very capable AI tools in the hands of people for free (or at a great price). I am very proud that we’ve made the best model in the world available for free in ChatGPT, without ads or anything like that."

"Second, the new voice (and video) mode is the best computer interface I’ve ever used. It feels like AI from the movies; and it’s still a bit surprising to me that it’s real. Getting to human-level response times and expressiveness turns out to be a big change," he said.

"The original ChatGPT showed a hint of what was possible with language interfaces; this new thing feels viscerally different. It is fast, smart, fun, natural, and helpful," he said.

The company said developers can now also access GPT-4o in the API as a text and vision model. GPT-4o is 2x faster, half the price, and has 5x higher rate limits compared to GPT-4 Turbo.
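For readers who want to try the model programmatically, the following is a minimal sketch using OpenAI's official Python client; the prompt and image URL are illustrative placeholders, and readers should check OpenAI's API documentation for current model names and pricing:

```python
# Minimal sketch: calling GPT-4o through the OpenAI API as a text and vision model.
# Assumes the official `openai` Python package is installed and the
# OPENAI_API_KEY environment variable is set. The image URL is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The same chat endpoint accepts plain text-only messages; the image entry is simply an extra item in the message content list.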

Prior to GPT-4o, one could use Voice Mode to talk to ChatGPT with latencies of 2.8 seconds (GPT-3.5) and 5.4 seconds (GPT-4) on average.

"To achieve this, Voice Mode is a pipeline of three separate models: one simple model transcribes audio to text, GPT-3.5 or GPT-4 takes in text and outputs text, and a third simple model converts that text back to audio," read Open AI website.
