
In crowded voice AI market, OpenAI bets on instruction-following and expressive speech to win enterprise adoption



OpenAI is adding to an increasingly competitive enterprise voice AI market with its new model, gpt-realtime, which follows complex instructions and offers voices "that sound more natural and expressive."

As voice AI continues to grow and customers find use cases such as customer service calls and real-time translation, the market for realistic-sounding AI voices that also offer enterprise-grade security is heating up. OpenAI claims its new model provides a more human-like voice, but it still must compete against companies like ElevenLabs.

The model will be available on the Realtime API, which the company also made generally available. Along with the gpt-realtime model, OpenAI also released new voices on the API, which it calls Cedar and Marin, and updated its other voices to work with the latest model.

OpenAI said in a livestream that it worked with its customers who are building voice applications to train gpt-realtime and “carefully aligned the model to evals that are built on real-world scenarios like customer support and academic tutoring.”


The company touted the model’s ability to create emotive, natural-sounding voices that also align with how developers build with the technology.

The model operates within a speech-to-speech framework, enabling it to understand spoken prompts and respond vocally. Speech-to-speech models are ideally suited for real-time responses, where a person, typically a customer, interacts with an application.

For example, a customer wants to return some products and calls a customer service platform. They could be talking to an AI voice assistant that responds to questions and requests as if they were speaking with a human.

In a livestream, OpenAI customer T-Mobile showcased an AI voice-powered agent that helps people find new phones. Another customer, the real estate search platform Zillow, showcased an agent that helps someone narrow down a neighborhood to find the perfect place.

OpenAI said gpt-realtime is its “most advanced, production-ready voice model.” Like its other voice models, it can switch languages mid-sentence. However, OpenAI researchers noted gpt-realtime can follow more complex instructions like “speak emphatically in a French accent.”
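As a rough sketch of how a developer might pass such voice instructions to the model: the Realtime API is configured by sending JSON events over a WebSocket, and a `session.update` event with an `instructions` field is the documented way to steer the model's behavior. The event name and general shape below follow OpenAI's Realtime API conventions, but the exact field values (and the helper function itself) are illustrative assumptions, not an authoritative integration.

```python
import json

def build_session_update(instructions: str, voice: str = "marin") -> str:
    """Build a Realtime API `session.update` event as a JSON string.

    NOTE: a hypothetical helper for illustration. The event type and
    nesting follow OpenAI's Realtime API docs; the specific values
    (voice name, instruction text) are assumptions for this sketch.
    """
    event = {
        "type": "session.update",
        "session": {
            # "marin" and "cedar" are the two new voices OpenAI announced
            "voice": voice,
            # Free-form steering text the model is trained to follow
            "instructions": instructions,
        },
    }
    return json.dumps(event)

# A client would send this payload over the API's WebSocket connection,
# e.g. to request the accent behavior described above:
payload = build_session_update("Speak emphatically in a French accent.")
```

In practice the same `session.update` mechanism would carry other per-session settings as well, which is why instruction-following quality matters: everything from persona to pacing rides on that one free-form field.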

But gpt-realtime faces competition from other models that many brands already use. ElevenLabs released Conversational AI 2.0 in May. SoundHound partners with fast food franchises on AI voice drive-thrus. Empathic AI startup Hume has launched its EVI 3 model, which allows users to generate AI versions of their own voice.

As enterprises discover more use cases for voice AI, general model providers offering multimodal LLMs are also making a case for themselves. Mistral released its new Voxtral model, stating it would work well for real-time translation. Google is enhancing its audio capabilities and gaining popularity with an audio feature on NotebookLM that converts research notes into a podcast.

OpenAI said gpt-realtime is smarter and understands native audio better, including the ability to catch non-verbal cues like laughs or sighs.

Benchmarking using the Big Bench Audio eval showed the model scoring 82.8% in accuracy, compared to its previous model, which scored 65.6%. OpenAI did not provide numbers testing gpt-realtime against models from its competitors.

OpenAI focused on improving the model's instruction-following capabilities so that it adheres to developer directions more reliably; the new model scores 30.5% on the MultiChallenge audio benchmark. The engineers also beefed up function calling so gpt-realtime can invoke the correct tools.

To support the new model and enhance how enterprises integrate real-time AI capabilities into their applications, OpenAI has added several new features to the Realtime API.

The API can now support the Model Context Protocol (MCP) and recognize image inputs, allowing it to tell users what it sees in real time. This is a capability Google heavily emphasized during its Project Astra presentation last year.

The Realtime API can also handle the Session Initiation Protocol (SIP), which connects applications to telephone systems such as the public switched telephone network or desk phones, opening up more contact-center use cases. Users can also save and reuse prompts on the API.

So far, early reactions to the model have been positive, although these are still initial tests of a recently released model. One commenter noted: "Tbh, the MCP and SIP features are the real story here, not just another model. The ability to connect to external tools and systems seamlessly is what will finally move these models from being impressive demos to being integrated into actual workflows."


venturebeat