OpenAI Introduces Realtime API for Low-Latency Voice Interactions 

The Realtime API gives developers the ability to add low-latency, multimodal voice interactions to their applications.

On Monday, October 14, OpenAI officially announced the public beta of its Realtime API, which gives developers the ability to add low-latency, multimodal voice interactions to their applications.

Alongside the Realtime API, the AI company is also updating the Chat Completions API, adding audio input and output support to further expand voice-enabled application capabilities.

OpenAI Realtime API 

The Realtime API allows developers to create natural, real-time speech-to-speech applications with six preconfigured voices. By bundling speech recognition and synthesis into one API call, developers can create a more organic conversational experience without having to manage several models.

The updated Chat Completions API now allows developers to accept input in either text or audio and to return responses in text, audio, or both. This added flexibility opens up a broader range of use cases, particularly those that do not need the Realtime API's low-latency performance.
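To illustrate what an audio-enabled Chat Completions request looks like, the sketch below builds the request payload as a plain dictionary rather than calling the live API. The model name (`gpt-4o-audio-preview`) and the `modalities`/`audio` fields follow OpenAI's public documentation at the time of writing; treat them as assumptions and verify against the current docs before use.

```python
import json

def build_audio_chat_request(prompt: str, voice: str = "alloy") -> dict:
    """Build a Chat Completions request asking for both text and audio back.

    Field names mirror OpenAI's documented audio-preview API (assumption);
    check the current API reference before sending this to the endpoint.
    """
    return {
        "model": "gpt-4o-audio-preview",   # audio-capable preview model (assumption)
        "modalities": ["text", "audio"],   # request text plus a spoken response
        "audio": {"voice": voice, "format": "wav"},
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_audio_chat_request("Summarize today's weather in one sentence.")
print(json.dumps(payload, indent=2))
```

The same payload with `"modalities": ["text"]` falls back to an ordinary text-only completion, which is why the added flexibility suits applications that only occasionally need voice.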

Developing voice assistant experiences traditionally required juggling multiple models for different tasks: automatic speech recognition, text inference, and text-to-speech. That pipeline often introduces delays and loses conversational subtlety. The Realtime API addresses these issues by handling the entire interaction in a single, seamless exchange, so the conversation comes across faster and much more naturally.

The Realtime API works over a WebSocket connection that stays open for the lifetime of a session, maintaining a continuous flow of messages to and from GPT-4o. It also supports function calling, giving voice assistants the ability to place orders or access users' data to personalize responses.
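The persistent-WebSocket design means the client drives the session by sending small JSON events. The sketch below constructs two such events, including a tool definition for function calling, without opening a network connection. The event names (`session.update`, `response.create`) and payload shapes follow the Realtime API beta documentation at the time of writing, and `place_order` is a hypothetical app-side function; treat all of these as assumptions to verify against the current docs.

```python
import json

# Hypothetical tool the voice assistant can call (illustrative only).
ORDER_TOOL = {
    "type": "function",
    "name": "place_order",
    "description": "Place a food order for the user.",
    "parameters": {
        "type": "object",
        "properties": {
            "item": {"type": "string"},
            "quantity": {"type": "integer"},
        },
        "required": ["item", "quantity"],
    },
}

def session_update_event(voice: str = "alloy") -> str:
    """JSON for a session.update event that sets the voice and registers a tool."""
    return json.dumps({
        "type": "session.update",
        "session": {"voice": voice, "tools": [ORDER_TOOL]},
    })

def response_create_event() -> str:
    """JSON for a response.create event asking the model to respond aloud."""
    return json.dumps({
        "type": "response.create",
        "response": {"modalities": ["audio", "text"]},
    })

# In a real client these strings would be sent over the open WebSocket
# (wss://api.openai.com/v1/realtime per the beta docs; verify before use).
print(session_update_event())
print(response_create_event())
```

When the model decides to use the tool, it emits a function-call event back over the same socket; the client executes `place_order` locally and returns the result, which is how a voice assistant can act on a user's request mid-conversation.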

Early feedback from developers has revealed some limitations, though. Voice output is currently limited to alloy, echo, and shimmer, and there have been complaints about response cutoffs, like those seen in ChatGPT's Advanced Voice Mode. This issue suggests that there is, in fact, another model guiding the course of conversations.

The Realtime API is now available in public beta for all paid developers, and audio capabilities will begin rolling out in the Chat Completions API in the coming weeks. Realtime API pricing covers both text and audio tokens; audio costs approximately $0.06 per minute of input and $0.24 per minute of output.
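Those per-minute figures make it easy to sketch a back-of-the-envelope cost estimate. The helper below uses only the audio rates quoted above, so it excludes text-token charges and should be read as a lower bound, not an official calculator.

```python
def audio_cost_usd(input_minutes: float, output_minutes: float) -> float:
    """Estimate audio cost from the launch rates quoted in coverage:
    ~$0.06 per minute of audio input, ~$0.24 per minute of audio output.
    Text-token charges are excluded, so this is a lower bound."""
    return round(input_minutes * 0.06 + output_minutes * 0.24, 2)

# A ten-minute call where the assistant speaks for roughly four minutes:
print(audio_cost_usd(10, 4))  # -> 1.56
```

At these rates a one-hour conversation split evenly between user and assistant would already run about $9, which is where the long-duration cost concerns below come from.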

However, some concerns have been raised about what this could mean for long-duration interactions. Developers have pointed out that because the model must refer back to the prior conversation with every response, costs can escalate quickly. This raises questions about whether the Realtime API is worth it, specifically for long conversations.

Final Thoughts  

The Realtime API launch is another major technological step forward for OpenAI, extending the capabilities of AI to drive innovation in voice.

As OpenAI continues to enhance its Realtime API and extend its audio capabilities, developers will have to weigh the benefits of richer voice interactions against the cost implications.

While the new features promise significant enhancements in user experience, addressing the current limitations and pricing concerns will be critical for long-term adoption. With further refinement, the Realtime API could set a new bar for voice-driven application development.


Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Tech sections to stay informed and up-to-date with our daily articles.