Set up a custom LLM WebSocket
Step-by-step guide on how to set up a custom LLM for your Millis AI voice agent
Using a WebSocket connection, your voice agent can interact with your custom LLM in real time, enabling dynamic and responsive conversations tailored to your specific requirements.
Sample Code
Below is sample code using a FastAPI WebSocket endpoint and asyncio. Make sure to use the AsyncOpenAI client so that the OpenAI call does not block the asyncio event loop during token streaming; this ensures each token is forwarded over the WebSocket as soon as it arrives.
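The sketch below assumes the voice agent sends a JSON payload containing the conversation history and a stream identifier, and expects streamed JSON responses with an end-of-stream flag. The field names (messages, stream_id, content, end_of_stream), the /llm-websocket path, and the model name are illustrative placeholders rather than the exact Millis AI message schema; adapt them to the protocol described in the Millis documentation.

```python
import json

from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from openai import AsyncOpenAI

app = FastAPI()
client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment


@app.websocket("/llm-websocket")
async def llm_websocket(ws: WebSocket):
    await ws.accept()
    try:
        while True:
            # Receive a request from the voice agent.
            # NOTE: "messages" and "stream_id" are assumed field names,
            # not necessarily the exact Millis AI schema.
            data = json.loads(await ws.receive_text())
            messages = data.get("messages", [])

            # Stream tokens from OpenAI. AsyncOpenAI keeps the event loop
            # free, so each chunk can be forwarded as soon as it arrives.
            stream = await client.chat.completions.create(
                model="gpt-4o-mini",
                messages=messages,
                stream=True,
            )
            async for chunk in stream:
                token = chunk.choices[0].delta.content
                if token:
                    await ws.send_text(json.dumps({
                        "stream_id": data.get("stream_id"),
                        "content": token,
                        "end_of_stream": False,
                    }))

            # Signal that the response is complete.
            await ws.send_text(json.dumps({
                "stream_id": data.get("stream_id"),
                "content": "",
                "end_of_stream": True,
            }))
    except WebSocketDisconnect:
        pass
```

You can run this server with, for example, `uvicorn main:app --host 0.0.0.0 --port 8000`, then point your agent's custom LLM setting at the resulting WebSocket URL (e.g. `wss://your-host/llm-websocket`); the exact configuration field is in the Millis AI dashboard or API settings.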