Azure.AI.VoiceLive (.NET)

Build Real-Time Voice AI Apps with Azure VoiceLive

Developers struggle to implement real-time voice AI features with proper authentication and event handling. This skill provides complete .NET SDK documentation for Azure VoiceLive, including secure authentication, WebSocket session management, and bidirectional audio streaming patterns.

Supports: Claude Code (CC), Codex
Security status: Safe
Quality score: 69 (Adequate)
1. Download the skill ZIP
2. Upload in Claude: go to Settings → Capabilities → Skills → Upload skill
3. Toggle on and start using
Test it

Using "Azure.AI.VoiceLive (.NET)". I want to build a voice assistant that can check the weather

Expected outcome:

  • VoiceLiveClient connects to Azure AI endpoint using DefaultAzureCredential
  • Session configured with voice modality and weather function definition
  • User speaks query, audio sent via WebSocket to Azure
  • Function call triggered, weather API response sent back to session
  • Assistant responds with spoken weather information
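A minimal C# sketch of that flow, using only the type names this page mentions (VoiceLiveClient, VoiceLiveSessionOptions, VoiceLiveFunctionDefinition, DefaultAzureCredential); the endpoint, model name, constructor signatures, and the StartSessionAsync method are assumptions that may differ in your SDK version:

```csharp
using Azure.AI.VoiceLive;
using Azure.Identity;

// Placeholders: substitute your own resource endpoint and model deployment.
var client = new VoiceLiveClient(
    new Uri("https://<your-resource>.cognitiveservices.azure.com"),
    new DefaultAzureCredential());

var options = new VoiceLiveSessionOptions
{
    Instructions = "You are a voice assistant that can report the weather.",
};

// Declare a weather function the model may call (JSON-schema parameters).
options.Tools.Add(new VoiceLiveFunctionDefinition("get_weather")
{
    Description = "Get current weather for a city",
    Parameters = BinaryData.FromString(
        """{ "type": "object", "properties": { "city": { "type": "string" } }, "required": ["city"] }"""),
});

// Assumption: StartSessionAsync opens the WebSocket session for a given model.
await using VoiceLiveSession session =
    await client.StartSessionAsync("gpt-4o", options);
```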

Using "Azure.AI.VoiceLive (.NET)". How do I handle errors in VoiceLive sessions

Expected outcome:

  • SessionUpdateError events contain error details
  • Cancellation failed errors can be safely ignored
  • Authentication errors require credential verification
  • Network errors should trigger session reconnection logic
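The points above can be sketched as one event loop. SessionUpdateError is named on this page; GetUpdatesAsync and the shape of the error payload are assumptions:

```csharp
// Assumption: the session exposes an async stream of SessionUpdate events.
await foreach (SessionUpdate update in session.GetUpdatesAsync())
{
    if (update is SessionUpdateError error)
    {
        // Cancellation-failed errors are benign per the notes above; skip them.
        Console.Error.WriteLine($"VoiceLive error: {error}");

        // Authentication errors: verify your credential and role assignment.
        // Network errors: dispose this session and run your reconnect logic here.
    }
}
```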

Security Audit

Safe
v1 • 2/24/2026

Static analysis scanned 0 files with 0 lines and detected no security issues. This is a documentation-only skill (SKILL.md) providing guidance for using the Azure AI VoiceLive SDK. No executable code, network calls, or file system access patterns are present in the skill itself. The skill recommends secure authentication practices using DefaultAzureCredential.

  • Files scanned: 0
  • Lines analyzed: 0
  • Findings: 0
  • Total audits: 1

No security issues found.
Audited by: claude

Quality Score

  • Architecture: 38
  • Maintainability: 100
  • Content: 87
  • Community: 31
  • Security: 100
  • Spec Compliance: 74

What You Can Build

Voice Assistant Development

Build conversational voice assistants that process speech in real-time and respond with synthesized audio and text.

Real-Time Speech-to-Speech Translation

Create applications that capture voice input, process it through AI models, and output translated speech with minimal latency.

Voice-Enabled Chatbots

Integrate natural voice interaction into existing chatbot systems using Azure AI VoiceLive for hands-free user experiences.

Try These Prompts

Basic Voice Session Setup
Help me create a basic Azure VoiceLive session in .NET. I need to authenticate with DefaultAzureCredential, configure the session with text and audio modalities, and handle incoming audio events.
Function Calling Configuration
Show me how to define and handle function calls in Azure VoiceLive. I want to add a weather lookup function that the voice assistant can call during conversations.
Custom Voice and Turn Detection
Configure Azure VoiceLive with a custom neural voice and semantic voice activity detection. Set appropriate silence duration and threshold values for natural conversation flow.
Full Voice Assistant Implementation
Create a complete real-time voice assistant example using Azure VoiceLive SDK. Include authentication, session management, event handling loop, error handling, and function calling for external APIs.

Best Practices

  • Use DefaultAzureCredential for authentication instead of hardcoded API keys
  • Configure both Text and Audio modalities for complete voice assistant functionality
  • Always wrap VoiceLiveSession in a using statement for proper resource disposal
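The first and last practices can be sketched together; everything beyond the names this page mentions (DefaultAzureCredential, VoiceLiveClient, VoiceLiveSession) is an assumption:

```csharp
using Azure.AI.VoiceLive;
using Azure.Identity;

// Managed identity / developer credential instead of a hardcoded key.
var client = new VoiceLiveClient(endpoint, new DefaultAzureCredential());

// Assumption: StartSessionAsync returns the disposable session object.
await using (VoiceLiveSession session = await client.StartSessionAsync("gpt-4o"))
{
    // ... event-handling loop ...
} // WebSocket and audio buffers are released here, even if an exception is thrown
```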

Avoid

  • Do not hardcode API keys in source code; use environment variables or managed identity
  • Do not omit error handling for SessionUpdateError events
  • Do not skip disposing VoiceLiveSession; always use a using statement

Frequently Asked Questions

What Azure resources do I need to use the VoiceLive SDK?
You need an Azure AI Services resource with VoiceLive enabled. Assign the Cognitive Services User role for managed identity authentication, or obtain an API key from the Azure Portal.
Does VoiceLive support custom voices?
Yes, VoiceLive supports Azure Standard voices, Azure HD voices, and Azure Custom voices. Use AzureStandardVoice for built-in voices or AzureCustomVoice with an endpoint ID for custom neural voices.
What audio format does VoiceLive require?
VoiceLive uses 16-bit PCM mono audio at a 24 kHz sample rate. Set InputAudioFormat and OutputAudioFormat to Pcm16 in your session configuration.
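In session options this might look as follows; the property names come from the answer above, but treat the exact enum spelling as an assumption for your SDK version:

```csharp
var options = new VoiceLiveSessionOptions
{
    // 16-bit PCM, 24 kHz, mono, in both directions.
    InputAudioFormat = InputAudioFormat.Pcm16,
    OutputAudioFormat = OutputAudioFormat.Pcm16,
};
```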
Can VoiceLive handle multiple languages?
Yes, VoiceLive supports multiple languages through voice selection. Specify language-region voice identifiers like en-US-AvaNeural or configure custom voices for other languages.
How does function calling work in VoiceLive?
Define a VoiceLiveFunctionDefinition with JSON-schema parameters, add it to the session options' Tools collection, then handle SessionUpdateResponseFunctionCallArgumentsDone events to process calls and send FunctionCallOutputItem responses.
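A sketch of that sequence; the event and item type names come from the answer above, while the helper (CallWeatherApiAsync) and session methods (GetUpdatesAsync, AddItemAsync, StartResponseAsync) are illustrative assumptions:

```csharp
await foreach (SessionUpdate update in session.GetUpdatesAsync())
{
    if (update is SessionUpdateResponseFunctionCallArgumentsDone call)
    {
        // Run the real function with the model-provided JSON arguments.
        string result = await CallWeatherApiAsync(call.Arguments); // hypothetical helper

        // Send the result back so the model can speak the answer.
        await session.AddItemAsync(new FunctionCallOutputItem(call.CallId, result));
        await session.StartResponseAsync();
    }
}
```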
What is the difference between VoiceLive and the Cognitive Services Speech SDK?
VoiceLive provides real-time bidirectional voice AI with GPT-4o models for conversational assistants. Cognitive Services Speech SDK handles speech-to-text and text-to-speech separately without integrated AI reasoning.

Developer Details

File structure

📄 SKILL.md