diff --git a/analytics_sentry_cf02a907.txt b/analytics_sentry_cf02a907.txt new file mode 100644 index 0000000000000000000000000000000000000000..732668efb551213542ab1f630536860cdeb0ed00 --- /dev/null +++ b/analytics_sentry_cf02a907.txt @@ -0,0 +1,5 @@ +URL: https://docs.pipecat.ai/server/services/analytics/sentry#configuration +Title: Sentry Metrics - Pipecat +================================================== + +Overview: SentryMetrics extends FrameProcessorMetrics to provide performance monitoring integration with Sentry. It tracks Time to First Byte (TTFB) and processing duration metrics for frame processors.
Installation: To use Sentry metrics, install the Sentry SDK:

    pip install "pipecat-ai[sentry]"

Configuration: Sentry must be initialized in your application before metrics will be collected:

    import sentry_sdk

    sentry_sdk.init(
        dsn="your-sentry-dsn",
        traces_sample_rate=1.0,
    )

Usage Example:

    import os

    import sentry_sdk

    from pipecat.audio.vad.silero import SileroVADAnalyzer
    from pipecat.pipeline.pipeline import Pipeline
    from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
    from pipecat.processors.metrics.sentry import SentryMetrics
    from pipecat.services.elevenlabs.tts import ElevenLabsTTSService
    from pipecat.services.openai.llm import OpenAILLMService
    from pipecat.transports.services.daily import DailyParams, DailyTransport

    async def create_metrics_pipeline():
        sentry_sdk.init(
            dsn="your-sentry-dsn",
            traces_sample_rate=1.0,
        )

        transport = DailyTransport(
            room_url,
            token,
            "Chatbot",
            DailyParams(
                audio_out_enabled=True,
                audio_in_enabled=True,
                video_out_enabled=False,
                vad_analyzer=SileroVADAnalyzer(),
                transcription_enabled=True,
            ),
        )

        tts = ElevenLabsTTSService(
            api_key=os.getenv("ELEVENLABS_API_KEY"),
            metrics=SentryMetrics(),
        )

        llm = OpenAILLMService(
            api_key=os.getenv("OPENAI_API_KEY"),
            model="gpt-4o",
            metrics=SentryMetrics(),
        )

        messages = [
            {
                "role": "system",
                "content": "You are Chatbot, a friendly, helpful robot. Your goal is to demonstrate your capabilities in a succinct way. Your output will be converted to audio so don't include special characters in your answers. Respond to what the user said in a creative and helpful way, but keep your responses brief. Start by introducing yourself. Keep all your responses to 12 words or fewer.",
            },
        ]

        context = OpenAILLMContext(messages)
        context_aggregator = llm.create_context_aggregator(context)

        # Use in pipeline
        pipeline = Pipeline([
            transport.input(),
            context_aggregator.user(),
            llm,
            tts,
            transport.output(),
            context_aggregator.assistant(),
        ])

Transaction Information: Each transaction includes the operation type (ttfb or processing), a description with the processor name, a start timestamp, an end timestamp, and a unique transaction ID.

Fallback Behavior: If Sentry is not available (not installed or not initialized), warning logs are generated, metric methods execute without error, and no data is sent to Sentry.

Notes: Requires the Sentry SDK to be installed and initialized. Thread-safe metric collection. Automatic transaction management. Supports selective TTFB reporting. Integrates with Sentry's performance monitoring. Provides detailed timing information. Maintains timing data even when Sentry is unavailable.
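The fallback behavior above (warn, keep working, export nothing) is a common pattern for optional dependencies. Here is a minimal, self-contained sketch of that pattern, independent of Pipecat's actual SentryMetrics implementation; the SafeMetrics class and its method names are illustrative, not Pipecat APIs:

```python
import logging

logger = logging.getLogger("metrics")

try:
    import sentry_sdk  # optional dependency
    _SENTRY = True
except ImportError:
    _SENTRY = False


class SafeMetrics:
    """Record timing metrics, degrading gracefully when Sentry is unavailable."""

    def __init__(self, available: bool = _SENTRY):
        self.available = available
        self.recorded = []  # kept locally so behavior is observable either way

    def record(self, op: str, duration_ms: float) -> bool:
        """Return True if the metric was exported, False if only logged."""
        self.recorded.append((op, duration_ms))
        if not self.available:
            # Metric methods still execute without error; they just warn.
            logger.warning("Sentry unavailable; %s metric not exported", op)
            return False
        # Export would happen here, e.g. sentry_sdk.start_transaction(op=op, ...)
        return True
```

Either way, callers never need to guard their own metric calls: the no-op path absorbs the missing dependency.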
\ No newline at end of file diff --git a/android_api-reference_d0f255ab.txt b/android_api-reference_d0f255ab.txt new file mode 100644 index 0000000000000000000000000000000000000000..d3b2008607caefc2612370da832fe61dd370c908 --- /dev/null +++ b/android_api-reference_d0f255ab.txt @@ -0,0 +1,5 @@ +URL: https://docs.pipecat.ai/client/android/api-reference#content +Title: All modules +================================================== + +All modules: pipecat-client-android, pipecat-transport-daily, pipecat-transport-gemini-live-websocket, pipecat-transport-openai-realtime-webrtc. © 2025 Copyright. Generated by dokka. \ No newline at end of file diff --git a/android_introduction_7ffdd137.txt b/android_introduction_7ffdd137.txt new file mode 100644 index 0000000000000000000000000000000000000000..22228deac08702061faf01b6512439e656cb5e96 --- /dev/null +++ b/android_introduction_7ffdd137.txt @@ -0,0 +1,5 @@ +URL: https://docs.pipecat.ai/client/android/introduction#installation +Title: SDK Introduction - Pipecat +================================================== + +The Pipecat Android SDK provides a Kotlin implementation for building voice and multimodal AI applications on Android.
It handles: real-time audio and video streaming; bot communication and state management; media device handling; configuration management; event handling.

Installation: Add the dependency for your chosen transport to your build.gradle file. For example, to use the Daily transport:

    implementation "ai.pipecat:daily-transport:0.3.3"

Example: Here's a simple example using Daily as the transport layer. Note that the clientConfig is optional and depends on what is required by the bot backend.

    val clientConfig = listOf(
        ServiceConfig(
            service = "llm",
            options = listOf(
                Option("model", "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"),
                Option("messages", Value.Array(
                    Value.Object(
                        "role" to Value.Str("system"),
                        "content" to Value.Str("You are a helpful assistant.")
                    )
                ))
            )
        ),
        ServiceConfig(
            service = "tts",
            options = listOf(
                Option("voice", "79a125e8-cd45-4c13-8a67-188112f4dd22")
            )
        )
    )

    val callbacks = object : RTVIEventCallbacks() {
        override fun onBackendError(message: String) {
            Log.e(TAG, "Error from backend: $message")
        }
    }

    val options = RTVIClientOptions(
        services = listOf(ServiceRegistration("llm", "together"), ServiceRegistration("tts", "cartesia")),
        params = RTVIClientParams(baseUrl = "", config = clientConfig)
    )

    val client = RTVIClient(DailyTransport.Factory(context), callbacks, options)

    client.connect().await()  // Using Coroutines
    // Or using callbacks:
    // client.connect().withCallback { /* handle completion */ }

Documentation: API Reference (complete SDK API documentation). Daily Transport (WebRTC implementation using Daily). OpenAIRealTimeWebRTCTransport API Reference.
\ No newline at end of file diff --git a/audio_audio-buffer-processor_3a034d3c.txt b/audio_audio-buffer-processor_3a034d3c.txt new file mode 100644 index 0000000000000000000000000000000000000000..b12b9f9242f37cfcae4443e1c978257483e93ed3 --- /dev/null +++ b/audio_audio-buffer-processor_3a034d3c.txt @@ -0,0 +1,5 @@ +URL: https://docs.pipecat.ai/server/utilities/audio/audio-buffer-processor#constructor +Title: AudioBufferProcessor - Pipecat +================================================== + +Overview: The AudioBufferProcessor captures and buffers audio frames from both input (user) and output (bot) sources during conversations. It provides synchronized audio streams with configurable sample rates, supports both mono and stereo output, and offers flexible event handlers for various audio processing workflows.

Constructor:

    AudioBufferProcessor(
        sample_rate=None,
        num_channels=1,
        buffer_size=0,
        enable_turn_audio=False,
        **kwargs
    )

Parameters: sample_rate (Optional[int], default None): The desired output sample rate in Hz.
If None, uses the transport's sample rate from the StartFrame.

num_channels (int, default 1): Number of output audio channels. 1: mono output (user and bot audio are mixed together). 2: stereo output (user audio on the left channel, bot audio on the right channel).

buffer_size (int, default 0): Buffer size in bytes that triggers audio data events. 0: events only trigger when recording stops. >0: events trigger whenever the buffer reaches this size (useful for chunked processing).

enable_turn_audio (bool, default False): Whether to enable per-turn audio event handlers (on_user_turn_audio_data and on_bot_turn_audio_data).

Properties:

    @property
    def sample_rate(self) -> int

The current sample rate of the audio processor in Hz.

    @property
    def num_channels(self) -> int

The number of channels in the audio output (1 for mono, 2 for stereo).

Methods:

    async def start_recording()

Start recording audio from both user and bot sources. Initializes recording state and resets audio buffers.

    async def stop_recording()

Stop recording and trigger final audio data handlers with any remaining buffered audio.

    def has_audio() -> bool

Check if both user and bot audio buffers contain data. Returns True if both buffers contain audio data.

Event Handlers: The processor supports multiple event handlers for different audio processing workflows. Register handlers using the @processor.event_handler() decorator.

on_audio_data: Triggered when buffer_size is reached or recording stops, providing merged audio.
    @audiobuffer.event_handler("on_audio_data")
    async def on_audio_data(buffer, audio: bytes, sample_rate: int, num_channels: int):
        # Handle merged audio data
        pass

Parameters: buffer: the AudioBufferProcessor instance. audio: merged audio data (format depends on the num_channels setting). sample_rate: sample rate in Hz. num_channels: number of channels (1 or 2).

on_track_audio_data: Triggered alongside on_audio_data, providing separate user and bot audio tracks.

    @audiobuffer.event_handler("on_track_audio_data")
    async def on_track_audio_data(buffer, user_audio: bytes, bot_audio: bytes, sample_rate: int, num_channels: int):
        # Handle separate audio tracks
        pass

Parameters: buffer: the AudioBufferProcessor instance. user_audio: raw user audio bytes (always mono). bot_audio: raw bot audio bytes (always mono). sample_rate: sample rate in Hz. num_channels: always 1 for individual tracks.

on_user_turn_audio_data: Triggered when a user speaking turn ends. Requires enable_turn_audio=True.

    @audiobuffer.event_handler("on_user_turn_audio_data")
    async def on_user_turn_audio_data(buffer, audio: bytes, sample_rate: int, num_channels: int):
        # Handle user turn audio
        pass

Parameters: buffer: the AudioBufferProcessor instance. audio: audio data from the user's speaking turn. sample_rate: sample rate in Hz. num_channels: always 1 (mono).

on_bot_turn_audio_data: Triggered when a bot speaking turn ends. Requires enable_turn_audio=True.
    @audiobuffer.event_handler("on_bot_turn_audio_data")
    async def on_bot_turn_audio_data(buffer, audio: bytes, sample_rate: int, num_channels: int):
        # Handle bot turn audio
        pass

Parameters: buffer: the AudioBufferProcessor instance. audio: audio data from the bot's speaking turn. sample_rate: sample rate in Hz. num_channels: always 1 (mono).

Audio Processing Features: Automatic resampling (converts incoming audio to the specified sample rate). Buffer synchronization (aligns user and bot audio streams temporally). Silence insertion (fills gaps in non-continuous audio streams to maintain timing). Turn tracking (monitors speaking turns when enable_turn_audio=True).

Integration Notes:

STT Audio Passthrough: If using an STT service in your pipeline, enable audio passthrough to make audio available to the AudioBufferProcessor (audio_passthrough is enabled by default):

    stt = DeepgramSTTService(
        api_key=os.getenv("DEEPGRAM_API_KEY"),
        audio_passthrough=True,
    )

Pipeline Placement: Add the AudioBufferProcessor after transport.output() to capture both user and bot audio:

    pipeline = Pipeline([
        transport.input(),
        # ... other processors ...
        transport.output(),
        audiobuffer,  # Place after audio output
        # ... remaining processors ...
    ])
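The buffer_size semantics described earlier (fire an event for every full chunk when buffer_size > 0, or only once at stop when it is 0) can be sketched as a small standalone accumulator. ChunkedAudioBuffer is a hypothetical name for illustration, not part of Pipecat:

```python
from typing import Callable


class ChunkedAudioBuffer:
    """Accumulate PCM bytes and invoke a callback per full chunk.

    buffer_size == 0 means "only flush when recording stops", matching the
    documented AudioBufferProcessor behavior.
    """

    def __init__(self, buffer_size: int, on_audio: Callable[[bytes], None]):
        self._size = buffer_size
        self._on_audio = on_audio
        self._buf = bytearray()

    def append(self, audio: bytes) -> None:
        """Add audio; emit one callback per complete buffer_size chunk."""
        self._buf.extend(audio)
        while self._size > 0 and len(self._buf) >= self._size:
            chunk, self._buf = bytes(self._buf[: self._size]), self._buf[self._size :]
            self._on_audio(chunk)

    def stop(self) -> None:
        """Flush whatever remains, mirroring stop_recording()."""
        if self._buf:
            self._on_audio(bytes(self._buf))
            self._buf = bytearray()
```

With buffer_size=4, appending six bytes yields one four-byte chunk immediately and leaves two bytes pending until the next append or stop().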
\ No newline at end of file diff --git a/audio_krisp-filter_06d586db.txt b/audio_krisp-filter_06d586db.txt new file mode 100644 index 0000000000000000000000000000000000000000..32e39339f3061b8345a09bc0281a72c31f4d801e --- /dev/null +++ b/audio_krisp-filter_06d586db.txt @@ -0,0 +1,5 @@ +URL: https://docs.pipecat.ai/server/utilities/audio/krisp-filter#param-model-path +Title: KrispFilter - Pipecat +================================================== + +Overview: KrispFilter is an audio processor that reduces background noise in real-time audio streams using Krisp AI technology. It inherits from BaseAudioFilter and processes audio frames to improve audio quality by removing unwanted noise. To use Krisp, you need a Krisp SDK license. Get started at Krisp.ai. Looking for help getting started with Krisp and Pipecat? Check out our Krisp noise cancellation guide.

Installation: The Krisp filter requires additional dependencies:

    pip install "pipecat-ai[krisp]"

Environment Variables: You need to provide the path to the Krisp model.
This can be done either by setting the KRISP_MODEL_PATH environment variable or by setting model_path in the constructor.

Constructor Parameters: sample_type (str, default "PCM_16"): audio sample type format. channels (int, default 1): number of audio channels. model_path (str, default None): path to the Krisp model file; set model_path directly, or set the KRISP_MODEL_PATH environment variable to the model file path.

Input Frames: FilterEnableFrame is a control frame to toggle filtering on/off:

    from pipecat.frames.frames import FilterEnableFrame

    # Disable noise reduction
    await task.queue_frame(FilterEnableFrame(False))

    # Re-enable noise reduction
    await task.queue_frame(FilterEnableFrame(True))

Usage Example:

    from pipecat.audio.filters.krisp_filter import KrispFilter

    transport = DailyTransport(
        room_url,
        token,
        "Respond bot",
        DailyParams(
            audio_in_filter=KrispFilter(),  # Enable Krisp noise reduction
            audio_in_enabled=True,
            audio_out_enabled=True,
            vad_analyzer=SileroVADAnalyzer(),
        ),
    )

Notes: Requires the Krisp SDK and model file to be available. Supports real-time audio processing. Supports additional features like background voice removal. Handles PCM_16 audio format. Thread-safe for pipeline processing. Can be dynamically enabled/disabled. Maintains audio quality while reducing noise. Efficient processing for low latency.
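The FilterEnableFrame toggle shown above boils down to a filter that holds an enabled flag and passes audio through untouched while disabled. A minimal sketch of that pattern (this is not the Krisp SDK, which requires a license and model file; ToggleableFilter and the noise-reduction callable are stand-ins):

```python
class ToggleableFilter:
    """Apply a noise-reduction function to audio only while enabled."""

    def __init__(self, reduce_noise):
        self._reduce_noise = reduce_noise
        self._enabled = True

    def set_enabled(self, enabled: bool) -> None:
        # In Pipecat this flag is driven by FilterEnableFrame control frames.
        self._enabled = enabled

    def process(self, audio: bytes) -> bytes:
        # Disabled filters are transparent: audio passes through unchanged.
        return self._reduce_noise(audio) if self._enabled else audio
```

This is why disabling the filter mid-call is safe: downstream processors keep receiving audio either way.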
\ No newline at end of file diff --git a/audio_krisp-filter_78ecbba3.txt b/audio_krisp-filter_78ecbba3.txt new file mode 100644 index 0000000000000000000000000000000000000000..90dd80b1f9645c7073c624a1cf50e2029e0b8223 --- /dev/null +++ b/audio_krisp-filter_78ecbba3.txt @@ -0,0 +1,5 @@ +URL: https://docs.pipecat.ai/server/utilities/audio/krisp-filter#audio-flow +Title: KrispFilter - Pipecat +==================================================
\ No newline at end of file diff --git a/audio_silero-vad-analyzer_beb54155.txt b/audio_silero-vad-analyzer_beb54155.txt new file mode 100644 index 0000000000000000000000000000000000000000..9e88f6af6e21797e1c40521280a7241f7d51673a --- /dev/null +++ b/audio_silero-vad-analyzer_beb54155.txt @@ -0,0 +1,5 @@ +URL: https://docs.pipecat.ai/server/utilities/audio/silero-vad-analyzer +Title: SileroVADAnalyzer - Pipecat +================================================== + +Overview: SileroVADAnalyzer is a Voice Activity Detection (VAD) analyzer that uses the Silero VAD ONNX model to detect speech in audio streams. It provides high-accuracy speech detection with efficient processing using the ONNX runtime.

Installation: The Silero VAD analyzer requires additional dependencies:

    pip install "pipecat-ai[silero]"

Constructor Parameters: sample_rate (int, default None): audio sample rate in Hz. Must be either 8000 or 16000.
params (VADParams, default VADParams()): Voice Activity Detection parameters object with the following properties. confidence (float, default 0.7): confidence threshold for speech detection; higher values make detection more strict; must be between 0 and 1. start_secs (float, default 0.2): time in seconds that speech must be detected before transitioning to the SPEAKING state. stop_secs (float, default 0.8): time in seconds of silence required before transitioning back to the QUIET state. min_volume (float, default 0.6): minimum audio volume threshold for speech detection; must be between 0 and 1.

Usage Example:

    transport = DailyTransport(
        room_url,
        token,
        "Respond bot",
        DailyParams(
            audio_in_enabled=True,
            audio_out_enabled=True,
            vad_analyzer=SileroVADAnalyzer(params=VADParams(stop_secs=0.5)),
        ),
    )

Technical Details:

Sample Rate Requirements: The analyzer supports two sample rates: 8000 Hz (256 samples per frame) and 16000 Hz (512 samples per frame).

Model Management: Uses the ONNX runtime for efficient inference. Automatically resets model state every 5 seconds to manage memory. Runs on CPU by default for consistent performance. Includes a built-in model file.

Notes: High-accuracy speech detection. Efficient ONNX-based processing. Automatic memory management. Thread-safe for pipeline processing. Built-in model file included. CPU-optimized inference. Supports 8 kHz and 16 kHz audio.
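The start_secs and stop_secs parameters implement hysteresis: speech must persist for start_secs before the analyzer reports SPEAKING, and silence must persist for stop_secs before it returns to QUIET. A simplified per-frame sketch of that debouncing logic, assuming fixed-length frames (illustrative only; this is not Silero's actual implementation):

```python
class SimpleVADState:
    """Debounce per-frame speech probabilities into SPEAKING/QUIET states."""

    def __init__(self, confidence=0.7, start_secs=0.2, stop_secs=0.8, frame_secs=0.032):
        self._confidence = confidence
        # Convert durations to whole frame counts (at least one frame each).
        self._start_frames = max(1, round(start_secs / frame_secs))
        self._stop_frames = max(1, round(stop_secs / frame_secs))
        self._speech_run = 0
        self._silence_run = 0
        self.state = "QUIET"

    def process(self, speech_prob: float) -> str:
        """Feed one frame's speech probability; return the debounced state."""
        if speech_prob >= self._confidence:
            self._speech_run += 1
            self._silence_run = 0
        else:
            self._silence_run += 1
            self._speech_run = 0
        if self.state == "QUIET" and self._speech_run >= self._start_frames:
            self.state = "SPEAKING"
        elif self.state == "SPEAKING" and self._silence_run >= self._stop_frames:
            self.state = "QUIET"
        return self.state
```

The hysteresis is why a brief cough doesn't flip the state to SPEAKING, and a short pause mid-sentence doesn't flip it back to QUIET.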
\ No newline at end of file diff --git a/audio_silero-vad-analyzer_c604e722.txt b/audio_silero-vad-analyzer_c604e722.txt new file mode 100644 index 0000000000000000000000000000000000000000..ec60dced058b31e27f9423965f9d15fc8c421709 --- /dev/null +++ b/audio_silero-vad-analyzer_c604e722.txt @@ -0,0 +1,5 @@ +URL: https://docs.pipecat.ai/server/utilities/audio/silero-vad-analyzer#notes +Title: SileroVADAnalyzer - Pipecat +==================================================
\ No newline at end of file diff --git a/base-classes_media_68c24817.txt b/base-classes_media_68c24817.txt new file mode 100644 index 0000000000000000000000000000000000000000..58446ea8e99f51196b4fec864379c2c5dead36a7 --- /dev/null +++ b/base-classes_media_68c24817.txt @@ -0,0 +1,5 @@ +URL: https://docs.pipecat.ai/server/base-classes/media#real-time-processing +Title: Overview - Pipecat +================================================== + +Pipecat is an open source Python framework that handles the complex orchestration of AI services, network transport, audio processing, and multimodal interactions. “Multimodal” means you can use any combination of audio, video, images, and/or text in your interactions. And “real-time” means that things are happening quickly enough that it feels conversational: a “back-and-forth” with a bot, not submitting a query and waiting for results.

What You Can Build: Voice Assistants (natural, real-time conversations with AI using speech recognition and synthesis). Interactive Agents (personal coaches and meeting assistants that can understand context and provide guidance). Multimodal Apps (applications that combine voice, video, images, and text for rich interactions). Creative Tools (storytelling experiences and social companions that engage users). Business Solutions (customer intake flows and support bots for automated business processes). Complex Flows (structured conversations using Pipecat Flows for managing complex interactions).

How It Works: The flow of interactions in a Pipecat application is typically straightforward: the bot says something, the user says something, the bot says something, the user says something. This continues until the conversation naturally ends.
While this flow seems simple, making it feel natural requires sophisticated real-time processing.

Real-time Processing: Pipecat's pipeline architecture handles both simple voice interactions and complex multimodal processing. Let's look at how data flows through the system:

Voice app:
1. Send Audio: transmit and capture streamed audio from the user
2. Transcribe Speech: convert speech to text as the user is talking
3. Process with LLM: generate responses using a large language model
4. Convert to Speech: transform text responses into natural speech
5. Play Audio: stream the audio response back to the user

Multimodal app:
1. Send Audio and Video: transmit and capture audio, video, and image inputs simultaneously
2. Process Streams: handle multiple input streams in parallel
3. Model Processing: send combined inputs to multimodal models (like GPT-4V)
4. Generate Outputs: create various outputs (text, images, audio, etc.)
5. Coordinate Presentation: synchronize and present multiple output types

In both cases, Pipecat processes responses as they stream in, handles multiple input/output modalities concurrently, manages resource allocation and synchronization, and coordinates parallel processing tasks. This architecture creates fluid, natural interactions without noticeable delays, whether you're building a simple voice assistant or a complex multimodal application. Pipecat's pipeline architecture is particularly valuable for managing the complexity of real-time, multimodal interactions, ensuring smooth data flow and proper synchronization regardless of the input/output types involved.
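The five voice-app steps can be sketched as one conversational turn wired through interchangeable stages; the lambdas below are stand-ins for real STT/LLM/TTS services, not Pipecat APIs:

```python
def run_voice_turn(audio_in: bytes, stt, llm, tts) -> bytes:
    """One turn: user audio -> transcript -> LLM response -> synthesized audio."""
    transcript = stt(audio_in)   # 2. transcribe speech as it arrives
    response = llm(transcript)   # 3. generate a response with an LLM
    return tts(response)         # 4./5. convert to speech and stream back


# Stub services standing in for real STT/LLM/TTS integrations.
audio_out = run_voice_turn(
    b"raw-pcm-from-user",
    stt=lambda audio: "hello",
    llm=lambda text: f"You said: {text}",
    tts=lambda text: text.encode(),
)
```

In a real Pipecat pipeline these stages run incrementally on streamed frames rather than on a completed turn, which is what keeps latency conversational.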
Pipecat handles all this complexity for you, letting you focus on building your application rather than managing the underlying infrastructure.

Next Steps

Ready to build your first Pipecat application?

- Installation & Setup: Prepare your environment and install required dependencies
- Quickstart: Build and run your first Pipecat application
- Core Concepts: Learn about pipelines, frames, and real-time processing
- Use Cases: Explore example implementations and patterns

Join Our Community

Discord Community: Connect with other developers, share your projects, and get support from the Pipecat team.

URL: https://docs.pipecat.ai/server/base-classes/speech#next-steps
Title: Overview - Pipecat
==================================================
URL: https://docs.pipecat.ai/client/c++/transport#dependencies
Title: Daily WebRTC Transport - Pipecat
==================================================

The Daily transport implementation enables real-time audio and video communication in your Pipecat C++ applications using Daily’s WebRTC infrastructure.
Dependencies

Daily Core C++ SDK
Download the Daily Core C++ SDK from the available releases for your platform and set:

export DAILY_CORE_PATH=/path/to/daily-core-sdk

Pipecat C++ SDK
Build the base Pipecat C++ SDK first and set:

export PIPECAT_SDK_PATH=/path/to/pipecat-client-cxx

Building

First, set a few environment variables:

PIPECAT_SDK_PATH=/path/to/pipecat-client-cxx
DAILY_CORE_PATH=/path/to/daily-core-sdk

Then, build the project.

Linux/macOS:

cmake . -G Ninja -Bbuild -DCMAKE_BUILD_TYPE=Release
ninja -C build

Windows:

# Initialize Visual Studio environment
"C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Auxiliary\Build\vcvarsall.bat" amd64
# Configure and build
cmake . -Bbuild --preset vcpkg
cmake --build build --config Release

Examples

- Basic Client: Simple C++ implementation example
- Audio Client: C++ client with PortAudio support
- Node.js Server: Example Node.js proxy implementation

URL: https://docs.pipecat.ai/client/rtvi-standard#metrics-and-monitoring
Title: The RTVI Standard - Pipecat
==================================================
The RTVI (Real-Time Voice and Video Inference) standard defines a set of message types and structures sent between clients and servers. It is designed to facilitate real-time interactions between clients and AI applications that require voice, video, and text communication. It provides a consistent framework for building applications that can communicate with AI models and the backends running those models in real time. This page documents version 1.0 of the RTVI standard, released in June 2025.

Key Features

- Connection Management: RTVI provides a flexible connection model that allows clients to connect to AI services and coordinate state.
- Transcriptions: The standard includes built-in support for real-time transcription of audio streams.
- Client-Server Messaging: The standard defines a messaging protocol for sending and receiving messages between clients and servers, allowing for efficient communication of requests and responses.
- Advanced LLM Interactions: The standard supports advanced interactions with large language models (LLMs), including context management, function call handling, and search results.
- Service-Specific Insights: RTVI supports events that provide insight into the input/output and state of typical services in speech-to-speech workflows.
- Metrics and Monitoring: RTVI provides mechanisms for collecting metrics and monitoring the performance of server-side services.

Terms

- Client: The front-end application or user interface that interacts with the RTVI server.
- Server: The back-end service that runs the AI framework and processes requests from the client.
- User: The end user interacting with the client application.
- Bot: The AI interacting with the user; technically an amalgamation of a large language model (LLM) and a text-to-speech (TTS) service.

RTVI Message Format

The messages defined as part of the RTVI protocol adhere to the following format:

{
  "id": string,
  "label": "rtvi-ai",
  "type": string,
  "data": unknown
}

- id (string): A unique identifier for the message, used to correlate requests and responses.
- label (string, required, default "rtvi-ai"): A label that identifies this message as an RTVI message. This field is required and should always be set to 'rtvi-ai'.
- type (string, required): The type of message being sent. This field is required and should be set to one of the predefined RTVI message types listed below.
- data (unknown): The payload of the message, which can be any data structure relevant to the message type.

RTVI Message Types

Following the above format, this section describes the various message types defined by the RTVI standard. Each message type has a specific purpose and structure, allowing for clear communication between clients and servers. Each message type below includes either a 🤖 or 🏄 emoji to denote whether the message is sent from the bot (🤖) or the client (🏄).

Connection Management

client-ready 🏄
Indicates that the client is ready to receive messages and interact with the server. Typically sent after the transport media channels have connected.
type: 'client-ready'
data:
- version (string): The version of the RTVI standard being used. This is useful for ensuring compatibility between client and server implementations.
- about (AboutClient object): An object containing information about the client, such as its RTVI version, client library, and any other relevant metadata.
The AboutClient object follows this structure:

- library (string, required)
- library_version (string)
- platform (string)
- platform_version (string)
- platform_details (any): Any platform-specific details that may be relevant to the server. This could include information about the browser, operating system, or any other environment-specific data needed by the server. This field is optional and open-ended, so please be mindful of the data you include here and any security concerns that may arise from exposing sensitive or personally identifiable information.

bot-ready 🤖
Indicates that the bot is ready to receive messages and interact with the client. Typically sent after the transport media channels have connected.
type: 'bot-ready'
data:
- version (string): The version of the RTVI standard being used. This is useful for ensuring compatibility between client and server implementations.
- about (any, optional): An object containing information about the server or bot. Its structure and value are both undefined by default. This provides flexibility to include any relevant metadata your client may need to know about the server at connection time, without any built-in security concerns. Please be mindful of the data you include here and any security concerns that may arise from exposing sensitive information.

disconnect-bot 🏄
Indicates that the client wishes to disconnect from the bot. Typically used when the client is shutting down or no longer needs to interact with the bot. Note: disconnects should happen automatically when either the client or bot disconnects from the transport, so this message is intended for the case where a client may want to remain connected to the transport but no longer wishes to interact with the bot.
type: 'disconnect-bot'
data: undefined

error 🤖
Indicates an error occurred during bot initialization or runtime.
type: 'error'
data:
- message (string): Description of the error.
- fatal (boolean): Indicates if the error is fatal to the session.

Transcription

user-started-speaking 🤖
Emitted when the user begins speaking.
type: 'user-started-speaking'
data: None

user-stopped-speaking 🤖
Emitted when the user stops speaking.
type: 'user-stopped-speaking'
data: None

bot-started-speaking 🤖
Emitted when the bot begins speaking.
type: 'bot-started-speaking'
data: None

bot-stopped-speaking 🤖
Emitted when the bot stops speaking.
type: 'bot-stopped-speaking'
data: None

user-transcription 🤖
Real-time transcription of user speech, including both partial and final results.
type: 'user-transcription'
data:
- text (string): The transcribed text of the user.
- final (boolean): Indicates if this is a final transcription or a partial result.
- timestamp (string): The timestamp when the transcription was generated.
- user_id (string): Identifier for the user who spoke.

bot-transcription 🤖
Transcription of the bot’s speech. Note: this protocol currently does not match the user transcription format to support real-time timestamping for bot transcriptions. Rather, the event is typically sent for each sentence of the bot’s response. This difference is currently due to limitations in TTS services, which mostly do not support (or do not support well) accurate timing information. If/when this changes, this protocol may be updated to include the necessary timing information. For now, if you want to attempt real-time transcription to match your bot’s speaking, you can try using the bot-tts-text message type.
type: 'bot-transcription'
data:
- text (string): The transcribed text from the bot, typically aggregated at a per-sentence level.

Client-Server Messaging

server-message 🤖
An arbitrary message sent from the server to the client. This can be used for custom interactions or commands. This message may be coupled with the client-message message type to handle responses from the client.
type: 'server-message'
data (any): The data can be any JSON-serializable object, formatted according to your own specifications.

client-message 🏄
An arbitrary message sent from the client to the server. This can be used for custom interactions or commands. This message may be coupled with the server-response message type to handle responses from the server.
type: 'client-message'
data:
- t (string)
- d (unknown, optional)
The data payload should contain a t field indicating the type of message and an optional d field containing any custom, corresponding data needed for the message.

server-response 🤖
A message sent from the server to the client in response to a client-message. IMPORTANT: The id should match the id of the original client-message to correlate the response with the request.
type: 'server-response'
data:
- t (string)
- d (unknown, optional)
The data payload should contain a t field indicating the type of message and an optional d field containing any custom, corresponding data needed for the message.

error-response 🤖
Error response to a specific client message. IMPORTANT: The id should match the id of the original client-message to correlate the response with the request.
type: 'error-response'
data:
- error (string)

Advanced LLM Interactions

append-to-context 🏄
A message sent from the client to the server to append data to the context of the current LLM conversation. This is useful for providing text-based content for the user or augmenting the context for the assistant.
type: 'append-to-context'
data:
- role ("user" | "assistant"): The role the context should be appended to. Currently only supports "user" and "assistant".
- content (unknown): The content to append to the context. This can be any data structure the LLM understands.
- run_immediately (boolean, optional): Indicates whether the context should be run immediately after appending. Defaults to false. If set to false, the context will be appended but not executed until the next LLM run.
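Because RTVI messages are plain JSON-serializable objects, constructing one is straightforward. The sketch below builds an append-to-context message following the envelope and data fields described above; generating the id with uuid4 is an illustrative choice on our part, not something the standard mandates.

```python
import uuid

def make_append_to_context(role, content, run_immediately=False):
    """Build an RTVI 'append-to-context' message.

    Envelope fields (id, label, type, data) follow the RTVI message
    format; uuid4 for the id is an assumption, not part of the spec.
    """
    if role not in ("user", "assistant"):
        raise ValueError("append-to-context only supports 'user' or 'assistant'")
    return {
        "id": str(uuid.uuid4()),
        "label": "rtvi-ai",  # required, always 'rtvi-ai'
        "type": "append-to-context",
        "data": {
            "role": role,
            "content": content,
            "run_immediately": run_immediately,
        },
    }

msg = make_append_to_context("user", "The user's order number is 1234.")
```

The resulting dict can be serialized with json.dumps and sent over whatever transport channel the client is using.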
llm-function-call 🤖
A function call request from the LLM, sent from the bot to the client. Note that in most cases an LLM function call will be handled completely server-side. However, in the event that the call requires input from the client or the client needs to be aware of the function call, this message/response schema is required.
type: 'llm-function-call'
data:
- function_name (string): Name of the function to be called.
- tool_call_id (string): Unique identifier for this function call.
- args (Record): Arguments to be passed to the function.

llm-function-call-result 🏄
The result of the function call requested by the LLM, returned from the client.
type: 'llm-function-call-result'
data:
- function_name (string): Name of the called function.
- tool_call_id (string): Identifier matching the original function call.
- args (Record): Arguments that were passed to the function.
- result (Record | string): The result returned by the function.

bot-llm-search-response 🤖
Search results from the LLM’s knowledge base. Currently, Google Gemini is the only LLM that supports built-in search. However, we expect other LLMs to follow suit, which is why this message type is defined as part of the RTVI standard. As more LLMs add support for this feature, the format of this message type may evolve to accommodate discrepancies.
type: 'bot-llm-search-response'
data:
- search_result (string, optional): Raw search result text.
- rendered_content (string, optional): Formatted version of the search results.
- origins (Array): Source information and confidence scores for search results.
The Origin object follows this structure:

{
  "site_uri": string (optional),
  "site_title": string (optional),
  "results": Array<{
    "text": string,
    "confidence": number[]
  }>
}

Example:

"id": undefined
"label": "rtvi-ai"
"type": "bot-llm-search-response"
"data": {
  "origins": [
    {
      "results": [
        {
          "confidence": [0.9881149530410768],
          "text": "* Juneteenth: A Freedom Celebration is scheduled for June 18th from 12:00 pm to 2:00 pm."
        },
        {
          "confidence": [0.9692034721374512],
          "text": "* A Juneteenth celebration at Fort Negley Park will take place on June 19th from 5:00 pm to 9:30 pm."
        }
      ],
      "site_title": "vanderbilt.edu",
      "site_uri": "https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQHwif83VK9KAzrbMSGSBsKwL8vWfSfC9pgEWYKmStHyqiRoV1oe8j1S0nbwRg_iWgqAr9wUkiegu3ATC8Ll-cuE-vpzwElRHiJ2KgRYcqnOQMoOeokVpWqi"
    },
    {
      "results": [
        {
          "confidence": [0.6554043292999268],
          "text": "In addition to these events, Vanderbilt University is a large research institution with ongoing activities across many fields."
        }
      ],
      "site_title": "wikipedia.org",
      "site_uri": "https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQESbF-ijx78QbaglrhflHCUWdPTD4M6tYOQigW5hgsHNctRlAHu9ktfPmJx7DfoP5QicE0y-OQY1cRl9w4Id0btiFgLYSKIm2-SPtOHXeNrAlgA7mBnclaGrD7rgnLIbrjl8DgUEJrrvT0CKzuo"
    }
  ],
  "rendered_content": "