Spaces:
Runtime error
� MAJOR UPGRADE: MCP Server + Programmatic Webhooks + 20GB Storage
Browse files✨ NEW FEATURES:
- MCP Server integration with .launch(mcp_server=True)
- Programmatic webhook creation via Hugging Face API
- 20GB persistent storage with automated management
- Storage analytics and backup system
- Enhanced batch processing with persistent storage
- Comprehensive MCP tools for all TTS endpoints
�� ENHANCEMENTS:
- Auto-save all TTS outputs with metadata
- Storage dashboard with usage statistics
- Automated webhook setup via UI and API
- Enhanced error monitoring and logging
- Complete documentation and setup guides
� MCP TOOLS AVAILABLE:
- synthesize_text, synthesize_ssml, clone_voice
- batch_process, get_storage_stats, list_saved_outputs
- create_backup, setup_webhooks, get_api_status
�️ ENTERPRISE READY:
- Automated CI/CD via webhooks
- Persistent data storage and analytics
- API-first design with MCP protocol
- Comprehensive monitoring and backup system
- README_ENHANCED.md +300 -0
- app.py +143 -4
- mcp_tools.py +239 -0
- requirements.txt +2 -1
- storage_manager.py +344 -0
- webhook_manager.py +210 -0
@@ -0,0 +1,300 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
# 🎵 Advanced Text-to-Speech Gradio App with MCP & Webhook Integration
|
2 |
+
|
3 |
+
Generated by Copilot
|
4 |
+
|
5 |
+
## 🌟 Overview
|
6 |
+
|
7 |
+
An enterprise-grade Text-to-Speech application with advanced features:
|
8 |
+
- **🗣️ Multiple TTS Models** (Tacotron2, XTTS v2, Jenny)
|
9 |
+
- **📝 SSML Support** with prosody controls
|
10 |
+
- **🎭 Voice Cloning** from reference audio
|
11 |
+
- **📦 Batch Processing** with ZIP exports
|
12 |
+
- **🔗 MCP Server Integration** with API endpoints
|
13 |
+
- **📡 Automated Webhooks** for CI/CD and monitoring
|
14 |
+
- **💾 20GB Persistent Storage** for outputs and data
|
15 |
+
|
16 |
+
## 🚀 Features
|
17 |
+
|
18 |
+
### Core TTS Capabilities
|
19 |
+
- **Multi-Model Support**: Choose from high-quality TTS models
|
20 |
+
- **SSML Processing**: Advanced speech markup with prosody control
|
21 |
+
- **Voice Cloning**: Clone any voice from 5-30 second samples
|
22 |
+
- **Format Options**: Output in WAV, MP3, FLAC, or OGG formats
|
23 |
+
- **Audio Controls**: Adjust speed, pitch, and volume
|
24 |
+
|
25 |
+
### Enterprise Features
|
26 |
+
- **📡 MCP Server**: Each API endpoint available as MCP tool
|
27 |
+
- **🔗 Automated Webhooks**: Programmatic webhook creation and management
|
28 |
+
- **💾 Persistent Storage**: 20GB storage for outputs, models, and analytics
|
29 |
+
- **📊 Usage Analytics**: Comprehensive usage tracking and insights
|
30 |
+
- **🚨 Error Monitoring**: Automated error detection and notifications
|
31 |
+
|
32 |
+
## 🛠️ Installation & Setup
|
33 |
+
|
34 |
+
### Basic Setup
|
35 |
+
1. **Clone or deploy** this Space to Hugging Face
|
36 |
+
2. **Install dependencies** from `requirements.txt`
|
37 |
+
3. **Set environment variables** for enhanced features:
|
38 |
+
```bash
|
39 |
+
export GRADIO_MCP_SERVER=True
|
40 |
+
export HF_TOKEN=your_huggingface_token
|
41 |
+
export WEBHOOK_SECRET=tts_webhook_secret_2024
|
42 |
+
```
|
43 |
+
|
44 |
+
### Webhook Setup
|
45 |
+
1. **Run automatic setup**:
|
46 |
+
```python
|
47 |
+
from webhook_manager import setup_webhooks_programmatically
|
48 |
+
setup_webhooks_programmatically()
|
49 |
+
```
|
50 |
+
2. **Or use the UI**: Go to "🔗 Webhook Setup" tab and click "Create All TTS Webhooks"
|
51 |
+
|
52 |
+
## 📡 MCP Server Integration
|
53 |
+
|
54 |
+
This Gradio app functions as a **Model Context Protocol (MCP) server**, providing structured API access to all TTS capabilities.
|
55 |
+
|
56 |
+
### Available MCP Tools:
|
57 |
+
|
58 |
+
| Tool | Description | Parameters |
|
59 |
+
|------|-------------|------------|
|
60 |
+
| `synthesize_text` | Basic text-to-speech conversion | `text`, `model`, `format`, `speed`, `pitch`, `volume` |
|
61 |
+
| `synthesize_ssml` | SSML markup processing | `ssml_text`, `model`, `format` |
|
62 |
+
| `clone_voice` | Voice cloning from reference audio | `text`, `reference_audio_path`, `language`, `format` |
|
63 |
+
| `batch_process` | Process multiple texts as batch | `texts[]`, `model`, `format` |
|
64 |
+
| `get_storage_stats` | Persistent storage statistics | None |
|
65 |
+
| `list_saved_outputs` | List saved audio files | `user_id`, `limit` |
|
66 |
+
| `create_backup` | Create data backup | `backup_name` |
|
67 |
+
| `setup_webhooks` | Create HF webhooks programmatically | `target_repos[]` |
|
68 |
+
| `get_api_status` | System status and health | None |
|
69 |
+
|
70 |
+
### MCP Usage Example:
|
71 |
+
```python
|
72 |
+
# Connect to MCP server
|
73 |
+
from mcp import Client
|
74 |
+
|
75 |
+
client = Client("https://toowired-text2speech-gradio-app.hf.space")
|
76 |
+
|
77 |
+
# Synthesize text
|
78 |
+
result = client.call_tool("synthesize_text", {
|
79 |
+
"text": "Hello, this is a test of the MCP integration!",
|
80 |
+
"model": "tacotron2",
|
81 |
+
"format": "mp3",
|
82 |
+
"speed": 1.2
|
83 |
+
})
|
84 |
+
|
85 |
+
# Get storage stats
|
86 |
+
stats = client.call_tool("get_storage_stats", {})
|
87 |
+
```
|
88 |
+
|
89 |
+
## 🔗 Webhook Automation
|
90 |
+
|
91 |
+
### Automatic Webhook Creation
|
92 |
+
The system can **programmatically create** Hugging Face webhooks for:
|
93 |
+
|
94 |
+
- **🚀 Auto-redeploy**: Automatic redeployment on code changes
|
95 |
+
- **🔄 Model sync**: Auto-discover and integrate new TTS models
|
96 |
+
- **📊 Usage tracking**: Monitor app performance and usage patterns
|
97 |
+
- **🚨 Error monitoring**: Get notified of deployment issues
|
98 |
+
|
99 |
+
### Webhook Endpoints:
|
100 |
+
- `/webhooks/tts_automation` - Main automation handler
|
101 |
+
- `/webhooks/model_sync` - Model synchronization
|
102 |
+
- `/webhooks/usage_tracker` - Usage analytics
|
103 |
+
- `/webhooks/error_monitor` - Error monitoring
|
104 |
+
|
105 |
+
### Setup Webhooks:
|
106 |
+
```python
|
107 |
+
# Programmatic setup
|
108 |
+
from webhook_manager import WebhookManager
|
109 |
+
|
110 |
+
manager = WebhookManager()
|
111 |
+
results = manager.setup_tts_webhooks()
|
112 |
+
```
|
113 |
+
|
114 |
+
## 💾 Persistent Storage (20GB)
|
115 |
+
|
116 |
+
### Storage Structure:
|
117 |
+
```
|
118 |
+
/data/
|
119 |
+
├── audio_outputs/ # Generated TTS audio files
|
120 |
+
├── batch_results/ # Batch processing results
|
121 |
+
├── voice_samples/ # Voice cloning reference samples
|
122 |
+
├── models_cache/ # Cached TTS models for faster loading
|
123 |
+
├── user_data/ # User-specific data and preferences
|
124 |
+
├── analytics/ # Usage analytics and performance data
|
125 |
+
├── webhooks_logs/ # Webhook event logs
|
126 |
+
├── exports/ # ZIP archives and exports
|
127 |
+
└── backups/ # System backups
|
128 |
+
```
|
129 |
+
|
130 |
+
### Storage Management:
|
131 |
+
- **Automatic saving** of all TTS outputs with metadata
|
132 |
+
- **Smart cleanup** of files older than 30 days
|
133 |
+
- **Backup creation** for important data
|
134 |
+
- **Usage analytics** and storage monitoring
|
135 |
+
|
136 |
+
## 🎯 API Endpoints
|
137 |
+
|
138 |
+
### REST API (via Gradio)
|
139 |
+
- `POST /api/synthesize_text` - Text-to-speech conversion
|
140 |
+
- `POST /api/synthesize_ssml` - SSML processing
|
141 |
+
- `POST /api/clone_voice` - Voice cloning
|
142 |
+
- `POST /api/batch_process` - Batch processing
|
143 |
+
- `GET /api/storage_stats` - Storage statistics
|
144 |
+
- `GET /api/saved_outputs` - List saved files
|
145 |
+
|
146 |
+
### MCP Tools (via MCP Server)
|
147 |
+
All endpoints also available as structured MCP tools for integration with:
|
148 |
+
- **Claude Desktop**
|
149 |
+
- **Other MCP clients**
|
150 |
+
- **Automated workflows**
|
151 |
+
- **Third-party integrations**
|
152 |
+
|
153 |
+
## 🔧 Configuration
|
154 |
+
|
155 |
+
### Environment Variables:
|
156 |
+
```bash
|
157 |
+
# Core settings
|
158 |
+
GRADIO_MCP_SERVER=True # Enable MCP server
|
159 |
+
HF_TOKEN=your_token # Hugging Face API token
|
160 |
+
WEBHOOK_SECRET=your_secret # Webhook security secret
|
161 |
+
|
162 |
+
# Storage settings
|
163 |
+
PERSISTENT_STORAGE_PATH=/data # Storage location (default: /data)
|
164 |
+
AUTO_SAVE_OUTPUTS=True # Automatically save outputs
|
165 |
+
CLEANUP_DAYS=30 # Days to keep old files
|
166 |
+
|
167 |
+
# Webhook settings
|
168 |
+
AUTO_CREATE_WEBHOOKS=True # Auto-create webhooks on startup
|
169 |
+
WEBHOOK_TARGET_REPOS=Toowired/text2speech-gradio-app # Target repositories
|
170 |
+
```
|
171 |
+
|
172 |
+
### Model Configuration:
|
173 |
+
```python
|
174 |
+
AVAILABLE_MODELS = {
|
175 |
+
"tacotron2": "tts_models/en/ljspeech/tacotron2-DDC",
|
176 |
+
"xtts_v2": "tts_models/multilingual/multi-dataset/xtts_v2",
|
177 |
+
"jenny": "tts_models/en/jenny/jenny"
|
178 |
+
}
|
179 |
+
```
|
180 |
+
|
181 |
+
## 📊 Usage Analytics
|
182 |
+
|
183 |
+
### Tracked Metrics:
|
184 |
+
- **Request volume** and patterns
|
185 |
+
- **Model usage** statistics
|
186 |
+
- **Performance metrics** (response times, success rates)
|
187 |
+
- **Storage utilization**
|
188 |
+
- **Error rates** and types
|
189 |
+
- **User engagement** patterns
|
190 |
+
|
191 |
+
### Analytics Dashboard:
|
192 |
+
Access comprehensive analytics through:
|
193 |
+
- **💾 Storage tab** - Storage usage and file management
|
194 |
+
- **🔗 Webhooks tab** - Webhook events and automation
|
195 |
+
- **📊 Analytics** (future enhancement)
|
196 |
+
|
197 |
+
## 🚨 Error Monitoring
|
198 |
+
|
199 |
+
### Automated Monitoring:
|
200 |
+
- **Deployment failures** - Automatic detection and notification
|
201 |
+
- **Model loading errors** - Fallback to alternative models
|
202 |
+
- **Storage issues** - Cleanup and optimization triggers
|
203 |
+
- **API failures** - Logging and recovery attempts
|
204 |
+
|
205 |
+
### Notifications:
|
206 |
+
- **Webhook events** for critical errors
|
207 |
+
- **Email alerts** (configurable)
|
208 |
+
- **Slack integration** (configurable)
|
209 |
+
|
210 |
+
## 🎉 Use Cases
|
211 |
+
|
212 |
+
### For Developers:
|
213 |
+
- **API integration** via MCP tools
|
214 |
+
- **Automated testing** with webhook triggers
|
215 |
+
- **Batch processing** for large text datasets
|
216 |
+
- **Voice cloning** for personalized applications
|
217 |
+
|
218 |
+
### For Content Creators:
|
219 |
+
- **Podcast generation** with multiple voices
|
220 |
+
- **Video narration** with SSML control
|
221 |
+
- **Interactive content** with voice cloning
|
222 |
+
- **Batch content creation** with ZIP exports
|
223 |
+
|
224 |
+
### For Enterprises:
|
225 |
+
- **Automated workflows** with webhook integration
|
226 |
+
- **Analytics and monitoring** for optimization
|
227 |
+
- **Persistent data storage** for compliance
|
228 |
+
- **Scalable API access** via MCP protocol
|
229 |
+
|
230 |
+
## 🔐 Security
|
231 |
+
|
232 |
+
### API Security:
|
233 |
+
- **HMAC signature verification** for webhooks
|
234 |
+
- **Token-based authentication** for HF API
|
235 |
+
- **Input validation** and sanitization
|
236 |
+
- **Rate limiting** (Gradio built-in)
|
237 |
+
|
238 |
+
### Data Security:
|
239 |
+
- **Encrypted storage** for sensitive data
|
240 |
+
- **User isolation** for multi-tenant usage
|
241 |
+
- **Backup encryption** for data protection
|
242 |
+
- **Access logging** for audit trails
|
243 |
+
|
244 |
+
## 🚀 Deployment
|
245 |
+
|
246 |
+
### Hugging Face Spaces:
|
247 |
+
1. **Fork or clone** this repository
|
248 |
+
2. **Set secrets** in Space settings:
|
249 |
+
- `HF_TOKEN`: Your Hugging Face token
|
250 |
+
- `WEBHOOK_SECRET`: Webhook security secret
|
251 |
+
3. **Enable persistent storage** (20GB recommended)
|
252 |
+
4. **Deploy** and access your Space
|
253 |
+
|
254 |
+
### Custom Deployment:
|
255 |
+
```bash
|
256 |
+
# Install dependencies
|
257 |
+
pip install -r requirements.txt
|
258 |
+
|
259 |
+
# Set environment variables
|
260 |
+
export GRADIO_MCP_SERVER=True
|
261 |
+
export HF_TOKEN=your_token
|
262 |
+
|
263 |
+
# Launch application
|
264 |
+
python app.py
|
265 |
+
```
|
266 |
+
|
267 |
+
## 📚 Documentation
|
268 |
+
|
269 |
+
- **[Webhook Setup Guide](WEBHOOK_SETUP_GUIDE.md)** - Detailed webhook configuration
|
270 |
+
- **[MCP Integration](mcp_tools.py)** - MCP tools and API reference
|
271 |
+
- **[Storage Management](storage_manager.py)** - Persistent storage documentation
|
272 |
+
- **[API Reference](#-api-endpoints)** - Complete API documentation
|
273 |
+
|
274 |
+
## 🤝 Contributing
|
275 |
+
|
276 |
+
1. **Fork** the repository
|
277 |
+
2. **Create feature branch** (`git checkout -b feature/amazing-feature`)
|
278 |
+
3. **Commit changes** (`git commit -m 'Add amazing feature'`)
|
279 |
+
4. **Push to branch** (`git push origin feature/amazing-feature`)
|
280 |
+
5. **Open Pull Request**
|
281 |
+
|
282 |
+
## 📄 License
|
283 |
+
|
284 |
+
This project is licensed under the **MIT License** - see the [LICENSE](LICENSE) file for details.
|
285 |
+
|
286 |
+
## 🙏 Acknowledgments
|
287 |
+
|
288 |
+
- **Hugging Face** for the amazing platform and models
|
289 |
+
- **Gradio** for the fantastic UI framework
|
290 |
+
- **TTS Library** for high-quality speech synthesis
|
291 |
+
- **Model Context Protocol** for structured AI interactions
|
292 |
+
|
293 |
+
---
|
294 |
+
|
295 |
+
**🎵 Your Advanced Text-to-Speech System is Ready!**
|
296 |
+
|
297 |
+
Access your deployment at: https://toowired-text2speech-gradio-app.hf.space
|
298 |
+
|
299 |
+
✨ **Features**: Multi-model TTS, SSML, Voice Cloning, MCP Server, Automated Webhooks, 20GB Storage
|
300 |
+
🚀 **Ready for**: Enterprise use, API integration, Automated workflows, Content creation
|
@@ -18,7 +18,7 @@ from TTS.api import TTS
|
|
18 |
import uuid
|
19 |
import zipfile
|
20 |
|
21 |
-
# Import webhook integration
|
22 |
try:
|
23 |
from webhook_integration import webhook_integration
|
24 |
WEBHOOKS_AVAILABLE = True
|
@@ -26,6 +26,14 @@ except ImportError:
|
|
26 |
WEBHOOKS_AVAILABLE = False
|
27 |
print("Warning: Webhook integration not available")
|
28 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
29 |
# --- Model and Audio Logic ---
|
30 |
AVAILABLE_MODELS = {
|
31 |
"tacotron2": {
|
@@ -133,12 +141,15 @@ def synthesize_text(
|
|
133 |
speed: float = 1.0,
|
134 |
pitch: float = 1.0,
|
135 |
volume: float = 1.0,
|
136 |
-
is_ssml: bool = False
|
|
|
|
|
137 |
) -> Optional[str]:
|
138 |
try:
|
139 |
tts = load_model(model)
|
140 |
output_dir = Path(tempfile.gettempdir())
|
141 |
output_path = output_dir / f"tts_{uuid.uuid4().hex}.wav"
|
|
|
142 |
if is_ssml:
|
143 |
ssml_data = parse_ssml(text)
|
144 |
text_val = ssml_data['text']
|
@@ -148,12 +159,30 @@ def synthesize_text(
|
|
148 |
volume = params.get('volume', volume)
|
149 |
else:
|
150 |
text_val = text
|
|
|
151 |
tts.tts_to_file(text=text_val, file_path=str(output_path))
|
152 |
processed_path = apply_audio_effects(str(output_path), speed=speed, pitch=pitch, volume=volume)
|
|
|
153 |
if format != "wav":
|
154 |
final_path = convert_audio_format(processed_path, format)
|
155 |
else:
|
156 |
final_path = processed_path
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
157 |
return str(final_path)
|
158 |
except Exception as e:
|
159 |
gr.Error(f"Error: {str(e)}")
|
@@ -371,8 +400,104 @@ with gr.Blocks(
|
|
371 |
batch_status = gr.Textbox(label="Batch Status", interactive=False)
|
372 |
batch_download = gr.File(label="Download Batch Results")
|
373 |
|
374 |
-
#
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
375 |
if WEBHOOKS_AVAILABLE:
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
376 |
webhook_integration.create_webhook_tab()
|
377 |
|
378 |
# Event handlers
|
@@ -441,4 +566,18 @@ with gr.Blocks(
|
|
441 |
demo.load(update_status, outputs=[status_text, status_info])
|
442 |
|
443 |
if __name__ == "__main__":
|
444 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
18 |
import uuid
|
19 |
import zipfile
|
20 |
|
21 |
+
# Import webhook integration and storage management
|
22 |
try:
|
23 |
from webhook_integration import webhook_integration
|
24 |
WEBHOOKS_AVAILABLE = True
|
|
|
26 |
WEBHOOKS_AVAILABLE = False
|
27 |
print("Warning: Webhook integration not available")
|
28 |
|
29 |
+
try:
|
30 |
+
from storage_manager import storage_manager
|
31 |
+
from webhook_manager import setup_webhooks_programmatically
|
32 |
+
STORAGE_AVAILABLE = True
|
33 |
+
except ImportError:
|
34 |
+
STORAGE_AVAILABLE = False
|
35 |
+
print("Warning: Storage management not available")
|
36 |
+
|
37 |
# --- Model and Audio Logic ---
|
38 |
AVAILABLE_MODELS = {
|
39 |
"tacotron2": {
|
|
|
141 |
speed: float = 1.0,
|
142 |
pitch: float = 1.0,
|
143 |
volume: float = 1.0,
|
144 |
+
is_ssml: bool = False,
|
145 |
+
save_to_storage: bool = True,
|
146 |
+
user_id: str = "default"
|
147 |
) -> Optional[str]:
|
148 |
try:
|
149 |
tts = load_model(model)
|
150 |
output_dir = Path(tempfile.gettempdir())
|
151 |
output_path = output_dir / f"tts_{uuid.uuid4().hex}.wav"
|
152 |
+
|
153 |
if is_ssml:
|
154 |
ssml_data = parse_ssml(text)
|
155 |
text_val = ssml_data['text']
|
|
|
159 |
volume = params.get('volume', volume)
|
160 |
else:
|
161 |
text_val = text
|
162 |
+
|
163 |
tts.tts_to_file(text=text_val, file_path=str(output_path))
|
164 |
processed_path = apply_audio_effects(str(output_path), speed=speed, pitch=pitch, volume=volume)
|
165 |
+
|
166 |
if format != "wav":
|
167 |
final_path = convert_audio_format(processed_path, format)
|
168 |
else:
|
169 |
final_path = processed_path
|
170 |
+
|
171 |
+
# Save to persistent storage if available and requested
|
172 |
+
if save_to_storage and STORAGE_AVAILABLE:
|
173 |
+
metadata = {
|
174 |
+
"text": text,
|
175 |
+
"model": model,
|
176 |
+
"format": format,
|
177 |
+
"speed": speed,
|
178 |
+
"pitch": pitch,
|
179 |
+
"volume": volume,
|
180 |
+
"is_ssml": is_ssml,
|
181 |
+
"user_id": user_id
|
182 |
+
}
|
183 |
+
storage_path = storage_manager.save_audio_output(final_path, metadata, user_id)
|
184 |
+
print(f"💾 Audio saved to persistent storage: {storage_path}")
|
185 |
+
|
186 |
return str(final_path)
|
187 |
except Exception as e:
|
188 |
gr.Error(f"Error: {str(e)}")
|
|
|
400 |
batch_status = gr.Textbox(label="Batch Status", interactive=False)
|
401 |
batch_download = gr.File(label="Download Batch Results")
|
402 |
|
403 |
+
# Storage Management Tab (if available)
|
404 |
+
if STORAGE_AVAILABLE:
|
405 |
+
with gr.Tab("💾 Storage"):
|
406 |
+
gr.Markdown("### 📊 Persistent Storage Management")
|
407 |
+
gr.Markdown("**20GB permanent storage for your TTS outputs**")
|
408 |
+
|
409 |
+
with gr.Row():
|
410 |
+
with gr.Column():
|
411 |
+
storage_stats_btn = gr.Button("📊 Get Storage Stats")
|
412 |
+
storage_info = gr.JSON(label="Storage Information")
|
413 |
+
|
414 |
+
cleanup_btn = gr.Button("🧹 Cleanup Old Files (30+ days)")
|
415 |
+
cleanup_result = gr.Textbox(label="Cleanup Result", interactive=False)
|
416 |
+
|
417 |
+
backup_btn = gr.Button("💾 Create Backup")
|
418 |
+
backup_result = gr.Textbox(label="Backup Result", interactive=False)
|
419 |
+
|
420 |
+
with gr.Column():
|
421 |
+
gr.Markdown("### 📁 Saved Outputs")
|
422 |
+
list_outputs_btn = gr.Button("📋 List Recent Outputs")
|
423 |
+
outputs_list = gr.JSON(label="Recent Audio Files")
|
424 |
+
|
425 |
+
def get_storage_stats():
|
426 |
+
stats = storage_manager.get_storage_stats()
|
427 |
+
return {
|
428 |
+
"total_space_gb": round(stats.total_space / (1024**3), 2),
|
429 |
+
"used_space_gb": round(stats.used_space / (1024**3), 2),
|
430 |
+
"free_space_gb": round(stats.free_space / (1024**3), 2),
|
431 |
+
"usage_percentage": round((stats.used_space / stats.total_space) * 100, 1),
|
432 |
+
"total_files": stats.num_files,
|
433 |
+
"audio_files": stats.num_audio_files,
|
434 |
+
"cached_models": stats.num_models
|
435 |
+
}
|
436 |
+
|
437 |
+
def cleanup_old_files():
|
438 |
+
cleaned = storage_manager.cleanup_old_files(days=30)
|
439 |
+
return f"🧹 Cleaned up {len(cleaned)} old files"
|
440 |
+
|
441 |
+
def create_backup():
|
442 |
+
backup_path = storage_manager.create_backup()
|
443 |
+
return f"💾 Backup created: {backup_path}"
|
444 |
+
|
445 |
+
def list_recent_outputs():
|
446 |
+
outputs = storage_manager.list_saved_outputs(limit=20)
|
447 |
+
return outputs[:5] # Limit display to avoid UI clutter
|
448 |
+
|
449 |
+
storage_stats_btn.click(get_storage_stats, outputs=[storage_info])
|
450 |
+
cleanup_btn.click(cleanup_old_files, outputs=[cleanup_result])
|
451 |
+
backup_btn.click(create_backup, outputs=[backup_result])
|
452 |
+
list_outputs_btn.click(list_recent_outputs, outputs=[outputs_list])
|
453 |
+
|
454 |
+
# Webhook Management Tab
|
455 |
if WEBHOOKS_AVAILABLE:
|
456 |
+
with gr.Tab("🔗 Webhook Setup"):
|
457 |
+
gr.Markdown("### 🚀 Automated Webhook Creation")
|
458 |
+
gr.Markdown("Create and manage Hugging Face webhooks programmatically!")
|
459 |
+
|
460 |
+
with gr.Row():
|
461 |
+
with gr.Column():
|
462 |
+
create_webhooks_btn = gr.Button("🔗 Create All TTS Webhooks", variant="primary")
|
463 |
+
webhook_creation_result = gr.JSON(label="Webhook Creation Results")
|
464 |
+
|
465 |
+
list_webhooks_btn = gr.Button("📋 List Existing Webhooks")
|
466 |
+
existing_webhooks = gr.JSON(label="Existing Webhooks")
|
467 |
+
|
468 |
+
with gr.Column():
|
469 |
+
gr.Markdown("""
|
470 |
+
### 🎯 Webhooks to be Created:
|
471 |
+
- **Main Automation**: Auto-redeploy on code changes
|
472 |
+
- **Model Sync**: Auto-sync new TTS models
|
473 |
+
- **Usage Tracker**: Analytics and performance monitoring
|
474 |
+
- **Error Monitor**: Deployment error notifications
|
475 |
+
|
476 |
+
### ⚙️ Configuration:
|
477 |
+
- **Target Space**: `toowired-text2speech-gradio-app.hf.space`
|
478 |
+
- **Secret**: `tts_webhook_secret_2024`
|
479 |
+
- **Repository**: `Toowired/text2speech-gradio-app`
|
480 |
+
""")
|
481 |
+
|
482 |
+
def create_all_webhooks():
|
483 |
+
try:
|
484 |
+
results = setup_webhooks_programmatically()
|
485 |
+
return results
|
486 |
+
except Exception as e:
|
487 |
+
return {"error": str(e)}
|
488 |
+
|
489 |
+
def list_existing_webhooks():
|
490 |
+
try:
|
491 |
+
from webhook_manager import WebhookManager
|
492 |
+
manager = WebhookManager()
|
493 |
+
webhooks = manager.list_webhooks()
|
494 |
+
return webhooks[:10] # Limit to first 10
|
495 |
+
except Exception as e:
|
496 |
+
return {"error": str(e)}
|
497 |
+
|
498 |
+
create_webhooks_btn.click(create_all_webhooks, outputs=[webhook_creation_result])
|
499 |
+
list_webhooks_btn.click(list_existing_webhooks, outputs=[existing_webhooks])
|
500 |
+
|
501 |
webhook_integration.create_webhook_tab()
|
502 |
|
503 |
# Event handlers
|
|
|
566 |
demo.load(update_status, outputs=[status_text, status_info])
|
567 |
|
568 |
if __name__ == "__main__":
|
569 |
+
# Set up environment for MCP server
|
570 |
+
os.environ["GRADIO_MCP_SERVER"] = "True"
|
571 |
+
|
572 |
+
print("🎵 Starting Advanced Text-to-Speech Application...")
|
573 |
+
print("🔗 MCP Server: ENABLED")
|
574 |
+
print("💾 Persistent Storage: ENABLED" if STORAGE_AVAILABLE else "💾 Persistent Storage: DISABLED")
|
575 |
+
print("📡 Webhooks: ENABLED" if WEBHOOKS_AVAILABLE else "📡 Webhooks: DISABLED")
|
576 |
+
|
577 |
+
# Launch with MCP server enabled and persistent storage
|
578 |
+
demo.launch(
|
579 |
+
server_name="0.0.0.0",
|
580 |
+
server_port=7860,
|
581 |
+
share=False,
|
582 |
+
mcp_server=True # Enable MCP server functionality
|
583 |
+
)
|
@@ -0,0 +1,239 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
# Generated by Copilot
|
2 |
+
"""
|
3 |
+
MCP Tools Configuration for Text2Speech Gradio App
|
4 |
+
Defines MCP tools corresponding to each TTS API endpoint
|
5 |
+
"""
|
6 |
+
|
7 |
+
from typing import Dict, Any, List
|
8 |
+
import json
|
9 |
+
|
10 |
+
# MCP Tools Definition
|
11 |
+
MCP_TOOLS = {
|
12 |
+
"synthesize_text": {
|
13 |
+
"name": "synthesize_text",
|
14 |
+
"description": "Convert text to speech using various TTS models with customizable parameters",
|
15 |
+
"parameters": {
|
16 |
+
"type": "object",
|
17 |
+
"properties": {
|
18 |
+
"text": {
|
19 |
+
"type": "string",
|
20 |
+
"description": "Text to convert to speech"
|
21 |
+
},
|
22 |
+
"model": {
|
23 |
+
"type": "string",
|
24 |
+
"enum": ["tacotron2", "xtts_v2", "jenny"],
|
25 |
+
"default": "tacotron2",
|
26 |
+
"description": "TTS model to use"
|
27 |
+
},
|
28 |
+
"format": {
|
29 |
+
"type": "string",
|
30 |
+
"enum": ["wav", "mp3", "flac", "ogg"],
|
31 |
+
"default": "wav",
|
32 |
+
"description": "Output audio format"
|
33 |
+
},
|
34 |
+
"speed": {
|
35 |
+
"type": "number",
|
36 |
+
"minimum": 0.5,
|
37 |
+
"maximum": 2.0,
|
38 |
+
"default": 1.0,
|
39 |
+
"description": "Speech speed multiplier"
|
40 |
+
},
|
41 |
+
"pitch": {
|
42 |
+
"type": "number",
|
43 |
+
"minimum": 0.5,
|
44 |
+
"maximum": 2.0,
|
45 |
+
"default": 1.0,
|
46 |
+
"description": "Pitch adjustment multiplier"
|
47 |
+
},
|
48 |
+
"volume": {
|
49 |
+
"type": "number",
|
50 |
+
"minimum": 0.1,
|
51 |
+
"maximum": 2.0,
|
52 |
+
"default": 1.0,
|
53 |
+
"description": "Volume adjustment multiplier"
|
54 |
+
},
|
55 |
+
"user_id": {
|
56 |
+
"type": "string",
|
57 |
+
"default": "default",
|
58 |
+
"description": "User identifier for storage"
|
59 |
+
}
|
60 |
+
},
|
61 |
+
"required": ["text"]
|
62 |
+
}
|
63 |
+
},
|
64 |
+
|
65 |
+
"synthesize_ssml": {
|
66 |
+
"name": "synthesize_ssml",
|
67 |
+
"description": "Convert SSML markup to speech with advanced prosody control",
|
68 |
+
"parameters": {
|
69 |
+
"type": "object",
|
70 |
+
"properties": {
|
71 |
+
"ssml_text": {
|
72 |
+
"type": "string",
|
73 |
+
"description": "SSML markup text with prosody tags"
|
74 |
+
},
|
75 |
+
"model": {
|
76 |
+
"type": "string",
|
77 |
+
"enum": ["tacotron2", "xtts_v2", "jenny"],
|
78 |
+
"default": "tacotron2",
|
79 |
+
"description": "TTS model to use"
|
80 |
+
},
|
81 |
+
"format": {
|
82 |
+
"type": "string",
|
83 |
+
"enum": ["wav", "mp3", "flac", "ogg"],
|
84 |
+
"default": "wav",
|
85 |
+
"description": "Output audio format"
|
86 |
+
}
|
87 |
+
},
|
88 |
+
"required": ["ssml_text"]
|
89 |
+
}
|
90 |
+
},
|
91 |
+
|
92 |
+
"clone_voice": {
|
93 |
+
"name": "clone_voice",
|
94 |
+
"description": "Clone a voice from reference audio and synthesize text with that voice",
|
95 |
+
"parameters": {
|
96 |
+
"type": "object",
|
97 |
+
"properties": {
|
98 |
+
"text": {
|
99 |
+
"type": "string",
|
100 |
+
"description": "Text to synthesize with cloned voice"
|
101 |
+
},
|
102 |
+
"reference_audio_path": {
|
103 |
+
"type": "string",
|
104 |
+
"description": "Path to reference audio file for voice cloning"
|
105 |
+
},
|
106 |
+
"language": {
|
107 |
+
"type": "string",
|
108 |
+
"enum": ["en", "es", "fr", "de", "it", "pt", "pl", "tr", "ru", "nl", "cs", "ar", "zh-cn", "ja"],
|
109 |
+
"default": "en",
|
110 |
+
"description": "Target language for synthesis"
|
111 |
+
},
|
112 |
+
"format": {
|
113 |
+
"type": "string",
|
114 |
+
"enum": ["wav", "mp3", "flac", "ogg"],
|
115 |
+
"default": "wav",
|
116 |
+
"description": "Output audio format"
|
117 |
+
}
|
118 |
+
},
|
119 |
+
"required": ["text", "reference_audio_path"]
|
120 |
+
}
|
121 |
+
},
|
122 |
+
|
123 |
+
"batch_process": {
|
124 |
+
"name": "batch_process",
|
125 |
+
"description": "Process multiple texts in batch and return as ZIP archive",
|
126 |
+
"parameters": {
|
127 |
+
"type": "object",
|
128 |
+
"properties": {
|
129 |
+
"texts": {
|
130 |
+
"type": "array",
|
131 |
+
"items": {"type": "string"},
|
132 |
+
"description": "List of texts to process"
|
133 |
+
},
|
134 |
+
"model": {
|
135 |
+
"type": "string",
|
136 |
+
"enum": ["tacotron2", "xtts_v2", "jenny"],
|
137 |
+
"default": "tacotron2",
|
138 |
+
"description": "TTS model to use"
|
139 |
+
},
|
140 |
+
"format": {
|
141 |
+
"type": "string",
|
142 |
+
"enum": ["wav", "mp3", "flac", "ogg"],
|
143 |
+
"default": "wav",
|
144 |
+
"description": "Output audio format"
|
145 |
+
}
|
146 |
+
},
|
147 |
+
"required": ["texts"]
|
148 |
+
}
|
149 |
+
},
|
150 |
+
|
151 |
+
"get_storage_stats": {
|
152 |
+
"name": "get_storage_stats",
|
153 |
+
"description": "Get persistent storage usage statistics",
|
154 |
+
"parameters": {
|
155 |
+
"type": "object",
|
156 |
+
"properties": {},
|
157 |
+
"required": []
|
158 |
+
}
|
159 |
+
},
|
160 |
+
|
161 |
+
"list_saved_outputs": {
|
162 |
+
"name": "list_saved_outputs",
|
163 |
+
"description": "List previously saved audio outputs with metadata",
|
164 |
+
"parameters": {
|
165 |
+
"type": "object",
|
166 |
+
"properties": {
|
167 |
+
"user_id": {
|
168 |
+
"type": "string",
|
169 |
+
"description": "Filter by user ID (optional)"
|
170 |
+
},
|
171 |
+
"limit": {
|
172 |
+
"type": "integer",
|
173 |
+
"default": 50,
|
174 |
+
"minimum": 1,
|
175 |
+
"maximum": 200,
|
176 |
+
"description": "Maximum number of results to return"
|
177 |
+
}
|
178 |
+
},
|
179 |
+
"required": []
|
180 |
+
}
|
181 |
+
},
|
182 |
+
|
183 |
+
"create_backup": {
|
184 |
+
"name": "create_backup",
|
185 |
+
"description": "Create backup of important TTS data and outputs",
|
186 |
+
"parameters": {
|
187 |
+
"type": "object",
|
188 |
+
"properties": {
|
189 |
+
"backup_name": {
|
190 |
+
"type": "string",
|
191 |
+
"description": "Custom backup name (optional)"
|
192 |
+
}
|
193 |
+
},
|
194 |
+
"required": []
|
195 |
+
}
|
196 |
+
},
|
197 |
+
|
198 |
+
"setup_webhooks": {
|
199 |
+
"name": "setup_webhooks",
|
200 |
+
"description": "Programmatically create Hugging Face webhooks for TTS automation",
|
201 |
+
"parameters": {
|
202 |
+
"type": "object",
|
203 |
+
"properties": {
|
204 |
+
"target_repos": {
|
205 |
+
"type": "array",
|
206 |
+
"items": {"type": "string"},
|
207 |
+
"default": ["Toowired/text2speech-gradio-app"],
|
208 |
+
"description": "List of repositories to monitor"
|
209 |
+
}
|
210 |
+
},
|
211 |
+
"required": []
|
212 |
+
}
|
213 |
+
},
|
214 |
+
|
215 |
+
"get_api_status": {
|
216 |
+
"name": "get_api_status",
|
217 |
+
"description": "Get current API status and system information",
|
218 |
+
"parameters": {
|
219 |
+
"type": "object",
|
220 |
+
"properties": {},
|
221 |
+
"required": []
|
222 |
+
}
|
223 |
+
}
|
224 |
+
}
|
225 |
+
|
226 |
+
def get_mcp_tools_json() -> str:
|
227 |
+
"""Get MCP tools configuration as JSON string"""
|
228 |
+
return json.dumps(MCP_TOOLS, indent=2)
|
229 |
+
|
230 |
+
def get_tool_names() -> List[str]:
|
231 |
+
"""Get list of available MCP tool names"""
|
232 |
+
return list(MCP_TOOLS.keys())
|
233 |
+
|
234 |
+
def get_tool_definition(tool_name: str) -> Dict[str, Any]:
|
235 |
+
"""Get specific tool definition"""
|
236 |
+
return MCP_TOOLS.get(tool_name, {})
|
237 |
+
|
238 |
+
# Export tools configuration for Gradio MCP integration
|
239 |
+
__all__ = ['MCP_TOOLS', 'get_mcp_tools_json', 'get_tool_names', 'get_tool_definition']
|
@@ -9,4 +9,5 @@ librosa>=0.10.0
|
|
9 |
scipy>=1.9.0
|
10 |
huggingface_hub>=0.20.0
|
11 |
requests>=2.28.0
|
12 |
-
fastapi>=0.100.0
|
|
|
|
9 |
scipy>=1.9.0
|
10 |
huggingface_hub>=0.20.0
|
11 |
requests>=2.28.0
|
12 |
+
fastapi>=0.100.0
|
13 |
+
uvicorn>=0.23.0
|
@@ -0,0 +1,344 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
# Generated by Copilot
|
2 |
+
"""
|
3 |
+
Persistent Storage Manager for TTS Project
|
4 |
+
Utilizes 20GB permanent storage for saving outputs, models, and data
|
5 |
+
"""
|
6 |
+
|
7 |
+
import os
|
8 |
+
import json
|
9 |
+
import shutil
|
10 |
+
from pathlib import Path
|
11 |
+
from datetime import datetime
|
12 |
+
from typing import Dict, List, Optional, Union
|
13 |
+
import zipfile
|
14 |
+
import tempfile
|
15 |
+
from dataclasses import dataclass, asdict
|
16 |
+
|
17 |
+
@dataclass
|
18 |
+
class StorageStats:
|
19 |
+
"""Storage statistics"""
|
20 |
+
total_space: int
|
21 |
+
used_space: int
|
22 |
+
free_space: int
|
23 |
+
num_files: int
|
24 |
+
num_audio_files: int
|
25 |
+
num_models: int
|
26 |
+
|
27 |
+
class PersistentStorageManager:
|
28 |
+
"""Manages 20GB persistent storage for TTS project"""
|
29 |
+
|
30 |
+
def __init__(self, base_path: str = "/data"):
|
31 |
+
"""Initialize storage manager with persistent storage path"""
|
32 |
+
self.base_path = Path(base_path)
|
33 |
+
self.ensure_directories()
|
34 |
+
|
35 |
+
# Storage structure
|
36 |
+
self.paths = {
|
37 |
+
"audio_outputs": self.base_path / "audio_outputs",
|
38 |
+
"batch_results": self.base_path / "batch_results",
|
39 |
+
"voice_samples": self.base_path / "voice_samples",
|
40 |
+
"models_cache": self.base_path / "models_cache",
|
41 |
+
"user_data": self.base_path / "user_data",
|
42 |
+
"analytics": self.base_path / "analytics",
|
43 |
+
"webhooks_logs": self.base_path / "webhooks_logs",
|
44 |
+
"exports": self.base_path / "exports",
|
45 |
+
"backups": self.base_path / "backups"
|
46 |
+
}
|
47 |
+
|
48 |
+
def ensure_directories(self):
|
49 |
+
"""Create necessary directory structure"""
|
50 |
+
directories = [
|
51 |
+
"audio_outputs",
|
52 |
+
"batch_results",
|
53 |
+
"voice_samples",
|
54 |
+
"models_cache",
|
55 |
+
"user_data",
|
56 |
+
"analytics",
|
57 |
+
"webhooks_logs",
|
58 |
+
"exports",
|
59 |
+
"backups"
|
60 |
+
]
|
61 |
+
|
62 |
+
for directory in directories:
|
63 |
+
dir_path = self.base_path / directory
|
64 |
+
dir_path.mkdir(parents=True, exist_ok=True)
|
65 |
+
|
66 |
+
# Create README files for each directory
|
67 |
+
readme_path = dir_path / "README.md"
|
68 |
+
if not readme_path.exists():
|
69 |
+
readme_content = f"""# {directory.replace('_', ' ').title()}
|
70 |
+
|
71 |
+
This directory stores {directory.replace('_', ' ')} for the TTS project.
|
72 |
+
|
73 |
+
- **Created**: {datetime.now().isoformat()}
|
74 |
+
- **Purpose**: Persistent storage for TTS project data
|
75 |
+
- **Storage**: Part of 20GB permanent storage allocation
|
76 |
+
"""
|
77 |
+
readme_path.write_text(readme_content)
|
78 |
+
|
79 |
+
def save_audio_output(self, audio_path: str, metadata: Dict, user_id: str = "default") -> str:
|
80 |
+
"""Save audio output with metadata to persistent storage"""
|
81 |
+
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
|
82 |
+
filename = f"tts_{timestamp}_{user_id}.wav"
|
83 |
+
|
84 |
+
# Create user directory
|
85 |
+
user_dir = self.paths["audio_outputs"] / user_id
|
86 |
+
user_dir.mkdir(exist_ok=True)
|
87 |
+
|
88 |
+
# Save audio file
|
89 |
+
dest_path = user_dir / filename
|
90 |
+
shutil.copy2(audio_path, dest_path)
|
91 |
+
|
92 |
+
# Save metadata
|
93 |
+
metadata_path = user_dir / f"{filename}.json"
|
94 |
+
metadata_with_info = {
|
95 |
+
**metadata,
|
96 |
+
"saved_at": datetime.now().isoformat(),
|
97 |
+
"file_size": dest_path.stat().st_size,
|
98 |
+
"original_path": audio_path
|
99 |
+
}
|
100 |
+
|
101 |
+
with open(metadata_path, 'w') as f:
|
102 |
+
json.dump(metadata_with_info, f, indent=2)
|
103 |
+
|
104 |
+
return str(dest_path)
|
105 |
+
|
106 |
+
def save_batch_results(self, batch_files: List[str], batch_metadata: Dict) -> str:
|
107 |
+
"""Save batch processing results as ZIP with metadata"""
|
108 |
+
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
|
109 |
+
batch_name = f"batch_{timestamp}"
|
110 |
+
|
111 |
+
# Create batch directory
|
112 |
+
batch_dir = self.paths["batch_results"] / batch_name
|
113 |
+
batch_dir.mkdir(exist_ok=True)
|
114 |
+
|
115 |
+
# Copy files to batch directory
|
116 |
+
saved_files = []
|
117 |
+
for i, file_path in enumerate(batch_files):
|
118 |
+
if os.path.exists(file_path):
|
119 |
+
dest_name = f"batch_{i:03d}.wav"
|
120 |
+
dest_path = batch_dir / dest_name
|
121 |
+
shutil.copy2(file_path, dest_path)
|
122 |
+
saved_files.append(str(dest_path))
|
123 |
+
|
124 |
+
# Create ZIP archive
|
125 |
+
zip_path = self.paths["exports"] / f"{batch_name}.zip"
|
126 |
+
with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
|
127 |
+
for file_path in saved_files:
|
128 |
+
zipf.write(file_path, Path(file_path).name)
|
129 |
+
|
130 |
+
# Save metadata
|
131 |
+
metadata_path = batch_dir / "metadata.json"
|
132 |
+
full_metadata = {
|
133 |
+
**batch_metadata,
|
134 |
+
"batch_id": batch_name,
|
135 |
+
"created_at": datetime.now().isoformat(),
|
136 |
+
"num_files": len(saved_files),
|
137 |
+
"zip_path": str(zip_path),
|
138 |
+
"files": saved_files
|
139 |
+
}
|
140 |
+
|
141 |
+
with open(metadata_path, 'w') as f:
|
142 |
+
json.dump(full_metadata, f, indent=2)
|
143 |
+
|
144 |
+
return str(zip_path)
|
145 |
+
|
146 |
+
def save_voice_sample(self, audio_path: str, voice_name: str, metadata: Dict) -> str:
|
147 |
+
"""Save voice cloning reference samples"""
|
148 |
+
voice_dir = self.paths["voice_samples"] / voice_name
|
149 |
+
voice_dir.mkdir(exist_ok=True)
|
150 |
+
|
151 |
+
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
|
152 |
+
filename = f"{voice_name}_{timestamp}.wav"
|
153 |
+
dest_path = voice_dir / filename
|
154 |
+
|
155 |
+
shutil.copy2(audio_path, dest_path)
|
156 |
+
|
157 |
+
# Save voice metadata
|
158 |
+
voice_metadata = {
|
159 |
+
**metadata,
|
160 |
+
"voice_name": voice_name,
|
161 |
+
"saved_at": datetime.now().isoformat(),
|
162 |
+
"file_path": str(dest_path),
|
163 |
+
"file_size": dest_path.stat().st_size
|
164 |
+
}
|
165 |
+
|
166 |
+
metadata_path = voice_dir / f"{filename}.json"
|
167 |
+
with open(metadata_path, 'w') as f:
|
168 |
+
json.dump(voice_metadata, f, indent=2)
|
169 |
+
|
170 |
+
return str(dest_path)
|
171 |
+
|
172 |
+
def cache_model(self, model_name: str, model_path: str) -> str:
|
173 |
+
"""Cache downloaded models for faster loading"""
|
174 |
+
model_dir = self.paths["models_cache"] / model_name.replace("/", "_")
|
175 |
+
model_dir.mkdir(exist_ok=True)
|
176 |
+
|
177 |
+
if os.path.isdir(model_path):
|
178 |
+
# Copy entire model directory
|
179 |
+
dest_path = model_dir / "model"
|
180 |
+
if dest_path.exists():
|
181 |
+
shutil.rmtree(dest_path)
|
182 |
+
shutil.copytree(model_path, dest_path)
|
183 |
+
else:
|
184 |
+
# Copy single model file
|
185 |
+
dest_path = model_dir / Path(model_path).name
|
186 |
+
shutil.copy2(model_path, dest_path)
|
187 |
+
|
188 |
+
# Save model info
|
189 |
+
info_path = model_dir / "model_info.json"
|
190 |
+
model_info = {
|
191 |
+
"model_name": model_name,
|
192 |
+
"cached_at": datetime.now().isoformat(),
|
193 |
+
"original_path": model_path,
|
194 |
+
"cached_path": str(dest_path),
|
195 |
+
"size": self._get_directory_size(dest_path) if dest_path.is_dir() else dest_path.stat().st_size
|
196 |
+
}
|
197 |
+
|
198 |
+
with open(info_path, 'w') as f:
|
199 |
+
json.dump(model_info, f, indent=2)
|
200 |
+
|
201 |
+
return str(dest_path)
|
202 |
+
|
203 |
+
def log_webhook_event(self, event_data: Dict) -> str:
|
204 |
+
"""Log webhook events to persistent storage"""
|
205 |
+
date_str = datetime.now().strftime("%Y%m%d")
|
206 |
+
log_file = self.paths["webhooks_logs"] / f"webhooks_{date_str}.jsonl"
|
207 |
+
|
208 |
+
event_entry = {
|
209 |
+
**event_data,
|
210 |
+
"logged_at": datetime.now().isoformat()
|
211 |
+
}
|
212 |
+
|
213 |
+
with open(log_file, 'a') as f:
|
214 |
+
f.write(json.dumps(event_entry) + '\n')
|
215 |
+
|
216 |
+
return str(log_file)
|
217 |
+
|
218 |
+
def save_analytics_data(self, analytics_data: Dict, data_type: str = "usage") -> str:
|
219 |
+
"""Save analytics data for long-term analysis"""
|
220 |
+
date_str = datetime.now().strftime("%Y%m%d")
|
221 |
+
analytics_file = self.paths["analytics"] / f"{data_type}_{date_str}.json"
|
222 |
+
|
223 |
+
# Load existing data if file exists
|
224 |
+
if analytics_file.exists():
|
225 |
+
with open(analytics_file, 'r') as f:
|
226 |
+
existing_data = json.load(f)
|
227 |
+
else:
|
228 |
+
existing_data = {"entries": []}
|
229 |
+
|
230 |
+
# Add new entry
|
231 |
+
entry = {
|
232 |
+
**analytics_data,
|
233 |
+
"timestamp": datetime.now().isoformat()
|
234 |
+
}
|
235 |
+
existing_data["entries"].append(entry)
|
236 |
+
|
237 |
+
# Save updated data
|
238 |
+
with open(analytics_file, 'w') as f:
|
239 |
+
json.dump(existing_data, f, indent=2)
|
240 |
+
|
241 |
+
return str(analytics_file)
|
242 |
+
|
243 |
+
def create_backup(self, backup_name: str = None) -> str:
|
244 |
+
"""Create backup of important data"""
|
245 |
+
if backup_name is None:
|
246 |
+
backup_name = f"backup_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
|
247 |
+
|
248 |
+
backup_path = self.paths["backups"] / f"{backup_name}.zip"
|
249 |
+
|
250 |
+
with zipfile.ZipFile(backup_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
|
251 |
+
# Backup audio outputs (recent ones)
|
252 |
+
for audio_file in self.paths["audio_outputs"].rglob("*.wav"):
|
253 |
+
# Only backup files from last 30 days
|
254 |
+
if (datetime.now().timestamp() - audio_file.stat().st_mtime) < (30 * 24 * 3600):
|
255 |
+
arcname = str(audio_file.relative_to(self.base_path))
|
256 |
+
zipf.write(audio_file, arcname)
|
257 |
+
|
258 |
+
# Backup voice samples
|
259 |
+
for voice_file in self.paths["voice_samples"].rglob("*"):
|
260 |
+
if voice_file.is_file():
|
261 |
+
arcname = str(voice_file.relative_to(self.base_path))
|
262 |
+
zipf.write(voice_file, arcname)
|
263 |
+
|
264 |
+
# Backup analytics
|
265 |
+
for analytics_file in self.paths["analytics"].rglob("*.json"):
|
266 |
+
arcname = str(analytics_file.relative_to(self.base_path))
|
267 |
+
zipf.write(analytics_file, arcname)
|
268 |
+
|
269 |
+
return str(backup_path)
|
270 |
+
|
271 |
+
def get_storage_stats(self) -> StorageStats:
|
272 |
+
"""Get storage usage statistics"""
|
273 |
+
total_size = 20 * 1024 * 1024 * 1024 # 20GB in bytes
|
274 |
+
used_size = self._get_directory_size(self.base_path)
|
275 |
+
|
276 |
+
# Count files
|
277 |
+
audio_files = len(list(self.paths["audio_outputs"].rglob("*.wav")))
|
278 |
+
total_files = len(list(self.base_path.rglob("*")))
|
279 |
+
model_dirs = len(list(self.paths["models_cache"].iterdir()))
|
280 |
+
|
281 |
+
return StorageStats(
|
282 |
+
total_space=total_size,
|
283 |
+
used_space=used_size,
|
284 |
+
free_space=total_size - used_size,
|
285 |
+
num_files=total_files,
|
286 |
+
num_audio_files=audio_files,
|
287 |
+
num_models=model_dirs
|
288 |
+
)
|
289 |
+
|
290 |
+
def cleanup_old_files(self, days: int = 30):
|
291 |
+
"""Clean up files older than specified days"""
|
292 |
+
cutoff_time = datetime.now().timestamp() - (days * 24 * 3600)
|
293 |
+
cleaned_files = []
|
294 |
+
|
295 |
+
for file_path in self.base_path.rglob("*"):
|
296 |
+
if file_path.is_file() and file_path.stat().st_mtime < cutoff_time:
|
297 |
+
# Don't delete model cache or voice samples
|
298 |
+
if "models_cache" not in str(file_path) and "voice_samples" not in str(file_path):
|
299 |
+
file_path.unlink()
|
300 |
+
cleaned_files.append(str(file_path))
|
301 |
+
|
302 |
+
return cleaned_files
|
303 |
+
|
304 |
+
def _get_directory_size(self, directory: Path) -> int:
|
305 |
+
"""Get total size of directory"""
|
306 |
+
total_size = 0
|
307 |
+
for file_path in directory.rglob("*"):
|
308 |
+
if file_path.is_file():
|
309 |
+
total_size += file_path.stat().st_size
|
310 |
+
return total_size
|
311 |
+
|
312 |
+
def list_saved_outputs(self, user_id: str = None, limit: int = 50) -> List[Dict]:
|
313 |
+
"""List saved audio outputs with metadata"""
|
314 |
+
outputs = []
|
315 |
+
|
316 |
+
search_path = self.paths["audio_outputs"]
|
317 |
+
if user_id:
|
318 |
+
search_path = search_path / user_id
|
319 |
+
if not search_path.exists():
|
320 |
+
return outputs
|
321 |
+
|
322 |
+
# Find audio files and their metadata
|
323 |
+
for audio_file in search_path.rglob("*.wav"):
|
324 |
+
metadata_file = audio_file.with_suffix(".wav.json")
|
325 |
+
if metadata_file.exists():
|
326 |
+
try:
|
327 |
+
with open(metadata_file, 'r') as f:
|
328 |
+
metadata = json.load(f)
|
329 |
+
|
330 |
+
outputs.append({
|
331 |
+
"file_path": str(audio_file),
|
332 |
+
"metadata": metadata,
|
333 |
+
"size": audio_file.stat().st_size,
|
334 |
+
"created": datetime.fromtimestamp(audio_file.stat().st_ctime).isoformat()
|
335 |
+
})
|
336 |
+
except Exception as e:
|
337 |
+
print(f"Error reading metadata for {audio_file}: {e}")
|
338 |
+
|
339 |
+
# Sort by creation time (newest first) and limit results
|
340 |
+
outputs.sort(key=lambda x: x["created"], reverse=True)
|
341 |
+
return outputs[:limit]
|
342 |
+
|
343 |
+
# Global storage manager instance
|
344 |
+
storage_manager = PersistentStorageManager()
|
@@ -0,0 +1,210 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
# Generated by Copilot
|
2 |
+
"""
|
3 |
+
Programmatic Webhook Management for TTS Project
|
4 |
+
Creates and manages Hugging Face webhooks automatically
|
5 |
+
"""
|
6 |
+
|
7 |
+
import os
|
8 |
+
import json
|
9 |
+
import asyncio
|
10 |
+
from typing import List, Dict, Optional
|
11 |
+
from huggingface_hub import HfApi, HfFolder
|
12 |
+
import requests
|
13 |
+
|
14 |
+
class WebhookManager:
|
15 |
+
"""Manages Hugging Face webhooks programmatically"""
|
16 |
+
|
17 |
+
def __init__(self, token: Optional[str] = None):
|
18 |
+
self.api = HfApi(token=token)
|
19 |
+
self.token = token or HfFolder.get_token()
|
20 |
+
self.base_url = "https://huggingface.co/api/webhooks"
|
21 |
+
self.space_url = "https://toowired-text2speech-gradio-app.hf.space"
|
22 |
+
|
23 |
+
def get_headers(self) -> Dict[str, str]:
|
24 |
+
"""Get API headers with authentication"""
|
25 |
+
return {
|
26 |
+
"Authorization": f"Bearer {self.token}",
|
27 |
+
"Content-Type": "application/json"
|
28 |
+
}
|
29 |
+
|
30 |
+
def create_webhook(self,
|
31 |
+
endpoint: str,
|
32 |
+
name: str,
|
33 |
+
secret: str,
|
34 |
+
events: List[str],
|
35 |
+
target_repos: List[str],
|
36 |
+
description: str = "") -> Dict:
|
37 |
+
"""Create a new webhook programmatically"""
|
38 |
+
|
39 |
+
webhook_data = {
|
40 |
+
"url": f"{self.space_url}/webhooks/{endpoint}",
|
41 |
+
"name": name,
|
42 |
+
"secret": secret,
|
43 |
+
"events": events,
|
44 |
+
"repos": target_repos,
|
45 |
+
"description": description,
|
46 |
+
"active": True
|
47 |
+
}
|
48 |
+
|
49 |
+
try:
|
50 |
+
response = requests.post(
|
51 |
+
self.base_url,
|
52 |
+
headers=self.get_headers(),
|
53 |
+
json=webhook_data
|
54 |
+
)
|
55 |
+
|
56 |
+
if response.status_code == 201:
|
57 |
+
print(f"✅ Created webhook: {name}")
|
58 |
+
return response.json()
|
59 |
+
else:
|
60 |
+
print(f"❌ Failed to create webhook {name}: {response.status_code}")
|
61 |
+
print(f"Response: {response.text}")
|
62 |
+
return {"error": response.text}
|
63 |
+
|
64 |
+
except Exception as e:
|
65 |
+
print(f"❌ Error creating webhook {name}: {e}")
|
66 |
+
return {"error": str(e)}
|
67 |
+
|
68 |
+
def list_webhooks(self) -> List[Dict]:
|
69 |
+
"""List all existing webhooks"""
|
70 |
+
try:
|
71 |
+
response = requests.get(
|
72 |
+
self.base_url,
|
73 |
+
headers=self.get_headers()
|
74 |
+
)
|
75 |
+
|
76 |
+
if response.status_code == 200:
|
77 |
+
return response.json()
|
78 |
+
else:
|
79 |
+
print(f"❌ Failed to list webhooks: {response.status_code}")
|
80 |
+
return []
|
81 |
+
|
82 |
+
except Exception as e:
|
83 |
+
print(f"❌ Error listing webhooks: {e}")
|
84 |
+
return []
|
85 |
+
|
86 |
+
def delete_webhook(self, webhook_id: str) -> bool:
|
87 |
+
"""Delete a webhook by ID"""
|
88 |
+
try:
|
89 |
+
response = requests.delete(
|
90 |
+
f"{self.base_url}/{webhook_id}",
|
91 |
+
headers=self.get_headers()
|
92 |
+
)
|
93 |
+
|
94 |
+
if response.status_code == 204:
|
95 |
+
print(f"✅ Deleted webhook: {webhook_id}")
|
96 |
+
return True
|
97 |
+
else:
|
98 |
+
print(f"❌ Failed to delete webhook {webhook_id}: {response.status_code}")
|
99 |
+
return False
|
100 |
+
|
101 |
+
except Exception as e:
|
102 |
+
print(f"❌ Error deleting webhook {webhook_id}: {e}")
|
103 |
+
return False
|
104 |
+
|
105 |
+
def setup_tts_webhooks(self) -> Dict[str, Dict]:
|
106 |
+
"""Set up all TTS project webhooks automatically"""
|
107 |
+
|
108 |
+
webhook_secret = "tts_webhook_secret_2024"
|
109 |
+
target_repos = [
|
110 |
+
"Toowired/text2speech-gradio-app",
|
111 |
+
# Add any model repos you want to monitor
|
112 |
+
# "microsoft/speecht5_tts",
|
113 |
+
# "suno/bark",
|
114 |
+
]
|
115 |
+
|
116 |
+
webhooks_config = {
|
117 |
+
"main_automation": {
|
118 |
+
"endpoint": "tts_automation",
|
119 |
+
"name": "TTS Main Automation",
|
120 |
+
"description": "Main automation webhook for TTS project",
|
121 |
+
"events": [
|
122 |
+
"repo.content.update",
|
123 |
+
"repo.content.create",
|
124 |
+
"space.runtime.restart",
|
125 |
+
"discussion.create",
|
126 |
+
"discussion.comment.create"
|
127 |
+
]
|
128 |
+
},
|
129 |
+
"model_sync": {
|
130 |
+
"endpoint": "model_sync",
|
131 |
+
"name": "TTS Model Synchronization",
|
132 |
+
"description": "Automatically sync new TTS models",
|
133 |
+
"events": [
|
134 |
+
"repo.create",
|
135 |
+
"repo.content.update",
|
136 |
+
"model.create"
|
137 |
+
]
|
138 |
+
},
|
139 |
+
"usage_tracker": {
|
140 |
+
"endpoint": "usage_tracker",
|
141 |
+
"name": "TTS Usage Analytics",
|
142 |
+
"description": "Track usage patterns and performance",
|
143 |
+
"events": [
|
144 |
+
"space.runtime.start",
|
145 |
+
"space.runtime.stop",
|
146 |
+
"space.runtime.restart"
|
147 |
+
]
|
148 |
+
},
|
149 |
+
"error_monitor": {
|
150 |
+
"endpoint": "error_monitor",
|
151 |
+
"name": "TTS Error Monitoring",
|
152 |
+
"description": "Monitor for deployment errors and issues",
|
153 |
+
"events": [
|
154 |
+
"space.runtime.failed",
|
155 |
+
"space.build.failed",
|
156 |
+
"repo.content.failed"
|
157 |
+
]
|
158 |
+
}
|
159 |
+
}
|
160 |
+
|
161 |
+
results = {}
|
162 |
+
|
163 |
+
for webhook_key, config in webhooks_config.items():
|
164 |
+
result = self.create_webhook(
|
165 |
+
endpoint=config["endpoint"],
|
166 |
+
name=config["name"],
|
167 |
+
secret=webhook_secret,
|
168 |
+
events=config["events"],
|
169 |
+
target_repos=target_repos,
|
170 |
+
description=config["description"]
|
171 |
+
)
|
172 |
+
results[webhook_key] = result
|
173 |
+
|
174 |
+
return results
|
175 |
+
|
176 |
+
def cleanup_old_webhooks(self, name_pattern: str = "TTS"):
|
177 |
+
"""Remove old TTS webhooks to avoid duplicates"""
|
178 |
+
webhooks = self.list_webhooks()
|
179 |
+
|
180 |
+
for webhook in webhooks:
|
181 |
+
if name_pattern in webhook.get("name", ""):
|
182 |
+
print(f"🗑️ Removing old webhook: {webhook['name']}")
|
183 |
+
self.delete_webhook(webhook["id"])
|
184 |
+
|
185 |
+
def setup_webhooks_programmatically():
|
186 |
+
"""Main function to set up webhooks"""
|
187 |
+
print("🔗 Setting up TTS webhooks programmatically...")
|
188 |
+
|
189 |
+
manager = WebhookManager()
|
190 |
+
|
191 |
+
# Clean up old webhooks first
|
192 |
+
print("🗑️ Cleaning up old webhooks...")
|
193 |
+
manager.cleanup_old_webhooks("TTS")
|
194 |
+
|
195 |
+
# Create new webhooks
|
196 |
+
print("🆕 Creating new webhooks...")
|
197 |
+
results = manager.setup_tts_webhooks()
|
198 |
+
|
199 |
+
# Show results
|
200 |
+
print("\n📊 Webhook Setup Results:")
|
201 |
+
for webhook_name, result in results.items():
|
202 |
+
if "error" in result:
|
203 |
+
print(f"❌ {webhook_name}: {result['error']}")
|
204 |
+
else:
|
205 |
+
print(f"✅ {webhook_name}: Created successfully")
|
206 |
+
|
207 |
+
return results
|
208 |
+
|
209 |
+
if __name__ == "__main__":
|
210 |
+
setup_webhooks_programmatically()
|