__init__.py (1.11 kB)

"""Pydantic models for MCP Server Whisper."""

from .audio import (
    CompressAudioInputParams,
    ConvertAudioInputParams,
    FilePathSupportParams,
    ListAudioFilesInputParams,
)
from .base import BaseAudioInputParams, BaseInputPath
from .responses import AudioProcessingResult, ChatResult, TranscriptionResult, TTSResult
from .transcription import (
    ChatWithAudioInputParams,
    TranscribeAudioInputParams,
    TranscribeAudioInputParamsBase,
    TranscribeWithEnhancementInputParams,
)
from .tts import CreateClaudecastInputParams

__all__ = [
    # Base models
    "BaseInputPath",
    "BaseAudioInputParams",
    # Audio models
    "ConvertAudioInputParams",
    "CompressAudioInputParams",
    "FilePathSupportParams",
    "ListAudioFilesInputParams",
    # Transcription models
    "TranscribeAudioInputParamsBase",
    "TranscribeAudioInputParams",
    "ChatWithAudioInputParams",
    "TranscribeWithEnhancementInputParams",
    # TTS models
    "CreateClaudecastInputParams",
    # Response models
    "AudioProcessingResult",
    "TranscriptionResult",
    "ChatResult",
    "TTSResult",
]


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/arcaputo3/mcp-server-whisper'
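
The same endpoint can be queried from code. A minimal Python sketch using only the standard library; the response schema is not documented here, so it simply prints the parsed JSON:

import json
import urllib.request

# Fetch this server's entry from the Glama MCP directory API.
url = "https://glama.ai/api/mcp/v1/servers/arcaputo3/mcp-server-whisper"
with urllib.request.urlopen(url) as response:
    data = json.load(response)

# The exact response fields are not documented here; dump whatever is returned.
print(json.dumps(data, indent=2))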

If you have feedback or need assistance with the MCP directory API, please join our Discord server.