
Windows TTS MCP Server

by balloonf

speak

Convert text to speech using Windows' built-in Speech API. Control playback, adjust speed, and manage volume for clear audio output directly from your desktop.

Instructions

Reads text aloud as speech. (Original: 텍스트를 음성으로 읽어줍니다)

Input Schema

Name    Required    Description    Default
text    Yes         (none)         (none)

Implementation Reference

  • The handler function for the 'speak' tool. Decorated with @mcp.tool() which registers it in the MCP server. Splits long text into chunks and spawns a background thread to speak each chunk using the powershell_tts helper.
    @mcp.tool()
    def speak(text: str) -> str:
        """Reads the given text aloud."""
        try:
            # Split the text into TTS-sized chunks
            text_chunks = split_text_for_tts(text, 500)
            total_chunks = len(text_chunks)
            
            def _speak_thread():
                for i, chunk in enumerate(text_chunks, 1):
                    safe_print(f"[TTS] Playing part {i}/{total_chunks}: {chunk[:50]}...")
                    success = powershell_tts(chunk)
                    if not success:
                        safe_print(f"[RETRY] Retrying TTS: {chunk[:30]}...")
                        # Try once more
                        powershell_tts(chunk)
                    
                    # Short pause between chunks
                    if i < total_chunks:
                        time.sleep(0.5)
            
            # Run in the background
            thread = threading.Thread(target=_speak_thread, daemon=True)
            thread.start()
            
            if total_chunks > 1:
                return f"[START] Speech playback started ({total_chunks} parts): '{text[:50]}...'"
            else:
                return f"[START] Speech playback started: '{text[:50]}...'"
            
        except Exception as e:
            return f"[ERROR] Speech playback error: {str(e)}"
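The control flow above (split the text, then hand playback to a daemon thread so the tool returns immediately) can be sketched in isolation. This is a minimal illustration, not part of the server; speak_in_background and the speak_one callback are hypothetical names:

```python
import threading

def speak_in_background(chunks, speak_one):
    """Speak each chunk on a daemon thread, fire-and-forget style.
    Returns the thread immediately, before any audio has played."""
    def _worker():
        for chunk in chunks:
            speak_one(chunk)
    thread = threading.Thread(target=_worker, daemon=True)
    thread.start()
    return thread

# Demo with a stub "speaker" that just records what it was asked to say.
spoken = []
t = speak_in_background(["hello", "world"], spoken.append)
t.join()  # the real tool never joins; joined here only so the demo is deterministic
print(spoken)  # → ['hello', 'world']
```

One consequence of daemon=True is that any in-progress playback is cut off if the server process exits.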
  • Core helper function that runs the actual TTS via PowerShell, invoking System.Speech.Synthesis.SpeechSynthesizer.Speak(). Handles process management, escaping, timeouts, and errors.
    def powershell_tts(text: str, rate: int = 0, volume: int = 100) -> bool:
        """Run TTS via PowerShell."""
        process = None
        try:
            if platform.system() != "Windows":
                safe_print("[ERROR] Not running on Windows")
                return False
            
            # Escape single quotes for the PowerShell single-quoted string
            escaped_text = text.replace("'", "''")
            
            # PowerShell TTS command
            cmd = [
                "powershell", "-Command",
                f"Add-Type -AssemblyName System.Speech; "
                f"$synth = New-Object System.Speech.Synthesis.SpeechSynthesizer; "
                f"$synth.Rate = {rate}; "
                f"$synth.Volume = {volume}; "
                f"$synth.Speak('{escaped_text}'); "
                f"$synth.Dispose()"
            ]
            
            # Start the process
            process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
            
            # Track it in the list of running processes
            with process_lock:
                running_processes.append(process)
            
            # Wait for the process to finish
            stdout, stderr = process.communicate(timeout=180)
            
            # Remove it from the running-process list
            with process_lock:
                if process in running_processes:
                    running_processes.remove(process)
            
            if process.returncode == 0:
                safe_print(f"[SUCCESS] TTS finished: {text[:30]}...")
                return True
            else:
                safe_print(f"[ERROR] TTS error: {stderr}")
                return False
                
        except subprocess.TimeoutExpired:
            safe_print("[WARNING] TTS timed out")
            if process:
                process.kill()
                with process_lock:
                    if process in running_processes:
                        running_processes.remove(process)
            return False
        except Exception as e:
            safe_print(f"[ERROR] TTS exception: {e}")
            if process:
                try:
                    process.kill()
                    with process_lock:
                        if process in running_processes:
                            running_processes.remove(process)
                except Exception:
                    pass
            return False
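The only escaping powershell_tts performs is doubling single quotes, which is how a literal quote is written inside a PowerShell single-quoted string. A standalone sketch (escape_for_powershell is an illustrative name, not part of the server):

```python
def escape_for_powershell(text: str) -> str:
    # In a PowerShell single-quoted string, '' denotes a literal single
    # quote; no other character needs escaping in this quoting mode.
    return text.replace("'", "''")

escaped = escape_for_powershell("it's done")
command = "$synth.Speak('" + escaped + "')"
print(command)  # → $synth.Speak('it''s done')
```

Single-quoted strings are the safer choice here because PowerShell performs no variable expansion or sub-expression evaluation inside them.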
  • Helper function to intelligently split long text into smaller chunks suitable for TTS, prioritizing sentence boundaries.
    def split_text_for_tts(text: str, max_length: int = 500) -> list:
        """Split text into chunks of a suitable size for TTS."""
        if len(text) <= max_length:
            return [text]
        
        # First try to split at sentence boundaries
        import re
        sentences = re.split(r'[.!?。!?]\s*', text)
        
        chunks = []
        current_chunk = ""
        
        for sentence in sentences:
            # If a sentence is still too long, break it down further
            if len(sentence) > max_length:
                # Split on commas, other punctuation, or whitespace
                sub_parts = re.split(r'[,;:\s]\s*', sentence)
                for part in sub_parts:
                    if len(current_chunk + part) <= max_length:
                        current_chunk += part + " "
                    else:
                        if current_chunk.strip():
                            chunks.append(current_chunk.strip())
                        current_chunk = part + " "
            else:
                # Check whether the sentence fits in the current chunk
                if len(current_chunk + sentence) <= max_length:
                    current_chunk += sentence + ". "
                else:
                    if current_chunk.strip():
                        chunks.append(current_chunk.strip())
                    current_chunk = sentence + ". "
        
        # Append the final chunk
        if current_chunk.strip():
            chunks.append(current_chunk.strip())
        
        return chunks
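The sentence-boundary pattern used above can be checked in isolation. This small sketch (split_sentences is an illustrative helper, not part of the server) shows that the regex handles both ASCII and full-width terminators but discards the punctuation itself, which is why the splitter re-appends ". " when rebuilding chunks:

```python
import re

def split_sentences(text: str) -> list:
    # Same boundary pattern as split_text_for_tts: ASCII and full-width
    # sentence terminators, each followed by optional whitespace.
    return [s for s in re.split(r'[.!?。!?]\s*', text) if s]

print(split_sentences("Hello world. How are you? 끝。"))
# → ['Hello world', 'How are you', '끝']
```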
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool reads text aloud but does not describe any behavioral traits such as whether it interrupts ongoing speech, requires specific permissions, has rate limits, or what happens on invocation (e.g., does it play immediately or queue?). For a tool with no annotation coverage, this leaves significant gaps in understanding its operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—a single sentence that directly states the tool's function without any fluff. It is front-loaded and wastes no words, making it efficient for quick comprehension. Every part of the sentence earns its place by clearly conveying the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of having multiple sibling tools and no annotations or output schema, the description is incomplete. It does not address how this tool fits among alternatives, what behavioral outcomes to expect, or any error conditions. For a tool that likely involves audio output and potential system interactions, more context is needed to use it effectively without trial and error.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description implies a 'text' parameter but adds no meaning beyond what the input schema provides. With 0% schema description coverage, the schema defines 'text' only as a required string with no further detail, and the description does not compensate by explaining constraints (e.g., length limits, language support) or usage context. With only a single parameter the expectations are modest, but the description still fails to enhance understanding beyond the basic schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: converting text to speech ('텍스트를 음성으로 읽어줍니다' translates to 'Reads text aloud as speech'). It specifies the verb ('read') and resource ('text'), making the purpose unambiguous. However, it does not differentiate from siblings like speak_fast or speak_slow, which offer variations on the same core function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple sibling tools (e.g., speak_fast, speak_quiet, speak_short, speak_slow), there is no indication of how this default 'speak' differs in context, speed, volume, or other parameters. It lacks any mention of prerequisites, exclusions, or comparative use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
