
Windows TTS MCP Server

by balloonf

test_tts

Evaluate and test text-to-speech functionality using Windows' built-in Speech API, enabling playback control, speed, and volume adjustments for accurate system performance assessment.

Instructions

TTS 시스템 테스트 (TTS system test)

Input Schema

No arguments

Implementation Reference

  • The handler function for the 'test_tts' MCP tool. It tests the Windows TTS system by playing a test message using powershell_tts in a background thread and returns a status message.
    import platform
    import threading

    # powershell_tts and safe_print are helpers defined elsewhere in the server.

    @mcp.tool()
    def test_tts() -> str:
        """TTS 시스템 테스트"""  # "TTS system test"
        try:
            if platform.system() != "Windows":
                # "This TTS server only works on Windows"
                return "[ERROR] 이 TTS 서버는 Windows에서만 작동합니다"
            
            # "This is a Windows TTS MCP server test"
            test_text = "Windows TTS MCP 서버 테스트입니다"
            
            def _test():
                # Runs in a background thread so the tool can return immediately;
                # the result is only visible in the server log, not to the caller.
                success = powershell_tts(test_text)
                if success:
                    safe_print("[SUCCESS] TTS 테스트 성공")  # "TTS test succeeded"
                else:
                    safe_print("[ERROR] TTS 테스트 실패")    # "TTS test failed"
            
            thread = threading.Thread(target=_test, daemon=True)
            thread.start()
            
            return "[TEST] TTS 테스트를 시작했습니다"  # "TTS test started"
            
        except Exception as e:
            return f"[ERROR] TTS 테스트 오류: {str(e)}"  # "TTS test error"
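The handler is fire-and-forget: it returns a status string before playback finishes, and the final result is reported only via safe_print on the server side. A minimal, platform-independent sketch of that pattern, where fake_tts is a hypothetical stand-in for powershell_tts:

```python
import threading
import time

results: list[bool] = []

def fake_tts(text: str) -> bool:
    # Hypothetical stand-in for powershell_tts: pretend playback takes 0.1 s.
    time.sleep(0.1)
    return True

def test_tts() -> str:
    # Same shape as the handler above: start playback in a daemon thread
    # and return a status string immediately.
    def _test():
        results.append(fake_tts("test message"))
    threading.Thread(target=_test, daemon=True).start()
    return "[TEST] started"

status = test_tts()
print(status)    # the caller sees this before playback completes
print(results)   # [] -- the background thread has not reported yet
time.sleep(0.3)
print(results)   # [True] once the thread finishes
```

The caller therefore cannot distinguish a successful test from a failed one from the return value alone, which is the asynchrony the review below takes issue with.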
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. 'TTS 시스템 테스트' suggests a read-only diagnostic operation, but it doesn't specify whether this test is destructive (e.g., interrupts current speech), has side effects (e.g., generates audio output), requires specific permissions, or provides detailed results. The description lacks critical behavioral context for a tool in a TTS system with control siblings like 'kill_all_tts' and 'stop_speech.'

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
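The MCP specification defines a set of advisory annotation hints that could carry this disclosure. A sketch of what a candid test_tts might declare, shown as a plain dict; the field names come from the spec's ToolAnnotations, but the values are illustrative, not the server's actual metadata:

```python
# Advisory annotation hints from the MCP specification (ToolAnnotations).
# Values are what a candid test_tts might declare; the server ships none.
test_tts_annotations = {
    "title": "Test TTS",
    "readOnlyHint": False,     # speaking aloud is a side effect
    "destructiveHint": False,  # illustrative; the description never says
                               # whether the test interrupts ongoing speech
    "idempotentHint": True,    # repeated calls just replay the same phrase
    "openWorldHint": False,    # touches only the local machine
}
print(sorted(test_tts_annotations))
```

All of these are hints, not guarantees, but even hints would tell an agent that calling this tool produces audible output.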

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise ('TTS 시스템 테스트'), which is efficient for a simple tool. However, it's under-specified rather than appropriately sized—it lacks necessary detail about what the test entails. While front-loaded, it doesn't earn its place by providing sufficient value beyond the tool name. Conciseness here borders on inadequacy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a TTS system with multiple control tools (e.g., 'kill_all_tts', 'speak'), no annotations, no output schema, and 0 parameters, the description is incomplete. It doesn't explain what the test does, what it returns, or how it interacts with other tools. For a diagnostic tool in this context, more information is needed to understand its role and outcomes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100% (though trivial since there are no parameters). The description doesn't need to add parameter semantics, as there are none to document. A baseline of 4 is appropriate for a parameterless tool, as the description cannot compensate for missing parameter info that doesn't exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'TTS 시스템 테스트' (TTS system test) is a tautology that essentially restates the tool name 'test_tts' in Korean. While it indicates this is a testing operation related to TTS, it doesn't specify what the test actually does (e.g., validates functionality, checks quality, runs diagnostics) or what resource it tests. It distinguishes minimally from siblings by focusing on testing rather than speaking or controlling TTS, but remains vague.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_tts_status' (for checking status) or 'speak' (for actual TTS output). There's no indication of prerequisites, expected outcomes, or scenarios where testing is appropriate (e.g., after configuration changes, during troubleshooting). Usage is implied only by the word 'test,' which is insufficient for clear decision-making.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
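Taken together, most of these critiques could be closed by a fuller docstring alone. A hypothetical rewrite (not the server's actual code; the sibling tool names are taken from the review above):

```python
def test_tts() -> str:
    """Play a short Korean test phrase through the Windows Speech API to
    verify that TTS works end to end.

    Side effects: produces audible speech on the host machine.
    Platform: Windows only; returns an [ERROR] string elsewhere.
    Returns immediately with a "[TEST] ..." status while playback runs in
    a background thread; the final [SUCCESS]/[ERROR] result appears only
    in the server log, not in the return value. Use speak for real speech
    output and get_tts_status to inspect engine state.
    """
    ...
```

This version names the side effect, the platform requirement, the asynchronous return behavior, and the neighboring tools an agent might otherwise reach for.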
