
Quake Coding Arena MCP

by Ripnrip

set_enhanced_volume

Adjust global achievement sound volume in Quake Coding Arena MCP to control audio feedback for coding milestones. Set volume from 0-100 to customize your development environment experience.

Instructions

🔊 Adjust the global soundboard volume for all achievement sounds. This setting persists for the session and affects all subsequent audio playback until changed. Volume range is 0-100, where 0 is silent and 100 is maximum volume.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| volume | Yes | 🔊 Volume level (0-100). 0 = silent, 100 = maximum volume. This setting applies to all achievement sounds until changed. Examples: 50, 75, 80, 100 | 80 |
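For illustration, a client could pre-check arguments against the published constraint (0-100 inclusive) before calling the tool. The sketch below assumes nothing beyond the schema above; `isValidVolume` is a hypothetical helper, not part of the server:

```typescript
// Hypothetical client-side pre-check mirroring the schema's 0-100 bound;
// the server enforces the same range via its Zod validator.
function isValidVolume(volume: number): boolean {
  return Number.isFinite(volume) && volume >= 0 && volume <= 100;
}

console.log(isValidVolume(80));  // true: within range
console.log(isValidVolume(150)); // false: above maximum
```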

Implementation Reference

  • The core handler function for the 'set_enhanced_volume' tool. It takes a volume parameter, updates the global enhancedStats.volume, and returns a text response confirming the change along with the new volume value.
    async ({ volume }) => {
        enhancedStats.volume = volume;
    
        return {
            content: [{
                type: "text",
                text: `🔊 Enhanced volume set to ${volume}%`
            }],
            volume: enhancedStats.volume
        };
    }
  • Registration of the 'set_enhanced_volume' tool on the MCP server, including detailed description, Zod-based input schema for volume (0-100), annotations, and the handler function.
    server.registerTool(
        "set_enhanced_volume",
        {
            description: "🔊 Adjust the global soundboard volume for all achievement sounds. This setting persists for the session and affects all subsequent audio playback until changed. Volume range is 0-100, where 0 is silent and 100 is maximum volume.",
            inputSchema: {
                volume: z.number().min(0).max(100).describe("🔊 Volume level (0-100). 0 = silent, 100 = maximum volume. Default is 80. This setting applies to all achievement sounds until changed. Examples: 50, 75, 80, 100"),
            },
            annotations: {
                title: "🔊 Set Volume",
                readOnlyHint: false,
                destructiveHint: false,
                idempotentHint: false,
                openWorldHint: false
            }
        },
        async ({ volume }) => {
            enhancedStats.volume = volume;
    
            return {
                content: [{
                    type: "text",
                    text: `🔊 Enhanced volume set to ${volume}%`
                }],
                volume: enhancedStats.volume
            };
        }
    );
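Run in isolation, the handler above behaves as follows. This is a self-contained sketch: `enhancedStats` is stubbed locally here, whereas in the real server it is module-level state shared with the other tools:

```typescript
// Self-contained sketch of the handler's behavior; enhancedStats is
// stubbed locally (the real one is module-level server state).
const enhancedStats = { volume: 80 };

async function setEnhancedVolume({ volume }: { volume: number }) {
  enhancedStats.volume = volume;
  return {
    content: [{ type: "text", text: `🔊 Enhanced volume set to ${volume}%` }],
    volume: enhancedStats.volume,
  };
}

setEnhancedVolume({ volume: 50 }).then((result) => {
  console.log(result.content[0].text); // "🔊 Enhanced volume set to 50%"
  console.log(enhancedStats.volume);   // 50
});
```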
  • The input schema definition using Zod for validating the 'volume' parameter as a number between 0 and 100.
    inputSchema: {
        volume: z.number().min(0).max(100).describe("🔊 Volume level (0-100). 0 = silent, 100 = maximum volume. Default is 80. This setting applies to all achievement sounds until changed. Examples: 50, 75, 80, 100"),
    },
  • src/index.ts:57-57 (registration)
    Call to registerSettingsTools in the main server setup, which includes the set_enhanced_volume tool registration.
    registerSettingsTools(server);
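The wiring can be illustrated with a minimal mock. The `ToolServer` interface and the stub below are hypothetical stand-ins for the MCP SDK's server type, shown only to make the registration pattern concrete:

```typescript
// Hypothetical mock of the registration pattern: registerSettingsTools
// receives a server and registers each settings tool on it by name.
interface ToolServer {
  registerTool(
    name: string,
    meta: Record<string, unknown>,
    handler: (args: { volume: number }) => Promise<unknown>
  ): void;
}

function registerSettingsTools(server: ToolServer): void {
  server.registerTool("set_enhanced_volume", {}, async ({ volume }) => ({ volume }));
}

// Tiny in-memory stand-in to show the call wiring:
const registered: string[] = [];
const fakeServer: ToolServer = {
  registerTool(name) {
    registered.push(name);
  },
};
registerSettingsTools(fakeServer);
console.log(registered); // ["set_enhanced_volume"]
```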
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a non-readOnly, non-destructive operation. The description adds valuable behavioral context beyond annotations: it specifies that the setting persists for the session, affects all achievement sounds globally, and provides the volume range semantics (0-100 with silent/maximum definitions). However, it doesn't mention potential side effects like audio interruption or performance impact.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly constructed sentences with zero waste: first states purpose, second explains persistence and scope, third defines range. Every sentence earns its place by adding distinct information. The description is appropriately sized and front-loaded with the core function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter mutation tool with good annotations but no output schema, the description provides solid context about what the tool does and its behavioral impact. It could be more complete by mentioning what happens when volume is set (e.g., immediate effect on current playback) or error conditions, but covers the essential scope and persistence aspects well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents the 'volume' parameter with range, default, and examples. The description adds minimal extra meaning by repeating the range and silent/maximum definitions, but doesn't provide additional context beyond what's in the schema. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Adjust the global soundboard volume'), the resource ('for all achievement sounds'), and distinguishes this from siblings by focusing on volume control rather than retrieval or playback functions. It goes beyond the title 'Set Volume' to explain what is being set.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use it ('affects all subsequent audio playback until changed'), but doesn't explicitly mention when not to use it or name alternatives like 'set_voice_pack' for different audio settings. It implies usage for volume adjustment but lacks explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
