
get_icecast_best_practices

Get tailored Icecast configuration recommendations based on listener count: small (under 50), medium (50-500), or large (500+). Optimize streaming server performance and reliability for your audience size.

Instructions

Get general best practices and recommendations for Icecast configuration based on use case

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| useCase | Yes | Use case: 'small' (< 50 listeners), 'medium' (50-500 listeners), 'large' (500+ listeners) | (none) |
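For illustration, a call to this tool carries a single `useCase` argument. Below is a minimal sketch of the request payload, assuming the standard MCP `tools/call` JSON-RPC envelope (the envelope field names come from the MCP specification, not from this page):

```typescript
// Sketch of a tools/call request for this tool. Only "useCase" is
// defined by the tool's input schema; the surrounding envelope
// (jsonrpc, id, method, params) follows the standard MCP shape.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_icecast_best_practices",
    // useCase must be one of the schema's enum values
    arguments: { useCase: "medium" as string },
  },
};

// The schema restricts useCase to exactly these three values.
const allowedUseCases = ["small", "medium", "large"];
const isValidRequest = allowedUseCases.includes(request.params.arguments.useCase);
```

Any other `useCase` value would fail the schema's enum constraint before the handler's own fallback message is ever reached.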

Implementation Reference

  • src/index.ts:340-354 (registration)
    Tool registration in the ListToolsRequestHandler, defining the tool name, description, and input schema with a 'useCase' enum parameter.
```typescript
{
  name: "get_icecast_best_practices",
  description: "Get general best practices and recommendations for Icecast configuration based on use case",
  inputSchema: {
    type: "object",
    properties: {
      useCase: {
        type: "string",
        description: "Use case: 'small' (< 50 listeners), 'medium' (50-500 listeners), 'large' (500+ listeners)",
        enum: ["small", "medium", "large"],
      },
    },
    required: ["useCase"],
  },
},
```
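Because the schema marks `useCase` as both required and enumerated, a handler can narrow the raw argument to a union type before use. The helper below is a hypothetical sketch, not part of the server's actual source:

```typescript
// Hypothetical helper mirroring the inputSchema enum above; not part
// of the actual server code, which casts the argument directly.
type UseCase = "small" | "medium" | "large";

function parseUseCase(value: unknown): UseCase {
  if (value === "small" || value === "medium" || value === "large") {
    return value;
  }
  throw new Error(`Invalid useCase: ${String(value)}. Use: small, medium, or large`);
}
```

Narrowing up front turns the server's silent string fallback into an explicit, typed failure.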
  • Handler logic that receives the useCase argument and returns pre-defined best practices markdown content for small, medium, or large Icecast deployments.
```typescript
if (name === "get_icecast_best_practices") {
  const useCase = args.useCase as string;

  const practices: Record<string, string> = {
    small: `# Best Practices for Small Streams (< 50 listeners)

## Limits
- clients: 64-128
- sources: 4
- queue-size: 524288 (512KB)
- burst-size: 65535 (64KB)
- threadpool: 4-8

## Security
- Always set source-password and admin-password
- Change admin username from default 'admin'
- Use strong passwords (16+ characters)

## Mount Points
- Configure explicit mount point with metadata
- Set fallback mount for reliability
- Consider dump-file for recording

## Performance
- Keep log level at 3 (info) or lower
- Enable log archiving
- Monitor log files regularly

## Behind Reverse Proxy
- Set use-x-forwarded-for to 1
- Configure hostname to your domain
- Let proxy handle SSL/TLS`,

    medium: `# Best Practices for Medium Streams (50-500 listeners)

## Limits
- clients: 256-512
- sources: 8
- queue-size: 1048576 (1MB)
- burst-size: 131072 (128KB)
- threadpool: 16-32

## Security
- Use strong unique passwords
- Consider IP-based restrictions for admin
- Enable relay authentication if using relays
- Regular password rotation

## Mount Points
- Multiple mount points for different bitrates
- Fallback mounts configured
- Consider on-demand relays for scaling

## Performance
- Monitor resource usage
- Consider multiple listen-sockets if needed
- Use appropriate timeouts (client: 30s)
- Enable burst-on-connect for better UX

## Reliability
- Set up monitoring/alerts
- Regular log analysis
- Consider backup stream source

## Behind Reverse Proxy
- use-x-forwarded-for: 1
- Proper hostname configuration
- Load balancing if needed`,

    large: `# Best Practices for Large Streams (500+ listeners)

## Limits
- clients: 1024-2048+
- sources: 16+
- queue-size: 2097152+ (2MB+)
- burst-size: 262144+ (256KB+)
- threadpool: 32-64

## Security
- Strict authentication on all endpoints
- IP whitelisting for admin access
- Separate relay passwords
- Regular security audits

## Architecture
- Multiple Icecast instances with load balancing
- Relay/edge servers for geographic distribution
- Dedicated source server
- CDN integration consideration

## Mount Points
- Multiple bitrate options
- Separate mobile/desktop streams
- Fallback chain configured
- Metadata management system

## Performance
- Dedicated hardware/VMs
- Network bandwidth monitoring
- Multiple listen-sockets on different IPs
- Optimized timeouts
- Minimal logging in production

## Monitoring
- Real-time listener analytics
- Resource monitoring (CPU, RAM, bandwidth)
- Automated alerting
- Log aggregation

## Reliability
- Redundant source connections
- Automated failover
- Geographic redundancy
- Regular backup testing

## Behind Reverse Proxy/CDN
- use-x-forwarded-for: 1
- Proper hostname for directory listings
- Consider HLS/DASH for better scaling
- Cache static content aggressively`,
  };

  const content = practices[useCase] || "Invalid use case. Use: small, medium, or large";

  return {
    content: [
      {
        type: "text",
        text: content,
      },
    ],
  };
}
```
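On the client side, the result's `content` array can be reduced to its markdown text. The sketch below assumes only the `{ content: [{ type, text }] }` result shape returned by the handler above; the `firstText` helper is hypothetical:

```typescript
// Hypothetical client-side helper; assumes only the result shape
// shown in the handler above, not any wider MCP client API.
type ToolResult = { content: { type: string; text: string }[] };

function firstText(result: ToolResult): string {
  const block = result.content.find((c) => c.type === "text");
  return block ? block.text : "";
}

// Example result mirroring the handler's shape for the "small" use case.
const exampleResult: ToolResult = {
  content: [
    { type: "text", text: "# Best Practices for Small Streams (< 50 listeners)" },
  ],
};
```

Note that invalid input produces the same `content` shape with an error string as its text, so callers who need to detect failure must inspect the text itself.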
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of disclosing behavioral traits. It only states the function is to 'get' recommendations, which implies a read-only operation, but does not detail any side effects, data source, or operational constraints. The description adds minimal value beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence with no redundancy. It is front-loaded but very brief. While efficient, it could benefit from a slight expansion to include behavioral or usage context without sacrificing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has one parameter, no output schema, and no annotations, the description provides the minimum viable information. It explains what the tool does but does not specify the format or structure of the returned recommendations, leaving an agent to infer the output shape on its own.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with a detailed description for the 'useCase' parameter including enum values and listener ranges. The tool description ('based on use case') does not add meaning beyond the schema, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get'), the resource ('general best practices and recommendations for Icecast configuration'), and the condition ('based on use case'). It effectively distinguishes from the sibling tool 'analyze_icecast_config', which implies a focus on specific config analysis versus general recommendations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for obtaining best practices by use case but does not explicitly state when to use this tool versus its sibling or alternatives. No 'when not to use' guidance is provided, leaving the decision to the agent's inference.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
