Glama

analyze_master_loudness

Measure the integrated, short-term, and momentary loudness of your master mix within a specified time range using a non-destructive render. Returns LUFS and true peak values for audio mastering.

Instructions

Measure the loudness of the full master mix over a time range using a non-destructive dry-run render (action 42441). No tracks or files are created. Returns:

  • lufs_i: integrated loudness in LUFS

  • lufs_s_max: maximum short-term loudness in LUFS

  • lufs_m_max: maximum momentary loudness in LUFS

  • true_peak_db: true peak in dBTP

  • raw_stats: raw key=value string from REAPER for any additional fields
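The returned loudness figures can be checked against delivery targets. A minimal sketch, assuming the result is a plain dict with the keys listed above and using the commonly cited streaming normalization targets of roughly -14 LUFS integrated and -1 dBTP true peak (these targets are conventions, not values mandated by the tool):

```python
# Sketch: sanity-check an analyze_master_loudness result against delivery
# targets. Assumes the result is a dict with the keys documented above;
# the default targets are common streaming norms, not tool requirements.

def check_master(result: dict,
                 lufs_target: float = -14.0,
                 tp_ceiling: float = -1.0) -> list[str]:
    """Return human-readable warnings; an empty list means within targets."""
    warnings = []
    if result["lufs_i"] > lufs_target:
        warnings.append(
            f"Integrated loudness {result['lufs_i']:.1f} LUFS is above the "
            f"{lufs_target:.1f} LUFS target; playback normalization will turn it down."
        )
    if result["true_peak_db"] > tp_ceiling:
        warnings.append(
            f"True peak {result['true_peak_db']:.1f} dBTP exceeds the "
            f"{tp_ceiling:.1f} dBTP ceiling; risk of clipping after lossy encoding."
        )
    return warnings

# Illustrative values only:
print(check_master({"lufs_i": -12.3, "lufs_s_max": -10.1,
                    "lufs_m_max": -9.4, "true_peak_db": -0.4}))
```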

Input Schema

  • start_time (required)

  • end_time (required)

(No parameter descriptions or defaults are provided.)
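Given the input schema, a call needs only the two required time values. A minimal argument payload; note the schema does not document units, so project seconds are an assumption here:

```python
# Example argument payload for analyze_master_loudness.
# The schema documents only the two required fields; units are
# undocumented, so project seconds are assumed.
args = {
    "start_time": 0.0,   # assumed: project seconds
    "end_time": 30.0,    # assumed: project seconds
}
print(args)
```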

Output Schema

No output fields documented.

Implementation Reference

  • The MCP tool registration and handler entry point for 'analyze_master_loudness'.
    @mcp.tool()
    def analyze_master_loudness(
        start_time: float,
        end_time: float,
    ) -> dict[str, Any]:
        """
        Measure the loudness of the full master mix over a time range using a
        non-destructive dry-run render (action 42441). No tracks or files are created.
        Returns:
        - lufs_i: integrated loudness in LUFS
        - lufs_s_max: maximum short-term loudness in LUFS
        - lufs_m_max: maximum momentary loudness in LUFS
        - true_peak_db: true peak in dBTP
        - raw_stats: raw key=value string from REAPER for any additional fields
        """
        try:
            return _wrap(adapter.analyze_master_loudness(start_time=start_time, end_time=end_time))
        except Exception as exc:
            return _err(exc)
  • The adapter implementation that forwards the 'analyze_master_loudness' call to the REAPER client.
    def analyze_master_loudness(self, start_time: float, end_time: float) -> dict[str, Any]:
        return self._client.call("analyze_master_loudness", start_time=start_time, end_time=end_time)
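The raw_stats field is described only as a "raw key=value string from REAPER". A hedged sketch of pulling numeric fields out of such a string; the exact key names and separators REAPER emits are assumptions, not documented by the tool:

```python
import re

def parse_raw_stats(raw: str) -> dict[str, float]:
    """Parse a key=value string into a dict of floats.

    The separator and key names are assumptions; the tool only promises a
    'raw key=value string from REAPER'. Non-numeric values are skipped.
    """
    stats: dict[str, float] = {}
    for match in re.finditer(r"(\w+)=([-+]?\d+(?:\.\d+)?)", raw):
        stats[match.group(1)] = float(match.group(2))
    return stats

# Illustrative input only:
print(parse_raw_stats("lufs_i=-12.3 tp=-0.4"))
# → {'lufs_i': -12.3, 'tp': -0.4}
```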
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full behavioral burden excellently: it explains the dry-run render mechanism, gives an explicit safety guarantee (no file creation), and comprehensively documents all five return values, including units (LUFS, dBTP) and the raw_stats format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose front-loaded in sentence one, safety in sentence two, then return documentation. The return list is lengthy but necessary given the rich output details provided; no tautology or wasted phrases.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite 0% input schema coverage and no annotations, the description succeeds in covering tool purpose, operational safety, and output semantics. Minor gap: parameter units and range constraints are absent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage (titles only). The description's phrase 'over a time range' implies the parameters, but it fails to document critical semantics such as units (seconds vs. beats) or constraints, leaving parameter interpretation to the schema titles alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Measure the loudness of the full master mix' uses precise verb+resource, and 'master mix' clearly distinguishes this from sibling tool 'analyze_track_loudness'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Strong safety context with 'non-destructive dry-run render' and 'No tracks or files are created,' but lacks explicit guidance on when to prefer this over analyze_track_loudness (e.g., 'use this for the final stereo master vs individual tracks').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
