EdgeworthHitbox

Colorado DWR MCP Server

get_water_rights_net_amount

Retrieve net amounts for specific water rights in Colorado by providing a water right name and division number.

Instructions

Get net amounts for water rights

Input Schema

Name            Required  Description                  Default
waterRightName  No        Name of the water right      (none)
division        No        Water division number        (none)
pageSize        No        Number of results to return  (none)
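All three parameters are optional, so any subset may be supplied. A hypothetical arguments payload (the water right name shown is illustrative, not a real record) might look like:

```typescript
// Hypothetical arguments for get_water_rights_net_amount; every field is
// optional, so any subset may be supplied.
const args = {
    waterRightName: "EXAMPLE DITCH", // illustrative value, not a real water right
    division: 1,                     // Colorado Water Division 1 (South Platte)
    pageSize: 10,                    // cap the number of returned records
};
console.log(JSON.stringify(args));
```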

Implementation Reference

  • Handler for the get_water_rights_net_amount tool. Extracts arguments from the request and delegates to the shared handleApiCall method with the specific API endpoint 'waterrights/netamount'.
    case "get_water_rights_net_amount": {
        const args = request.params.arguments as any;
        return await this.handleApiCall("waterrights/netamount", args);
    }
  • Zod schema defining the input parameters for the tool: waterRightName (optional string), division (optional number), pageSize (optional number). This is converted to JSON schema for the tool definition.
    z.object({
        waterRightName: z.string().optional().describe("Name of the water right"),
        division: z.number().optional().describe("Water division number"),
        pageSize: z.number().optional().describe("Number of results to return"),
    })
  • src/index.ts:84-94 (registration)
    Registration of the 'get_water_rights_net_amount' tool in the ListTools response, including name, description, and input schema.
    {
        name: "get_water_rights_net_amount",
        description: "Get net amounts for water rights",
        inputSchema: zodToJsonSchema(
            z.object({
                waterRightName: z.string().optional().describe("Name of the water right"),
                division: z.number().optional().describe("Water division number"),
                pageSize: z.number().optional().describe("Number of results to return"),
            })
        ),
    },
  • Shared helper method that performs the actual API call to the Colorado DWR REST API. Constructs the URL, formats parameters (including optional apiKey), fetches data using axios, and returns the JSON response as tool output. This is the core logic executed for the tool.
    public async handleApiCall(endpoint: string, params: any) {
        const url = `${BASE_URL}/${endpoint}`;
        const headers: Record<string, string> = {};
        if (this.apiKey) {
            // The DWR docs describe a custom "Token: ..." request header; the
            // key is also passed as the apiKey query parameter below, which
            // the API accepts as an alternative.
            headers["Authorization"] = this.apiKey;
        }

        const finalParams = formatParams(params);
        if (this.apiKey) {
            finalParams["apiKey"] = this.apiKey;
        }

        console.error(`Fetching ${url} with params ${JSON.stringify(finalParams)}`);

        const response = await axios.get(url, {
            params: finalParams,
            headers,
        });

        return {
            content: [
                {
                    type: "text",
                    text: JSON.stringify(response.data, null, 2),
                },
            ],
        };
    }
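The `formatParams` helper invoked above is referenced but not shown in this excerpt. A minimal sketch of what such a helper might do, assuming its job is to drop unset values and coerce the rest to strings for query-string serialization, could look like:

```typescript
// Hypothetical sketch of the formatParams helper referenced above (its real
// implementation is not shown in the excerpt). Assumed behavior: drop
// undefined/null entries and stringify the rest so they serialize cleanly
// as URL query parameters.
function formatParams(params: Record<string, unknown>): Record<string, string> {
    const out: Record<string, string> = {};
    for (const [key, value] of Object.entries(params ?? {})) {
        if (value === undefined || value === null) continue;
        out[key] = String(value);
    }
    return out;
}

const formatted = formatParams({
    waterRightName: "EXAMPLE DITCH", // illustrative value
    division: 1,
    pageSize: undefined, // should be dropped
});
console.log(formatted);
```

Under these assumptions, numbers become strings (axios would serialize them either way) and omitted optional parameters never reach the API.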
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It only states the action ('Get') without detailing whether this is a read-only query, if it requires authentication, what the output format might be (e.g., list, single value), or any rate limits. For a tool with no annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words, making it appropriately concise. However, it lacks front-loading of critical details (e.g., purpose could be more specific), and the brevity contributes to gaps in other dimensions like guidelines and transparency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 parameters, no output schema, no annotations), the description is incomplete. It doesn't explain what 'net amounts' entail, how results are returned (e.g., paginated with 'pageSize'), or any behavioral traits. Without annotations or an output schema, the description should provide more context to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for each parameter (e.g., 'Name of the water right', 'Water division number', 'Number of results to return'). The description adds no additional meaning beyond this, such as explaining relationships between parameters or usage examples. Since the schema does the heavy lifting, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Get net amounts for water rights' states a clear verb ('Get') and resource ('net amounts for water rights'), which establishes the basic purpose. However, it lacks specificity about what 'net amounts' means (e.g., current balance, historical data) and doesn't distinguish this tool from potential siblings like 'query_dwr_api' that might also retrieve water-related data. This makes it vague but not tautological.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, context (e.g., for reporting, analysis), or exclusions, and it doesn't reference sibling tools like 'get_surface_water_stations' or 'query_dwr_api' that might handle related queries. This leaves the agent with no usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
