Glama › dkmaker › mcp-rest-api

test_request

Test REST API endpoints by sending HTTP requests and receiving detailed response data including status codes, headers, timing, and validation information.

Instructions

Test a REST API endpoint and get detailed response information.

Base URL: https://api.example.com
SSL Verification: enabled (see config resource for SSL settings)
Authentication: no authentication configured
Custom headers: none defined (see config resource for headers)

The tool automatically:
- Normalizes endpoints (adds leading slash, removes trailing slashes)
- Handles authentication header injection
- Applies custom headers from HEADER_* environment variables
- Accepts any HTTP status code as valid
- Limits response size to 10000 bytes (see config resource for size limit settings)
- Returns detailed response information including:
  * Full URL called
  * Status code and text
  * Response headers
  * Response body
  * Request details (method, headers, body)
  * Response timing
  * Validation messages

Error handling:
- Network errors are caught and returned with descriptive messages
- Invalid status codes are still returned with full response details
- Authentication errors include the attempted auth method

See the config resource for all configuration options, including header configuration.
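The endpoint normalization the description claims (add a leading slash, remove trailing slashes, resolve against the base URL) could be sketched as follows. This is an illustrative re-implementation, not the server's actual code; `normalize_endpoint` and `full_url` are hypothetical names.

```python
# Sketch of the documented normalization behavior, assuming the base URL
# from the tool's configuration. Not the server's real implementation.
BASE_URL = "https://api.example.com"

def normalize_endpoint(endpoint: str) -> str:
    """Add a leading slash and strip trailing slashes, per the description."""
    endpoint = endpoint.strip()
    if not endpoint.startswith("/"):
        endpoint = "/" + endpoint       # add leading slash
    return endpoint.rstrip("/") or "/"  # remove trailing slashes

def full_url(endpoint: str) -> str:
    return BASE_URL + normalize_endpoint(endpoint)

print(full_url("api/users/"))  # https://api.example.com/api/users
```

Under this sketch, "users", "/users", and "/users/" all resolve to the same URL, which matches the description's promise that callers need not worry about slash placement.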

Input Schema

- method (required): HTTP method to use.
- endpoint (required): Endpoint path (e.g. "/users"). Do not include full URLs, only the path. Example: "/api/users" will resolve to "https://api.example.com/api/users".
- body (optional): Request body for POST/PUT requests.
- headers (optional): Request headers for one-time use. IMPORTANT: do not use for sensitive data like API keys; those should be configured via environment variables. This parameter is intended for dynamic, non-sensitive headers that may be needed for specific requests.
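Putting the parameter guidance together, a call's arguments might look like the sketch below. The argument names come from the schema above; the JSON-string body and the example header value are assumptions for illustration, and the exact call envelope depends on the MCP client.

```python
import json

# Hypothetical arguments for a test_request call: a path-only endpoint,
# a JSON body for a POST, and a one-time, non-sensitive header.
# API keys do NOT belong here; per the schema they go in environment config.
arguments = {
    "method": "POST",
    "endpoint": "/api/users",  # path only; resolves against https://api.example.com
    "body": json.dumps({"name": "Ada"}),          # body format is an assumption
    "headers": {"X-Request-Id": "demo-123"},      # illustrative non-sensitive header
}

print(json.dumps(arguments, indent=2))
```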
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It comprehensively details automatic behaviors (e.g., endpoint normalization, authentication header injection, response size limiting), error handling (network errors, invalid status codes), and response details returned. This goes well beyond basic functionality, covering operational traits and constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description front-loads the core purpose but grows verbose in its lists of automatic behaviors and response details. The information is useful, and most of it earns its place, but some passages could be tightened; the bulleted list of response fields, for example, is longer than it needs to be.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (4 parameters, no output schema, no annotations), the description is highly complete. It explains behavioral traits, error handling, configuration references, and parameter semantics thoroughly. This compensates for the lack of structured data, making it sufficient for an agent to understand and use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by clarifying parameter usage: it specifies that the 'endpoint' should not include full URLs and gives an example, and it warns about sensitive data in 'headers' (directing to environment variables instead). However, it does not elaborate on 'method' or 'body' beyond the schema, keeping it from a perfect score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
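The HEADER_* environment-variable mechanism referenced above could work along these lines. The name mapping (HEADER_X_API_KEY becoming the header X-API-KEY) is an assumption; the server may map names differently, and `headers_from_env` is a hypothetical helper.

```python
import os

# Assumed mapping: HEADER_X_API_KEY -> "X-API-KEY". This only illustrates
# the pattern of sourcing custom headers from HEADER_* environment variables.
os.environ["HEADER_X_API_KEY"] = "secret-value"  # set here for demonstration

def headers_from_env(environ=os.environ) -> dict:
    """Collect HEADER_*-prefixed variables into an HTTP header dict."""
    headers = {}
    for key, value in environ.items():
        if key.startswith("HEADER_"):
            name = key[len("HEADER_"):].replace("_", "-")
            headers[name] = value
    return headers

print(headers_from_env())  # {'X-API-KEY': 'secret-value'}
```

Keeping secrets in environment variables rather than in the per-call `headers` parameter matches the schema's warning about sensitive data.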

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Test a REST API endpoint and get detailed response information') and identifies the resource (REST API endpoint). It distinguishes itself by specifying the base URL, authentication status, and automatic behaviors, making it highly specific even without sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for testing REST API endpoints but never explicitly states when to use it versus alternatives such as other API testing tools or direct HTTP clients. The pointers to the config resource provide some context, but explicit guidance on scenarios and prerequisites is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/dkmaker/mcp-rest-api'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.