
Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `LLM_URL` | No | LLM API endpoint URL (required for `execute_prompt_with_llm`) | |
| `LLM_API_KEY` | No | API key for authentication | |
| `LLM_MODEL_NAME` | No | Model name to use | |
| `MCP_TEST_LOG_LEVEL` | No | Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL) | INFO |
| `MCP_TEST_CONNECT_TIMEOUT` | No | Connection timeout in seconds | 30.0 |
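Since every variable is optional, a client process can read them with the documented defaults. A minimal sketch, assuming the server is configured from the environment; only the variable names and defaults come from the table above, the surrounding dictionary is illustrative:

```python
import os

# Read the documented environment variables, falling back to the
# defaults from the table. The dictionary keys are illustrative.
config = {
    "llm_url": os.environ.get("LLM_URL"),  # needed only for execute_prompt_with_llm
    "llm_api_key": os.environ.get("LLM_API_KEY"),
    "llm_model_name": os.environ.get("LLM_MODEL_NAME"),
    "log_level": os.environ.get("MCP_TEST_LOG_LEVEL", "INFO"),
    "connect_timeout": float(os.environ.get("MCP_TEST_CONNECT_TIMEOUT", "30.0")),
}
```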

Tools

Functions exposed to the LLM to take actions

connect_to_server

Connect to an MCP server for testing.

Establishes a connection to a target MCP server using the appropriate transport protocol (stdio for file paths, streamable-http for URLs). Only one connection can be active at a time.

Returns: Dictionary with connection details including:

  • success: Always True on successful connection

  • connection: Full ConnectionState with server info and statistics

  • message: Human-readable success message

  • metadata: Request timing information

Raises: Returns error dict on failure with:

  • success: False

  • error: Error details (type, message, suggestion)

  • metadata: Request timing information
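The transport rule above (stdio for file paths, streamable-http for URLs) can be sketched as a small dispatcher; `pick_transport` is a hypothetical helper for illustration, not part of the server's API:

```python
def pick_transport(target: str) -> str:
    """Hypothetical sketch of the documented rule: URL targets use the
    streamable-http transport, anything else is treated as a file path
    and uses stdio."""
    if target.startswith(("http://", "https://")):
        return "streamable-http"
    return "stdio"
```

So a target like `/path/to/server.py` would be launched over stdio, while an `https://` endpoint would be reached over streamable-http.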

disconnect

Close the current MCP server connection.

Safely disconnects from the active MCP server and clears all connection state and statistics. This method is safe to call even if no connection exists.

Returns: Dictionary with disconnection details including:

  • success: Always True

  • message: Human-readable status message

  • was_connected: Whether a connection existed before disconnect

  • metadata: Request timing information and previous connection info

get_connection_status

Check the current MCP server connection state.

Returns detailed information about the active connection including server information, transport type, connection duration, and usage statistics.

Returns: Dictionary with connection status including:

  • success: Always True

  • connected: Boolean indicating if currently connected

  • connection: Full ConnectionState if connected, None otherwise

  • message: Human-readable status message

  • metadata: Request timing and connection duration info

list_tools

List all tools available on the connected MCP server.

Retrieves comprehensive information about all tools exposed by the target server, including full input schemas to enable accurate tool invocation.

Returns: Dictionary with tool listing including:

  • success: True on successful retrieval

  • tools: List of tool objects with name, description, and full input_schema

  • metadata: Total count, server info, timing information

Raises: Returns error dict if not connected or retrieval fails

call_tool

Execute a tool on the connected MCP server.

Calls a tool by name with the provided arguments and returns the result along with execution timing and metadata.

Returns: Dictionary with tool execution results including:

  • success: True if the tool executed successfully

  • tool_call: Object with tool_name, arguments, result, and execution metadata

  • metadata: Request timing and server information

Raises: Returns error dict for various failure scenarios:

  • not_connected: No active connection

  • tool_not_found: Tool doesn't exist on server

  • invalid_arguments: Arguments don't match tool schema

  • execution_error: Tool execution failed
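A caller can branch on these documented error types to decide what to do next. `describe_failure` below is a hypothetical client-side helper; only the error type names come from the docs above:

```python
def describe_failure(response: dict) -> str:
    """Map call_tool's documented error types to a suggested next step.
    Hypothetical helper for illustration."""
    if response.get("success"):
        return "ok"
    hints = {
        "not_connected": "call connect_to_server first",
        "tool_not_found": "check list_tools for the exact tool name",
        "invalid_arguments": "compare arguments against the tool's input_schema",
        "execution_error": "the tool itself failed; inspect the error message",
    }
    return hints.get(response["error"]["type"], "unrecognized error type")
```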

list_resources

List all resources available on the connected MCP server.

Retrieves comprehensive information about all resources exposed by the target server, including URIs, names, descriptions, and MIME types to enable accurate resource access.

Returns: Dictionary with resource listing including:

  • success: True on successful retrieval

  • resources: List of resource objects with uri, name, description, mimeType

  • metadata: Total count, server info, timing information

Raises: Returns error dict if not connected or retrieval fails

read_resource

Read a specific resource from the connected MCP server.

Reads a resource by URI and returns its content along with metadata.

Returns: Dictionary with resource content including:

  • success: True if the resource was read successfully

  • resource: Object with uri, mimeType, and content

  • metadata: Content size and request timing

Raises: Returns error dict for various failure scenarios:

  • not_connected: No active connection

  • resource_not_found: Resource doesn't exist on server

  • execution_error: Resource read failed

list_prompts

List all prompts available on the connected MCP server.

Retrieves comprehensive information about all prompts exposed by the target server, including names, descriptions, and complete argument schemas to enable accurate prompt invocation.

Returns: Dictionary with prompt listing including:

  • success: True on successful retrieval

  • prompts: List of prompt objects with name, description, and arguments schema

  • metadata: Total count, server info, timing information

Raises: Returns error dict if not connected or retrieval fails

get_prompt

Get a rendered prompt from the connected MCP server.

Retrieves a prompt by name with the provided arguments and returns the rendered prompt messages.

Returns: Dictionary with rendered prompt including:

  • success: True if the prompt was retrieved successfully

  • prompt: Object with name, description, and rendered messages

  • metadata: Request timing and server information

Raises: Returns error dict for various failure scenarios:

  • not_connected: No active connection

  • prompt_not_found: Prompt doesn't exist on server

  • invalid_arguments: Arguments don't match prompt schema

  • execution_error: Prompt retrieval failed

execute_prompt_with_llm

Execute a prompt with an LLM and return the response.

This tool performs the complete workflow:

  1. Retrieves the prompt from the connected MCP server with prompt_arguments

  2. Optionally fills template variables in the prompt messages

  3. Sends the prompt messages to an LLM

  4. Returns the LLM's response along with metadata

Supports two prompt patterns:

  • Standard MCP prompts: Pass arguments via prompt_arguments, server handles substitution

  • Template variables: Use fill_variables to replace {variable} placeholders in messages

Args:

  • prompt_name: Name of the prompt to execute

  • prompt_arguments: Dictionary of arguments to pass to the MCP prompt (default: {})

  • fill_variables: Dictionary of template variables to fill in prompt messages (default: None). Used for manual string replacement of {variable_name} patterns; values are JSON-serialized before substitution if they're not strings.

  • llm_config: Optional LLM configuration with keys:

      • url: LLM endpoint URL (default: from LLM_URL env var)

      • model: Model name (default: from LLM_MODEL_NAME env var)

      • api_key: API key (default: from LLM_API_KEY env var)

      • max_tokens: Maximum tokens in response (default: 1000)

      • temperature: Sampling temperature (default: 0.7)

Returns: Dictionary with execution results including:

  • success: True if execution succeeded

  • prompt: Original prompt information

  • llm_request: The request sent to the LLM

  • llm_response: The LLM's response

  • parsed_response: Attempted JSON parsing if the response looks like JSON

  • metadata: Timing and configuration information

Raises: Returns error dict for various failure scenarios:

  • not_connected: No active MCP connection

  • prompt_not_found: Prompt doesn't exist

  • llm_config_error: Missing or invalid LLM configuration

  • llm_request_error: LLM request failed
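The template-variable behavior described above, replacing {variable_name} placeholders and JSON-serializing non-string values first, might look like this sketch. The function name and message shape are illustrative, not the server's actual code:

```python
import json

def fill_template_variables(messages, variables):
    """Sketch of the documented substitution: each {name} placeholder
    is replaced with its value; non-string values are JSON-serialized
    before substitution. Illustrative only."""
    filled = []
    for msg in messages:
        text = msg["content"]
        for name, value in variables.items():
            if not isinstance(value, str):
                value = json.dumps(value)
            text = text.replace("{" + name + "}", value)
        filled.append({**msg, "content": text})
    return filled
```

For example, a non-string value such as a list is serialized, so `{items}` filled with `[1, 2]` becomes the text `[1, 2]` in the message.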

health_check

Health check endpoint that verifies the server is running.

Returns: Dictionary with status and server information

ping

Simple ping tool that responds with 'pong'.

Useful for testing basic connectivity and server responsiveness.

Returns: The string 'pong'

echo

Echo back a message.

Args:

  • message: The message to echo back

Returns: The same message that was provided

add

Add two numbers together.

Args:

  • a: First number

  • b: Second number

Returns: The sum of a and b

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources
