Glama
instructa
by instructa

doSomething

Find the capital of Austria using the MCP Starter server's tools; supply input parameters to search and retrieve accurate results for AI assistant integration.

Instructions

What is the capital of Austria?

Input Schema

| Name | Required | Description | Default |
|--------|----------|--------------------------------------|---------|
| param1 | Yes | The name of the track to search for | |
| param2 | Yes | The name of the track to search for | |

Implementation Reference

  • The handler function for the 'doSomething' tool. It takes param1 and param2 and returns text content with a greeting message.

    ```typescript
    async ({ param1, param2 }) => {
      return {
        content: [{ type: 'text', text: `Hello ${param1} and ${param2}` }],
      }
    },
    ```

  • Zod schema defining the input parameters param1 and param2 as strings for the 'doSomething' tool.

    ```typescript
    {
      param1: z.string().describe('The name of the track to search for'),
      param2: z.string().describe('The name of the track to search for'),
    },
    ```

  • Registration of the 'doSomething' tool via mcp.tool(), specifying the name, description, input schema, and handler function.

    ```typescript
    mcp.tool(
      'doSomething',
      'What is the capital of Austria?',
      {
        param1: z.string().describe('The name of the track to search for'),
        param2: z.string().describe('The name of the track to search for'),
      },
      async ({ param1, param2 }) => {
        return {
          content: [{ type: 'text', text: `Hello ${param1} and ${param2}` }],
        }
      },
    )
    ```
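To see what the handler actually produces, its body can be run standalone as plain TypeScript. This is a sketch, not part of the server: the function name reuses the tool's name for illustration, and `TextContent` is a local stand-in for the MCP content shape.

```typescript
// Standalone version of the handler above, runnable without the MCP SDK.
// TextContent mirrors the { type: 'text', text } shape the snippet returns.
type TextContent = { type: 'text'; text: string };

function doSomething(param1: string, param2: string): { content: TextContent[] } {
  return {
    content: [{ type: 'text', text: `Hello ${param1} and ${param2}` }],
  };
}

console.log(doSomething('Alice', 'Bob').content[0].text); // prints "Hello Alice and Bob"
```

Note that the output is a greeting, which has nothing to do with either the tool's description or the track-search wording in the parameter docs; that mismatch drives the scores below.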
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. The description fails to indicate what the tool does, whether it's a read/write operation, any side effects, or response format. It offers no behavioral context beyond the confusing question, leaving the agent with no understanding of the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, but it is not appropriately sized for the tool—it is under-specified and misleading rather than concise. It does not front-load useful information about the tool's purpose, wasting the opportunity to clarify functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (2 required parameters, no annotations, no output schema) and the description's complete failure to explain the tool's purpose or behavior, it is inadequate. The description does not compensate for the lack of annotations or output schema, leaving the agent with insufficient information to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters documented as 'The name of the track to search for'. The description adds no parameter information beyond what the schema provides. According to the rules, with high schema coverage (>80%), the baseline is 3 even with no param info in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'What is the capital of Austria?' is a question about geography, not a description of what the tool does. It provides no verb indicating an action (e.g., search, retrieve, calculate) and no mention of resources or operations. This is misleading as it suggests a trivia answer rather than tool functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool. The description does not mention any context, prerequisites, or alternatives. Given the mismatch between the description and input schema (which references track searching), there is no usable information for an agent to determine appropriate usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
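Taken together, the review's points suggest what a higher-scoring registration might look like. The sketch below is hypothetical: the description wording and parameter docs are invented, and `z` and `mcp` are minimal local stand-ins so the snippet runs without the MCP SDK or zod (a real server would import those instead).

```typescript
// Minimal stand-ins for zod's z.string() and the server's tool() method,
// so this sketch is self-contained; a real server imports the actual APIs.
const z = {
  string: () => ({
    describe: (description: string) => ({ type: 'string', description }),
  }),
};
const mcp = {
  tool: (name: string, description: string, schema: object, handler: Function) => ({
    name,
    description,
    schema,
    handler,
  }),
};

// A registration that answers the review: an action verb, behavioral
// disclosure (read-only, no side effects), and distinct parameter docs.
const tool = mcp.tool(
  'doSomething',
  'Build a text greeting for two names. Read-only with no side effects; returns a single text content block.',
  {
    param1: z.string().describe('First name to include in the greeting'),
    param2: z.string().describe('Second name to include in the greeting'),
  },
  async ({ param1, param2 }: { param1: string; param2: string }) => ({
    content: [{ type: 'text', text: `Hello ${param1} and ${param2}` }],
  }),
);
```

The handler body is unchanged from the Implementation Reference; only the metadata an agent reads before deciding to call the tool has improved.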

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/instructa/mcp-starter'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.