generate_exam

Create custom exams with up to 20 questions on any topic, specifying difficulty levels and question types for assessment preparation.

Instructions

Generate a full exam with up to 20 questions on any topic. Cost: $0.020 USDC. Service: examforge.

Input Schema

Name            Required  Description  Default
topic           Yes
num_questions   No
difficulty      No
question_types  No
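Because the schema documents no parameter semantics, an agent has to guess at valid values. The sketch below shows a plausible set of arguments; the `difficulty` and `question_types` values are assumptions for illustration, not documented enum values.

```typescript
// Hypothetical arguments for generate_exam; only `topic` is required.
// The difficulty and question_types values are guesses -- the schema
// does not document accepted values.
const args = {
  topic: "TCP congestion control",
  num_questions: 10, // the description caps exams at 20
  difficulty: "medium", // assumed value; not documented
  question_types: ["multiple_choice", "short_answer"], // assumed
};

// Since the cap is stated only in prose, clamp defensively before calling.
const numQuestions = Math.min(args.num_questions ?? 20, 20);
console.log(numQuestions);
```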

Implementation Reference

  • The tool is not defined as a static function in the codebase. This server is dynamic: it fetches tool definitions (including 'generate_exam' if it exists) from an external registry URL defined in `REGISTRY_URL`. The `CallToolRequestSchema` handler in `src/index.ts` dynamically routes execution by finding the matching tool in the registry and calling the associated endpoint.
    server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const { name, arguments: args } = request.params;
    
      let registry: Registry;
      try {
        registry = await fetchRegistry();
      } catch (error) {
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify({ error: "Failed to fetch tool registry", detail: String(error) }),
            },
          ],
        };
      }
    
      const tool = registry.tools.find((t) => t.name === name);
      if (!tool) {
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify({
                error: `Tool '${name}' not found`,
                available_tools: registry.tools.map((t) => t.name),
              }),
            },
          ],
        };
      }
    
      try {
        const result = await callTool(tool, args as Record<string, unknown>);
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify(result, null, 2),
            },
          ],
        };
      } catch (error) {
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify({
                error: "Tool call failed",
                tool: name,
                service: tool.service,
                detail: String(error),
              }),
            },
          ],
        };
      }
    });
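The handler above depends on `fetchRegistry` and `callTool`, which are not shown. Its tool-lookup branch can be isolated as a small pure function; the sketch below assumes a registry shaped as `{ tools: [...] }` with `name`, `service`, and `endpoint` fields, which is an inference from the handler, not confirmed by the source.

```typescript
interface RegistryTool {
  name: string;
  service: string;
  endpoint: string;
}

interface Registry {
  tools: RegistryTool[];
}

// Mirrors the lookup branch of the CallToolRequestSchema handler:
// return the matching tool, or an error payload listing what exists.
function findTool(
  registry: Registry,
  name: string,
): { tool: RegistryTool } | { error: string; available_tools: string[] } {
  const tool = registry.tools.find((t) => t.name === name);
  if (!tool) {
    return {
      error: `Tool '${name}' not found`,
      available_tools: registry.tools.map((t) => t.name),
    };
  }
  return { tool };
}

// Hypothetical registry entry for demonstration only.
const registry: Registry = {
  tools: [{ name: "generate_exam", service: "examforge", endpoint: "/exam" }],
};
console.log(JSON.stringify(findTool(registry, "generate_exam")));
```

Keeping the lookup pure like this makes the "unknown tool" error path testable without a live registry fetch.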
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions cost ('$0.020 USDC') and service ('examforge'), which adds some context, but fails to describe critical behaviors like response format, error handling, rate limits, or authentication needs for a generative tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded with the core purpose in the first sentence. The second sentence adds cost and service details efficiently, though these could be integrated more smoothly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a generative tool with 4 parameters, no annotations, and no output schema, the description is incomplete. It lacks details on output format, error cases, and parameter semantics, making it inadequate for reliable agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate for undocumented parameters but does not. It mentions 'up to 20 questions', which hints at 'num_questions', but it doesn't explain 'topic', 'difficulty', or 'question_types'. The description adds minimal value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
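One way to close that gap is to document each parameter directly in the input schema. The sketch below is a hypothetical, better-documented schema for this tool; every description, default, and enum value in it is invented for illustration, not taken from the actual server.

```typescript
// Hypothetical documented input schema for generate_exam.
// All descriptions, defaults, and enum values are illustrative only.
const inputSchema = {
  type: "object",
  required: ["topic"],
  properties: {
    topic: {
      type: "string",
      description: "Subject of the exam, e.g. 'photosynthesis'.",
    },
    num_questions: {
      type: "integer",
      minimum: 1,
      maximum: 20,
      default: 10,
      description: "Number of questions to generate (max 20).",
    },
    difficulty: {
      type: "string",
      enum: ["easy", "medium", "hard"],
      default: "medium",
      description: "Overall difficulty level.",
    },
    question_types: {
      type: "array",
      items: { type: "string" },
      description: "Allowed formats, e.g. multiple_choice, short_answer.",
    },
  },
} as const;

console.log(inputSchema.required[0]);
```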

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Generate a full exam with up to 20 questions on any topic.' It specifies the verb ('generate'), resource ('full exam'), and scope ('up to 20 questions on any topic'), though it doesn't explicitly differentiate from sibling tools, which appear unrelated to exam generation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions cost and service details, but offers no context about prerequisites, ideal scenarios, or comparisons to other tools, leaving the agent without usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
