
Server Details

Create AI surveys with dynamic follow-up probing directly from your AI assistant.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: feedbk-ai/feedbk-mcp-server
GitHub Stars: 2
Server Listing
AI Survey Creator MCP Server


Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

9 tools
analyze_question (Analyze Question), Grade A
Read-only

Show a visual analysis of a single survey question. Renders a horizontal bar chart for closed questions (single/multiple choice) or an answer explorer for open text questions.

Parameters (JSON Schema)
- question_id (required): The question ID to analyze (e.g., q1, q2)
- project_token (required): Your project token (format: projectId:secret)
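To make the calling convention concrete, here is a minimal sketch of the JSON-RPC `tools/call` payload an MCP client would send for this tool. The token value is a made-up placeholder, and the exact transport framing depends on your client.

```python
import json

# Sketch of an MCP tools/call request for analyze_question.
# "demo123:s3cret" is a placeholder token, not a real credential.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_question",
        "arguments": {
            "question_id": "q1",                # e.g. q1, q2
            "project_token": "demo123:s3cret",  # format: projectId:secret
        },
    },
}
print(json.dumps(request, indent=2))
```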
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint=true, the description adds valuable behavioral context: it discloses conditional rendering logic (horizontal bar chart for closed questions vs answer explorer for open text) that is not present in annotations or schema. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first establishes core purpose, second details visualization behavior. Appropriately front-loaded and sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only visualization tool with 2 simple parameters, the description adequately covers the essentials: input requirements (implied), processing logic (conditional rendering), and output type (visual analysis). No output schema exists, but the description sufficiently characterizes the return value.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (both project_token and question_id fully documented), the baseline is 3. The description implies the need for a question identifier through 'single survey question' but adds no explicit parameter guidance beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Show a visual analysis' plus resource 'survey question' and explicitly scopes to 'single' question, clearly distinguishing from sibling 'analyze_results' (which implies bulk analysis). The rendering details (bar chart vs answer explorer) further clarify the output format.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context by specifying 'single survey question,' implicitly distinguishing from bulk analysis siblings like 'analyze_results.' However, lacks explicit when-not guidance or named alternatives for scenarios requiring multi-question analysis or raw data export.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

analyze_survey (Analyze Survey), Grade B
Read-only

Start the analysis workflow for a survey. Returns the analysis prompt and question IDs to analyze.

Parameters (JSON Schema)
- project_token (required): Your project token (format: projectId:secret)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, indicating a safe, non-destructive operation that may have variable outcomes. The description adds that it 'Returns the analysis prompt and question IDs to analyze,' which provides useful context about the output format beyond annotations. However, it doesn't disclose other behavioral traits like rate limits, authentication needs (implied by project_token but not stated), or whether the analysis is synchronous/asynchronous.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (two short sentences) and front-loaded with the core purpose. Every word earns its place: the first sentence states the action and resource, and the second specifies the return values. There is zero redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (initiating an analysis workflow), rich annotations (covering safety and openness), and lack of output schema, the description is minimally adequate. It explains what the tool does and what it returns, but doesn't cover error conditions, side effects, or how the returned prompt/IDs should be used (e.g., with 'analyze_question'). For a workflow-initiating tool, more context on next steps would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'project_token' fully documented in the schema. The description adds no parameter-specific information beyond what the schema provides (e.g., no clarification on token format or where to obtain it). Baseline score of 3 is appropriate since the schema carries the full burden of parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Start the analysis workflow') and resource ('for a survey'), and specifies the return values ('analysis prompt and question IDs to analyze'). It distinguishes from siblings like 'analyze_question' by focusing on the survey-level workflow initiation rather than individual question analysis. However, it doesn't explicitly contrast with other analysis-related tools like 'view_responses' or 'simulate_responses'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a completed survey), exclusions (e.g., not for archived surveys), or relationships to sibling tools like 'analyze_question' (which might be used after this tool). The agent must infer usage from the tool name and context alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

archive_survey (Archive Survey), Grade A

Archive or reactivate a survey. Archived surveys stop accepting new responses but keep existing data intact.

Parameters (JSON Schema)
- active (required): true to reactivate, false to archive
- project_token (required): Your project token (format: projectId:secret)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=false, openWorldHint=false, and destructiveHint=false, covering basic traits. The description adds valuable context: it specifies that archiving stops new responses while preserving existing data, which is not covered by annotations. However, it lacks details on permissions, rate limits, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action and effect. Every word contributes to understanding the tool's purpose without redundancy or unnecessary detail, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (mutation with two parameters), annotations provide safety hints, and schema covers parameters fully. The description adds key behavioral context about data preservation. However, without an output schema, it could benefit from mentioning return values or confirmation messages, slightly limiting completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear parameter descriptions in the schema. The description does not add any additional meaning beyond the schema, such as explaining the implications of the 'active' parameter or the format of 'project_token'. Baseline score of 3 is appropriate since the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Archive or reactivate') on a specific resource ('a survey'), distinguishing it from siblings like create_survey, share_survey, or view_responses. It also specifies the effect ('stop accepting new responses but keep existing data intact'), which further clarifies its purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for changing survey status (archive/reactivate) but does not explicitly state when to use this tool versus alternatives like save_survey or analyze_survey. No exclusions or prerequisites are mentioned, leaving some ambiguity about appropriate contexts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_survey (Create Survey), Grade A
Read-only

Create a new survey or edit an existing one. Call this to start the survey workflow. If the user provides a project_token, include it to load the existing survey for editing.

Parameters (JSON Schema)
- project_token (optional): The user's project token (format: projectId:secret) to load an existing survey for editing
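Because project_token is optional here, a client would include it only when editing. A minimal sketch of building the arguments object, with a placeholder token:

```python
def build_create_survey_args(project_token=None):
    """Build the arguments for a create_survey call.

    Includes the optional project_token only when editing an existing
    survey; omitting it starts a fresh survey workflow.
    """
    args = {}
    if project_token is not None:
        args["project_token"] = project_token  # format: projectId:secret
    return args

# New survey: no token, empty arguments.
print(build_create_survey_args())                  # {}
# Editing an existing survey: placeholder token included.
print(build_create_survey_args("demo123:s3cret"))
```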
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, so the agent knows this is a safe, non-destructive operation with closed-world assumptions. The description adds useful context about the 'survey workflow' and editing functionality, but doesn't disclose additional behavioral traits like rate limits, authentication needs, or what 'editing' entails beyond loading. With annotations covering the safety profile, this earns a baseline score for adding some value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences that each serve a clear purpose: stating the tool's function and providing usage guidance. It's front-loaded with the core purpose and avoids unnecessary elaboration, though it could be slightly more concise by combining ideas.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (create/edit functionality), lack of output schema, and rich annotations, the description is adequate but has gaps. It explains the basic workflow and parameter use, but doesn't detail what happens after creation/editing, error conditions, or how it interacts with sibling tools like 'save_survey'. The annotations help, but more completeness would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter fully documented in the schema. The description adds marginal value by explaining that project_token is used 'to load an existing survey for editing,' which reinforces but doesn't significantly expand upon the schema's description. This meets the baseline when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Create a new survey or edit an existing one.' It specifies the verb ('create'/'edit') and resource ('survey'), making the intent unambiguous. However, it doesn't explicitly differentiate this from sibling tools like 'save_survey' or 'archive_survey', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context: 'Call this to start the survey workflow' and explains when to include the project_token parameter for editing. It gives practical guidance on when to use the tool, though it doesn't explicitly state when NOT to use it or mention alternatives among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

save_simulated_response (Save Simulated Response), Grade A

Save a single simulated response to a survey. Called by the simulation workflow for each generated respondent.

Parameters (JSON Schema)
- answers (required): Answers keyed by question ID (e.g. q1, q2).
- project_token (required): Your project token (format: projectId:secret)
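The answers object is keyed by question ID. A sketch of the arguments for one call, where the question IDs and answer values are hypothetical (real IDs come from the survey guide):

```python
# Hypothetical arguments for a single save_simulated_response call.
# Question IDs (q1, q2, q3) and answer values are illustrative only;
# the token is a placeholder in the projectId:secret format.
arguments = {
    "project_token": "demo123:s3cret",
    "answers": {
        "q1": "Very satisfied",                # single choice
        "q2": ["Price", "Support"],            # multiple choice
        "q3": "The onboarding flow was slow",  # open text
    },
}
# One call persists exactly one generated respondent.
assert set(arguments["answers"]) == {"q1", "q2", "q3"}
```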
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a write operation (readOnlyHint: false) that is non-destructive. The description adds workflow context ('simulation workflow') but does not disclose additional behavioral traits like idempotency, rate limits, or what happens to the saved data after persistence.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero waste. The first sentence front-loads the core purpose, while the second provides essential usage context. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 params, nested objects) and available annotations, the description is adequately complete. It explains the 'what' and 'when' sufficiently for a data persistence tool. Minor gaps remain regarding success indicators or persistence guarantees, but no output schema exists that would require documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents both parameters, including the project_token format and the answers structure. The description implies the singular nature of the operation ('single simulated response'), aligning with the answers parameter representing one respondent, but does not add syntax details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (save), resource (simulated response), and scope (single, to a survey). It effectively distinguishes from sibling tools like 'save_survey' (which saves the survey structure) and 'simulate_responses' (which likely generates responses) by emphasizing this persists individual generated responses.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear contextual guidance by stating it is 'Called by the simulation workflow for each generated respondent,' indicating it should be used within an iterative simulation process. However, it does not explicitly state when NOT to use it or directly reference sibling alternatives like 'simulate_responses'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

save_survey (Save Survey), Grade A

Save a survey. Creates a new survey if no token is provided, or updates an existing one. Returns the survey URL and token.

Parameters (JSON Schema)
- guide (required): (no description provided)
- project_token (optional): Project token (format: projectId:secret). Omit to create a new survey.
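The create-versus-update behavior hinges on whether project_token is present. A sketch with a deliberately minimal guide object (the real guide schema is far richer than shown, and the token is a placeholder):

```python
def build_save_survey_args(guide, project_token=None):
    """Arguments for save_survey: omit the token to create a new
    survey, include it to update an existing one."""
    args = {"guide": guide}
    if project_token is not None:
        args["project_token"] = project_token
    return args

# Minimal illustrative guide; the actual nested structure (choices,
# logic conditions, etc.) is defined by the tool's JSON schema.
guide = {
    "title": "Onboarding feedback",
    "questions": [{"id": "q1", "text": "How was setup?"}],
}

create_args = build_save_survey_args(guide)                    # creates
update_args = build_save_survey_args(guide, "demo123:s3cret")  # updates
```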
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a non-destructive write operation (readOnlyHint: false, destructiveHint: false), which the description confirms. It adds the critical context that omitting the token creates a new survey while supplying one updates an existing survey, and that the call returns the survey URL and token, but omits details about validation behavior, version handling, or failure modes that would help an agent handle errors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero redundancy. The first states purpose; the second states usage context. Perfectly front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high complexity of the nested 'guide' object (containing questions, choices, logic conditions) and lack of output schema, the description is minimally adequate. It establishes the tool's role but leaves agents to infer the semantics of the survey structure from the schema alone.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 50% schema coverage, the description neither helps nor hinders. It completely ignores the complex 'guide' parameter (a deeply nested survey structure) and adds little beyond the schema's description of 'project_token'. At 50% coverage, the baseline is 3, and the description meets but does not exceed this.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (save), the resource (a survey), and the create-or-update semantics (a new survey when no token is provided, otherwise an update to an existing one). Specifying the return values (survey URL and token) further distinguishes it from siblings such as create_survey, which starts the authoring workflow, and archive_survey, which only toggles status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides workflow positioning through the create-or-update split and the parameter note 'Omit to create a new survey', clarifying when a token is needed. However, it lacks explicit exclusions (e.g., when to prefer create_survey to start the workflow) or named alternatives for other scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

share_survey (Share Survey), Grade A
Read-only

Show QR code and share link for an existing survey. Use this when the user is ready to distribute their survey to respondents. The survey must already be saved with save_survey.

Parameters (JSON Schema)
- project_token (required): Project token (format: projectId:secret)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds useful context about the prerequisite (survey must be saved) and the distribution purpose, which helps the agent understand when this tool is appropriate, though it doesn't detail output format or limitations like rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first states the purpose, and the second provides usage guidelines. It's front-loaded with the core action and efficiently conveys necessary information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter, no output schema), annotations cover safety, and the description adds key context (prerequisite, distribution timing). It's nearly complete, but could slightly improve by hinting at output types (e.g., QR code and link formats) since there's no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the single parameter (project_token). The description doesn't add any parameter-specific details beyond what the schema provides, such as format examples or constraints, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Show QR code and share link') and resource ('for an existing survey'), distinguishing it from siblings like create_survey (creation) or view_responses (analysis). It explicitly identifies what the tool does beyond just the name/title.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides explicit guidance on when to use ('when the user is ready to distribute their survey to respondents') and a prerequisite ('The survey must already be saved with save_survey'), clearly differentiating from alternatives like create_survey or save_survey.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

simulate_responses (Simulate Responses), Grade A
Read-only

Start the simulation workflow for a survey. Returns the simulation prompt and survey guide so you can generate responses client-side. Use when the user wants to simulate responses, simulate an interview, or generate test data.

Parameters (JSON Schema)
- project_token (required): Your project token (format: projectId:secret)
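The two simulation tools compose into a loop: simulate_responses returns the prompt and guide, the client generates each respondent, and save_simulated_response persists them one at a time. A sketch with a stubbed call_tool function (the function, the canned return values, and the generated answers are all hypothetical; a real client would speak MCP over Streamable HTTP):

```python
def call_tool(name, arguments):
    # Stand-in for a real MCP client call; returns canned data here.
    if name == "simulate_responses":
        return {"prompt": "...", "guide": {"questions": [{"id": "q1"}]}}
    return {"saved": True}

TOKEN = "demo123:s3cret"  # placeholder project token

# Step 1: start the workflow and receive the prompt + survey guide.
start = call_tool("simulate_responses", {"project_token": TOKEN})

# Step 2: generate and persist three hypothetical respondents,
# one save_simulated_response call per respondent.
saved = 0
for _ in range(3):
    answers = {q["id"]: "sample answer" for q in start["guide"]["questions"]}
    result = call_tool("save_simulated_response",
                       {"project_token": TOKEN, "answers": answers})
    saved += result["saved"]
print(saved)  # 3
```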
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds crucial context beyond annotations: clarifies that responses are generated client-side (not server-side) and specifies return values ('simulation prompt and survey guide'). Annotations confirm read-only safety, while description explains the actual behavioral pattern.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. Front-loaded with core action and return value, followed by usage conditions. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter read operation. Explains return values compensating for missing output schema. Could strengthen by mentioning relationship to 'save_simulated_response' for the full workflow context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with complete parameter documentation. Description adds no parameter details, but none are needed given the schema self-documents. Baseline score appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb ('Start') and resource ('simulation workflow'), distinguishes from siblings like 'save_simulated_response' or 'start_survey' by clarifying it returns prompts/guides rather than executing the simulation or actual survey launch.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 'Use when...' trigger conditions ('simulate responses', 'simulate an interview', 'generate test data'). Lacks explicit 'when not to use' or comparison to sibling 'save_simulated_response', but context is clear enough for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

view_responses (View Survey Results), Grade B
Read-only

View survey responses and transcripts interactively. Opens a dashboard showing all responses with the ability to view individual transcripts.

Parameters (JSON Schema)
- project_token (required): Your project token (format: projectId:secret)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, covering safety and scope. The description adds value by specifying that it 'opens a dashboard' and allows 'viewing individual transcripts,' which are behavioral traits not captured in annotations. However, it lacks details on permissions, rate limits, or dashboard specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and efficient, using two sentences that directly convey the tool's function and interactive nature without unnecessary details. Every sentence earns its place by adding clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (interactive dashboard), lack of output schema, and rich annotations, the description is minimally adequate. It covers the core action but omits details on dashboard behavior, response formats, or error handling, leaving gaps for an agent to infer.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'project_token' fully documented in the schema. The description adds no additional parameter semantics beyond what the schema provides, so it meets the baseline of 3 without compensating for any gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('view survey responses and transcripts interactively') and resource ('survey responses'), distinguishing it from sibling tools like analyze_survey or simulate_responses. However, it doesn't explicitly differentiate from potential overlaps like 'analyze_question' for viewing specific response data, keeping it at a 4 rather than a perfect 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like analyze_survey or simulate_responses, nor does it mention prerequisites or exclusions. It merely describes what the tool does without contextual usage advice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

