Glama

Server Details

Create AI surveys with dynamic follow-up probing directly from your AI assistant.

Status: Healthy
Transport: Streamable HTTP
Repository: feedbk-ai/feedbk-mcp-server
GitHub Stars: 2
Server Listing
AI Survey Creator MCP Server

Tool Descriptions: A

Average 3.8/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 4/5

The tools have distinct purposes with minimal overlap: create_survey initiates or edits, save_survey persists changes, share_survey distributes, and view_responses analyzes data. create_survey and save_survey could be slightly confusing since both touch creation, but the descriptions clarify that create_survey starts the workflow while save_survey finalizes it.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (create_survey, save_survey, share_survey, view_responses), using snake_case uniformly. This predictability makes it easy for agents to understand and select tools based on their naming conventions.

Tool Count: 5/5

With 4 tools, the set is well-scoped for a survey creation server, covering the core lifecycle: creation, saving, sharing, and viewing responses. Each tool earns its place without redundancy, making it manageable and focused on the domain's essential operations.

Completeness: 4/5

The tool surface covers the main survey workflow comprehensively, including create, save, share, and view responses. A minor gap exists in the lack of explicit update or delete operations for surveys, but save_survey handles updates, and the domain does not necessarily require deletion for basic functionality.
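The lifecycle covered above (create, save, share, view responses) can be expressed as an ordered tool-call plan. The helper below is purely illustrative and not part of the server; the tool names come from the listing, while the placeholder token string and the `guide` payload shape are assumptions.

```python
# Hedged sketch: the survey lifecycle as an ordered list of MCP tool calls.
# plan_survey_lifecycle is a hypothetical client-side helper, not a server API.
def plan_survey_lifecycle(guide, project_token=None):
    """Return (tool_name, arguments) pairs covering create -> save -> share -> view."""
    first_args = {"project_token": project_token} if project_token else {}
    token_args = {"project_token": "<token returned by save_survey>"}
    return [
        ("create_edit_survey", first_args),              # start (or resume) the workflow
        ("save_survey", {"guide": guide, **first_args}), # persists; returns URL and token
        ("share_survey", token_args),                    # public link for respondents
        ("view_responses", token_args),                  # summary plus dashboard URL
    ]

steps = plan_survey_lifecycle({"questions": []})
print([name for name, _ in steps])
```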

Available Tools (5 tools)
create_edit_survey (Create or Edit Survey): B
Read-only

Create a new survey or edit an existing one. Call this to start the survey workflow. If the user provides a project_token, include it to load the existing survey for editing.

Parameters (JSON Schema)
project_token (optional): the user's project token (format: projectId:secret) to load an existing survey for editing
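As a concrete illustration, an MCP client invokes this tool through a JSON-RPC `tools/call` request. The payload below is a sketch: the request `id` and the token value are placeholders, and `project_token` can be omitted entirely to start a brand-new survey.

```python
import json

# Hypothetical tools/call request for create_edit_survey. The token value is a
# placeholder; drop the "project_token" key to create a new survey instead.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_edit_survey",
        "arguments": {"project_token": "myProjectId:myS3cret"},
    },
}
print(json.dumps(request, indent=2))
```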
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true, destructiveHint=false, and openWorldHint=false, which already convey that this is a safe, non-destructive operation with limited scope. The description adds minimal behavioral context beyond this, such as 'start the survey workflow,' but doesn't detail what that entails (e.g., UI interactions, state changes). No contradiction with annotations exists, but the description doesn't enrich behavioral understanding significantly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences that directly address purpose and parameter usage, avoiding redundancy. It's front-loaded with the core function, though it could be slightly more structured (e.g., separating creation vs. editing scenarios). Overall, it's efficient with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (mutation-like but annotated as read-only), 1 parameter with full schema coverage, no output schema, and annotations covering safety, the description is moderately complete. It explains the dual create/edit function and parameter role but lacks details on workflow outcomes, error handling, or integration with siblings, leaving room for improvement in guiding the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'project_token' fully documented in the schema. The description adds marginal value by reiterating its optional nature and linking it to editing, but doesn't provide additional semantics beyond what the schema already states (e.g., format details or usage nuances). Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Create a new survey or edit an existing one.' It specifies the verb ('create'/'edit') and resource ('survey'), making the function unambiguous. However, it doesn't explicitly differentiate from siblings like 'save_survey' or 'preview_survey', which might handle overlapping aspects of survey management.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage guidance: 'Call this to start the survey workflow' and mentions using 'project_token' for editing. This implies context but lacks explicit when-to-use rules or alternatives (e.g., when to use 'save_survey' instead). It doesn't specify exclusions or prerequisites, leaving gaps in agent decision-making.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

preview_survey (Preview Survey): A
Read-only

Get a preview link for an existing survey so the user can try it out before sharing. Requires the project_token.

Parameters (JSON Schema)
project_token (required): your project token (format: projectId:secret)
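Since the token is required here, a client might sanity-check the documented `projectId:secret` format before calling the tool. The validator below is an illustrative heuristic, not part of the server; only the format itself comes from the schema description.

```python
def looks_like_project_token(token):
    """Heuristic check for the documented 'projectId:secret' format."""
    project_id, sep, secret = token.partition(":")
    return bool(project_id) and bool(sep) and bool(secret)

# Placeholder token; a real one is returned by save_survey.
arguments = {"project_token": "myProjectId:myS3cret"}
print(looks_like_project_token(arguments["project_token"]))  # True
```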
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare this as read-only, non-destructive, and closed-world, which the description doesn't contradict. The description adds valuable context beyond annotations by specifying that it generates a preview link for testing purposes, which helps the agent understand the tool's behavioral output even without an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each earn their place: the first states the purpose and action, the second states the requirement. No wasted words, and the information is front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with comprehensive annotations and clear purpose, the description is mostly complete. The main gap is the lack of output schema, so the description doesn't specify what the preview link looks like or how it's returned. However, given the tool's simplicity and good annotations, this is a minor omission.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents the single required parameter (project_token). The description mentions the parameter requirement but doesn't add semantic meaning beyond what's in the schema, such as explaining why this token is needed or how it relates to survey previews.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get a preview link'), the resource ('an existing survey'), and the purpose ('so the user can try it out before sharing'). It explicitly distinguishes from sibling tools like share_survey and view_responses by focusing on preview functionality rather than distribution or analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool ('to try it out before sharing') and mentions the prerequisite requirement ('Requires the project_token'). However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools, though the purpose implies it's for preview rather than final sharing or editing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

save_survey (Save Survey): A

Save a survey. Creates a new survey if no token is provided, or updates an existing one. Returns the survey URL and token.

Parameters (JSON Schema)
guide (required)
project_token (optional): project token (format: projectId:secret). Omit to create a new survey.
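The nested schema of the `guide` parameter is not reproduced in this listing, so the structure below is purely an assumed illustration; only the call shape (a required `guide`, an optional `project_token`) follows the parameter table.

```python
# Illustrative save_survey arguments. The guide fields (title, questions, type)
# are assumptions; consult the tool's JSON Schema for the real structure.
guide = {
    "title": "Customer feedback",
    "questions": [
        {"text": "How did you hear about us?", "type": "open"},
    ],
}
arguments = {"guide": guide}  # add "project_token": "projectId:secret" to update
print(sorted(arguments))
```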
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a non-destructive write operation (readOnlyHint: false, destructiveHint: false), which the description confirms with 'save changes'. It adds the critical context that the target must be 'published', but omits details about validation behavior, version handling, or failure modes that would help an agent handle errors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero redundancy. The first states purpose; the second states usage context. Perfectly front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high complexity of the nested 'guide' object (containing questions, choices, logic conditions) and lack of output schema, the description is minimally adequate. It establishes the tool's role but leaves agents to infer the semantics of the survey structure from the schema alone.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 50% schema coverage, the description neither helps nor hinders. It completely ignores the complex 'guide' parameter (a deeply nested survey structure) and adds nothing beyond the schema's description of 'results_token'. At 50% coverage, the baseline is 3, and the description meets but does not exceed this.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (save changes), the resource (existing published survey), and the scope (editing workflow). It effectively distinguishes from siblings by specifying this is for already-published surveys, contrasting with publish_survey (initial publication) and edit_survey (the editing step itself).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow positioning ('Use this after editing'), clarifying when to invoke the tool in the sequence. However, it lacks explicit exclusions (e.g., 'do not use for new/unpublished surveys') or named alternatives for other scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

share_survey (Share Survey): A
Read-only

Get the public share link for an existing survey to distribute to respondents. Use when the user is ready to collect responses and says 'share it' (or similar). Requires the project_token.

Parameters (JSON Schema)
project_token (required): your project token (format: projectId:secret)
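The description gates this tool on user intent ("share it" or similar) and on the survey being ready for respondents. A minimal sketch of that gating, with a hypothetical helper and a naive keyword check standing in for real intent detection:

```python
# Sketch of the gating implied by the description: share only once the survey
# is saved and the user has asked to share it. The keyword check is naive and
# purely illustrative of the decision, not a real intent classifier.
def should_share(survey_saved, user_message):
    wants_share = "share" in user_message.lower()
    return survey_saved and wants_share

print(should_share(True, "Looks good, share it!"))  # True
print(should_share(False, "share it"))              # False: call save_survey first
```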
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds useful context about the prerequisite (survey must be saved) and the distribution purpose, which helps the agent understand when this tool is appropriate, though it doesn't detail output format or limitations like rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first states the purpose, and the second provides usage guidelines. It's front-loaded with the core action and efficiently conveys necessary information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is low-complexity (one parameter, no output schema), the annotations cover safety, and the description adds key context (prerequisite, distribution timing). It's nearly complete, but could improve slightly by hinting at output types (e.g., QR code and link formats) since there's no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the single parameter (project_token). The description doesn't add any parameter-specific details beyond what the schema provides, such as format examples or constraints, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Show QR code and share link') and resource ('for an existing survey'), distinguishing it from siblings like create_survey (creation) or view_responses (analysis). It explicitly identifies what the tool does beyond just the name/title.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides explicit guidance on when to use ('when the user is ready to distribute their survey to respondents') and a prerequisite ('The survey must already be saved with save_survey'), clearly differentiating from alternatives like create_survey or save_survey.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

view_responses (View Survey Results): B
Read-only

View survey responses and transcripts. Returns a summary plus a dashboard URL for interactive browsing.

Parameters (JSON Schema)
project_token (required): your project token (format: projectId:secret)
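No output schema is published for this tool; the description promises only "a summary plus a dashboard URL". The result shape below is therefore an assumption for illustration, showing how a client might pull both pieces out of a plain-text response.

```python
# Assumed plain-text result for view_responses ("a summary plus a dashboard
# URL"); the actual response format is not specified by the listing.
result_text = "Summary: 12 responses collected.\nDashboard: https://example.com/dash/abc"

lines = result_text.splitlines()
summary = lines[0].removeprefix("Summary: ")
dashboard_url = lines[1].removeprefix("Dashboard: ")
print(summary)
print(dashboard_url)
```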
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, covering safety and scope. The description adds value by specifying that it 'opens a dashboard' and allows 'viewing individual transcripts,' which are behavioral traits not captured in annotations. However, it lacks details on permissions, rate limits, or dashboard specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and efficient, using two sentences that directly convey the tool's function and interactive nature without unnecessary details. Every sentence earns its place by adding clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (interactive dashboard), lack of output schema, and rich annotations, the description is minimally adequate. It covers the core action but omits details on dashboard behavior, response formats, or error handling, leaving gaps for an agent to infer.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'project_token' fully documented in the schema. The description adds no additional parameter semantics beyond what the schema provides, so it meets the baseline of 3 without compensating for any gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('view survey responses and transcripts interactively') and resource ('survey responses'), distinguishing it from sibling tools like analyze_survey or simulate_responses. However, it doesn't explicitly differentiate from potential overlaps like 'analyze_question' for viewing specific response data, keeping it at a 4 rather than a perfect 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like analyze_survey or simulate_responses, nor does it mention prerequisites or exclusions. It merely describes what the tool does without contextual usage advice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
