create-survey

Create AI surveys with dynamic follow-up probing directly from your AI assistant.

Server Details

- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: feedbk-ai/feedbk-mcp-server
- GitHub Stars: 2
- Server Listing: AI Survey Creator MCP Server
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5, with 4 of 4 tools scored.
The tools have distinct purposes with minimal overlap: create_edit_survey initiates or edits, save_survey persists changes, share_survey distributes, and view_responses analyzes data. However, create_edit_survey and save_survey could be slightly confusing, as both handle creation, but the descriptions clarify that create_edit_survey starts the workflow while save_survey finalizes it.
All tool names follow a consistent snake_case, verb_noun pattern (create_edit_survey, save_survey, share_survey, view_responses). This predictability makes it easy for agents to understand and select tools based on naming alone.
With 4 tools scored, the set is well-scoped for a survey creation server, covering the core lifecycle: creation and editing, saving, sharing, and viewing responses. Each tool earns its place without redundancy, keeping the surface manageable and focused on the domain's essential operations.
The tool surface covers the main survey workflow comprehensively: create/edit, preview, save, share, and view responses. A minor gap is the lack of explicit update or delete operations for surveys, but save_survey handles updates, and basic functionality does not necessarily require deletion.
Available Tools
5 tools

create_edit_survey – Create or Edit Survey (Grade: B, Read-only)
Create a new survey or edit an existing one. Call this to start the survey workflow. If the user provides a project_token, include it to load the existing survey for editing.
| Name | Required | Description | Default |
|---|---|---|---|
| project_token | No | Optional: the user's project token (format: projectId:secret) to load an existing survey for editing | |
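For orientation, here is a minimal sketch of calling this tool from the official TypeScript MCP SDK over the Streamable HTTP transport the listing advertises. The endpoint URL is a hypothetical placeholder (the listing does not show the real one), and the empty arguments object simply starts a fresh survey, per the description.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

async function main() {
  // Hypothetical endpoint; substitute the connector's real URL.
  const transport = new StreamableHTTPClientTransport(
    new URL("https://feedbk.example/mcp"),
  );
  const client = new Client({ name: "survey-demo", version: "1.0.0" });
  await client.connect(transport);

  // Omitting project_token starts a brand-new survey workflow;
  // pass one (format projectId:secret) to load an existing survey instead.
  const result = await client.callTool({
    name: "create_edit_survey",
    arguments: {},
  });

  // Tool results arrive as a content array; print any text blocks.
  for (const block of result.content as Array<{ type: string; text?: string }>) {
    if (block.type === "text") console.log(block.text);
  }

  await client.close();
}

main().catch(console.error);
```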
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true, destructiveHint=false, and openWorldHint=false, which already convey that this is a safe, non-destructive operation with limited scope. The description adds minimal behavioral context beyond this, such as 'start the survey workflow,' but doesn't detail what that entails (e.g., UI interactions, state changes). No contradiction with annotations exists, but the description doesn't enrich behavioral understanding significantly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences that directly address purpose and parameter usage, avoiding redundancy. It's front-loaded with the core function, though it could be slightly more structured (e.g., separating creation vs. editing scenarios). Overall, it's efficient with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (mutation-like behavior, though annotated as read-only), its single fully documented parameter, the absence of an output schema, and annotations that cover safety, the description is moderately complete. It explains the dual create/edit function and the parameter's role but lacks details on workflow outcomes, error handling, or integration with sibling tools, leaving room for improvement in guiding the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'project_token' fully documented in the schema. The description adds marginal value by reiterating its optional nature and linking it to editing, but doesn't provide additional semantics beyond what the schema already states (e.g., format details or usage nuances). Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Create a new survey or edit an existing one.' It specifies the verb ('create'/'edit') and resource ('survey'), making the function unambiguous. However, it doesn't explicitly differentiate from siblings like 'save_survey' or 'preview_survey', which might handle overlapping aspects of survey management.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage guidance: 'Call this to start the survey workflow' and mentions using 'project_token' for editing. This implies context but lacks explicit when-to-use rules or alternatives (e.g., when to use 'save_survey' instead). It doesn't specify exclusions or prerequisites, leaving gaps in agent decision-making.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
preview_survey – Preview Survey (Grade: A, Read-only)
Get a preview link for an existing survey so the user can try it out before sharing. Requires the project_token.
| Name | Required | Description | Default |
|---|---|---|---|
| project_token | Yes | Your project token (format: projectId:secret) | |
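Reusing the connected client from the sketch above, a preview call needs only the required token. The token value below is a fabricated placeholder in the projectId:secret format the table documents.

```typescript
// Inside the same async context as the earlier sketch.
const preview = await client.callTool({
  name: "preview_survey",
  arguments: { project_token: "proj_abc123:s3cr3t" }, // placeholder token
});
// The preview link is expected to come back as text content.
```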
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare this as read-only, non-destructive, and closed-world, which the description doesn't contradict. The description adds valuable context beyond annotations by specifying that it generates a preview link for testing purposes, which helps the agent understand the tool's behavioral output even without an output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place: the first states the purpose and action, the second states the requirement. No wasted words, and the information is front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with comprehensive annotations and clear purpose, the description is mostly complete. The main gap is the lack of output schema, so the description doesn't specify what the preview link looks like or how it's returned. However, given the tool's simplicity and good annotations, this is a minor omission.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents the single required parameter (project_token). The description mentions the parameter requirement but doesn't add semantic meaning beyond what's in the schema, such as explaining why this token is needed or how it relates to survey previews.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get a preview link'), the resource ('an existing survey'), and the purpose ('so the user can try it out before sharing'). It explicitly distinguishes from sibling tools like share_survey and view_responses by focusing on preview functionality rather than distribution or analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool ('to try it out before sharing') and mentions the prerequisite requirement ('Requires the project_token'). However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools, though the purpose implies it's for preview rather than final sharing or editing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
save_survey – Save Survey (Grade: A)
Save a survey. Creates a new survey if no token is provided, or updates an existing one. Returns the survey URL and token.
| Name | Required | Description | Default |
|---|---|---|---|
| guide | Yes | | |
| project_token | No | Project token (format: projectId:secret). Omit to create a new survey. | |
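Because the guide parameter carries no description in this listing (the quality notes below call this out), the payload here is purely illustrative: field names such as title and questions are assumptions, not the server's actual schema. Again assuming the connected client from the first sketch:

```typescript
// WARNING: the `guide` structure below is a guess for illustration only;
// inspect the tool's input schema for the real field names and nesting.
const saved = await client.callTool({
  name: "save_survey",
  arguments: {
    // project_token omitted: per the description, this creates a new survey.
    guide: {
      title: "Onboarding feedback", // assumed field
      questions: [{ text: "How was setup?", type: "open" }], // assumed fields
    },
  },
});
// Per the description, the result includes the survey URL and a token;
// keep the token to update or preview this survey later.
```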
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-destructive write operation (readOnlyHint: false, destructiveHint: false), which the description supports by framing the tool as a save that 'creates a new survey if no token is provided, or updates an existing one'. It adds the useful context that the call 'returns the survey URL and token', but omits details about validation behavior, version handling, or failure modes that would help an agent handle errors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero redundancy: the first states the purpose, the second the create-versus-update rule, the third the return value. Perfectly front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity of the nested 'guide' object (containing questions, choices, logic conditions) and lack of output schema, the description is minimally adequate. It establishes the tool's role but leaves agents to infer the semantics of the survey structure from the schema alone.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 50% schema coverage, the description neither helps nor hinders. It completely ignores the complex 'guide' parameter (a deeply nested survey structure) and adds nothing beyond the schema's description of 'project_token'. At 50% coverage, the baseline is 3, and the description meets but does not exceed this.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (save a survey), the resource (a new or existing survey), and the outcome (the survey URL and token). It distinguishes itself from siblings through the token rule that separates creation from update, contrasting with create_edit_survey, which starts the workflow rather than persisting the result.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit workflow positioning through the token rule ('Omit to create a new survey'), clarifying which mode a call runs in. However, it lacks explicit when-to-use guidance (e.g., 'use this after create_edit_survey'), explicit exclusions, or named alternatives for other scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
view_responses – View Survey Results (Grade: B, Read-only)
View survey responses and transcripts. Returns a summary plus a dashboard URL for interactive browsing.
| Name | Required | Description | Default |
|---|---|---|---|
| project_token | Yes | Your project token (format: projectId:secret) | |
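To close the loop, a sketch of reading results back with the same connected client; per the MCP spec, tool output arrives as a content array whose text blocks should carry the summary and dashboard URL the description promises. The token is again a placeholder.

```typescript
const responses = await client.callTool({
  name: "view_responses",
  arguments: { project_token: "proj_abc123:s3cr3t" }, // placeholder token
});
for (const block of responses.content as Array<{ type: string; text?: string }>) {
  if (block.type === "text") console.log(block.text); // summary + dashboard URL
}
```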
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, covering safety and scope. The description adds value by specifying that it returns 'a summary plus a dashboard URL for interactive browsing', a behavioral trait not captured in annotations. However, it lacks details on permissions, rate limits, or what the dashboard exposes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and efficient, using two sentences that directly convey the tool's function and interactive nature without unnecessary details. Every sentence earns its place by adding clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (interactive dashboard), lack of output schema, and rich annotations, the description is minimally adequate. It covers the core action but omits details on dashboard behavior, response formats, or error handling, leaving gaps for an agent to infer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'project_token' fully documented in the schema. The description adds no additional parameter semantics beyond what the schema provides, so it meets the baseline of 3 without compensating for any gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('View survey responses and transcripts') and the resource (survey responses), and the dashboard URL for interactive browsing distinguishes it from siblings like preview_survey (trying the survey out) and save_survey (persisting changes). However, it doesn't explicitly name those siblings or delineate where the returned summary ends and the dashboard begins, keeping it at a 4 rather than a perfect 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus its siblings (e.g., preview_survey for trying a survey before responses exist), nor does it mention prerequisites or exclusions. It merely describes what the tool does without contextual usage advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
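If the server's domain is fronted by a Node process, a minimal Express route is one way to publish the claim file. This is a sketch assuming Express; the email is the same placeholder as in the snippet above and must be replaced with your Glama account email.

```typescript
import express from "express";

const app = express();

// Serve the claim file at the well-known path on the server's domain.
app.get("/.well-known/glama.json", (_req, res) => {
  res.json({
    $schema: "https://glama.ai/mcp/schemas/connector.json",
    maintainers: [{ email: "your-email@example.com" }], // placeholder email
  });
});

// Run behind your TLS-terminating proxy so the file is reachable over HTTPS.
app.listen(3000);
```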
Claiming the listing lets you:

- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:

- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:

- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.