create-survey

Server Details

Create AI surveys with dynamic follow-up probing directly from your AI assistant.

- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: feedbk-ai/feedbk-mcp-server
- GitHub Stars: 2
- Server Listing: AI Survey Creator MCP Server
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools

9 tools

analyze_question (Analyze Question) · Grade A · Read-only
Show a visual analysis of a single survey question. Renders a horizontal bar chart for closed questions (single/multiple choice) or an answer explorer for open text questions.
| Name | Required | Description | Default |
|---|---|---|---|
| question_id | Yes | The question ID to analyze (e.g., q1, q2) | |
| project_token | Yes | Your project token (format: projectId:secret) | |
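As a rough illustration, an agent invokes this tool through an MCP `tools/call` request. The sketch below builds that JSON-RPC payload by hand; in practice an MCP client library constructs the envelope for you, and `proj_123:secret_abc` is a placeholder token, not a real credential:

```python
import json

# Hypothetical JSON-RPC payload for calling analyze_question over MCP.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_question",
        "arguments": {
            "question_id": "q1",  # required: which question to analyze
            "project_token": "proj_123:secret_abc",  # required: projectId:secret
        },
    },
}

print(json.dumps(request, indent=2))
```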
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnlyHint=true, the description adds valuable behavioral context: it discloses conditional rendering logic (horizontal bar chart for closed questions vs answer explorer for open text) that is not present in annotations or schema. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes core purpose, second details visualization behavior. Appropriately front-loaded and sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only visualization tool with 2 simple parameters, the description adequately covers the essentials: input requirements (implied), processing logic (conditional rendering), and output type (visual analysis). No output schema exists, but the description sufficiently characterizes the return value.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (both project_token and question_id fully documented), the baseline is 3. The description implies the need for a question identifier through 'single survey question' but adds no explicit parameter guidance beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Show a visual analysis' plus resource 'survey question' and explicitly scopes to 'single' question, clearly distinguishing from sibling 'analyze_survey' (which operates at the survey level). The rendering details (bar chart vs answer explorer) further clarify the output format.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context by specifying 'single survey question,' implicitly distinguishing from survey-level siblings like 'analyze_survey.' However, lacks explicit when-not guidance or named alternatives for scenarios requiring multi-question analysis or raw data export.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
analyze_survey (Analyze Survey) · Grade B · Read-only
Start the analysis workflow for a survey. Returns the analysis prompt and question IDs to analyze.
| Name | Required | Description | Default |
|---|---|---|---|
| project_token | Yes | Your project token (format: projectId:secret) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, indicating a safe, non-destructive operation that may have variable outcomes. The description adds that it 'Returns the analysis prompt and question IDs to analyze,' which provides useful context about the output format beyond annotations. However, it doesn't disclose other behavioral traits like rate limits, authentication needs (implied by project_token but not stated), or whether the analysis is synchronous/asynchronous.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (two short sentences) and front-loaded with the core purpose. Every word earns its place: the first sentence states the action and resource, and the second specifies the return values. There is zero redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (initiating an analysis workflow), rich annotations (covering safety and openness), and lack of output schema, the description is minimally adequate. It explains what the tool does and what it returns, but doesn't cover error conditions, side effects, or how the returned prompt/IDs should be used (e.g., with 'analyze_question'). For a workflow-initiating tool, more context on next steps would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'project_token' fully documented in the schema. The description adds no parameter-specific information beyond what the schema provides (e.g., no clarification on token format or where to obtain it). Baseline score of 3 is appropriate since the schema carries the full burden of parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Start the analysis workflow') and resource ('for a survey'), and specifies the return values ('analysis prompt and question IDs to analyze'). It distinguishes from siblings like 'analyze_question' by focusing on the survey-level workflow initiation rather than individual question analysis. However, it doesn't explicitly contrast with other analysis-related tools like 'view_responses' or 'simulate_responses'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a completed survey), exclusions (e.g., not for archived surveys), or relationships to sibling tools like 'analyze_question' (which might be used after this tool). The agent must infer usage from the tool name and context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
archive_survey (Archive Survey) · Grade A
Archive or reactivate a survey. Archived surveys stop accepting new responses but keep existing data intact.
| Name | Required | Description | Default |
|---|---|---|---|
| active | Yes | true to reactivate, false to archive | |
| project_token | Yes | Your project token (format: projectId:secret) | |
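Because the `active` flag is inverted relative to the tool name (false archives, true reactivates), a thin wrapper can keep call sites self-documenting. This is a sketch, not part of the server's API; the function name is invented:

```python
def archive_survey_arguments(project_token: str, archive: bool) -> dict:
    """Build an arguments dict for the archive_survey tool.

    Note the inversion: archive=True maps to active=False.
    """
    return {"active": not archive, "project_token": project_token}

# Archiving sets active to False; reactivating sets it back to True.
assert archive_survey_arguments("p1:s1", archive=True) == {"active": False, "project_token": "p1:s1"}
assert archive_survey_arguments("p1:s1", archive=False) == {"active": True, "project_token": "p1:s1"}
```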
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false, openWorldHint=false, and destructiveHint=false, covering basic traits. The description adds valuable context: it specifies that archiving stops new responses while preserving existing data, which is not covered by annotations. However, it lacks details on permissions, rate limits, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action and effect. Every word contributes to understanding the tool's purpose without redundancy or unnecessary detail, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (mutation with two parameters), annotations provide safety hints, and schema covers parameters fully. The description adds key behavioral context about data preservation. However, without an output schema, it could benefit from mentioning return values or confirmation messages, slightly limiting completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear parameter descriptions in the schema. The description does not add any additional meaning beyond the schema, such as explaining the implications of the 'active' parameter or the format of 'project_token'. Baseline score of 3 is appropriate since the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Archive or reactivate') on a specific resource ('a survey'), distinguishing it from siblings like create_survey, share_survey, or view_responses. It also specifies the effect ('stop accepting new responses but keep existing data intact'), which further clarifies its purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for changing survey status (archive/reactivate) but does not explicitly state when to use this tool versus alternatives like save_survey or analyze_survey. No exclusions or prerequisites are mentioned, leaving some ambiguity about appropriate contexts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_survey (Create Survey) · Grade A · Read-only
Create a new survey or edit an existing one. Call this to start the survey workflow. If the user provides a project_token, include it to load the existing survey for editing.
| Name | Required | Description | Default |
|---|---|---|---|
| project_token | No | Optional: the user's project token (format: projectId:secret) to load an existing survey for editing | |
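Since `project_token` is the only parameter and is optional, the create-versus-edit branch comes down to whether the key is present at all. A minimal sketch (the helper name is illustrative, not part of the server):

```python
from typing import Optional

def create_survey_arguments(project_token: Optional[str] = None) -> dict:
    """Omit project_token to create a new survey; include it to edit an existing one."""
    args: dict = {}
    if project_token is not None:
        args["project_token"] = project_token
    return args

# New survey: empty arguments. Editing: token included.
assert create_survey_arguments() == {}
assert create_survey_arguments("proj_123:secret") == {"project_token": "proj_123:secret"}
```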
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, so the agent knows this is a safe, non-destructive operation with closed-world assumptions. The description adds useful context about the 'survey workflow' and editing functionality, but doesn't disclose additional behavioral traits like rate limits, authentication needs, or what 'editing' entails beyond loading. With annotations covering the safety profile, this earns a baseline score for adding some value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences that each serve a clear purpose: stating the tool's function and providing usage guidance. It's front-loaded with the core purpose and avoids unnecessary elaboration, though it could be slightly more concise by combining ideas.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (create/edit functionality), lack of output schema, and rich annotations, the description is adequate but has gaps. It explains the basic workflow and parameter use, but doesn't detail what happens after creation/editing, error conditions, or how it interacts with sibling tools like 'save_survey'. The annotations help, but more completeness would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter fully documented in the schema. The description adds marginal value by explaining that project_token is used 'to load an existing survey for editing,' which reinforces but doesn't significantly expand upon the schema's description. This meets the baseline when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Create a new survey or edit an existing one.' It specifies the verb ('create'/'edit') and resource ('survey'), making the intent unambiguous. However, it doesn't explicitly differentiate this from sibling tools like 'save_survey' or 'archive_survey', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context: 'Call this to start the survey workflow' and explains when to include the project_token parameter for editing. It gives practical guidance on when to use the tool, though it doesn't explicitly state when NOT to use it or mention alternatives among the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
save_simulated_response (Save Simulated Response) · Grade A
Save a single simulated response to a survey. Called by the simulation workflow for each generated respondent.
| Name | Required | Description | Default |
|---|---|---|---|
| answers | Yes | Answers keyed by question ID (e.g. q1, q2). | |
| project_token | Yes | Your project token (format: projectId:secret) | |
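The `answers` object is keyed by question ID. The payload below sketches one simulated respondent; only the `q1`/`q2` keying comes from the schema description, while the per-question value shapes (string vs list) and the token are assumptions for illustration:

```python
# Hypothetical arguments for one save_simulated_response call.
arguments = {
    "project_token": "proj_123:secret_abc",  # placeholder credential
    "answers": {
        "q1": "Very satisfied",                   # assumed single-choice answer
        "q2": ["Price", "Ease of use"],           # assumed multiple-choice answer
        "q3": "The onboarding flow was smooth.",  # assumed open-text answer
    },
}

# The simulation workflow issues one such call per generated respondent.
assert set(arguments["answers"]) == {"q1", "q2", "q3"}
```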
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a write operation (readOnlyHint: false) that is non-destructive. The description adds workflow context ('simulation workflow') but does not disclose additional behavioral traits like idempotency, rate limits, or what happens to the saved data after persistence.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. The first sentence front-loads the core purpose, while the second provides essential usage context. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 params, nested objects) and available annotations, the description is adequately complete. It explains the 'what' and 'when' sufficiently for a data persistence tool. Minor gaps remain regarding success indicators or persistence guarantees, but no output schema exists that would require documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents both parameters including the project_token format and answers structure. The description implies the singular nature of the operation ('single simulated response') aligning with the answers parameter representing one respondent, but does not add syntax details beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (save), resource (simulated response), and scope (single, to a survey). It effectively distinguishes from sibling tools like 'save_survey' (which saves the survey structure) and 'simulate_responses' (which likely generates responses) by emphasizing this persists individual generated responses.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear contextual guidance by stating it is 'Called by the simulation workflow for each generated respondent,' indicating it should be used within an iterative simulation process. However, it does not explicitly state when NOT to use it or directly reference sibling alternatives like 'simulate_responses'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
save_survey (Save Survey) · Grade A
Save a survey. Creates a new survey if no token is provided, or updates an existing one. Returns the survey URL and token.
| Name | Required | Description | Default |
|---|---|---|---|
| guide | Yes | | |
| project_token | No | Project token (format: projectId:secret). Omit to create a new survey. | |
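The `guide` parameter carries the whole survey structure, but its schema is not documented in this listing, so the payload below is purely illustrative. Only the top-level shape (required `guide`, optional `project_token`) comes from the table above:

```python
from typing import Optional

def save_survey_arguments(guide: dict, project_token: Optional[str] = None) -> dict:
    """project_token omitted -> create a new survey; present -> update an existing one."""
    args: dict = {"guide": guide}
    if project_token is not None:
        args["project_token"] = project_token
    return args

# A hypothetical guide payload; the real structure is not shown on this page.
guide = {
    "title": "Customer feedback",
    "questions": [{"id": "q1", "text": "How satisfied are you?"}],
}

assert "project_token" not in save_survey_arguments(guide)
assert save_survey_arguments(guide, "p:s")["project_token"] == "p:s"
```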
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-destructive write operation (readOnlyHint: false, destructiveHint: false), which the description confirms with 'Save a survey'. It adds the useful context that omitting the token creates a new survey while supplying one updates an existing survey, but omits validation behavior, version handling, and failure modes that would help an agent handle errors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero redundancy: the first states the purpose, the second the create-versus-update behavior, and the third the return values. Front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity of the nested 'guide' object (containing questions, choices, logic conditions) and lack of output schema, the description is minimally adequate. It establishes the tool's role but leaves agents to infer the semantics of the survey structure from the schema alone.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 50% schema coverage, the description neither helps nor hinders. It completely ignores the complex 'guide' parameter (a deeply nested survey structure) and adds nothing beyond the schema's description of 'project_token'. At 50% coverage, the baseline is 3, and the description meets but does not exceed this.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (save), the resource (a survey), and the create-versus-update branching on the token. It distinguishes from the sibling create_survey (which starts the authoring workflow) by being the persistence step that returns the survey URL and token.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Clarifies when a new survey is created versus an existing one updated (the presence of the project_token), which positions the tool within the workflow. However, it lacks explicit exclusions or named alternatives for other scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
simulate_responses (Simulate Responses) · Grade A · Read-only
Start the simulation workflow for a survey. Returns the simulation prompt and survey guide so you can generate responses client-side. Use when the user wants to simulate responses, simulate an interview, or generate test data.
| Name | Required | Description | Default |
|---|---|---|---|
| project_token | Yes | Your project token (format: projectId:secret) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds crucial context beyond annotations: clarifies that responses are generated client-side (not server-side) and specifies return values ('simulation prompt and survey guide'). Annotations confirm read-only safety, while description explains the actual behavioral pattern.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. Front-loaded with core action and return value, followed by usage conditions. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter read operation. Explains return values compensating for missing output schema. Could strengthen by mentioning relationship to 'save_simulated_response' for the full workflow context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with complete parameter documentation. Description adds no parameter details, but none are needed given the schema self-documents. Baseline score appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Start') and resource ('simulation workflow'), and distinguishes from the sibling 'save_simulated_response' by clarifying that it returns prompts and guides rather than persisting generated responses.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit 'Use when...' trigger conditions ('simulate responses', 'simulate an interview', 'generate test data'). Lacks explicit 'when not to use' or comparison to sibling 'save_simulated_response', but context is clear enough for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
view_responses (View Survey Results) · Grade B · Read-only
View survey responses and transcripts interactively. Opens a dashboard showing all responses with the ability to view individual transcripts.
| Name | Required | Description | Default |
|---|---|---|---|
| project_token | Yes | Your project token (format: projectId:secret) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=true, covering safety and scope. The description adds value by specifying that it 'opens a dashboard' and allows 'viewing individual transcripts,' which are behavioral traits not captured in annotations. However, it lacks details on permissions, rate limits, or dashboard specifics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and efficient, using two sentences that directly convey the tool's function and interactive nature without unnecessary details. Every sentence earns its place by adding clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (interactive dashboard), lack of output schema, and rich annotations, the description is minimally adequate. It covers the core action but omits details on dashboard behavior, response formats, or error handling, leaving gaps for an agent to infer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'project_token' fully documented in the schema. The description adds no additional parameter semantics beyond what the schema provides, so it meets the baseline of 3 without compensating for any gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('view survey responses and transcripts interactively') and resource ('survey responses'), distinguishing it from sibling tools like analyze_survey or simulate_responses. However, it doesn't explicitly differentiate from potential overlaps like 'analyze_question' for viewing specific response data, keeping it at a 4 rather than a perfect 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like analyze_survey or simulate_responses, nor does it mention prerequisites or exclusions. It merely describes what the tool does without contextual usage advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
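Before publishing, you can sanity-check the file locally. This sketch validates only the two fields shown above (`maintainers` with at least one `email`); it is not Glama's actual verifier:

```python
import json

def check_glama_manifest(text: str) -> bool:
    """Minimal structural check for a /.well-known/glama.json file."""
    doc = json.loads(text)
    maintainers = doc.get("maintainers", [])
    return (
        isinstance(maintainers, list)
        and len(maintainers) > 0
        and all("@" in m.get("email", "") for m in maintainers)
    )

manifest = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json", '
    '"maintainers": [{"email": "your-email@example.com"}]}'
)
assert check_glama_manifest(manifest)
```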
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.