Glama
sunsiyuan

humansurvey-mcp

Create Survey

create_survey

Create surveys using JSON schema to collect structured feedback from groups. Returns a shareable response URL with support for conditional questions, response caps, and scheduled expiration.

Instructions

Use this when an agent task involves collecting structured feedback or data from a group of people. Common cases: post-event attendee feedback, product satisfaction after a launch, team health checks, customer ratings after support resolution. Provide a JSON schema object describing the survey. Returns a survey_url to share with respondents and a survey_id to pass to get_results later. The survey accepts responses immediately and stays open until you close it or it expires.
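To make the "JSON schema object" input concrete, here is a minimal sketch of a survey payload assembled as a plain Python dict. Field names follow the Input Schema documentation on this page; the question `id` field is an assumption (the docs show `showIf` referencing a `questionId`, so questions presumably carry ids), and the surrounding call mechanics depend on your MCP client.

```python
# Hypothetical survey payload for create_survey, built as a plain dict.
# Field names follow this page's parameter docs; "id" on each question
# is assumed so that showIf can reference it.
from datetime import datetime, timedelta, timezone

schema = {
    "title": "Post-Event Attendee Feedback",
    "sections": [
        {
            "questions": [
                {
                    "id": "overall",
                    "type": "scale",
                    "label": "How would you rate the event overall?",
                    "min": 1,
                    "max": 10,  # range must be <= 11 values
                },
                {
                    "id": "attend_again",
                    "type": "single_choice",
                    "label": "Would you attend again?",
                    "options": ["Yes", "No", "Not sure"],
                },
                {
                    "id": "why_not",
                    "type": "text",
                    "label": "What would change your mind?",
                    # Conditional: only shown when attend_again == "No"
                    "showIf": {
                        "questionId": "attend_again",
                        "operator": "eq",
                        "value": "No",
                    },
                },
            ]
        }
    ],
}

# Optional lifecycle parameters: cap at 200 responses or close in 7 days.
arguments = {
    "schema": schema,
    "max_responses": 200,
    "expires_at": (datetime.now(timezone.utc) + timedelta(days=7))
        .strftime("%Y-%m-%dT%H:%M:%SZ"),
}
```

Passing `arguments` to the tool would return a survey_url to share and a survey_id to hand to get_results later.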

Input Schema

schema (required)
Survey as a JSON schema object: { title: string, sections: [{ questions: [{ type, label, ... }] }] }. Question types: single_choice (needs options), multi_choice (needs options), text, scale (needs min/max, range ≤ 11), matrix (needs rows + columns). Add showIf: { questionId, operator: "eq"|"neq"|"contains"|"answered", value } to any question for conditional logic.

max_responses (optional)
Close the survey automatically after this many responses.

expires_at (optional)
ISO 8601 datetime. Close the survey automatically at this time (e.g. "2026-04-14T00:00:00Z").

webhook_url (optional)
URL to POST to once when the survey closes. Payload: { survey_id, status: "closed", closed_reason: "manual" | "max_responses", response_count, closed_at }. Fires when you call close_survey or when max_responses is reached. Does not fire when expires_at elapses.
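The close-notification payload above can be checked on the receiving end. The following is an illustrative sketch: the payload shape comes from the webhook_url description, but the validate_close_payload helper itself is hypothetical, not part of the tool.

```python
# Illustrative validation of the close-notification payload documented
# above. Field names come from the webhook_url parameter description;
# validate_close_payload is a hypothetical helper for your own receiver.
import json

def validate_close_payload(raw: str) -> dict:
    payload = json.loads(raw)
    required = {"survey_id", "status", "closed_reason", "response_count", "closed_at"}
    missing = required - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if payload["status"] != "closed":
        raise ValueError("webhook only fires with status 'closed'")
    # "expired" is deliberately absent: the webhook does not fire
    # when expires_at elapses, only on manual close or max_responses.
    if payload["closed_reason"] not in {"manual", "max_responses"}:
        raise ValueError(f"unexpected closed_reason: {payload['closed_reason']}")
    return payload

example = json.dumps({
    "survey_id": "svy_123",
    "status": "closed",
    "closed_reason": "max_responses",
    "response_count": 200,
    "closed_at": "2026-04-14T00:00:00Z",
})
print(validate_close_payload(example)["closed_reason"])  # → max_responses
```

Note that a survey closed by expires_at produces no POST at all, so a receiver should not wait on the webhook to learn about expiration.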
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully discloses lifecycle behavior (accepts responses immediately, stays open until closed or expired) and the return structure. However, it lacks the safety disclosures expected of a resource-creating tool without annotation coverage: required permissions, validation behavior, idempotency, and what happens on duplicate creation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five well-structured sentences that front-load use cases, specify input requirements, document return values, and explain lifecycle behavior. Each sentence earns its place by adding distinct information not redundant with structured fields, though the 'Use this when' opening is slightly less direct than an action verb.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema, the description appropriately documents the return values (survey_url, survey_id) and their purposes. It covers the complex nested input sufficiently by reference to a 'JSON schema object' and explains integration points with sibling tools (get_results, close_survey). Minor gap: high-level webhook behavior isn't mentioned in the description text, only in the parameter schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with a detailed JSON schema specification. The description adds minimal semantics beyond the schema, primarily noting to 'Provide a JSON schema object', which partially overlaps with the schema parameter description. A baseline score of 3 is appropriate, given that the schema itself carries the detailed parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool's purpose through use cases and return values (survey_url, survey_id), though it relies on the title for the explicit 'Create' verb, opening with a 'Use this when' construction rather than an action-first statement. It effectively distinguishes the tool from its sibling get_results by naming it as the destination for survey_id.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance with concrete examples (post-event feedback, product satisfaction, team health checks). It references the sibling tool get_results in workflow context (where to pass survey_id), establishing clear usage sequencing, though it lacks explicit 'when not to use' exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

