aidesignblueprint (Official)

AI Design Blueprint Doctrine
signals.feedback

Record explicit user feedback about Blueprint guidance. Collect ratings for clarity and usefulness, report what helped or was missing, and indicate re-use intent. Captures contact email only with permission for follow-up.

Instructions

Records explicit user feedback — open to all callers, no auth required. Call ONLY when the user explicitly says they want to give feedback; never proactively. contact_email stored only when permission_to_follow_up is true — confirmed in response.

Input Schema

All nine parameters are optional; none has a default.

task_type: What the user was doing when they decided to give feedback. Use plain English — e.g. 'code-review', 'architecture-design', 'agent-setup', 'onboarding', 'validation'. Infer from context.

surface: Which Blueprint surface the feedback is about. Use 'mcp' if the session was via Claude Code or another MCP client. Use 'principles', 'examples', 'guides', 'coaching', or 'validation' based on what the user interacted with.

rating_clarity: Ask the user: 'How clear was the Blueprint guidance? Rate 1–5.' 1 = very unclear, 5 = very clear. Only set if the user gives an explicit number.

rating_usefulness: Ask the user: 'How useful was the Blueprint for this task? Rate 1–5.' 1 = not useful, 5 = very useful. Only set if the user gives an explicit number.

what_helped: Ask the user: 'What was most helpful?' Record their answer verbatim or paraphrased in plain English. Max 1000 chars. No code snippets, no proprietary content.

what_missing: Ask the user: 'What was missing or could be improved?' Record their answer verbatim or paraphrased. Max 1000 chars.

would_use_again: Ask the user: 'Would you use the Blueprint again for a similar task?' Set true/false based on their answer. Only set if they answer explicitly.

contact_email: Only ask for this if the user explicitly says they want a follow-up response. Never prompt for email unprompted. Only stored when permission_to_follow_up=true.

permission_to_follow_up: Set to true only if the user explicitly said they want a follow-up. Must be confirmed before storing contact_email.
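To make the schema concrete, here is a hypothetical call payload that satisfies it. Every value is invented for illustration, and contact_email appears only because permission_to_follow_up is true:

{
  "task_type": "code-review",
  "surface": "mcp",
  "rating_clarity": 4,
  "rating_usefulness": 5,
  "what_helped": "The validation checklist caught a missing error path.",
  "what_missing": "More worked examples for multi-agent setups.",
  "would_use_again": true,
  "permission_to_follow_up": true,
  "contact_email": "user@example.com"
}

If the user declines a follow-up, omit contact_email entirely and leave permission_to_follow_up unset or false.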
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the tool is a write operation (readOnlyHint=false) and adds critical behavioral details: no auth is required, contact_email is stored only when permission_to_follow_up is true, and storage is confirmed in the response. This goes well beyond the annotations.
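In MCP, hints like this travel as structured tool annotations alongside the description. A minimal sketch of the relevant fragment of this tool's listing, assuming it follows the standard annotations shape (only readOnlyHint=false is confirmed by this review):

{
  "name": "signals.feedback",
  "annotations": {
    "readOnlyHint": false
  }
}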

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long and front-loaded. The first sentence states purpose and auth, the second gives a critical usage guideline. Every word is necessary; no fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 9 optional parameters and no output schema, the description covers the essential behavioral context: auth, when to call, and the conditional storage of contact_email. This is sufficient for an AI agent to invoke the tool correctly on the first attempt.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description does not add parameter semantics; it focuses on usage. The schema descriptions are detailed and sufficient on their own, so the tool description adds only marginal value on this dimension.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool records explicit user feedback and specifies that it is open to all callers with no auth required. It distinguishes itself by focusing solely on feedback, unlike siblings such as 'signals.report', which likely handles other signals.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description says to call the tool only when the user explicitly asks to give feedback, never proactively. It also notes that no auth is required, providing clear usage context. However, it does not compare the tool to its siblings or say when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
