
LaunchNotes MCP Server

Official
by launchnotes

Get LaunchNotes Feedback Item

launchnotes_get_feedback
Read-only · Idempotent

Retrieve detailed feedback information including customer data, reporter details, sentiment analysis, and associated items to understand user input context.

Instructions

Retrieve complete details for a specific feedback item including customer info, reporter, and associations.

Args:

  • feedback_id (string): The ID of the feedback item (required)

  • response_format ('json' | 'markdown'): Output format (default: 'markdown')

Returns: Complete feedback details including:

  • Content and internal notes

  • Sentiment (reaction) and importance

  • Affected customer information

  • Reporter information

  • Associated announcement/idea/work item

  • Timestamps

Use Cases:

  • "Show me details for feedback #abc123"

  • "Get the full context of this feedback item"

  • "What announcement is this feedback associated with?"

Error Handling:

  • Returns "Feedback not found" if ID doesn't exist

  • Returns "Authentication failed" if API token is invalid

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| feedback_id | Yes | The ID of the feedback item to retrieve | (none) |
| response_format | No | Output format: 'json' for structured data, 'markdown' for human-readable | markdown |
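
A minimal invocation supplies only the required parameter; `response_format` falls back to 'markdown'. The ID below is hypothetical:

```json
{
  "feedback_id": "abc123"
}
```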

Implementation Reference

  • Registration of the launchnotes_get_feedback tool with the MCP server, including tool metadata, input schema reference, annotations, and the complete handler function that fetches and formats the feedback item.
      server.registerTool(
        "launchnotes_get_feedback",
        {
          title: "Get LaunchNotes Feedback Item",
          description: `Retrieve complete details for a specific feedback item including customer info, reporter, and associations.
    
    Args:
      - feedback_id (string): The ID of the feedback item (required)
      - response_format ('json' | 'markdown'): Output format (default: 'markdown')
    
    Returns:
      Complete feedback details including:
      - Content and internal notes
      - Sentiment (reaction) and importance
      - Affected customer information
      - Reporter information
      - Associated announcement/idea/work item
      - Timestamps
    
    Use Cases:
      - "Show me details for feedback #abc123"
      - "Get the full context of this feedback item"
      - "What announcement is this feedback associated with?"
    
    Error Handling:
      - Returns "Feedback not found" if ID doesn't exist
      - Returns "Authentication failed" if API token is invalid`,
          inputSchema: GetFeedbackSchema,
          annotations: {
            readOnlyHint: true,
            destructiveHint: false,
            idempotentHint: true,
            openWorldHint: true,
          },
        },
        async (params: GetFeedbackInput) => {
          try {
            const result = await getFeedback(client, params.feedback_id);
            const feedback = result.feedback;
    
            if (params.response_format === RESPONSE_FORMAT.JSON) {
              return {
                content: [
                  {
                    type: "text",
                    text: JSON.stringify(feedback, null, 2),
                  },
                ],
              };
            }
    
            // Markdown format
            return {
              content: [
                {
                  type: "text",
                  text: formatFeedbackMarkdown(feedback),
                },
              ],
            };
          } catch (error) {
            return {
              isError: true,
              content: [
                {
                  type: "text",
                  text: `Error retrieving feedback: ${error instanceof Error ? error.message : "Unknown error"}`,
                },
              ],
            };
          }
        }
      );
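
The handler's try/catch follows a pattern that recurs in every tool of a server like this. As a sketch (not code from this repository), it can be factored into a reusable wrapper; `withErrorEnvelope` and its signature are hypothetical:

```typescript
// Hypothetical helper (not in the source): wraps a tool action so that any
// thrown error becomes an MCP-style error result instead of an exception.
type ToolResult = {
  isError?: boolean;
  content: { type: "text"; text: string }[];
};

async function withErrorEnvelope(
  action: string,
  fn: () => Promise<string>
): Promise<ToolResult> {
  try {
    // Success: the produced text becomes the single content item.
    return { content: [{ type: "text", text: await fn() }] };
  } catch (error) {
    // Failure: mirror the handler above -- flag isError and report the message.
    return {
      isError: true,
      content: [
        {
          type: "text",
          text: `Error ${action}: ${
            error instanceof Error ? error.message : "Unknown error"
          }`,
        },
      ],
    };
  }
}
```

The JSON/markdown branch of the handler would then run inside `fn`, keeping the error shape consistent across tools.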
  • Zod input schema (GetFeedbackSchema) for validating parameters of launchnotes_get_feedback tool: required feedback_id and optional response_format.
    export const GetFeedbackSchema = z
      .object({
        feedback_id: z
          .string()
          .min(1, "Feedback ID is required")
          .describe("The ID of the feedback item to retrieve"),
        response_format: responseFormatSchema,
      })
      .strict();
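
`responseFormatSchema` itself is not shown; it presumably accepts `'json' | 'markdown'` and defaults to `'markdown'`. In plain TypeScript, the constraints the schema enforces amount to the following sketch (the function name is illustrative, not the actual implementation):

```typescript
type ResponseFormat = "json" | "markdown";

interface GetFeedbackInput {
  feedback_id: string;
  response_format: ResponseFormat;
}

// Illustrative restatement of what the Zod schema enforces; the real code
// delegates to z.object(...).strict().
function validateGetFeedbackInput(
  raw: Record<string, unknown>
): GetFeedbackInput {
  const { feedback_id, response_format = "markdown", ...rest } = raw;
  // .strict() rejects unrecognized keys.
  const extra = Object.keys(rest);
  if (extra.length > 0) {
    throw new Error(`Unrecognized key(s): ${extra.join(", ")}`);
  }
  // .min(1, "Feedback ID is required")
  if (typeof feedback_id !== "string" || feedback_id.length < 1) {
    throw new Error("Feedback ID is required");
  }
  if (response_format !== "json" && response_format !== "markdown") {
    throw new Error("response_format must be 'json' or 'markdown'");
  }
  return { feedback_id, response_format };
}
```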
  • Helper function that executes the GraphQL query to fetch a specific feedback item by ID, called by the tool handler.
    export async function getFeedback(
      client: GraphQLClient,
      feedbackId: string
    ): Promise<{
      feedback: LaunchNotesFeedback;
    }> {
      return client.execute(GET_FEEDBACK_QUERY, { id: feedbackId });
    }
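
`GET_FEEDBACK_QUERY` is not shown in the source. Its field selection can be inferred from what `formatFeedbackMarkdown` reads; the query and argument names below are assumptions, and the actual LaunchNotes schema may differ (for instance, union types under `feedbackable` may require inline fragments):

```typescript
// Hypothetical reconstruction of GET_FEEDBACK_QUERY, selecting only the
// fields the markdown formatter actually reads.
export const GET_FEEDBACK_QUERY = /* GraphQL */ `
  query GetFeedback($id: ID!) {
    feedback(id: $id) {
      id
      content
      notes
      reaction
      importance
      starred
      archived
      affectedCustomer { id email initials confirmedAt }
      reporter { id email }
      feedbackable { __typename id name }
      createdAt
      updatedAt
    }
  }
`;
```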
  • Helper function to format the retrieved feedback data into human-readable Markdown, used when response_format is 'markdown'.
    export function formatFeedbackMarkdown(feedback: LaunchNotesFeedback): string {
      const sections: string[] = [];
    
      // Header
      sections.push(`# Feedback Details`);
      sections.push(`**ID:** ${feedback.id}`);
    
      // Sentiment indicators
      const reactionEmoji = {
        happy: "😊 Happy",
        meh: "😐 Neutral",
        sad: "😞 Unhappy",
      };
      if (feedback.reaction) {
        sections.push(`**Reaction:** ${reactionEmoji[feedback.reaction]}`);
      }
    
      if (feedback.importance) {
        const importanceLabel =
          feedback.importance.charAt(0).toUpperCase() + feedback.importance.slice(1);
        sections.push(`**Importance:** ${importanceLabel}`);
      }
    
      sections.push(`**Starred:** ${feedback.starred ? "⭐ Yes" : "No"}`);
      sections.push(`**Archived:** ${feedback.archived ? "📦 Yes" : "No"}`);
    
      // Customer info
      sections.push(`\n## Affected Customer`);
      sections.push(`**ID:** ${feedback.affectedCustomer.id}`);
      sections.push(`**Email:** ${feedback.affectedCustomer.email}`);
      if (feedback.affectedCustomer.initials) {
        sections.push(`**Initials:** ${feedback.affectedCustomer.initials}`);
      }
      if (feedback.affectedCustomer.confirmedAt) {
        sections.push(`**Confirmed:** ${new Date(feedback.affectedCustomer.confirmedAt).toLocaleString()}`);
      }
    
      // Reporter info
      if (feedback.reporter) {
        sections.push(`\n## Reporter`);
        sections.push(`**ID:** ${feedback.reporter.id}`);
        sections.push(`**Email:** ${feedback.reporter.email}`);
      }
    
      // Associated item
      if (feedback.feedbackable) {
        sections.push(`\n## Associated With`);
        sections.push(`**Type:** ${feedback.feedbackable.__typename}`);
        sections.push(`**ID:** ${feedback.feedbackable.id}`);
        if (feedback.feedbackable.name) {
          sections.push(`**Name:** ${feedback.feedbackable.name}`);
        }
      }
    
      // Content
      sections.push(`\n## Content`);
      sections.push(feedback.content);
    
      // Notes
      if (feedback.notes) {
        sections.push(`\n## Internal Notes`);
        sections.push(feedback.notes);
      }
    
      // Timestamps
      sections.push(
        `\n*Created: ${new Date(feedback.createdAt).toLocaleString()}*`
      );
      sections.push(`*Updated: ${new Date(feedback.updatedAt).toLocaleString()}*`);
    
      return sections.join("\n");
    }
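
Given a hypothetical feedback item (happy reaction, high importance, starred, no reporter or association), the formatter would render roughly the following; the ID, customer, content, and locale-formatted timestamps are all illustrative:

```markdown
# Feedback Details
**ID:** abc123
**Reaction:** 😊 Happy
**Importance:** High
**Starred:** ⭐ Yes
**Archived:** No

## Affected Customer
**ID:** cust_42
**Email:** jane@example.com

## Content
The export button is hard to find.

*Created: 1/2/2025, 9:00:00 AM*
*Updated: 1/3/2025, 4:15:00 PM*
```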
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true. The description adds valuable behavioral context beyond annotations by specifying error handling scenarios ('Feedback not found', 'Authentication failed'), which helps the agent understand failure modes and response patterns.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Args, Returns, Use Cases, Error Handling) and front-loads the core purpose. While comprehensive, some redundancy exists between the Args section and the schema, preventing a perfect score for conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only retrieval tool with comprehensive annotations and clear parameters, the description provides complete context. It covers purpose, usage examples, error handling, and return details, making it fully adequate for an agent to understand and use this tool correctly without an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With full schema description coverage, the input schema already documents both parameters on its own. The description's 'Args' section repeats what the schema states without adding semantic context beyond the structured fields, so it meets only the baseline expected when schema coverage is this high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Retrieve complete details') and resource ('for a specific feedback item'), distinguishing it from sibling tools like 'launchnotes_search_feedback' which searches multiple items. It explicitly lists what details are included, making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance with 'Use Cases' section containing three concrete examples of when to use this tool, such as 'Show me details for feedback #abc123'. This clearly differentiates it from search/list tools and tells the agent exactly when this retrieval tool is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/launchnotes/mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.