
FeedbackBasket MCP Server

by deifos

list_projects

Retrieve all accessible FeedbackBasket projects with summary statistics to analyze feedback and bug reports.

Instructions

List all FeedbackBasket projects accessible by your API key with summary statistics

Input Schema

No arguments

Implementation Reference

  • The primary handler, `listProjects()` on the `FeedbackBasketClient` class. It calls the API to retrieve projects, handles empty result sets, reads per-project statistics, formats a markdown summary with project details, stats, and API key info, and returns MCP-structured text content.
    async listProjects(): Promise<{ content: Array<{ type: string; text: string }> }> {
      try {
        const response = await this.api.post<ProjectsResponse>('/projects', {});
        
        const projects = response.data.projects;
        if (projects.length === 0) {
          return {
            content: [{
              type: 'text',
              text: 'No projects found. Make sure your API key has access to projects in your FeedbackBasket dashboard.'
            }]
          };
        }
    
        const projectList = projects.map(project => {
          const totalFeedback = project.stats.totalFeedback;
          const pendingCount = project.stats.byStatus.PENDING;
          const bugCount = project.stats.byCategory.BUG;
          
          return [
            `**${project.name}**`,
            `  URL: ${project.url}`,
            `  Total Feedback: ${totalFeedback}`,
            `  Pending: ${pendingCount} | Bugs: ${bugCount}`,
            `  Created: ${new Date(project.createdAt).toLocaleDateString()}`,
            ''
          ].join('\n');
        }).join('\n');
    
        const summary = [
          `# FeedbackBasket Projects (${projects.length} total)\n`,
          projectList,
          `\n*API Key: ${response.data.apiKeyInfo.name} (${response.data.apiKeyInfo.usageCount} uses)*`
        ].join('\n');
    
        return {
          content: [{
            type: 'text',
            text: summary
          }]
        };
      } catch (error) {
        throw this.handleError('Failed to fetch projects', error);
      }
    }
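The handler above assumes a `ProjectsResponse` shape that is never shown. The following is a hedged sketch of that type, inferred only from the fields the handler reads (`projects`, `stats.byStatus.PENDING`, `stats.byCategory.BUG`, `apiKeyInfo`); the real type may carry more fields.

```typescript
// Hypothetical shape of ProjectsResponse, inferred from the fields the
// handler reads; the real type may carry additional fields.
interface ProjectStats {
  totalFeedback: number;
  byStatus: { PENDING: number };
  byCategory: { BUG: number };
}

interface Project {
  name: string;
  url: string;
  createdAt: string; // ISO date string, parsed with `new Date()`
  stats: ProjectStats;
}

interface ProjectsResponse {
  projects: Project[];
  apiKeyInfo: { name: string; usageCount: number };
}

// Sample payload exercising the same fields the handler formats.
const sample: ProjectsResponse = {
  projects: [{
    name: 'Demo',
    url: 'https://example.com',
    createdAt: '2024-01-15T00:00:00Z',
    stats: { totalFeedback: 12, byStatus: { PENDING: 3 }, byCategory: { BUG: 2 } },
  }],
  apiKeyInfo: { name: 'default', usageCount: 42 },
};

const statLine = `Pending: ${sample.projects[0].stats.byStatus.PENDING} | Bugs: ${sample.projects[0].stats.byCategory.BUG}`;
console.log(statLine); // → Pending: 3 | Bugs: 2
```

Defining the response as an explicit interface like this keeps the markdown formatting code type-checked against the API contract.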
  • src/index.ts:64-72 (registration)
    Tool registration in the `ListToolsRequest` handler, defining the tool name, description, and input schema (empty object, no parameters required).
    {
      name: 'list_projects',
      description: 'List all FeedbackBasket projects accessible by your API key with summary statistics',
      inputSchema: {
        type: 'object',
        properties: {},
        additionalProperties: false,
      },
    },
  • src/index.ts:194-196 (dispatch)
    Dispatch logic in the `CallToolRequest` handler switch statement that routes 'list_projects' tool calls to the client handler.
    case 'list_projects':
      return await client.listProjects();
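Because the input schema is an empty object, the tool is invoked with no arguments. A minimal JSON-RPC `tools/call` request for it, per the MCP specification, would look like this (the `id` value is arbitrary):

```typescript
// A minimal JSON-RPC tools/call request for list_projects, per the MCP spec.
// The empty `arguments` object matches the tool's empty input schema.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: { name: 'list_projects', arguments: {} },
};

const wire = JSON.stringify(request);
console.log(wire);
```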
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the API key access scope and summary statistics inclusion, which adds some context. However, it doesn't address important behavioral aspects like pagination, rate limits, sorting, or what happens with large result sets. For a list operation with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
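One way to close the annotation gap: the MCP specification defines tool annotations (`readOnlyHint`, `destructiveHint`, `idempotentHint`, `openWorldHint`) that disclose behavior structurally. A hypothetical revision of the registration with annotations added might look like this; the hint values are this review's assumptions about the tool's behavior, not something the server declares.

```typescript
// Hypothetical: the registration extended with MCP tool annotations, so
// agents can see up front that the call is read-only and repeatable.
const listProjectsTool = {
  name: 'list_projects',
  description: 'List all FeedbackBasket projects accessible by your API key with summary statistics',
  inputSchema: {
    type: 'object',
    properties: {},
    additionalProperties: false,
  },
  annotations: {
    readOnlyHint: true,    // does not modify projects or feedback
    destructiveHint: false,
    idempotentHint: true,  // repeated calls return the same listing
    openWorldHint: true,   // reaches out to the external FeedbackBasket API
  },
};
```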

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that communicates the core purpose, scope, and additional value ('summary statistics') without any wasted words. It's appropriately sized for a simple list operation and front-loads the essential information. Every element of the sentence serves a clear purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is adequate but has clear gaps. It covers what the tool does and its scope, but doesn't address behavioral aspects like response format, pagination, or error conditions. Without annotations or output schema, the description should ideally provide more complete context about what to expect from the operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the schema already fully documents the absence of parameters. The description appropriately doesn't waste space discussing non-existent parameters. It adds value by clarifying what the tool returns ('summary statistics') without needing to detail inputs. The baseline for 0 parameters with full schema coverage is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List all') and resource ('FeedbackBasket projects'), making the purpose immediately understandable. It specifies scope ('accessible by your API key') and includes additional detail ('with summary statistics'). However, it doesn't explicitly differentiate from sibling tools like get_bug_reports or get_feedback, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like search_feedback or get_feedback. It mentions the scope ('accessible by your API key') but doesn't explain when this listing approach is preferred over more targeted sibling tools. No explicit when/when-not instructions or alternative recommendations are included.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
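To make the gap concrete, here is a hypothetical revision of the description that adds when/when-not guidance. The sibling tool names (`get_feedback`, `search_feedback`) are taken from the review text above and may not match the server's actual tool set.

```typescript
// Hypothetical improved description with explicit usage guidance;
// sibling tool names are assumptions, not confirmed server tools.
const improvedDescription =
  'List all FeedbackBasket projects accessible by your API key, with per-project ' +
  'summary statistics (total, pending, and bug counts). Use this first to discover ' +
  'projects before calling get_feedback or search_feedback; do not use it to fetch ' +
  'individual feedback items.';
```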


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/deifos/feedbackbasket-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.