
App Store Connect MCP Server

by concavegit

get_beta_feedback_screenshot

Retrieve beta feedback screenshots from App Store Connect to analyze tester submissions. Download images and access associated build or tester details for review.

Instructions

Get detailed information about a specific beta feedback screenshot submission. By default, downloads and returns the screenshot image.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| feedbackId | Yes | The ID of the beta feedback screenshot submission | — |
| includeBuilds | No | Include build information in the response | false |
| includeTesters | No | Include tester information in the response | false |
| downloadScreenshot | No | Download and return the screenshot as an image | true |
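To make the interaction between the two include flags concrete, here is a small sketch of how they map onto the App Store Connect `include` query parameter, mirroring the logic in the handler shown in the implementation reference (this helper function is illustrative, not an export of the server):

```typescript
// Sketch of how the optional flags become the API's `include` query parameter.
// Mirrors the handler's logic; buildIncludeParam itself is a hypothetical helper.
function buildIncludeParam(
  includeBuilds = false,
  includeTesters = false
): string | undefined {
  const includes: string[] = [];
  if (includeBuilds) includes.push("build");
  if (includeTesters) includes.push("tester");
  // The handler only sets params.include when at least one flag is true.
  return includes.length > 0 ? includes.join(",") : undefined;
}

console.log(buildIncludeParam(true, true));   // → "build,tester"
console.log(buildIncludeParam(false, false)); // → undefined
```

When both flags are false, no `include` parameter is sent at all, so the API returns only the submission resource itself.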

Implementation Reference

  • The handler function that executes the tool: fetches beta feedback screenshot data from App Store Connect API and optionally downloads/embeds the image as base64.
    async getBetaFeedbackScreenshot(args: { 
      feedbackId: string;
      includeBuilds?: boolean;
      includeTesters?: boolean;
      downloadScreenshot?: boolean;
    }): Promise<BetaFeedbackScreenshotSubmissionResponse | any> {
      const { feedbackId, includeBuilds = false, includeTesters = false, downloadScreenshot = true } = args;
      
      if (!feedbackId) {
        throw new Error('feedbackId is required');
      }
    
      const params: Record<string, any> = {};
    
      // Add includes if requested
      const includes: string[] = [];
      if (includeBuilds) includes.push('build');
      if (includeTesters) includes.push('tester');
      if (includes.length > 0) {
        params.include = includes.join(',');
      }
    
      // Add field selections
      params['fields[betaFeedbackScreenshotSubmissions]'] = 'createdDate,comment,email,deviceModel,osVersion,locale,timeZone,architecture,connectionType,pairedAppleWatch,appUptimeInMilliseconds,diskBytesAvailable,diskBytesTotal,batteryPercentage,screenWidthInPoints,screenHeightInPoints,appPlatform,devicePlatform,deviceFamily,buildBundleId,screenshots,build,tester';
    
      const response = await this.client.get<BetaFeedbackScreenshotSubmissionResponse>(
        `/betaFeedbackScreenshotSubmissions/${feedbackId}`, 
        params
      );
    
      // If downloadScreenshot is true, download and include the screenshot as base64
      const screenshots = response.data.attributes?.screenshots;
      if (downloadScreenshot && screenshots && screenshots.length > 0) {
        try {
          const screenshot = screenshots[0];
          console.error(`Downloading screenshot from: ${screenshot.url.substring(0, 100)}...`);
          const axios = (await import('axios')).default;
          
          const imageResponse = await axios.get(screenshot.url, {
            responseType: 'arraybuffer',
            timeout: 10000, // 10 second timeout
            maxContentLength: 5 * 1024 * 1024, // 5MB max
            headers: {
              'User-Agent': 'App-Store-Connect-MCP-Server/1.0'
            }
          });
    
          // Convert to base64
          const base64Data = Buffer.from(imageResponse.data).toString('base64');
          const mimeType = imageResponse.headers['content-type'] || 'image/jpeg';
    
          // Return response with both data and image content
          return {
            toolResult: response,
            content: [
              {
                type: "text",
                text: `Beta feedback screenshot (${screenshot.width}x${screenshot.height}) - ${response.data.attributes.comment || 'No comment'}`
              },
              {
                type: "image",
                data: base64Data,
                mimeType: mimeType
              }
            ]
          };
        } catch (error: any) {
          // If download fails, just return the normal response
          console.error('Failed to download screenshot:', error.message);
          return response;
        }
      }
    
      return response;
    }
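The image-embedding step in the handler can be exercised in isolation. This sketch substitutes four stand-in bytes (the JPEG magic number) for the axios arraybuffer response; the screenshot dimensions and comment text are made-up placeholders:

```typescript
// Stand-in for imageResponse.data: the first four bytes of a JPEG file.
const imageBytes = new Uint8Array([0xff, 0xd8, 0xff, 0xe0]);

// Same conversion the handler performs before returning the image content block.
const base64Data = Buffer.from(imageBytes).toString("base64");

// MCP-style content blocks, shaped like the handler's return value.
// The dimensions and comment here are hypothetical examples.
const content = [
  { type: "text", text: "Beta feedback screenshot (390x844) - No comment" },
  { type: "image", data: base64Data, mimeType: "image/jpeg" },
];

console.log(base64Data); // → "/9j/4A=="
```

Note that real JPEG screenshots always base64-encode to a string beginning with `/9j/`, which is a quick way to sanity-check the embedded payload.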
  • JSON schema definition for the tool inputs in the tools list response.
    {
      name: "get_beta_feedback_screenshot",
      description: "Get detailed information about a specific beta feedback screenshot submission. By default, downloads and returns the screenshot image.",
      inputSchema: {
        type: "object",
        properties: {
          feedbackId: {
            type: "string",
            description: "The ID of the beta feedback screenshot submission"
          },
          includeBuilds: {
            type: "boolean",
            description: "Include build information in response (optional)",
            default: false
          },
          includeTesters: {
            type: "boolean",
            description: "Include tester information in response (optional)",
            default: false
          },
          downloadScreenshot: {
            type: "boolean",
            description: "Download and return the screenshot as an image (default: true)",
            default: true
          }
        },
        required: ["feedbackId"]
      }
    },
  • src/index.ts:1338-1345 (registration)
    Dispatch in CallToolRequestHandler switch statement that calls the handler method for this tool.
    case "get_beta_feedback_screenshot": {
      // Block scope so the lexical declaration is legal inside the switch case
      const result = await this.betaHandlers.getBetaFeedbackScreenshot(args as any);
      // If the result already contains content (image), return it directly
      if (result.content) {
        return result;
      }
      // Otherwise format as text
      return formatResponse(result);
    }
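The dispatch calls a formatResponse helper that is not shown in this excerpt. A minimal sketch, assuming it simply wraps the API response as a single MCP text content block (the actual implementation may differ):

```typescript
// Hypothetical stand-in for the formatResponse helper referenced in the dispatch.
// Assumes it serializes the result into one MCP text content block.
function formatResponse(result: unknown) {
  return {
    content: [{ type: "text", text: JSON.stringify(result, null, 2) }],
  };
}

const formatted = formatResponse({ data: { id: "feedback-id" } });
console.log(formatted.content[0].type); // → "text"
```

Under this assumption, the tool returns either a pre-built content array (when the screenshot download succeeds) or a JSON dump of the raw API response, and the dispatch only needs to distinguish the two cases.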
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden, but it only mentions the default behavior of downloading the screenshot. It doesn't disclose critical behavioral traits such as authentication requirements, rate limits, error conditions, response format, or whether the operation is read-only (though 'get' implies it).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first states the core purpose, the second clarifies the default behavior. Perfectly front-loaded, with essential information in minimal space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 4-parameter tool with no annotations and no output schema, the description is incomplete. It doesn't explain what 'detailed information' includes beyond the screenshot, how the response is structured, or what happens when optional parameters are used. The schema covers parameter definitions well, but behavioral context is lacking.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema by mentioning the default download behavior, but doesn't provide additional context about parameter interactions or usage scenarios.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get detailed information') and resource ('beta feedback screenshot submission'), specifying it's for a specific submission. It distinguishes from sibling 'list_beta_feedback_screenshots' by focusing on individual retrieval rather than listing multiple items.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when detailed info about a specific screenshot is needed, but it doesn't explicitly state when to use this tool versus alternatives such as 'list_beta_feedback_screenshots' or other beta-related tools. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
