JiantaoFu

App Market Intelligence MCP

google-play-similar

Find apps similar to a given app on Google Play, for market research and competitor analysis.

Instructions

Get similar apps from Google Play. Returns a list of apps with:

  • url: Play Store URL

  • appId: Package name (e.g. 'com.company.app')

  • summary: Short description

  • developer: Developer name

  • developerId: Developer ID

  • icon: Icon image URL

  • score: Rating (0-5)

  • scoreText: Rating display text

  • priceText: Price display text

  • free: Boolean indicating if app is free
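A hypothetical entry in the returned list, illustrating the fields above (all values are made up, not real Play Store data), along with the kind of filtering a consumer might apply:

```javascript
// Hypothetical entry from the similar-apps list; every value is illustrative.
const entry = {
  url: "https://play.google.com/store/apps/details?id=com.company.app",
  appId: "com.company.app",
  summary: "Short description",
  developer: "Example Developer",
  developerId: "Example+Developer",
  icon: "https://play-lh.googleusercontent.com/icon",
  score: 4.2,
  scoreText: "4.2",
  priceText: "Free",
  free: true
};

// A typical consumer filters the list, e.g. for well-rated free competitors:
const wellRatedFree = [entry].filter((a) => a.free && a.score >= 4.0);
console.log(wellRatedFree.map((a) => a.appId)); // [ 'com.company.app' ]
```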

Input Schema

  • appId (required): Google Play package name (e.g., 'com.dxco.pandavszombies')

  • lang (optional, default: en): Language code for result text

  • country (optional, default: us): Country code to get results from

  • fullDetail (optional, default: false): Include full app details in results. If true, includes all fields from the app details endpoint.
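A minimal sketch of how the optional parameters resolve when omitted, mirroring the defaults in the Zod schema shown in the implementation reference (the resolveArgs helper is hypothetical, for illustration only):

```javascript
// Hypothetical helper showing how missing optional arguments fall back
// to the documented defaults; only appId is mandatory.
function resolveArgs({ appId, lang = "en", country = "us", fullDetail = false }) {
  if (typeof appId !== "string" || appId.length === 0) {
    throw new Error("appId is required");
  }
  return { appId, lang, country, fullDetail };
}

console.log(resolveArgs({ appId: "com.dxco.pandavszombies" }));
// { appId: 'com.dxco.pandavszombies', lang: 'en', country: 'us', fullDetail: false }
```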

Implementation Reference

  • Handler function that executes the tool: calls gplay.similar() with provided parameters and returns the JSON-stringified list of similar apps.
    async ({ appId, lang, country, fullDetail }) => {
      const similar = await gplay.similar({ appId, lang, country, fullDetail });
      return { content: [{ type: "text", text: JSON.stringify(similar) }] };
    }
  • Zod schema defining input parameters: appId (required), lang, country, fullDetail.
    {
      appId: z.string().describe("Google Play package name (e.g., 'com.dxco.pandavszombies')"),
      lang: z.string().default("en").describe("Language code for result text (default: en)"),
      country: z.string().default("us").describe("Country code to get results from (default: us)"),
      fullDetail: z.boolean().default(false).describe("Include full app details in results (default: false). If true, includes all fields from the app details endpoint.")
    }, 
  • src/server.js:536-558 (registration)
    Full registration of the 'google-play-similar' tool using McpServer.tool(), including description, input schema, and inline handler function.
    server.tool("google-play-similar", 
      "Get similar apps from Google Play. Returns a list of apps with:\n" +
      "- url: Play Store URL\n" +
      "- appId: Package name (e.g. 'com.company.app')\n" +
      "- summary: Short description\n" +
      "- developer: Developer name\n" +
      "- developerId: Developer ID\n" +
      "- icon: Icon image URL\n" +
      "- score: Rating (0-5)\n" +
      "- scoreText: Rating display text\n" +
      "- priceText: Price display text\n" +
      "- free: Boolean indicating if app is free\n",
      {
        appId: z.string().describe("Google Play package name (e.g., 'com.dxco.pandavszombies')"),
        lang: z.string().default("en").describe("Language code for result text (default: en)"),
        country: z.string().default("us").describe("Country code to get results from (default: us)"),
        fullDetail: z.boolean().default(false).describe("Include full app details in results (default: false). If true, includes all fields from the app details endpoint.")
      }, 
      async ({ appId, lang, country, fullDetail }) => {
        const similar = await gplay.similar({ appId, lang, country, fullDetail });
        return { content: [{ type: "text", text: JSON.stringify(similar) }] };
      }
    );
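On the client side, the similar-apps list arrives JSON-stringified inside a single text content block, so it has to be parsed back into an array. A sketch of that unwrapping (the toolResult value here is illustrative, standing in for what the handler above returns):

```javascript
// Illustrative tool result in the shape the handler above produces:
// one text content block whose text is the JSON-stringified app list.
const toolResult = {
  content: [
    { type: "text", text: JSON.stringify([{ appId: "com.company.app", free: true }]) }
  ]
};

// Clients parse the text block back into an array of app objects:
const apps = JSON.parse(toolResult.content[0].text);
console.log(apps.length, apps[0].appId); // 1 com.company.app
```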
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. While it describes the return format in detail, it doesn't address important behavioral aspects like rate limits, authentication requirements, error conditions, pagination, or whether this is a read-only operation. For a tool with no annotation coverage, this leaves significant gaps in understanding how the tool behaves in practice.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a clear purpose statement followed by a bulleted list of return fields. Every sentence earns its place, though the bulleted list could be slightly more concise by grouping related fields. The information is front-loaded with the core purpose first, making it easy to scan.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters with 100% schema coverage but no annotations and no output schema, the description provides adequate but incomplete context. The detailed return format documentation partially compensates for the lack of output schema, but important behavioral aspects (rate limits, errors, authentication) remain undocumented. For a tool with moderate complexity and no structured safety/behavior annotations, this leaves the agent with significant unknowns.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all 4 parameters. The description adds no parameter-specific information beyond what's in the schema. The baseline score of 3 is appropriate when the schema does all the parameter documentation work, though the description could have added context about how parameters affect the similarity algorithm or result quality.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get similar apps from Google Play' which is a specific verb+resource combination. It distinguishes itself from sibling tools like 'google-play-details' or 'google-play-search' by focusing on similarity recommendations rather than direct lookups or searches. However, it doesn't explicitly differentiate from 'app-store-similar' which appears to be an Apple App Store equivalent.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple sibling tools available (like google-play-details, google-play-search, app-store-similar), there's no indication of when similarity recommendations are appropriate versus direct lookups, searches, or cross-platform comparisons. The agent receives no usage context beyond the basic purpose statement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
