Gadget MCP Server
Server Quality Checklist
- Disambiguation: 5/5
Each tool has a clearly distinct purpose with no overlap: get_record retrieves a single record, introspect_model lists field details, list_models enumerates available models, query_records queries multiple records, and run_graphql handles complex GraphQL queries. The descriptions explicitly differentiate use cases, such as recommending introspect_model before query_records and specifying when to use run_graphql over query_records.
- Naming Consistency: 5/5
All tool names follow a consistent verb_noun pattern (e.g., get_record, introspect_model, list_models, query_records, run_graphql) with clear, descriptive verbs that align with their functions. There are no deviations in naming conventions, making the set predictable and easy to understand.
- Tool Count: 5/5
With 5 tools, the server is well-scoped for interacting with Gadget models via GraphQL, covering introspection, querying, and complex operations. Each tool earns its place by addressing specific needs without redundancy, fitting typical use cases for a database or API interaction server.
- Completeness: 4/5
The tool set provides strong coverage for read-only operations, including introspection, querying, and complex GraphQL queries, with clear guidance on usage. However, it lacks write operations (e.g., create, update, delete records), which might be a gap if the domain expects full CRUD functionality, though this could be intentional for a read-only server.
Average 3.8/5 across 5 of 5 tools scored. Lowest: 3.2/5.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
Add a LICENSE file by following GitHub's guide.
MCP servers without a LICENSE cannot be installed.
Latest release: v1.0.6
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 5 tools.
No known security issues or vulnerabilities reported.
Add related servers to improve discoverability.
Tool Scores
query_records
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries the full burden. While 'Query' implies a read-only operation, the description discloses no other behavioral traits: return format and pagination (there is no output schema), error conditions, rate limits, or whether partial failures are possible. It also does not say how 'limit' constrains results or whether pagination tokens are available.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences front-loaded with purpose. Third sentence provides crucial workflow context. Second sentence is somewhat redundant with schema parameter descriptions, but overall efficient with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters with 100% schema coverage and no output schema, the description covers the essential prerequisite workflow (introspect_model) but omits return values and pagination behavior, which would be necessary for complete context in the absence of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed descriptions for all 4 parameters including examples (e.g., 'shopifyOrder', '{ "name": { "equals": "#59389" } }'). Description mentions 'Specify the model name and a GraphQL field selection' which aligns with schema but adds minimal semantic meaning beyond what schema already documents. Baseline 3 appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action 'Query records from any Gadget model' with clear verb and resource. Mentions 'introspect_model' which begins to distinguish workflow from sibling discovery tools, though it doesn't explicitly differentiate from 'get_record' (single vs multiple) or 'run_graphql' (structured vs raw).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisite 'Use introspect_model first to discover available fields' which establishes a workflow sequence. However, lacks explicit guidance on when to use 'get_record' (single record by ID) vs this tool, or when to prefer 'run_graphql' for complex operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
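To make the gaps above concrete, here is a hypothetical query_records argument payload assembled from the schema excerpts quoted in this review. Only 'model', 'filter', and 'limit' are confirmed parameter names; 'selection' and the chosen fields are illustrative assumptions, not the server's actual schema.

```python
import json

# Hypothetical arguments for a query_records call. Only "model", "filter",
# and "limit" appear in the schema excerpts quoted above; "selection" and
# the selected fields are assumptions made for illustration.
args = {
    "model": "shopifyOrder",
    "selection": "id name createdAt",
    "filter": {"name": {"equals": "#59389"}},
    "limit": 25,
}

# The payload is plain JSON-serializable data, as an MCP tool call expects.
print(json.dumps(args, indent=2))
```

An agent that ran introspect_model first, as the description recommends, would know which field names are valid in the selection string.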
get_record
- Behavior: 2/5
No annotations provided, so description carries full burden. Only states read operation ('Get') but omits error behavior (missing ID), return structure, rate limits, or Gadget-specific platform constraints that aren't obvious from the schema.
- Conciseness: 5/5
Two efficient sentences with zero waste. Front-loaded with action and resource, followed by parameter enumeration. No redundant information.
- Completeness: 3/5
Adequate for a simple read operation given rich input schema (100% coverage). However, lacks output description and behavioral edge cases that would be helpful given zero annotations and no output schema.
- Parameters: 3/5
Schema has 100% coverage with complete descriptions for all three parameters. Description restates the three parameters needed but adds minimal semantic depth beyond what the schema already provides, meeting baseline expectations.
- Purpose: 5/5
States specific verb 'Get', resource 'Gadget record', and scope constraint 'single...by ID'. Clear distinction from sibling 'query_records' (single vs. multiple) through explicit 'single' qualifier.
- Usage Guidelines: 3/5
Implies usage pattern through 'single record by ID', distinguishing from bulk operations. However, lacks explicit when-to-use guidance versus 'query_records' or 'run_graphql' siblings for complex filtering.
list_models
- Behavior: 3/5
No annotations are provided, so the description carries the full disclosure burden. It reveals the implementation mechanism (GraphQL introspection) but omits safety properties (read-only status), side effects, and return-value structure. 'List' implies read-only behavior, but explicit confirmation would be expected given zero annotation coverage.
- Conciseness: 5/5
Single sentence with no waste. Front-loaded with the action verb, immediately followed by the object and implementation detail. Every word earns its place.
- Completeness: 3/5
With no output schema and no annotations, the description should ideally disclose return value structure or safety characteristics. While the tool has low complexity (zero parameters), the absence of any behavioral guarantees or response format details leaves gaps in the agent's understanding of the tool's contract.
- Parameters: 4/5
Input schema has zero parameters, which per evaluation rules sets a baseline of 4. The description does not need to compensate for parameter documentation, and the single sentence structure appropriately reflects the lack of configuration needed.
- Purpose: 5/5
Description provides specific verb 'List' and clear resource 'all models (types) available in this Gadget app'. The addition of 'via GraphQL introspection' distinguishes it from sibling run_graphql (arbitrary queries) and implicitly contrasts with introspect_model (likely single-model focus).
- Usage Guidelines: 2/5
No explicit guidance on when to use this tool versus siblings like introspect_model or run_graphql. While the scope 'all models' implies a broad discovery use case, the description does not state prerequisites or selection criteria for choosing this over alternatives.
introspect_model
- Behavior: 3/5
No annotations are provided, so the description carries the full burden. While 'List' implies read-only safety, the description lacks details about output format, pagination, or whether this requires specific permissions. It mentions what it does but not behavioral traits like side effects or return structure.
- Conciseness: 5/5
Two sentences with zero waste. The first sentence defines purpose; the second provides usage context. Front-loaded and appropriately sized for the tool's complexity.
- Completeness: 4/5
Given the simple single-parameter structure and complete schema documentation, the description is nearly sufficient. Minor gap: without an output schema, a brief hint about the return value (e.g., 'returns field metadata') would improve completeness, but the current description is adequate.
- Parameters: 3/5
Schema coverage is 100% with the 'model' parameter fully documented in the schema (including format examples like shopifyOrder). The description adds no additional parameter guidance, which is acceptable given the schema completeness, earning baseline 3.
- Purpose: 5/5
The description clearly states the specific action ('List all fields and their types') and resource ('for a Gadget model'), distinguishing it from siblings like list_models (which lists models) and query_records (which queries data records vs schema).
- Usage Guidelines: 4/5
The second sentence provides explicit workflow guidance: 'Run this first when you're unsure what fields exist on a model.' This establishes the discovery use case clearly, though it doesn't explicitly mention alternatives if the schema is already known.
run_graphql
- Behavior: 4/5
With no annotations provided, the description carries the safety disclosure burden and explicitly states 'read-only', which is critical behavioral context. However, it omits other behavioral details like return format, error handling, or pagination that would help an agent predict outcomes.
- Conciseness: 5/5
Two well-structured sentences with zero waste. First sentence front-loads the core action and safety constraint; second sentence provides usage guidance. Every word earns its place.
- Completeness: 4/5
Given the high flexibility of 'arbitrary GraphQL' and lack of output schema, the description adequately covers purpose, safety (read-only), and sibling differentiation. Minor gap in not describing the response structure, though this is somewhat implied by 'query' context.
- Parameters: 3/5
Schema description coverage is 100% ('GraphQL query string', 'GraphQL variables'), establishing baseline 3. The description adds semantic context that the query parameter handles 'complex queries with nested relations', but doesn't add syntax details or format specifications beyond the schema.
- Purpose: 5/5
The description uses specific verb 'Run' with clear resource 'GraphQL query' against 'Gadget app', and explicitly distinguishes from sibling tool 'query_records' by stating this handles cases 'that query_records can't express'.
- Usage Guidelines: 5/5
Explicitly states when to use: 'complex queries with nested relations or custom filtering' and implicitly when not to use by referencing the limitation of the alternative. Explicitly names sibling alternative 'query_records'.
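For illustration, this is the kind of nested query run_graphql is meant for but query_records cannot express. The operation name, model and field names (shopifyOrders, lineItems, title, quantity), and the connection shape are assumptions for the sake of example, not taken from the server's actual schema.

```graphql
query OrdersWithLineItems($first: Int!) {
  shopifyOrders(first: $first, filter: { name: { equals: "#59389" } }) {
    edges {
      node {
        id
        name
        lineItems(first: 10) {
          edges {
            node {
              title
              quantity
            }
          }
        }
      }
    }
  }
}
```

A structured query_records call can fetch the orders, but traversing into a related model like line items in the same request is exactly the "nested relations" case the description reserves for run_graphql.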
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
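As a sanity check, the formula above can be sketched in a few lines of Python, using this report's own aggregate numbers (average TDQS 3.8, lowest 3.2, coherence scores 5/5/5/4). The per-tool dimension weighting is omitted here since the per-tool aggregates are already given.

```python
def overall_quality(mean_tdqs, min_tdqs, coherence_scores):
    """Combine Tool Definition Quality and Server Coherence as described above."""
    # A single weak tool drags the score down: 60% mean + 40% minimum TDQS.
    definition_quality = 0.6 * mean_tdqs + 0.4 * min_tdqs
    # The four coherence dimensions are weighted equally.
    coherence = sum(coherence_scores) / len(coherence_scores)
    # Overall: 70% definition quality, 30% coherence.
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    """Map an overall score to its letter tier."""
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"

# This server: mean TDQS 3.8, lowest 3.2, coherence 5/5/5/4.
score = overall_quality(3.8, 3.2, [5, 5, 5, 4])
print(round(score, 2), tier(score))  # 3.92 A
```

Note how the 40% minimum-TDQS term pulls the definition-quality component down to 3.56 even though the mean is 3.8 — improving the weakest tool description is the fastest way to raise the score.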
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Stronger-eCommerce/gadget-mcp'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.