Server Quality Checklist

Profile completion: 75%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 5/5

    Each tool has a clearly distinct purpose: three metadata enumerators (categories/countries/values), two distinct search targets (brands vs products), and one refinement operation. No functional overlap exists between any pair.

    Naming Consistency: 5/5

    Perfectly consistent snake_case convention with clear verb_noun structure throughout: 'get_' prefix for metadata retrieval, 'search_' prefix for discovery operations, and 'refine_search' for the modification action.

    Tool Count: 5/5

    Six tools is ideal for this focused search/discovery domain. The surface covers metadata discovery, primary search vectors (brands and products), and result refinement without bloat or missing core capabilities.

    Completeness: 4/5

    Strong coverage for a read-only search API: filtering dimensions are fully exposed via metadata endpoints, and both major entity types (brands/products) are searchable. Minor gaps: refine_search appears product-specific with no brand-side equivalent, and the server lacks single-item retrieval by ID (though search may suffice).

  • Average 3.9/5 across 6 of 6 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v1.0.9

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 6 tools.
  • No known security issues or vulnerabilities reported.

  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • get_categories

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full behavioral burden but offers minimal disclosure beyond 'List all.' It does not clarify return format (IDs vs names vs objects), list size/volume, caching, or rate limits. The word 'supported' implies a fixed enum set, but this is thin behavioral context.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence, front-loaded with the action verb, zero redundancy. Every word earns its place in explaining scope and utility.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a simple lookup tool with no parameters, the description is minimally adequate. However, the absence of an output schema means the description should ideally describe the return structure (e.g., 'returns list of category objects with id and name'), which is missing.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has zero parameters. Per scoring rules, 0 parameters establishes a baseline score of 4. The description correctly implies no filtering arguments are needed by stating 'List all.'

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('List') and resource ('product categories') with clear scope ('all supported'). It implicitly distinguishes from sibling search tools by positioning the output as filter values for 'products and brands,' though it does not explicitly contrast with get_countries.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    It provides implied context by stating categories 'can be used to filter products and brands,' hinting at the workflow (use before searching). However, it lacks explicit when-to-use/when-not-to-use guidance or direct references to sibling tools like search_products.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • search_products

    Behavior: 3/5

    With no annotations provided, the description carries the full burden. It successfully discloses return behavior ('scored and ranked results with match reasons') but omits operational traits like read-only safety, rate limits, pagination behavior, or error conditions. Partial compensation for missing annotations.

    Conciseness: 5/5

    Three sentences with zero waste: first establishes purpose, second lists filtering capabilities, third describes return format. Front-loaded with the core action ('Search OriginSelect's curated catalog') and appropriately sized for the parameter complexity.

    Completeness: 4/5

    Given 10 parameters with 100% schema coverage and no output schema, the description adequately compensates by explaining the return format ('scored and ranked results with match reasons'). Complete for a search tool of this complexity, though could explicitly note that all filters are optional.

    Parameters: 3/5

    Schema description coverage is 100%, establishing a baseline of 3. The description lists filterable fields (country, values, category, brand, price) matching schema parameters, and provides examples like 'women-owned, organic, b-corp' that mirror schema enums. Does not add significant semantic meaning beyond the comprehensive schema definitions.

    Purpose: 5/5

    Description explicitly states the tool searches 'OriginSelect's curated catalog of ethical, origin-verified products' with specific verb and resource. It clearly distinguishes from sibling tools like 'search_brands' (products vs brands) and 'get_categories'/'get_countries' (search vs metadata retrieval).

    Usage Guidelines: 3/5

    Provides implicit usage guidance by listing filterable dimensions (country, values, category, brand, price) and noting that results are 'scored and ranked with match reasons.' However, lacks explicit guidance on when to use versus siblings like 'refine_search' or 'search_brands,' and doesn't mention that all parameters are optional.

  • get_countries

    Behavior: 3/5

    No annotations provided, so description carries full burden. It adds valuable behavioral context by disclosing the limited dataset ('Currently supports Canada and USA'). However, it omits other behavioral traits like caching, return format structure, or whether results are static/dynamic.

    Conciseness: 5/5

    Two sentences with zero waste: first establishes purpose and usage context, second provides current limitations. Information is front-loaded and appropriately sized for a simple enumeration tool.

    Completeness: 4/5

    For a zero-parameter lookup tool, the description adequately covers purpose and data constraints. Minor gap: does not specify return format (e.g., ISO codes vs full names) despite lack of output schema, though 'Canada and USA' provides a reasonable hint.

    Parameters: 4/5

    Input schema contains zero parameters, establishing a baseline of 4. The description appropriately does not mention parameters since none exist, maintaining the baseline without adding or subtracting value.

    Purpose: 5/5

    The description uses specific verb 'List' with resource 'countries of origin' and explicitly distinguishes from sibling get_categories by specifying the domain (countries vs categories). The scope is further clarified by 'Currently supports Canada and USA'.

    Usage Guidelines: 3/5

    Provides implied usage context by stating countries 'can be used to filter products and brands,' which hints at using this before search_products/search_brands. However, lacks explicit when-to-use vs when-not-to-use guidance or direct comparison to sibling tools like get_categories.

  • get_values

    Behavior: 3/5

    No annotations provided, so description carries full burden. It discloses return structure ('canonical value tokens with display labels') which compensates for missing output_schema, but omits other behavioral traits like caching, pagination, or rate limiting.

    Conciseness: 5/5

    Two efficient sentences with zero waste: first establishes purpose, second explains return format. Perfectly front-loaded with no redundancy.

    Completeness: 4/5

    Given no output schema exists, the description adequately explains return values ('canonical value tokens with display labels'). Covers essential context for a simple enumeration tool, though explicit mention of relationship to search_products could strengthen completeness.

    Parameters: 4/5

    Input schema has 0 parameters, establishing baseline 4 per scoring rules. Description appropriately requires no additional parameter explanation since the tool takes no arguments.

    Purpose: 5/5

    The description uses specific verb 'List' with clear resource 'ethical/ownership values' and distinguishes from siblings (get_categories, get_countries) by specifying the exact domain of values returned.

    Usage Guidelines: 3/5

    Provides implied usage context ('can be used to filter products and brands') suggesting when these values are relevant, but lacks explicit guidance on when to call this versus siblings or prerequisites for search operations.

  • search_brands

    Behavior: 4/5

    With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully discloses return values ('brand details including values, website, and product count') compensating for the missing output schema, though it omits safety permissions or error handling specifics.

    Conciseness: 5/5

    Three well-structured sentences: purpose declaration, filtering capabilities, and return value disclosure. No redundant or filler text. Information is front-loaded with the core action in the first sentence.

    Completeness: 4/5

    Given the rich input schema (100% coverage, 3 enum fields) and lack of output schema, the description appropriately focuses on explaining the return structure and primary use case. It adequately covers the tool's functionality without needing to document individual parameters that are well-schematized.

    Parameters: 3/5

    Input schema has 100% description coverage, establishing a baseline of 3. The description mentions filter categories (country, values, category) but primarily repeats enum examples already present in the schema without adding syntax guidance, validation rules, or semantic relationships between parameters.

    Purpose: 5/5

    Description clearly states the specific action (search), resource (ethical and origin-verified brands), and platform (OriginSelect). It distinguishes from sibling 'search_products' by focusing on brands rather than individual products.

    Usage Guidelines: 3/5

    The description implies usage through filter examples (country, values, category) but provides no explicit when-to-use guidance versus siblings like 'refine_search' or 'get_categories/get_values' discovery tools. No prerequisites or exclusions are mentioned.

  • refine_search

    Behavior: 3/5

    With no annotations provided, the description carries the full burden. It adds valuable behavioral context ('fast refinement', 'no new query parsing needed') but omits safety profile (read-only vs. destructive), error handling for invalid intent objects, or idempotency guarantees.

    Conciseness: 5/5

    Every sentence earns its place: purpose statement, input requirement, performance characteristic, then actionable examples. No redundancy. The examples are formatted efficiently and demonstrate the parameter patterns clearly.

    Completeness: 4/5

    Given the tool's workflow complexity (dependency on prior search_products call, nested modification objects), the description adequately covers the main usage pattern and prerequisites. No output schema exists to describe. Minor gap in not addressing error states or invalid modification scenarios.

    Parameters: 4/5

    Schema coverage is 100%, establishing a baseline of 3. The description adds significant value through concrete examples showing valid action/field/value combinations (e.g., 'modify' with priceMax vs. 'add' with values), clarifying the semantics of how modifications apply beyond the raw schema definitions.

    Purpose: 5/5

    Specific verb ('Refine') + resource ('product search') with clear scope (adding/removing filters). Explicitly distinguishes from sibling 'search_products' by stating it takes 'the intent object from a prior search_products response' and requires 'no new query parsing'.

    Usage Guidelines: 4/5

    Provides clear context on when to use (requires prior search_products response) and implies when not to use ('no new query parsing needed'). However, it does not explicitly state the alternative tool name for new searches, only implying it by referencing search_products as the source of the intent object.

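The action/field/value pattern cited for refine_search can be illustrated with a hypothetical argument payload. The intent shape, the field names (e.g. priceMax, values), and the filter values below are assumptions for illustration, not the server's confirmed schema:

```python
# Hypothetical refine_search arguments following the action/field/value
# pattern described in the report. The intent object would come from a
# prior search_products response; its shape here is assumed.
prior_intent = {"query": "fair-trade coffee", "filters": {"country": "Canada"}}

refine_args = {
    "intent": prior_intent,
    "modifications": [
        # Tighten the price ceiling on the existing result set
        {"action": "modify", "field": "priceMax", "value": 25},
        # Add an ethical value filter without re-parsing the query
        {"action": "add", "field": "values", "value": "organic"},
    ],
}
print(len(refine_args["modifications"]))  # 2
```

Because the intent object is passed back verbatim, a refinement like this avoids re-running query parsing, which is the performance point the description makes.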
GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). A tier of B or above is considered passing.
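The weighting described above can be sketched as a short calculation. This is an illustration of the published formula only; the function names, dimension keys, and rounding are my own assumptions, and the actual implementation may differ:

```python
# Sketch of the quality-score formulas described above. Weights and tier
# cutoffs come from the text; everything else is illustrative.

def tool_tdqs(dims):
    """Per-tool TDQS: weighted mean of the six 1-5 dimension scores."""
    weights = {
        "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
        "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
    }
    return sum(weights[k] * dims[k] for k in weights)

def definition_quality(tool_scores):
    """Server-level definition quality: 60% mean TDQS + 40% minimum TDQS."""
    mean = sum(tool_scores) / len(tool_scores)
    return 0.6 * mean + 0.4 * min(tool_scores)

def overall_score(tool_scores, coherence):
    """Overall = 70% Tool Definition Quality + 30% Server Coherence."""
    return 0.7 * definition_quality(tool_scores) + 0.3 * coherence

def tier(score):
    """Map an overall score onto the published letter tiers."""
    for cutoff, letter in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return letter
    return "F"

# Worked example using get_categories' dimension scores from this report:
dims = {"purpose": 4, "usage": 3, "behavior": 2,
        "parameters": 4, "conciseness": 5, "completeness": 3}
print(round(tool_tdqs(dims), 2))  # 3.4
```

Note how the 40% minimum-TDQS term makes a single weak tool description drag the server-level score down, which is the behavior the text calls out.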

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/chhavimishra/originselect-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.