Glama
cmcgrabby-hue

syndicate-links-mcp

Server Quality Checklist

Profile completion: 83%

A complete profile improves this server's visibility in search results.
  • Disambiguation 4/5

    Tools have distinct purposes—search_programs (discovery), list_merchant_programs (merchant inventory), get_program_details (specific lookup), and track_agent_conversion vs verify_attribution_token (action vs validation). Minor overlap in 'program' retrieval methods, but authentication contexts (agent vs merchant keys) help separate concerns.

    Naming Consistency 5/5

    Perfect consistency: all 7 tools use snake_case with clear verb_noun pattern (get_, list_, run_, search_, track_, verify_). Action verbs are precise and uniform throughout the set.

    Tool Count 5/5

    7 tools is ideal for this scope. Covers affiliate discovery (search, details), conversion tracking (verify, track), commission monitoring, merchant program management, and admin payouts without bloat.

    Completeness 3/5

    Core tracking workflow is present (token verification → conversion tracking → commission check), but significant gaps exist: no program CRUD for merchants (create/update), no conversion/payout history queries, and no affiliate enrollment mechanism. The payout tool is also limited to global batch runs only.

  • Average 4.4/5 across 7 of 7 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 7 tools.
  • No known security issues or vulnerabilities reported.

    Report a security issue

  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • list_merchant_programs

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden and successfully discloses the authentication mechanism (SYNDICATE_MERCHANT_KEY), the paginated response structure (data/cursor/hasMore), and the pagination protocol. It does not explicitly state that this is a read-only operation, though implied by the verb 'List'.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of two efficient sentences: the first establishes purpose and auth context, the second details the response structure and pagination behavior. Every sentence delivers distinct value with no redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the absence of an output schema, the description appropriately documents the return shape and pagination mechanics. It successfully covers the tool's low complexity (2 simple parameters) and explains the merchant authentication context, leaving no critical gaps for invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema fully documents both 'cursor' and 'limit' parameters. The description does not add parameter-specific semantics beyond what the schema provides, meeting the baseline expectation for high-coverage schemas.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with the specific verb 'List' followed by the resource 'affiliate programs' and scope 'belonging to the authenticated merchant', clearly distinguishing it from siblings like 'search_programs' (which implies broader filtering) and 'get_program_details' (single item retrieval).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the description clarifies this lists programs for the authenticated merchant (implied by SYNDICATE_MERCHANT_KEY), it does not explicitly state when to use this tool versus 'search_programs' or 'get_program_details', nor does it mention prerequisites beyond the implicit authentication requirement.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • track_agent_conversion

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates critical side effects ('commission is auto-approved') and return values ('Returns eventId, commissionAmount, and commissionStatus') despite the absence of an output schema. It could be improved by mentioning idempotency behavior regarding duplicate order IDs.


    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of three tightly constructed sentences with zero redundancy: the first defines the action, the second states requirements, and the third discloses side effects/returns. Information is front-loaded and every sentence earns its place.


    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of output schema and annotations, the description adequately compensates by documenting return values and mutation side effects. For a financial/tracking operation with 100% schema coverage, the description provides sufficient context, though explicit error conditions or duplicate handling would elevate it to a 5.


    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the input schema fully documents all six parameters including validation patterns (e.g., '^slat_v1_'). The description adds minimal semantic value beyond the schema, merely reinforcing the attribution token prefix requirement which is already explicitly defined in the schema's pattern property.


    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Record a conversion event') and resource type, explicitly identifying this as an AI-agent-driven operation. It effectively distinguishes itself from sibling tools (which are read/query operations like get_commission_status or verify_attribution_token) by being the sole write operation for conversion tracking.


    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides clear prerequisites ('Requires a valid attribution token... previously issued'), implying the need for prior token generation/verification. However, it stops short of explicitly referencing the sibling verify_attribution_token tool for pre-validation or stating when NOT to use this tool (e.g., for duplicate orders).

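    The verify-then-record workflow this card describes can be sketched as follows. This is a hypothetical illustration, not the server's actual client code: `call_tool` stands in for whatever MCP client you use, and the argument names (`token`, `attribution_token`, `order_id`, `order_total`) are assumptions beyond the tool names, token prefix, and return fields quoted in the review.

    ```python
    def record_conversion(call_tool, token: str, order_id: str, order_total: float):
        """Validate an attribution token, then record the conversion once."""
        # Cheap local guard: Syndicate Links attribution tokens carry the
        # slat_v1_ prefix, so reject anything else before a network round trip.
        if not token.startswith("slat_v1_"):
            raise ValueError("not a slat_v1_ attribution token")

        # verify_attribution_token is read-only: it decodes and validates
        # the token but does not record a conversion.
        verified = call_tool("verify_attribution_token", {"token": token})
        if "error" in verified:
            raise ValueError(f"invalid or expired token: {verified['error']}")

        # track_agent_conversion is the sole write operation. The commission
        # is auto-approved on success, so guard against calling it twice for
        # the same order upstream.
        event = call_tool("track_agent_conversion", {
            "attribution_token": token,
            "order_id": order_id,
            "order_total": order_total,
        })
        return event["eventId"], event["commissionAmount"], event["commissionStatus"]
    ```

    Splitting validation from recording mirrors the reviews' point: verify_attribution_token is the safe pre-check, track_agent_conversion the single mutation.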

  • get_commission_status

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It successfully discloses the authentication mechanism (SYNDICATE_AGENT_KEY) and return value structure (available, pending, lifetime totals in USD). Missing rate limits or error behaviors, but adequate for a simple read operation.


    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste. Front-loaded with core purpose ('Return the commission balance'), followed by auth context and return value details. Every word earns its place.


    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    No output schema exists, and the description compensates by explicitly detailing the three return categories (available, pending, lifetime) and currency (USD). For a zero-parameter read tool, this is complete.


    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Zero parameters in schema (baseline 4). Description adds value by explaining how the affiliate is identified (via SYNDICATE_AGENT_KEY), providing semantic context for the implicit authentication context despite empty input schema.


    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb 'Return' + resource 'commission balance' + scope 'for the authenticated affiliate'. Clearly distinguishes from siblings which focus on programs (get_program_details, list_merchant_programs), payouts (run_payout_cycle), or tracking (track_agent_conversion) rather than balance inquiry.


    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies authentication requirement by mentioning 'authenticated affiliate' and SYNDICATE_AGENT_KEY, but lacks explicit when-to-use guidance or contrast with run_payout_cycle (which executes payouts vs. this tool which only checks status). No explicit prerequisites stated.


  • run_payout_cycle

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden and succeeds: discloses SYNDICATE_ADMIN_SECRET requirement (critical auth), describes return value structure (counts processed/created/succeeded/failed) despite no output schema, and clarifies batch processing of 'approved' commissions only. Missing explicit destructive/irreversible warning typical of financial payouts.


    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Four dense sentences with zero waste. Front-loaded with core action ('Trigger...'), followed by scope, auth, returns, and limitation note. Every sentence earns its place; no redundant or filler content.


    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a complex financial batch operation with no output schema, the description compensates well by describing return values and auth requirements. Covers the essential behavioral contract (global scope, secret requirement, aggregate counts). Minor gap: lacks explicit warning about transactionality or failure rollback behavior.


    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has zero parameters. Per rubric, 0 params = baseline 4. Description appropriately does not invent parameter semantics where none exist.


    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Excellent specificity: 'Trigger the GLOBAL payout cycle' uses precise verb+resource, and 'not scoped to a single affiliate or commission' clearly distinguishes this from hypothetical per-affiliate alternatives. The scope (ALL affiliates, single batch run) is unambiguous.


    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Clear scope boundaries established by stating what it does NOT do (not scoped to single affiliate). The v1 limitation note with reference to future per-affiliate endpoint provides helpful context about current constraints, though it doesn't explicitly name current alternatives (if any exist among siblings).


  • verify_attribution_token

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It discloses success behavior (returns payload with specific fields: affiliateId, programId, trackingCode, issuedAt, expiry) and failure modes (error description if invalid/expired). Lacks mention of authentication requirements or rate limits, preventing a 5.


    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste. First sentence covers purpose and return values; second sentence clarifies scope limitation and alternative tool. Information is front-loaded and every clause earns its place.


    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a single-parameter validation tool without output schema, the description compensates well by enumerating return payload fields and error cases. Covers the essential behavioral contract and sibling relationships. Minor gap regarding authorization requirements.


    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100% ('The slat_v1_ prefixed attribution token to verify'), establishing baseline 3. Description reinforces the token format but does not add syntax details, validation rules, or examples beyond what the schema provides.


    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verbs ('Decode and validate') with the resource ('Syndicate Links attribution token') and identifies the specific token format ('slat_v1_ prefix'). It explicitly distinguishes from sibling tool track_agent_conversion by stating it 'Does not record a conversion.'


    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit guidance on when NOT to use the tool ('Does not record a conversion') and names the correct alternative ('use track_agent_conversion for that'), clearly delineating the boundary between validation and conversion tracking operations.


  • search_programs

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It successfully discloses pagination behavior ('pass the returned cursor to fetch subsequent pages'), response structure ('{ data: Program[], cursor: string | null, hasMore: boolean }'), and returned fields ('commission rates, category, and status'). Does not mention rate limits or caching, but covers essential operational behavior well.


    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences with zero waste: sentence 1 defines purpose and return fields, sentence 2 provides usage guidelines with examples, sentence 3 explains pagination mechanics and response shape. Information is front-loaded and logically sequenced.


    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Excellent completeness given constraints. No output schema exists, but description manually documents the exact response shape and pagination structure. With 100% input schema coverage and clear behavioral documentation, the description provides everything an agent needs to invoke and handle responses correctly.


    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing baseline 3. Description adds value by providing concrete query examples ('electronics', 'SaaS', 'fitness') that illustrate the semantic intent of the 'q' parameter beyond the schema's technical description. Reinforces cursor usage for pagination, adding context to the schema's mechanical definition.


    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description explicitly states 'Full-text search across affiliate program names and descriptions' with specific verb (search) and resource (affiliate programs). It clearly differentiates from siblings like get_program_details (ID-based retrieval) and list_merchant_programs (merchant-scoped listing) by emphasizing full-text search capability across names and descriptions.


    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear usage context with 'Use this to find programs by keyword' and concrete examples ('electronics', 'SaaS', 'fitness'). However, it lacks explicit guidance on when NOT to use it (e.g., 'don't use if you have a program ID') or named alternatives for direct lookups.

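    The cursor protocol documented for search_programs (and list_merchant_programs) lends itself to a simple drain loop. A minimal sketch, assuming a generic `call_tool(name, args)` MCP client: the `q`, `cursor`, and `limit` parameter names and the `{ data, cursor, hasMore }` response shape come from the review text; everything else is an assumption.

    ```python
    def search_all_programs(call_tool, query: str, limit: int = 50):
        """Drain every page of search_programs results into one list."""
        programs, cursor = [], None
        while True:
            args = {"q": query, "limit": limit}
            if cursor is not None:
                # Pass the cursor returned by the previous page to fetch the next.
                args["cursor"] = cursor
            page = call_tool("search_programs", args)
            programs.extend(page["data"])
            if not page["hasMore"]:
                break
            cursor = page["cursor"]
        return programs
    ```

    The same loop works for list_merchant_programs, which the review says shares the data/cursor/hasMore structure.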

  • get_program_details

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden and extensively documents the return payload (commission rates, attribution window, conversion stats, product catalog, etc.). However, it lacks explicit statements about safety (read-only nature), error handling, or rate limiting that would be necessary for a full behavioral profile.


    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Four sentences, each with distinct purpose: (1) core action, (2) return value specification, (3) usage timing, (4) parameter source. No redundancy; front-loaded with the essential action; every sentence earns its place.


    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a single-parameter retrieval tool without output schema, the description is remarkably complete. It compensates for missing output schema by detailing the rich data structure returned, establishes relationships with sibling tools, and provides workflow context. Nothing essential is missing given the tool's simplicity.


    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    While the schema has 100% coverage for the single parameter, the description adds valuable semantic context by specifying the provenance of the program_id ('comes from search_programs or list_merchant_programs results'), which helps the agent understand the data flow between tools.


    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb ('Get') and clearly identifies the resource ('comprehensive details for a specific affiliate program'). It effectively distinguishes from siblings by contrasting with search_programs (discovery) and list_merchant_programs (listing), establishing this as the deep-dive retrieval tool.


    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states when to use the tool ('after search_programs') and for what purpose ('to evaluate whether a program is worth promoting'). It also identifies the specific sibling tools that provide the required parameter, creating clear workflow guidance.


GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

syndicate-links MCP server

Copy to your README.md:

Score Badge

syndicate-links MCP server

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
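The blend described above can be written out directly. A sketch using the published weights; the helper names and any example scores are illustrative, not the real scan output.

```python
# Per-tool dimension weights, as published: Purpose 25%, Usage Guidelines 20%,
# Behavioral Transparency 20%, Parameter Semantics 15%, Conciseness 10%,
# Contextual Completeness 10%.
W = {"purpose": 0.25, "usage": 0.20, "behavior": 0.20,
     "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10}

def tdqs(scores: dict) -> float:
    """Tool Definition Quality Score: weighted mean of six 1-5 dimensions."""
    return sum(W[d] * scores[d] for d in W)

def overall(tool_scores: list, coherence: float) -> float:
    """Overall quality: 70% definition quality + 30% server coherence."""
    per_tool = [tdqs(s) for s in tool_scores]
    # 60% mean + 40% minimum: one weak tool drags the whole server down.
    definition_quality = 0.6 * (sum(per_tool) / len(per_tool)) + 0.4 * min(per_tool)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to its letter tier (B and above passes)."""
    for grade, cutoff in [("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)]:
        if score >= cutoff:
            return grade
    return "F"
```

For example, a server whose every tool scores 5 on every dimension, with coherence 5, comes out at 5.0 overall and tier A.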


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/cmcgrabby-hue/syndicate-links'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.