
Server Details

NCTR Alliance rewards — search bounties, check earning rates, and discover communities.

Status: Healthy
Transport: Streamable HTTP
Repository: Hoya25/mcp-server
GitHub Stars: 0
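Streamable HTTP means the server accepts ordinary JSON-RPC 2.0 messages over HTTP POST. As a minimal sketch, assuming the standard MCP Streamable HTTP conventions (the endpoint URL below is a placeholder, since the page does not show the real one), a client request looks like:

```python
import json

# Hypothetical endpoint: the server's real URL is not shown on this page.
ENDPOINT = "https://example.com/mcp"

# A Streamable HTTP transport carries JSON-RPC 2.0 messages in the POST body;
# the Accept header advertises both plain JSON and SSE responses.
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}

# A typical early call after initialization: enumerate the server's tools.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

body = json.dumps(request)
```

A response to this request would list the seven tools reviewed below.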

Tool Descriptions: A

Average 3.8/5 across 7 of 7 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose targeting different aspects of the NCTR ecosystem, such as tier requirements, promotions, earning rates, funding, communities, onboarding, and bounties. There is no overlap in functionality, making it easy for an agent to select the right tool without confusion.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with clear, descriptive verbs like 'check', 'get', 'generate', and 'search'. The naming is uniform throughout, using snake_case without any deviations or mixed conventions.

Tool Count: 5/5

With 7 tools, the server is well-scoped for its purpose of managing NCTR program information and operations. Each tool serves a unique and necessary function, covering key areas like member onboarding, rewards, and community engagement without being overly sparse or bloated.

Completeness: 4/5

The tool set provides comprehensive coverage for information retrieval and onboarding in the NCTR domain, including tier progression, promotions, earning rates, funding, communities, and bounties. A minor gap exists in lacking tools for actions like updating member status or managing bounties, but core workflows are well-supported.

Available Tools

7 tools
check_tier_requirements (Check Tier Requirements): A

Check what is needed to reach a specific Crescendo status tier, or see the full progression path. Shows NCTR lock thresholds, multipliers, and perks for each tier.

Parameters (JSON Schema):
- target_tier (optional): Target tier to check requirements for
- current_balance (optional): Current NCTR balance to check tier status and progress to next tier
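Since both parameters are optional, a single tools/call request can target one tier, pass a balance, or omit both to get the full progression path. A sketch of the JSON-RPC payload (argument values here are illustrative, not real program data):

```python
def tier_request(target_tier=None, current_balance=None, request_id=1):
    """Build a tools/call payload for check_tier_requirements.

    Both arguments are optional; omitted keys are left out of the
    arguments object entirely rather than sent as null.
    """
    arguments = {}
    if target_tier is not None:
        arguments["target_tier"] = target_tier
    if current_balance is not None:
        arguments["current_balance"] = current_balance
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "check_tier_requirements", "arguments": arguments},
    }

# Omitting both arguments asks for the full progression path.
full_path = tier_request()
```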
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. 'Shows' and 'Check' imply read-only operation, and description lists specific returned data (NCTR lock thresholds, multipliers, perks). However, lacks details on rate limits, caching behavior, or error conditions when invalid balances are provided.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. Front-loaded action ('Check what is needed...'), followed by scope clarification ('or see...'), then return value specification ('Shows...'). Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 2-parameter lookup tool with no output schema. Description compensates by detailing the specific business objects returned (thresholds, multipliers, perks). No annotations present, but description implies safe read-only behavior sufficiently for agent selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear param descriptions. The description adds value by tying 'target_tier' to reaching a specific tier and implying that 'current_balance' supports viewing 'progress to next tier.' The 'full progression path' mention hints at optional-parameter behavior, adding meaning beyond the raw schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Check') and resource ('Crescendo status tier requirements'). Distinguishes scope by mentioning both specific tier targeting and full progression paths, plus specific return data (NCTR lock thresholds, multipliers). Implicitly differentiates from sibling earning/promotion tools by focusing on 'requirements' and 'progression path'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies two usage modes (specific tier vs full progression) but lacks explicit guidance that both parameters are optional (required: 0). Does not clarify when to use this versus get_earning_rates or search_bounties for economic planning.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_active_promotions (Get Active Promotions): B

Get currently active promotions, limited-time offers, and special earning opportunities in the NCTR program.

Parameters (JSON Schema): none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description carries full behavioral burden. Adds valuable temporal context ('currently active', 'limited-time') implying time-bound data, but omits operational details like pagination, caching, empty response handling, or authentication requirements critical for a read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence front-loaded with action verb. No redundant phrases; 'limited-time offers' and 'special earning opportunities' add specific value beyond the tool name without wasting words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a zero-parameter read tool but incomplete given lack of output schema. Should indicate expected return structure (list of promotions with what fields?) or filtering behavior to compensate for missing structured return documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present; per rubric this is baseline 4. Description implies no filtering capabilities, which aligns with empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Uses specific verb 'Get' with clear resource 'active promotions, limited-time offers, and special earning opportunities' scoped to 'NCTR program'. Differentiates from siblings like get_ecosystem_funding and get_onboarding_link via domain specificity, though could clarify distinction from search_bounties.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no explicit guidance on when to use this tool versus alternatives (e.g., get_earning_rates for standard rates vs promotional rates) or prerequisites for invocation. Missing comparative context despite having relevant siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_earning_rates (Get Earning Rates): A

Get NCTR earning rates for a specific Crescendo status tier, or compare all tiers. Shows the multiplier applied to bounty earnings at each tier level. Higher tiers earn more NCTR per bounty.

Parameters (JSON Schema):
- tier (optional): Specific tier to check. Omit to see all tiers.
- bounty_id (optional): Optional bounty ID to calculate tier-adjusted earning amount
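The two parameters compose three call shapes: omitting tier compares all tiers, tier alone returns one tier's multiplier, and adding bounty_id asks for a tier-adjusted amount. A sketch, with the tier name and bounty ID invented for illustration:

```python
# Three call shapes for get_earning_rates; "gold" and "bounty-123" are
# hypothetical values, not real identifiers from the NCTR program.
calls = [
    {},                                           # compare all tiers
    {"tier": "gold"},                             # one tier's multiplier
    {"tier": "gold", "bounty_id": "bounty-123"},  # tier-adjusted earning amount
]

requests = [
    {
        "jsonrpc": "2.0",
        "id": i,
        "method": "tools/call",
        "params": {"name": "get_earning_rates", "arguments": args},
    }
    for i, args in enumerate(calls, start=1)
]
```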
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must carry the full behavioral burden. It successfully explains the business logic (multipliers increase with tier levels) and return value semantics, but omits safety traits like idempotency, caching behavior, or error conditions (e.g., invalid tier strings).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste. Front-loaded with the core action ('Get NCTR earning rates'), followed by behavioral details ('Shows the multiplier'), and concluding with business logic ('Higher tiers earn more'). Appropriate size for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input schema (2 parameters, no nesting) and lack of output schema, the description adequately explains the domain model (tiers, multipliers, NCTR). It sufficiently compensates for missing output schema by describing the return value concept (multipliers), though explicit return format would strengthen this further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds valuable semantic context by explaining that earning rates represent 'multipliers' applied to bounties, which clarifies the relationship between the 'tier' and 'bounty_id' parameters and their interaction.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (Get), resource (NCTR earning rates), and domain (Crescendo status tier). It effectively scopes the tool to tier-based earning multipliers. However, it does not explicitly differentiate from the sibling tool 'check_tier_requirements,' which could confuse agents about whether this tool validates requirements or returns rates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the two usage patterns (specific tier vs. all tiers via omission) but does not provide explicit guidance on when to use this tool versus siblings like 'check_tier_requirements' or 'search_bounties.' It lacks explicit 'when-not-to-use' or alternative recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_ecosystem_funding (Get Ecosystem Funding): A

Learn how the NCTR rewards pool is funded. Covers the Torus circular contribution mechanism, DeFi yields, brand purchases, and other revenue sources that sustain Crescendo member rewards.

Parameters (JSON Schema): none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses what information is returned (funding sources and mechanisms) but omits operational details such as whether the data is real-time, cached, or if the tool is read-only/safe. The 'Learn how...' framing implies informational retrieval but doesn't explicitly confirm no side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero waste. The first establishes the core purpose (learning about funding), while the second elaborates on specific content areas (Torus, DeFi, brand purchases), earning its place through concrete detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (no parameters) and lack of output schema, the description adequately compensates by detailing the substantive content returned (specific funding mechanisms). It would achieve a 5 by explicitly stating the response format (e.g., 'returns a detailed summary').

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. According to the rubric, 0 parameters establishes a baseline score of 4, which applies here. The description correctly avoids inventing parameter semantics where none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves information about NCTR rewards pool funding mechanisms, specifically naming the Torus circular contribution, DeFi yields, and brand purchases. It effectively distinguishes this from siblings like get_earning_rates (which covers member earnings) by focusing on revenue sources rather than distribution rates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the content specificity (funding mechanisms vs. earning rates) implicitly guides usage, there is no explicit guidance on when to select this tool over related siblings like get_earning_rates or get_impact_engines. It lacks 'use this when...' framing or explicit comparisons to alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_impact_engines (Get Impact Engines): A

Discover NCTR Impact Engine communities. Impact Engines are passion-based communities within NCTR — each focused on a different lifestyle vertical. Members can join any Impact Engine to connect with like-minded people and access community-specific experiences.

Parameters (JSON Schema):
- category (optional): Filter by category (Powersports, Lacrosse, Entertainment, Skilled Trades, Recovery, Wellness)
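A client can validate the category filter against the six values named in the schema description before calling the tool. A minimal sketch (the helper name is ours, and the assumption that these six strings are the exact enum values comes only from the schema text above):

```python
# The six category values listed in the schema description.
CATEGORIES = {"Powersports", "Lacrosse", "Entertainment",
              "Skilled Trades", "Recovery", "Wellness"}

def engines_arguments(category=None):
    """Build the arguments object for get_impact_engines.

    category is an optional filter; omitting it lists all Impact Engines.
    """
    if category is None:
        return {}
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category!r}")
    return {"category": category}
```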
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully adds domain context (what Impact Engines are) but omits technical behavioral details like whether the operation is read-only, return format/pagination, or rate limits. 'Discover' implies safe read access but should be explicit without annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly constructed sentences front-loaded with the action ('Discover...'). Every sentence earns its place: first defines the operation, second explains the domain concept, third clarifies member value. No redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple discovery tool with one optional parameter and no output schema, the description is appropriately complete. It provides necessary domain context (what Impact Engines represent) that compensates for missing return value documentation, though it could briefly note the expected return structure (list of communities).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with the category parameter fully documented (type, description, and implicit enum values). Per calibration guidelines, this warrants a baseline score of 3. The description mentions 'lifestyle vertical' which loosely maps to the category concept but doesn't add syntax or filtering behavior beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb-resource pair ('Discover NCTR Impact Engine communities') and clearly defines what Impact Engines are ('passion-based communities'). While it doesn't explicitly name sibling tools to contrast against, the domain description inherently distinguishes it from the server's other functions (tier checks, promotions, bounties).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by explaining member benefits ('Members can join...'), suggesting when a user might want to discover these communities. However, it lacks explicit when-to-use guidance or contrasts with alternatives like search_bounties or get_onboarding_link.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_bounties (Search Bounties): A

Search and filter available NCTR bounties. Filter by category (entry, revenue, merch, referral, engagement), minimum/maximum NCTR amount, or keyword. Returns bounty details including name, amount, lock period, and requirements. All bounties use 360LOCK — tokens stay yours after the lock period.

Parameters (JSON Schema):
- keyword (optional): Search keyword to match against bounty name, description, or tags
- category (optional): Filter by bounty category
- max_amount (optional): Maximum NCTR amount
- min_amount (optional): Minimum NCTR amount
- repeatable_only (optional): If true, only return bounties that can be earned multiple times
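All five filters are optional and can be combined; presumably they narrow the result set together, though the description does not say so explicitly. A sketch of building the arguments object (the helper name and the min/max sanity check are ours, and the example values are illustrative):

```python
def bounty_search(keyword=None, category=None, min_amount=None,
                  max_amount=None, repeatable_only=None):
    """Build the arguments object for search_bounties.

    Every filter is optional; omitted filters are dropped from the
    payload. A local sanity check keeps min_amount <= max_amount when
    both are supplied.
    """
    if (min_amount is not None and max_amount is not None
            and min_amount > max_amount):
        raise ValueError("min_amount must not exceed max_amount")
    args = {
        "keyword": keyword,
        "category": category,
        "min_amount": min_amount,
        "max_amount": max_amount,
        "repeatable_only": repeatable_only,
    }
    return {k: v for k, v in args.items() if v is not None}

# Repeatable referral bounties worth at least 100 NCTR (illustrative values).
args = bounty_search(category="referral", min_amount=100, repeatable_only=True)
```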
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent, so the description carries the full burden. It discloses important behavioral traits: (1) the return payload structure ('Returns bounty details including name, amount, lock period, and requirements'), and (2) the critical domain mechanic that 'All bounties use 360LOCK — tokens stay yours after the lock period.' This explains token ownership behavior not evident in the schema. It does not disclose rate limits, pagination, or auth requirements, preventing a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences efficiently structured: sentence 1 states purpose and available filters, sentence 2 describes return values, sentence 3 explains the 360LOCK mechanism. Every sentence earns its place; the lock explanation is essential domain context despite not being a filter parameter. Front-loaded with the action verb 'Search and filter.'

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description effectively compensates by detailing what the tool returns (name, amount, lock period, requirements) and explaining the 360LOCK ecosystem mechanic. It covers the purpose and all filterable parameters. Minor gap: does not explicitly state this is read-only/safe, though implied by 'Search.'

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds marginal value by inlining the enum values (entry, revenue, merch, referral, engagement) and grouping min/max amount, but does not expand on 'repeatable_only' semantics or provide deeper business logic beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Search and filter') + specific resource ('NCTR bounties'). Immediately distinguishes from siblings (check_tier_requirements, get_active_promotions, etc.) by focusing specifically on bounty discovery with filtering capabilities, not tier checking or promotion retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context on what the tool does (search/filter bounties by category, amount, keyword) and implies usage through the filter descriptions. However, lacks explicit 'when not to use' guidance or named alternatives to distinguish from sibling tools like get_active_promotions or get_earning_rates.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

