nctr-mcp-server
Server Details
NCTR Alliance rewards — search bounties, check earning rates, and discover communities.
- Status: Healthy
- Transport: Streamable HTTP
- Repository: Hoya25/mcp-server
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 7 of 7 tools scored.
Each tool has a clearly distinct purpose targeting different aspects of the NCTR ecosystem, such as tier requirements, promotions, earning rates, funding, communities, onboarding, and bounties. There is no overlap in functionality, making it easy for an agent to select the right tool without confusion.
All tool names follow a consistent verb_noun pattern with clear, descriptive verbs like 'check', 'get', 'generate', and 'search'. The naming is uniform throughout, using snake_case without any deviations or mixed conventions.
With 7 tools, the server is well-scoped for its purpose of managing NCTR program information and operations. Each tool serves a unique and necessary function, covering key areas like member onboarding, rewards, and community engagement without being overly sparse or bloated.
The tool set provides comprehensive coverage for information retrieval and onboarding in the NCTR domain, including tier progression, promotions, earning rates, funding, communities, and bounties. A minor gap exists in lacking tools for actions like updating member status or managing bounties, but core workflows are well-supported.
Available Tools
7 tools

check_tier_requirements: Check Tier Requirements (Grade A)
Check what is needed to reach a specific Crescendo status tier, or see the full progression path. Shows NCTR lock thresholds, multipliers, and perks for each tier.
| Name | Required | Description | Default |
|---|---|---|---|
| target_tier | No | Target tier to check requirements for | |
| current_balance | No | Current NCTR balance to check tier status and progress to next tier | |
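Over the Streamable HTTP transport, MCP tools are invoked with a JSON-RPC 2.0 `tools/call` request. A minimal sketch of the payload an agent might send for this tool; the tier name and balance are hypothetical example values:

```python
import json

# JSON-RPC 2.0 envelope for an MCP tools/call request.
# Both arguments are optional; omit them to see the full progression path.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_tier_requirements",
        "arguments": {
            "target_tier": "Gold",      # hypothetical tier name
            "current_balance": 15000,   # hypothetical NCTR balance
        },
    },
}

print(json.dumps(payload, indent=2))
```

Sending this with both arguments omitted would exercise the tool's second mode, the full progression path.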
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. 'Shows' and 'Check' imply read-only operation, and description lists specific returned data (NCTR lock thresholds, multipliers, perks). However, lacks details on rate limits, caching behavior, or error conditions when invalid balances are provided.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. Front-loaded action ('Check what is needed...'), followed by scope clarification ('or see...'), then return value specification ('Shows...'). Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 2-parameter lookup tool with no output schema. Description compensates by detailing the specific business objects returned (thresholds, multipliers, perks). No annotations present, but description implies safe read-only behavior sufficiently for agent selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear param descriptions. Description adds value by contextualizing the 'target_tier' parameter toward 'reach a specific...tier' and implies 'current_balance' supports viewing 'progress to next tier.' The 'full progression path' mention hints at optional parameter behavior, adding meaning beyond raw schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Check') and resource ('Crescendo status tier requirements'). Distinguishes scope by mentioning both specific tier targeting and full progression paths, plus specific return data (NCTR lock thresholds, multipliers). Implicitly differentiates from sibling earning/promotion tools by focusing on 'requirements' and 'progression path'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies two usage modes (specific tier vs full progression) but lacks explicit guidance that both parameters are optional (required: 0). Does not clarify when to use this versus get_earning_rates or search_bounties for economic planning.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_active_promotions: Get Active Promotions (Grade B)
Get currently active promotions, limited-time offers, and special earning opportunities in the NCTR program.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full behavioral burden. Adds valuable temporal context ('currently active', 'limited-time') implying time-bound data, but omits operational details like pagination, caching, empty response handling, or authentication requirements critical for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence front-loaded with action verb. No redundant phrases; 'limited-time offers' and 'special earning opportunities' add specific value beyond the tool name without wasting words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a zero-parameter read tool but incomplete given lack of output schema. Should indicate expected return structure (list of promotions with what fields?) or filtering behavior to compensate for missing structured return documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present; per rubric this is baseline 4. Description implies no filtering capabilities, which aligns with empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Get' with clear resource 'active promotions, limited-time offers, and special earning opportunities' scoped to 'NCTR program'. Differentiates from siblings like get_ecosystem_funding and get_onboarding_link via domain specificity, though could clarify distinction from search_bounties.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no explicit guidance on when to use this tool versus alternatives (e.g., get_earning_rates for standard rates vs promotional rates) or prerequisites for invocation. Missing comparative context despite having relevant siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_earning_rates: Get Earning Rates (Grade A)
Get NCTR earning rates for a specific Crescendo status tier, or compare all tiers. Shows the multiplier applied to bounty earnings at each tier level. Higher tiers earn more NCTR per bounty.
| Name | Required | Description | Default |
|---|---|---|---|
| tier | No | Specific tier to check. Omit to see all tiers. | |
| bounty_id | No | Optional bounty ID to calculate tier-adjusted earning amount | |
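Since both parameters are optional, a client should omit unset keys rather than send nulls. A small hypothetical helper sketching that pattern (the tier name is an example value, not confirmed by the listing):

```python
def build_arguments(tier=None, bounty_id=None):
    """Assemble the arguments object for get_earning_rates,
    dropping any optional parameter that was not set."""
    args = {}
    if tier is not None:
        args["tier"] = tier            # specific tier to check
    if bounty_id is not None:
        args["bounty_id"] = bounty_id  # bounty to price at tier-adjusted rates
    return args

# Omitting `tier` asks the server to compare all tiers.
print(build_arguments())
print(build_arguments(tier="Silver"))  # "Silver" is a hypothetical tier name
```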
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry the full behavioral burden. It successfully explains the business logic (multipliers increase with tier levels) and return value semantics, but omits safety traits like idempotency, caching behavior, or error conditions (e.g., invalid tier strings).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste. Front-loaded with the core action ('Get NCTR earning rates'), followed by behavioral details ('Shows the multiplier'), and concluding with business logic ('Higher tiers earn more'). Appropriate size for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (2 parameters, no nesting) and lack of output schema, the description adequately explains the domain model (tiers, multipliers, NCTR). It sufficiently compensates for missing output schema by describing the return value concept (multipliers), though explicit return format would strengthen this further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds valuable semantic context by explaining that earning rates represent 'multipliers' applied to bounties, which clarifies the relationship between the 'tier' and 'bounty_id' parameters and their interaction.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Get), resource (NCTR earning rates), and domain (Crescendo status tier). It effectively scopes the tool to tier-based earning multipliers. However, it does not explicitly differentiate from the sibling tool 'check_tier_requirements,' which could confuse agents about whether this tool validates requirements or returns rates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the two usage patterns (specific tier vs. all tiers via omission) but does not provide explicit guidance on when to use this tool versus siblings like 'check_tier_requirements' or 'search_bounties.' It lacks explicit 'when-not-to-use' or alternative recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ecosystem_funding: Get Ecosystem Funding (Grade A)
Learn how the NCTR rewards pool is funded. Covers the Torus circular contribution mechanism, DeFi yields, brand purchases, and other revenue sources that sustain Crescendo member rewards.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses what information is returned (funding sources and mechanisms) but omits operational details such as whether the data is real-time, cached, or if the tool is read-only/safe. The 'Learn how...' framing implies informational retrieval but doesn't explicitly confirm no side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. The first establishes the core purpose (learning about funding), while the second elaborates on specific content areas (Torus, DeFi, brand purchases), earning its place through concrete detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters) and lack of output schema, the description adequately compensates by detailing the substantive content returned (specific funding mechanisms). It would achieve a 5 by explicitly stating the response format (e.g., 'returns a detailed summary').
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. According to the rubric, 0 parameters establishes a baseline score of 4, which applies here. The description correctly avoids inventing parameter semantics where none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves information about NCTR rewards pool funding mechanisms, specifically naming the Torus circular contribution, DeFi yields, and brand purchases. It effectively distinguishes this from siblings like get_earning_rates (which covers member earnings) by focusing on revenue sources rather than distribution rates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the content specificity (funding mechanisms vs. earning rates) implicitly guides usage, there is no explicit guidance on when to select this tool over related siblings like get_earning_rates or get_impact_engines. It lacks 'use this when...' framing or explicit comparisons to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_impact_engines: Get Impact Engines (Grade A)
Discover NCTR Impact Engine communities. Impact Engines are passion-based communities within NCTR — each focused on a different lifestyle vertical. Members can join any Impact Engine to connect with like-minded people and access community-specific experiences.
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | Filter by category (Powersports, Lacrosse, Entertainment, Skilled Trades, Recovery, Wellness) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully adds domain context (what Impact Engines are) but omits technical behavioral details like whether the operation is read-only, return format/pagination, or rate limits. 'Discover' implies safe read access but should be explicit without annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tightly constructed sentences front-loaded with the action ('Discover...'). Every sentence earns its place: first defines the operation, second explains the domain concept, third clarifies member value. No redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple discovery tool with one optional parameter and no output schema, the description is appropriately complete. It provides necessary domain context (what Impact Engines represent) that compensates for missing return value documentation, though it could briefly note the expected return structure (list of communities).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with the category parameter fully documented (type, description, and implicit enum values). Per calibration guidelines, this warrants a baseline score of 3. The description mentions 'lifestyle vertical' which loosely maps to the category concept but doesn't add syntax or filtering behavior beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb-resource pair ('Discover NCTR Impact Engine communities') and clearly defines what Impact Engines are ('passion-based communities'). While it doesn't explicitly name sibling tools to contrast against, the domain description inherently distinguishes it from the server's other functions (tier checks, promotions, bounties).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by explaining member benefits ('Members can join...'), suggesting when a user might want to discover these communities. However, it lacks explicit when-to-use guidance or contrasts with alternatives like search_bounties or get_onboarding_link.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_onboarding_link: Get Onboarding Link (Grade A)
Generate a link for a new member to join NCTR Alliance. Optionally include a referral code. Returns the sign-up URL and a summary of what new members earn.
| Name | Required | Description | Default |
|---|---|---|---|
| referral_code | No | Optional referral code to credit the referring member | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It adequately compensates for the missing output schema by describing the return values ('sign-up URL and a summary'), but omits operational details like link expiration, single-use constraints, or validation behavior for invalid referral codes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero redundancy. The first sentence front-loads the core purpose, while the second covers the optional parameter behavior and return value disclosure. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (single optional string parameter, no nested objects) and absence of an output schema, the description provides sufficient completeness by disclosing the return structure. Minor gaps remain regarding link lifecycle (expiration/revocation), but the description is adequate for tool invocation decisions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the referral_code parameter. The description reinforces this with 'Optionally include,' but does not add additional semantic meaning—such as format constraints or validation rules—beyond what the schema already provides, warranting the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific transitive verb ('Generate') with a clear resource ('link'), and the context ('for a new member to join NCTR Alliance') clearly distinguishes this from sibling tools like check_tier_requirements or search_bounties which retrieve information rather than generate enrollment resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage context ('for a new member to join') and hints at the optional parameter use case ('Optionally include a referral code'), but lacks explicit guidance on when NOT to use this versus siblings like check_tier_requirements, or prerequisites for the referral code.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_bounties: Search Bounties (Grade A)
Search and filter available NCTR bounties. Filter by category (entry, revenue, merch, referral, engagement), minimum/maximum NCTR amount, or keyword. Returns bounty details including name, amount, lock period, and requirements. All bounties use 360LOCK — tokens stay yours after the lock period.
| Name | Required | Description | Default |
|---|---|---|---|
| keyword | No | Search keyword to match against bounty name, description, or tags | |
| category | No | Filter by bounty category | |
| max_amount | No | Maximum NCTR amount | |
| min_amount | No | Minimum NCTR amount | |
| repeatable_only | No | If true, only return bounties that can be earned multiple times | |
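Because the server's error behavior for invalid filters is not documented, a cautious client might validate arguments before calling. A sketch assuming the category values listed in the tool description; the validation rules are this sketch's own, not the server's:

```python
# Category values taken from the tool description above.
VALID_CATEGORIES = {"entry", "revenue", "merch", "referral", "engagement"}

def build_search_filters(keyword=None, category=None, min_amount=None,
                         max_amount=None, repeatable_only=None):
    """Validate and assemble search_bounties arguments,
    omitting unset optional parameters."""
    if category is not None and category not in VALID_CATEGORIES:
        raise ValueError(f"unknown category: {category!r}")
    if (min_amount is not None and max_amount is not None
            and min_amount > max_amount):
        raise ValueError("min_amount exceeds max_amount")
    filters = {
        "keyword": keyword,
        "category": category,
        "min_amount": min_amount,
        "max_amount": max_amount,
        "repeatable_only": repeatable_only,
    }
    return {k: v for k, v in filters.items() if v is not None}

print(build_search_filters(category="referral", min_amount=100, max_amount=500))
```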
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are absent, so the description carries the full burden. It discloses important behavioral traits: (1) the return payload structure ('Returns bounty details including name, amount, lock period, and requirements'), and (2) the critical domain mechanic that 'All bounties use 360LOCK — tokens stay yours after the lock period.' This explains token ownership behavior not evident in the schema. It does not disclose rate limits, pagination, or auth requirements, preventing a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently structured: sentence 1 states purpose and available filters, sentence 2 describes return values, sentence 3 explains the 360LOCK mechanism. Every sentence earns its place; the lock explanation is essential domain context despite not being a filter parameter. Front-loaded with the action verb 'Search and filter.'
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description effectively compensates by detailing what the tool returns (name, amount, lock period, requirements) and explaining the 360LOCK ecosystem mechanic. It covers the purpose and all filterable parameters. Minor gap: does not explicitly state this is read-only/safe, though implied by 'Search.'
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds marginal value by inlining the enum values (entry, revenue, merch, referral, engagement) and grouping min/max amount, but does not expand on 'repeatable_only' semantics or provide deeper business logic beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Search and filter') + specific resource ('NCTR bounties'). Immediately distinguishes from siblings (check_tier_requirements, get_active_promotions, etc.) by focusing specifically on bounty discovery with filtering capabilities, not tier checking or promotion retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context on what the tool does (search/filter bounties by category, amount, keyword) and implies usage through the filter descriptions. However, lacks explicit 'when not to use' guidance or named alternatives to distinguish from sibling tools like get_active_promotions or get_earning_rates.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
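A quick sanity check before publishing: the manifest below mirrors the structure shown above (the email is a placeholder that must be replaced with your Glama account email), and the assertions catch the most common mistakes.

```python
import json

# The /.well-known/glama.json document from the instructions above.
manifest = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}

# Every maintainer entry needs an email field, and the file must
# serialize to valid JSON before you upload it.
assert manifest["maintainers"], "at least one maintainer is required"
assert all("email" in m for m in manifest["maintainers"])
serialized = json.dumps(manifest, indent=2)
print(serialized)
```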
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.