Server Details

Canonical vocabulary server for autonomous business design. Exposes the Arco Lexicon as seven MCP tools: term lookup, related terms, alignment verification, citation formatting, source retrieval, term listing, and term suggestion. No authentication required. Streamable HTTP transport.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.4/5 across 7 of 7 tools scored. Lowest: 2.4/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: listing, lookup, citation, related terms, sources, text analysis, and alignment verification. There is no ambiguity or overlap.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern in snake_case (e.g., list_terms, lookup_term, cite_term), making the tool set predictable and easy to navigate.

Tool Count: 5/5

With 7 tools, the server is well-scoped for a lexicon. Each tool provides essential functionality without redundancy, and the count is neither too sparse nor overwhelming.

Completeness: 5/5

The tool set covers all core use cases for a lexicon: browsing (list_terms), searching (lookup_term, suggest_terms), citing (cite_term), exploring relationships (get_related_terms, get_sources), and verifying usage (verify_alignment). No obvious gaps.

Available Tools

7 tools
cite_term: C

Returns citation-ready references for a Lexicon term in Chicago, MLA, and BibTeX formats. Access dates are injected at call time — never hardcoded. Read-only. Use this when producing academic papers, blog posts, or any content that requires a formatted reference to an Arco term. Use get_sources instead when you need a list of reading references rather than a formatted citation.

Parameters (JSON Schema)
term (required): The Lexicon term to cite. Accepts canonical name or slug.
context (required): The publication context for the citation — for example "academic paper", "blog post", or "investor memo". Used to tailor the citation format where applicable.
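The two required parameters map directly onto an MCP `tools/call` request. A minimal sketch of the JSON-RPC 2.0 payload a client would send (the method and envelope fields follow the MCP specification; the helper name and argument values are illustrative, not part of the server):

```python
import json

def build_cite_term_request(term, context, request_id=1):
    """Build a JSON-RPC 2.0 tools/call payload for cite_term.

    Both parameters are required per the schema above.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "cite_term",
            "arguments": {"term": term, "context": context},
        },
    }

payload = build_cite_term_request("Autonomous Business", "academic paper")
print(json.dumps(payload, indent=2))
```

The same envelope (only `params.name` and `params.arguments` change) applies to every tool on this server.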
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only states the output format but does not mention potential side effects, authentication needs, error handling (e.g., if term not found), or performance characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, which is concise but lacks structure. It would benefit from separating parameter explanations or usage notes. The information is front-loaded but incomplete.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, no annotations, and unknown parameter semantics, the description should provide more context. It only covers the output format, leaving agents without enough information to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, and the description does not explain what the 'term' and 'context' parameters mean. For example, whether 'term' is a keyword or phrase, and what 'context' refers to (e.g., quote, source). Agents cannot infer parameter semantics from the description alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns citation-ready references in three formats (Chicago, MLA, BibTeX), with a specific verb ('Returns') and resource ('formatted references'). However, it lacks specificity on what 'term' and 'context' represent, and doesn't differentiate from siblings like 'get_sources' which might also return references.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'lookup_term' or 'get_sources'. The description does not mention prerequisites, typical use cases, or scenarios where this tool is preferable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_sources: B

Returns all published Arco sources for a term — Lexicon entries, blog articles, wiki pages, and podcast episodes — ordered by recommended reading sequence. Read-only. Use this when you need a reading list or reference list for a term. Use cite_term instead when you need a formatted citation for a specific publication type.

Parameters (JSON Schema)
term (required): The Lexicon term whose sources to retrieve. Accepts canonical name or slug.
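With a single required parameter, a get_sources call reduces to one argument. A hedged sketch of building the request, with a client-side guard for the required field (the helper name and error wording are mine; only the tool and parameter names come from the table above):

```python
def build_get_sources_request(term, request_id=1):
    """Build a JSON-RPC 2.0 tools/call payload for get_sources.

    'term' is the only parameter and is required, so reject empty
    input client-side rather than waste a round trip.
    """
    if not term or not term.strip():
        raise ValueError("get_sources requires a non-empty 'term'")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "get_sources",
            "arguments": {"term": term.strip()},
        },
    }
```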
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description bears full burden for behavioral traits. It only states it returns 'published Arco sources' without detailing what that entails (e.g., read-only, ordering, or potential side effects).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence with no superfluous words. It efficiently conveys the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has one parameter and no output schema, the description is minimally complete. However, it lacks details on output format, pagination, or what 'Arco sources' are, requiring the agent to infer.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The only parameter 'term' lacks schema description. The description adds that it's for 'published Arco sources' and 'recommended reading order', providing some context beyond the schema, but does not explain format or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns published Arco sources for a term with recommended reading order. It distinguishes from siblings like list_terms or lookup_term, which deal with terms themselves, not sources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus siblings. For example, it does not explain how it differs from lookup_term or suggest_terms, leaving the agent to infer.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_terms: A

Returns all published Arco Lexicon terms grouped by pillar, each with its slug and canonical short definition. Accepts an optional pillar filter. Use this tool first when you do not know which term to look up — it gives you the full vocabulary to orient from. Use lookup_term once you have identified the term you need.

Parameters (JSON Schema)
pillar (optional): Filter results to a single pillar. Valid values: "How We Think", "What We Observe", "What We've Learned". Omit to return all pillars.
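Because `pillar` is an enum of three known values, a client can validate it before sending. A sketch under that assumption (the valid-value set is copied from the schema description above; the helper itself is illustrative):

```python
VALID_PILLARS = {"How We Think", "What We Observe", "What We've Learned"}

def build_list_terms_request(pillar=None, request_id=1):
    """Build a tools/call payload for list_terms.

    Omit 'pillar' to return all pillars; otherwise it must be one of
    the three documented values.
    """
    arguments = {}
    if pillar is not None:
        if pillar not in VALID_PILLARS:
            raise ValueError(f"Unknown pillar: {pillar!r}")
        arguments["pillar"] = pillar
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "list_terms", "arguments": arguments},
    }
```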
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description only states return format and optional filter. Missing behavioral traits such as whether results are paginated, sorted, or how errors are handled.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff, front-loaded with key information. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool with one optional filter, the description covers the return format and usage hint. Could mention result ordering or limits, but overall sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema coverage is 100% with one parameter 'pillar' already described. The description adds minimal extra meaning beyond 'optional filter by pillar name'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it returns published Arco Lexicon terms grouped by pillar with slug and short definition. Distinguishes from sibling tools like lookup_term or suggest_terms by being a listing tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises to use this tool first when unsure which term to look up. Does not mention when not to use or alternative tools, but the guidance is actionable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lookup_term: A

Returns the canonical Arco definition, related terms, and source URL for any Lexicon term. Supports fuzzy matching — "autonomous company" resolves to "Autonomous Business". Use this tool when you need a precise definition. Use suggest_terms instead when you have a block of text and want to discover which terms apply.

Parameters (JSON Schema)
term (required): The Lexicon term to look up. Accepts the canonical name, a slug, or a close variant. Fuzzy matching handles minor spelling differences and common synonyms.
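The schema says `term` accepts either the canonical name or a slug. The server's fuzzy matching is opaque to callers, but the canonical-name-versus-slug distinction can be sketched client-side; this `to_slug` helper is purely illustrative (the server may derive slugs differently):

```python
import re

def to_slug(term):
    """Illustrative slug form, e.g. 'Autonomous Business' -> 'autonomous-business'."""
    return re.sub(r"[^a-z0-9]+", "-", term.lower()).strip("-")

def build_lookup_term_request(term, request_id=1):
    """Build a tools/call payload for lookup_term.

    Pass the term as-is: fuzzy matching of variants like
    'autonomous company' happens server-side.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "lookup_term", "arguments": {"term": term}},
    }
```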
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must carry full behavioral disclosure. It mentions the tool is read-only (returns data) and supports fuzzy matching, but lacks details on error handling, required permissions, or any side effects. Adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, concise and front-loaded. Every word adds value: identifies output components and key feature (fuzzy matching). No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description explains what is returned (definition, related terms, source URL). It also mentions fuzzy matching. This is fairly complete for a simple lookup tool, though could specify output format or error behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Only one parameter 'term' with no schema description. The description adds that the term is a 'Lexicon term' and that fuzzy matching is supported, providing meaningful context beyond the raw schema. This compensates well for the 0% coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the tool returns 'canonical Arco definition, related terms, and source URL' and mentions fuzzy matching. This clearly states what the tool does and distinguishes it from siblings like 'get_related_terms' which likely returns only related terms.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when needing canonical definition with fuzzy matching, but does not explicitly state when to use this tool over siblings like 'suggest_terms' or 'verify_alignment'. No exclusions or alternatives are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

suggest_terms: A

Scans a block of text against all published Arco Lexicon terms using deterministic string matching — no LLM calls. Returns two lists: terms whose canonical names appear explicitly in the text (detected), and terms whose concepts are present but whose canonical names are absent (suggested). Maximum 10,000 characters. Use this to audit an article or passage for correct and complete Arco terminology. Use verify_alignment instead when you want a scored alignment report rather than a term discovery list.

Parameters (JSON Schema)
text (required): The article or text block to scan. Plain text or markdown. Maximum 10,000 characters.
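The only hard constraint is the 10,000-character cap, which is cheap to enforce client-side before the call. A hedged sketch (the helper name and error wording are mine; the limit comes from the description above):

```python
MAX_SUGGEST_CHARS = 10_000

def build_suggest_terms_request(text, request_id=1):
    """Build a tools/call payload for suggest_terms, enforcing the stated limit."""
    if len(text) > MAX_SUGGEST_CHARS:
        raise ValueError(
            f"text is {len(text)} characters; "
            f"suggest_terms accepts at most {MAX_SUGGEST_CHARS}"
        )
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "suggest_terms", "arguments": {"text": text}},
    }
```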
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must carry behavioral info. It states maximum 10,000 characters and that it returns both present and conceptually relevant terms, which adds context. However, it does not disclose whether it is read-only or has any side effects, but for a scan tool the behavior is reasonably clear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences: first states action, second clarifies outputs, third gives usage and limit. No unnecessary words, well front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description explains return values adequately. States character limit and purpose. Complete for a simple tool with one parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Only one parameter 'text' with schema coverage 100%. Description repeats the limit from schema but adds no new meaning. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it scans text against Arco Lexicon terms and returns two types of results. Verb 'scans' and resource 'Arco Lexicon terms' are specific, and the tool is distinct from siblings like lookup_term or cite_term.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use this to audit an article for correct and complete Arco terminology.' Provides a clear use case, but no explicit when-not-to-use or alternatives, though sibling names hint at alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_alignment: A

Analyses a block of text against the Arco Lexicon using deterministic scoring — no LLM calls. Returns a structured alignment report with a per-term verdict (ALIGNED, PARTIALLY_ALIGNED, NEEDS_CLARIFICATION, MISALIGNED, or NO_ARCO_TERMS_DETECTED), an alignment score, a suggested reframe, and recommended reading. Maximum 5,000 characters. Use this to score and audit text for correct Arco terminology. Use suggest_terms instead when you want to discover which terms apply to a text without scoring it.

Parameters (JSON Schema)
text (required): The text to analyse. Plain text or markdown. Maximum 5,000 characters. Trim or chunk longer inputs before calling.
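The schema tells callers to trim or chunk longer inputs before calling. One way to honour the 5,000-character cap is to split on paragraph boundaries and hard-slice only when a single paragraph itself exceeds the limit. This chunker is an illustration of that pre-processing step, not part of the server:

```python
MAX_VERIFY_CHARS = 5_000

def chunk_text(text, limit=MAX_VERIFY_CHARS):
    """Split text into chunks no longer than 'limit' characters.

    Prefers paragraph boundaries (blank lines); falls back to a hard
    slice when one paragraph alone exceeds the limit. Each chunk can
    then be sent as a separate verify_alignment call.
    """
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = (current + "\n\n" + para) if current else para
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            while len(para) > limit:
                chunks.append(para[:limit])
                para = para[limit:]
            current = para
    if current:
        chunks.append(current)
    return chunks
```

Note that per-chunk scoring is not the same as scoring the whole document; the per-term verdicts are most meaningful when a chunk contains the full context for each term it uses.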
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It mentions a character limit ('Max 5,000 characters') but does not indicate whether the tool is read-only, requires specific permissions, or has side effects. The description is adequate for a simple analysis tool but lacks depth on behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 18 words, directly stating the purpose and a constraint. It is front-loaded with the main action and result, with no extraneous information. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with one parameter and no output schema, the description covers the basic purpose and a constraint. However, it does not describe the structure of the returned report, which could help an agent decide if the tool meets its needs. It is adequate for simple use but could be more informative.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one required parameter 'text' with no description. The description adds only a character limit ('Max 5,000 characters'), which is a constraint but does not explain what the text should contain or its expected format. With 0% schema description coverage, the description does not sufficiently compensate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: 'Analyses a block of text against the Arco Lexicon and returns a structured alignment report.' It specifies the verb, resource, and output. Compared to sibling tools (e.g., lookup_term, suggest_terms), this tool is distinct in performing alignment analysis rather than term lookup or suggestion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance is provided on when to use this tool versus its siblings. The description implies the tool is for analyzing text against a lexicon, but does not specify scenarios where it is preferable over, for example, lookup_term or get_related_terms. The context is clear but lacks exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
