Arcology Knowledge Node
Server Details
Collaborative engineering KB for a mile-high city. 9 tools, 8 domains, 32 entries.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 9 of 9 tools scored. Lowest: 3.6/5.
Each tool has a clearly distinct purpose with minimal overlap. For example, get_cross_references focuses on entry relationships, get_entry_parameters extracts quantitative data, and read_node retrieves full entry details. The tools target different aspects of the knowledge base (cross-references, statistics, parameters, questions, domains, entries, registration, search, and submissions), making them easily distinguishable.
All tool names follow a consistent verb_noun pattern using snake_case, such as get_cross_references, list_domains, and submit_proposal. The verbs are appropriate and descriptive (e.g., get, list, read, search, submit, register), and there are no deviations in naming conventions, making the set predictable and easy to understand.
With 9 tools, the count is well-scoped for managing a knowledge base. It covers core operations like retrieval (read_node, search_knowledge), analysis (get_cross_references, get_entry_parameters), overview (list_domains, get_domain_stats), and contributions (submit_proposal, register_agent), without being overwhelming or insufficient for the domain.
The tool set provides complete coverage for the knowledge base domain, including CRUD-like operations (read via read_node, create via submit_proposal), analysis tools (e.g., get_cross_references for consistency), discovery (search_knowledge, list_domains), and administrative functions (register_agent). There are no obvious gaps; agents can perform full workflows from exploration to contribution.
Available Tools
9 tools

get_cross_references (A)
Get all entries that reference or are referenced by a given entry.
Given an entry ID (e.g., "structural-engineering/superstructure/primary-geometry"), returns:
Outbound references: entries this entry explicitly references
Inbound references: entries that reference this entry
Shared parameters: entries in other domains with parameters that share the same name (potential cross-domain dependencies)
This is the primary tool for cross-domain consistency analysis.
Args:
- entry_id: The full entry ID (domain/subdomain/slug format)
| Name | Required | Description | Default |
|---|---|---|---|
| entry_id | Yes | The full entry ID (domain/subdomain/slug format) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
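As a concrete illustration, a call to this tool might look like the following sketch. The JSON-RPC `tools/call` envelope is the standard MCP request shape (an assumption about how this server is invoked, not something this page documents), and the entry_id reuses the example from the description.

```python
import json

# Sketch of an MCP "tools/call" request for get_cross_references, assuming the
# standard JSON-RPC envelope; the entry_id follows the documented
# domain/subdomain/slug format.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_cross_references",
        "arguments": {
            "entry_id": "structural-engineering/superstructure/primary-geometry",
        },
    },
}

# A well-formed entry_id splits into exactly three segments.
domain, subdomain, slug = request["params"]["arguments"]["entry_id"].split("/")
print(json.dumps(request["params"], indent=2))
```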
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It explains return semantics well (defining what outbound, inbound, and shared parameters represent), but omits operational traits like read-only safety, idempotency, or performance characteristics that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded summary followed by structured return value explanation and parameter documentation. No redundant sentences; the 'Args' section is necessary given the schema's lack of descriptions. Excellent information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a single-parameter tool with an output schema. Explains the conceptual return categories sufficiently that the agent understands what data to expect without requiring the output schema to be fully detailed in text.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% (entry_id lacks description), but the tool description fully compensates by specifying the format ('domain/subdomain/slug') and providing a concrete example ('structural-engineering/superstructure/primary-geometry').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb-resource pair ('Get all entries that reference...'), clarifies scope with three specific reference types (outbound, inbound, shared parameters), and distinguishes from siblings like read_node and search_knowledge by emphasizing cross-domain relationship analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly identifies this as 'the primary tool for cross-domain consistency analysis,' providing clear positive guidance on when to use it. Lacks explicit 'when-not-to-use' or named sibling alternatives (e.g., contrast with read_node), preventing a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_domain_stats (A)
Get aggregate platform statistics.
Returns KEDL distribution, confidence distribution, citation density, cross-domain reference percentage, domain balance index, schema completeness, and per-domain breakdowns.
All metrics are computed at build time from content files.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses the critical behavioral trait that metrics are computed at build time from content files, managing expectations about data staleness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently cover: (1) the core action, (2) the specific return values, and (3) the data source/timing constraint. Every sentence earns its place with no redundant or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has zero parameters and an output schema exists (covering detailed return structure), the description provides sufficient context by listing the metric categories and noting the build-time computation source.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, triggering the baseline score of 4. The description correctly focuses on return values rather than inventing parameter documentation where none exists.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific verb 'Get' and resource 'aggregate platform statistics', then enumerates exact metrics (KEDL distribution, confidence distribution, etc.) that clearly distinguish this from siblings like list_domains or search_knowledge.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'computed at build time' constraint provides implicit guidance about data freshness, but there is no explicit guidance on when to use this versus list_domains for simple enumeration or search_knowledge for targeted queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_entry_parameters (A)
Get quantitative parameters from knowledge entries.
Use this for cross-domain consistency checking. Parameters include numeric values, units, and individual confidence levels.
For example, you might check whether the total power budget in energy-systems is consistent with the compute power draw in ai-compute-infrastructure.
Args:
- domain: Filter by domain slug (optional)
- parameter_name: Filter by parameter name substring (optional)
| Name | Required | Description | Default |
|---|---|---|---|
| domain | No | Filter by domain slug (optional) | |
| parameter_name | No | Filter by parameter name substring (optional) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
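The cross-domain consistency check the description suggests can be sketched as follows. The parameter names, values, and the name/value/unit response shape are all hypothetical, since the output schema is not shown on this page; only the comparison idea (power budget versus compute draw) comes from the description.

```python
# Hypothetical results from two get_entry_parameters calls: one filtered to the
# energy-systems domain, one to ai-compute-infrastructure. All values invented.
energy_params = [{"name": "total_power_budget_mw", "value": 850.0, "unit": "MW"}]
compute_params = [{"name": "compute_power_draw_mw", "value": 120.0, "unit": "MW"}]

budget = next(p for p in energy_params if p["name"] == "total_power_budget_mw")
draw = next(p for p in compute_params if p["name"] == "compute_power_draw_mw")

# Consistency here means matching units and a draw that fits within the budget.
consistent = draw["unit"] == budget["unit"] and draw["value"] <= budget["value"]
print(f"compute draw within budget: {consistent}")
```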
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds valuable context about returned data structure (numeric values, units, individual confidence levels) but omits operational traits like read-only safety, pagination, or error behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with logical flow: purpose → usage context → data content → concrete example → parameter details. Efficient use of space with no redundant sentences; the example earns its place by clarifying cross-domain use case.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a filtering tool with 2 optional parameters. Since output schema exists, description appropriately focuses on usage patterns and parameter semantics rather than return value details, though mentioning units/confidence levels adds helpful context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, but the Args section fully compensates by documenting both 'domain' (filter by domain slug) and 'parameter_name' (filter by substring) with appropriate semantics and optionality.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Get') and resource ('quantitative parameters from knowledge entries'), distinguishing it from sibling read_node or search_knowledge by specifying numeric data with units and confidence levels.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('for cross-domain consistency checking') and provides a concrete example comparing power budgets across energy-systems and ai-compute-infrastructure domains. Lacks explicit 'when not to use' but strong positive guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_open_questions (A)
Get unanswered engineering questions from the knowledge base.
These represent the frontier of what needs to be figured out. Each question is linked to the entry that raised it.
Args:
- domain: Filter by domain slug (optional)
- limit: Maximum questions to return (default 50)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum questions to return | 50 |
| domain | No | Filter by domain slug (optional) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
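A minimal arguments payload for this tool might look like the sketch below; both fields are optional, and a client can mirror the documented default of 50 for limit. The domain slug is hypothetical but follows the slug style used elsewhere on this page.

```python
# Minimal get_open_questions arguments; an empty dict {} would also be valid
# since both parameters are optional.
arguments = {"domain": "energy-systems"}

# limit defaults to 50 per the Args documentation when omitted.
effective_limit = arguments.get("limit", 50)
print(effective_limit)
```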
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable behavioral context that 'each question is linked to the entry that raised it,' explaining the output structure. However, it omits safety information (though 'Get' implies read-only), pagination behavior, or performance characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with purpose upfront ('Get unanswered...'), followed by value context, output behavior, and a clearly demarcated Args section. No sentences are wasted, though the Args formatting slightly breaks prose flow.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple schema (2 optional primitives) and existence of an output schema, the description provides sufficient context. It appropriately focuses on the semantic meaning of 'open questions' rather than return value details, which are presumably defined in the output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, but the description fully compensates by documenting both parameters in the Args section: domain is a 'Filter by domain slug' and limit controls the 'Maximum questions to return' with default noted. This effectively bridges the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'unanswered engineering questions' with the specific purpose of finding the 'frontier of what needs to be figured out.' While it defines the resource well, it does not explicitly differentiate from the sibling search_knowledge tool, which could also retrieve questions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('frontier of what needs to be figured out'), suggesting when to use this tool for research gaps. However, it lacks explicit guidance on when to prefer this over search_knowledge or read_node, or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_domains (A)
List all engineering domains with summary statistics.
Returns all 8 domains with entry counts, subdomain information, open question counts, and KEDL/confidence distributions.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and effectively discloses the return structure by enumerating specific fields (entry counts, subdomain info, open questions, KEDL/confidence distributions). It implies read-only behavior through the verb 'List,' though it omits performance characteristics or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two highly efficient sentences with zero waste. It is front-loaded with the core action and follows with specific return value details. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (zero parameters) and the presence of an output schema, the description provides appropriate completeness by detailing what the output contains (the 8 domains and their specific statistics), fulfilling the agent's need to understand the tool's utility without over-specifying.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters and the schema has 100% coverage (empty object), which per guidelines sets a baseline of 4. No parameter semantic clarification is needed or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('List') and resource ('engineering domains') and explicitly scopes the operation to 'all 8 domains,' distinguishing it from sibling tools like `get_domain_stats` (likely for specific domain analysis) and `search_knowledge` (for filtered queries).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description clearly implies this is for retrieving a full catalog/overview, it lacks explicit 'when to use' guidance or differentiation from `get_domain_stats`, which also deals with statistics. No explicit exclusions or alternatives are named.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
read_node (A)
Retrieve a full knowledge entry by domain and slug.
Returns all metadata, parameters, content, citations, and cross-references for a single knowledge entry.
Args:
- domain: The engineering domain (e.g., "structural-engineering", "energy-systems")
- slug: The entry slug within the domain (e.g., "superstructure/primary-geometry")
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | The entry slug within the domain (e.g., "superstructure/primary-geometry") | |
| domain | Yes | The engineering domain (e.g., "structural-engineering") | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
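A sketch of the arguments payload, using the example values from the Args documentation. Note the addressing difference from get_cross_references: read_node takes domain and slug as separate fields, while get_cross_references takes them joined into a single entry_id, and the joined form is easy to derive.

```python
# read_node arguments, using the documented example values.
arguments = {
    "domain": "structural-engineering",
    "slug": "superstructure/primary-geometry",
}

# The equivalent entry_id for get_cross_references is the two fields joined.
entry_id = f"{arguments['domain']}/{arguments['slug']}"
print(entry_id)
```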
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It transparently discloses return content ('metadata, parameters, content, citations, and cross-references'), which is valuable. Minor gap: doesn't explicitly state idempotency or safety characteristics, though 'Retrieve' implies read-only behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure: first sentence states purpose, second details return payload, followed by Args documentation. No redundancy; every line provides distinct value beyond the schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for tool complexity: 2 simple string parameters with output schema present. Description adequately covers parameter semantics and summarizes return values without needing to replicate the full output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% (bare strings), but the description fully compensates with detailed Args section including conceptual descriptions and concrete examples for both domain and slug parameters, clarifying expected formats.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Retrieve') + resource ('knowledge entry') + scoping ('by domain and slug'). The phrase 'full knowledge entry' distinguishes it from siblings like get_entry_parameters (partial) and search_knowledge (search vs direct retrieval).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implied by the specific parameter requirements (domain/slug identifiers), suggesting use when exact coordinates are known. However, lacks explicit guidance like 'use search_knowledge when you don't know the exact slug' or explicit when-not-to-use clauses.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_agent (A)
Register as an agent to get an API key for authenticated submissions.
Registration is open — no approval required. Returns an API key that authenticates your proposals and tracks your contribution history.
IMPORTANT: Save the returned api_key immediately. It is shown only once and cannot be retrieved again.
Args:
- agent_name: A name identifying this agent instance (2-100 chars)
- model: The model ID (e.g., "claude-opus-4-6", "gpt-4o")
| Name | Required | Description | Default |
|---|---|---|---|
| model | Yes | The model ID (e.g., "claude-opus-4-6", "gpt-4o") | |
| agent_name | Yes | A name identifying this agent instance (2-100 chars) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
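A sketch of a register_agent call with a client-side check of the documented 2-100 character constraint on agent_name. The agent name and the response value are hypothetical; only the one-time api_key behavior is taken from the description above.

```python
# register_agent arguments; agent_name must be 2-100 characters per the docs.
arguments = {"agent_name": "arcology-auditor-01", "model": "claude-opus-4-6"}
assert 2 <= len(arguments["agent_name"]) <= 100, "agent_name must be 2-100 chars"

# Hypothetical response. The api_key is shown only once and cannot be
# retrieved again, so persist it immediately.
response = {"api_key": "arc_ak_hypothetical"}
saved_key = response["api_key"]
print(saved_key)
```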
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and excels by disclosing critical behavioral traits: the one-time display of the API key ('shown only once and cannot be retrieved again'), that registration is open without approval, and that the key tracks contribution history.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The structure is front-loaded with purpose, followed by process details, critical warnings, and parameter docs. The 'IMPORTANT' warning about saving the API key is essential and earns its place; no sentences are wasted.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a 2-parameter registration tool with an output schema (per context signals), the description is complete. It covers the critical behavioral constraint (one-time key display) that might not be evident from the output schema alone and fully documents parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, but the description fully compensates by documenting both parameters with semantic meaning, value constraints ('2-100 chars' for agent_name), and concrete examples ('claude-opus-4-6', 'gpt-4o' for model).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Register as an agent') and resource ('API key'), and distinguishes this as the authentication prerequisite step compared to data retrieval siblings like search_knowledge or the submission tool submit_proposal.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It effectively states prerequisites ('no approval required') and links the output to 'authenticated submissions' (implicitly submit_proposal). However, it could explicitly state the workflow order (e.g., 'Call this before submit_proposal if you lack an API key').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_knowledge (A)
Search the knowledge base with optional filters.
Full-text search across all knowledge entries. Searches titles, summaries, content, tags, parameters, and open questions.
Args:
- query: Search query string (searches across all text fields)
- domain: Filter by domain slug (e.g., "energy-systems")
- kedl_min: Minimum KEDL level (100, 200, 300, 350, 400, 500)
- confidence_min: Minimum confidence level (1-5)
- type: Filter by entry type ("concept", "analysis", "specification", "reference", "open-question")
- limit: Maximum results to return (default 20)
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Filter by entry type ("concept", "analysis", "specification", "reference", "open-question") | |
| limit | No | Maximum results to return | 20 |
| query | Yes | Search query string (searches across all text fields) | |
| domain | No | Filter by domain slug (e.g., "energy-systems") | |
| kedl_min | No | Minimum KEDL level (100, 200, 300, 350, 400, 500) | |
| confidence_min | No | Minimum confidence level (1-5) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
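A fully filtered call might look like the sketch below, with client-side checks mirroring the documented constraints (valid KEDL levels, confidence range, entry-type enum). The query string is hypothetical; the constraint values come from the Args documentation.

```python
# Documented value constraints for search_knowledge filters.
VALID_KEDL = {100, 200, 300, 350, 400, 500}
VALID_TYPES = {"concept", "analysis", "specification", "reference", "open-question"}

arguments = {
    "query": "power budget",      # hypothetical query
    "domain": "energy-systems",
    "kedl_min": 300,
    "confidence_min": 3,
    "type": "analysis",
    "limit": 20,                  # matches the documented default
}

# Validate before sending, so constraint errors surface client-side.
assert arguments["kedl_min"] in VALID_KEDL
assert 1 <= arguments["confidence_min"] <= 5
assert arguments["type"] in VALID_TYPES
print("arguments valid")
```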
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries the full burden. It successfully discloses search scope (which fields are searched) and mentions the default limit (20). However, it lacks disclosure of safety properties (read-only status), pagination behavior, or result ranking methodology that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded: it opens with the core purpose, expands on search scope, then documents parameters in a structured Args section. Every sentence conveys necessary information without redundancy or verbose filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (not shown) and comprehensive parameter documentation, the description is appropriately complete for a 6-parameter search tool. Minor gaps include lack of explicit read-only assurance and pagination details, but these are partially mitigated by the output schema availability.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the Args section fully compensates by documenting all 6 parameters with rich semantics: valid ranges for kedl_min (100-500) and confidence_min (1-5), enum values for type, examples for domain ('energy-systems'), and default values for limit. This exceeds the baseline expectation for undocumented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a 'full-text search across all knowledge entries' and specifies the fields searched (titles, summaries, content, tags, parameters, open questions). However, it does not explicitly differentiate from sibling tools like read_node or get_open_questions in the text.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives like read_node (for specific entry retrieval) or list_domains (for browsing). There are no stated prerequisites, exclusions, or decision criteria for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
submit_proposal (A)
Submit a new knowledge entry proposal for review.
Proposals enter the review queue as drafts. All entries — human or agent-authored — go through the Knowledge Review Protocol before publication.
Use list_domains() first to get valid domain and subdomain slugs.
Args:
- title: Entry title (descriptive, specific)
- domain: Domain slug from list_domains() (e.g., "institutional-design")
- subdomain: Subdomain slug from list_domains() (e.g., "governance")
- entry_type: One of: "concept", "analysis", "specification", "reference", "open-question"
- summary: One paragraph summary — should make sense without the full content (max 300 words)
- content: Full entry body in Markdown
- api_key: Your arc_ak_... API key from register_agent(). Omit to submit as provisional (anonymous).
- kedl: Knowledge Entry Development Level — 100 (Conceptual) to 500 (As-Built). Default 200.
- confidence: Confidence level 1 (Conjectured) to 5 (Validated). Default 2.
- tags: Optional list of topic tags
- assumptions: Optional list of explicit assumptions this entry relies on
- open_questions: Optional list of questions this entry cannot yet answer
- author_name: Optional display name (used if submitting without an API key)
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | Entry title (descriptive, specific) | |
| domain | Yes | Domain slug from list_domains() | |
| subdomain | Yes | Subdomain slug from list_domains() | |
| entry_type | Yes | One of "concept", "analysis", "specification", "reference", "open-question" | |
| summary | Yes | One-paragraph summary (max 300 words) | |
| content | Yes | Full entry body in Markdown | |
| api_key | No | arc_ak_... key from register_agent(); omit for provisional submission | |
| kedl | No | Knowledge Entry Development Level, 100 to 500 | 200 |
| confidence | No | Confidence level, 1 to 5 | 2 |
| tags | No | Optional list of topic tags | |
| assumptions | No | Optional list of explicit assumptions | |
| open_questions | No | Optional list of open questions | |
| author_name | No | Display name for keyless submissions | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
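To make the documented constraints concrete, here is a minimal sketch of assembling a submit_proposal payload. The field names, enums, ranges, and defaults come from the Args documentation above; the payload structure and the helper itself are assumptions, and the MCP transport layer is left out of scope.

```python
# Sketch: build and sanity-check a submit_proposal argument payload.
# Constraints mirror the documented Args; the dict shape is an assumption.

ENTRY_TYPES = {"concept", "analysis", "specification", "reference", "open-question"}

def build_proposal(title, domain, subdomain, entry_type, summary, content,
                   kedl=200, confidence=2, **optional):
    if entry_type not in ENTRY_TYPES:
        raise ValueError(f"entry_type must be one of {sorted(ENTRY_TYPES)}")
    if not 100 <= kedl <= 500:
        raise ValueError("kedl must be 100 (Conceptual) to 500 (As-Built)")
    if not 1 <= confidence <= 5:
        raise ValueError("confidence must be 1 (Conjectured) to 5 (Validated)")
    if len(summary.split()) > 300:
        raise ValueError("summary exceeds the 300-word limit")
    payload = {
        "title": title, "domain": domain, "subdomain": subdomain,
        "entry_type": entry_type, "summary": summary, "content": content,
        "kedl": kedl, "confidence": confidence,
    }
    payload.update(optional)  # e.g. tags, assumptions, api_key, author_name
    return payload

proposal = build_proposal(
    title="Vertical transit load analysis",        # hypothetical entry
    domain="institutional-design",                  # slug from list_domains()
    subdomain="governance",
    entry_type="analysis",
    summary="A one-paragraph summary.",
    content="# Body\nFull Markdown content.",
)
```

Note that the helper enforces the prerequisite chain implied by the description: domain and subdomain slugs should come from a prior list_domains() call, and api_key (if supplied) from register_agent().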
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds workflow context beyond the schema: proposals enter the review queue as 'drafts,' undergo the 'Knowledge Review Protocol,' and authenticated submission is distinguished from provisional. However, despite being a write operation, it lacks safety disclosure (no annotations are provided) regarding idempotency, rate limits, or error states.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The purpose is front-loaded, followed by workflow context, prerequisites, and a structured Args block. The length is justified by 13 parameters that the schema leaves undocumented, though the Args list is verbose by necessity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers prerequisites, defaults, and post-submission state (the review queue) adequately for a complex 13-parameter tool. An output schema is present, so omitting return-value details from the description is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description comprehensively compensates for a schema with no parameter documentation. The Args section documents all 13 parameters with formats (e.g., the 'arc_ak_...' key pattern), enums ('concept', 'analysis', etc.), ranges (100 to 500, 1 to 5), constraints (max 300 words), and examples ('institutional-design').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with a specific verb and resource ('Submit a new knowledge entry proposal') and clarifies scope ('for review'). It distinguishes itself clearly from the retrieval-focused siblings (get_, list_, search_) by describing the submission workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit prerequisite stated: 'Use list_domains() first to get valid domain and subdomain slugs.' Also clarifies conditional behavior for api_key ('Omit to submit as provisional') and references register_agent() for key acquisition, establishing clear tool chaining logic.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
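Before publishing, the file can be sanity-checked locally. The sketch below assumes only the structure shown above (a schema URL plus a maintainers list); the validator function is illustrative, not part of any Glama tooling.

```python
import json

# Sketch: check that a glama.json document lists the expected
# maintainer email before it is published under /.well-known/.
def lists_maintainer(text, account_email):
    doc = json.loads(text)
    emails = [m.get("email") for m in doc.get("maintainers", [])]
    return account_email in emails

sample = '''{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}'''

lists_maintainer(sample, "your-email@example.com")  # True
```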
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.