Anchord MCP
Server Quality Checklist
- Disambiguation: 4/5
Tools are largely distinct with clear boundaries between single/batch operations (resolve_company vs resolve_company_batch) and opposing actions (link_source_record vs unlink_source_record). Minor potential confusion exists between get_entity (basic fetch) and get_entity_export (golden record export), though descriptions clarify the distinction.
- Naming Consistency: 4/5
Follows a consistent verb_noun pattern throughout (get_entity, resolve_company, ingest_record, guard_write). The exception is get_entity_export, which places the action after the noun; export_entity would be more consistent. Batch suffixes are applied predictably.
- Tool Count: 5/5
Eleven tools is well-scoped for an entity resolution system. The set covers resolution (single/batch), ingestion, linking/unlinking, validation (guard), and retrieval without bloat. Each tool earns its place by addressing a specific workflow step.
- Completeness: 4/5
Covers the core entity resolution lifecycle: resolving companies/people to AnchorIDs, ingesting source records, managing source links, and pre-write validation. There is a minor gap in general entity listing/search beyond targeted resolution, but the resolve tools cover the primary access patterns. There is no explicit AnchorID deletion, though unlink provides a soft-delete capability for source relationships.
Average 4/5 across 11 of 11 tools scored. Lowest: 3.4/5.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.1.1
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 11 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
- get_entity_export
Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the return format ('single JSON object') and data characteristics ('merged/canonical view'), but omits operational traits like read-only safety, rate limits, or potential size constraints of the export.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first states the operation, second describes the return value. Information is front-loaded and appropriately sized for a single-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple schema (1 parameter, 100% coverage) and lack of output schema, the description adequately explains what the tool returns. It could be improved by noting the read-only nature (given no annotations), but otherwise covers the essential behavioral contract.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description uses the term 'AnchorID' which aligns with the entity_id parameter's schema description ('UUID of the AnchorID'), reinforcing the domain mapping, but does not add syntax details or examples beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Export') and resource ('golden record'/'AnchorID'), and distinguishes this from get_entity by specifying it returns the 'merged/canonical view'. However, it does not explicitly contrast with the sibling get_entity tool to clarify when to choose one over the other.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no explicit guidance on when to use this tool versus alternatives like get_entity, or prerequisites for invocation. While the term 'export' implies a retrieval of the full merged record, the description lacks explicit when/when-not directives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- get_entity
Behavior: 3/5
No annotations are provided, so the description carries the full burden. It discloses the optional inclusion behavior for linked records, but omits safety characteristics (read-only nature), error handling (e.g., 404 for invalid UUIDs), and return structure details.
Conciseness: 5/5
Two tightly constructed sentences with zero waste. The main action is front-loaded in the first sentence; optional behavior is deferred to the second. Every word earns its place.
Completeness: 3/5
Adequate for basic invocation given good schema coverage, but gaps remain: no output schema exists, yet the description doesn't characterize the return structure (e.g., single entity vs. wrapped response) or error cases for missing UUIDs.
Parameters: 4/5
Schema coverage is 100%, establishing a baseline of 3. The description adds value by enumerating the valid values for include ('links', 'source_records', or both) and clarifying that entity_id refers to an AnchorID UUID, exceeding what the schema alone provides.
Purpose: 4/5
Clear verb ('Fetch') and resource ('AnchorID/canonical entity') with a specific identifier ('by UUID'). The UUID specificity distinguishes it from the sibling resolve_company/resolve_person tools, which likely resolve by attributes rather than direct UUID lookup.
Usage Guidelines: 3/5
Implies usage through the 'by UUID' phrasing, suggesting use when the canonical AnchorID is already known. However, it lacks explicit guidance on when to use get_entity versus get_entity_export or the resolve_* alternatives.
- resolve_person
Behavior: 4/5
With no annotations provided, the description carries the full behavioral disclosure burden. It compensates well by detailing the tri-state status values (resolved | needs_review | not_found), confidence scoring, and ambiguous candidate handling. It implies a read-only lookup through the 'Returns...' phrasing, though explicit safety characteristics (idempotent, non-destructive) are not stated.
Conciseness: 5/5
Two sentences with zero waste: the first covers inputs and action, the second covers outputs. Information density is high with no filler, and the structure front-loads the core verb and resource.
Completeness: 4/5
Given the lack of an output schema, the description effectively documents return values (status, confidence, AnchorID, candidates). With 6 optional parameters and nested objects, it adequately covers the tool's complexity. Minor gap: it does not mention that all parameters are optional or explain input combination logic (e.g., minimum required fields).
Parameters: 3/5
Schema description coverage is 100%, establishing a baseline of 3. The description adds value by grouping parameters into functional categories (email/name/domain/external identifiers), implying they are alternative resolution methods. However, it does not explain parameter relationships (e.g., that company_entity_id requires name) or validation rules beyond the schema.
Purpose: 4/5
Clearly states the core action (resolve to AnchorID) and target resource (person), and distinguishes itself from resolve_company by specifying person-specific inputs like Slack/Google IDs. However, it fails to explicitly differentiate from the sibling resolve_person_batch tool (e.g., 'for single person resolution').
Usage Guidelines: 2/5
Lists the available input methods but provides no guidance on when to use which (e.g., email vs. name plus company domain) or when to prefer this over resolve_person_batch. No prerequisites or exclusion criteria are mentioned.
- link_source_record
Behavior: 3/5
No annotations are provided, so the description carries the full burden. It discloses the critical idempotency guarantee and 'reactivate' behavior (implying links can exist in a deactivated state). Missing: return value structure, error conditions (e.g., invalid UUIDs), authorization requirements, and whether the operation triggers side effects like audit logging.
Conciseness: 5/5
Two sentences with zero waste. The first front-loads the core operation; the second provides the critical idempotency guarantee. Every word earns its place.
Completeness: 4/5
Appropriate for a 4-parameter linking operation with complete schema coverage. Core behavior and safety properties (idempotency) are covered. One point deducted for the missing return value description and error scenarios, which matter for a mutation tool with no output schema or annotations.
Parameters: 3/5
The schema has 100% description coverage, establishing a baseline of 3. The description maps entity_id to 'AnchorID' conceptually, adding minor semantic value, but does not expand on parameter formats, validation rules, or business logic constraints beyond the schema definitions.
Purpose: 5/5
Clear, specific action ('Create or reactivate') with explicit resources ('link between an AnchorID and a source record'). The 'reactivate' verb implicitly distinguishes it from the sibling tool unlink_source_record, indicating this handles both new and restored links.
Usage Guidelines: 3/5
Provides implicit guidance through the idempotency note ('calling twice returns the existing link'), suggesting safe retry patterns. However, it lacks an explicit comparison to the sibling unlink_source_record or guidance on when to use this versus other entity management tools.
- resolve_company
Behavior: 4/5
With no annotations provided, the description carries the full burden of explaining behavior. It effectively discloses the ternary resolution states (resolved | needs_review | not_found), explains the confidence scoring mechanism, and warns about ambiguous candidates, which is critical context for an entity resolution tool. It does not mention rate limits or side effects, but covers the resolution logic thoroughly.
Conciseness: 5/5
Two sentences with zero waste: the first covers inputs and purpose, the second covers return values. Front-loaded with the core action and output type; every clause provides distinct information (resolution method, return statuses, data elements) without redundancy.
Completeness: 4/5
Despite lacking an output schema, the description comprehensively explains the return contract (status values, confidence score, AnchorID, match reasons, ambiguous candidates). For a 6-parameter entity resolution tool with nested objects, this covers the critical resolution outcomes. Minor gap: it does not note that all parameters are optional (0 required), which is unusual and relevant for invocation strategy.
Parameters: 3/5
Schema description coverage is 100%, establishing a baseline of 3. The description maps parameter groups conceptually ('city/state' as geo-matching, 'external identifiers' for the identifiers object) but adds no syntax details, validation rules, or usage examples beyond what the schema already provides.
Purpose: 5/5
States the specific action (resolve), target resource (company), and output format (AnchorID) using clear verbs. The mention of 'AnchorID' distinguishes it from the sibling get_entity (which retrieves by ID), while listing input methods (domain, name, city/state, external identifiers) clarifies the entity resolution pattern versus simple lookup.
Usage Guidelines: 2/5
Describes what inputs can be used but provides no explicit guidance on when to select this tool versus resolve_company_batch (the batch alternative) or get_entity (direct retrieval by ID). No mention of prerequisites like minimum data requirements or when to prefer external identifiers over name/geo.
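Since all resolve_company parameters are optional, a caller presumably must supply at least one identifying field, as the batch variant requires. A hedged sketch of that client-side check follows; the parameter names come from the review, and the at-least-one rule is an assumption for the singular tool.

```python
# Hypothetical request assembly for resolve_company. Parameter names
# (domain, name, city, state, identifiers) follow the reviewed
# description; requiring at least one field is an assumed safeguard.

def build_resolve_company_args(domain=None, name=None, city=None,
                               state=None, identifiers=None) -> dict:
    args = {k: v for k, v in {
        "domain": domain, "name": name, "city": city,
        "state": state, "identifiers": identifiers,
    }.items() if v is not None}
    if not args:
        raise ValueError("provide at least one identifying field")
    return args
```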
- unlink_source_record
Behavior: 4/5
With no annotations provided, the description carries the full disclosure burden. It successfully communicates critical behavioral traits: soft-delete (not hard deletion), idempotency, and the specific HTTP return code (200). It could improve by clarifying whether the AnchorID and source record themselves remain intact or whether there are cascade effects.
Conciseness: 5/5
Two sentences, zero waste. The first front-loads the core action (soft-delete the link); the second provides essential idempotency context. Every word earns its place with no redundancy or filler.
Completeness: 4/5
Given the simple 2-parameter schema with 100% coverage and no output schema, the description adequately covers the operation's purpose and key behavioral characteristics (soft-delete, idempotency). Minor gap: it could clarify the impact on parent entities (whether they persist after unlinking) and the response body structure.
Parameters: 3/5
Schema description coverage is 100%, providing complete parameter documentation. The description maps domain concepts ('AnchorID', 'source record') to the parameter names, but does not add syntax details, format constraints, or examples beyond what the schema already provides. A baseline of 3 is appropriate when the schema does the heavy lifting.
Purpose: 5/5
The description explicitly states 'Soft-delete the link between an AnchorID and a source record', providing a specific verb (soft-delete), resource (the link), and scope. It clearly distinguishes itself from the sibling tool link_source_record through the inverse operation naming and soft-delete specificity.
Usage Guidelines: 3/5
The idempotency note ('calling on an already-unlinked pair returns 200') provides implicit guidance that the operation is safe to retry. However, it lacks explicit guidance on when to use this versus link_source_record or other alternatives, and omits prerequisites or conditions.
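This section and the earlier link_source_record section together imply a specific link lifecycle: create-or-reactivate on link, soft-delete with an idempotent 200 on unlink. An in-memory model of those semantics, purely illustrative (the server's storage and response shapes are not published):

```python
# Toy model of the claimed link lifecycle: linking is idempotent and
# reactivates soft-deleted links; unlinking soft-deletes and succeeds
# even when the pair is already unlinked. Not the server's code.

class LinkStore:
    def __init__(self) -> None:
        # (entity_id, system, external_id) -> link record
        self._links: dict = {}

    def link(self, entity_id: str, system: str, external_id: str) -> dict:
        key = (entity_id, system, external_id)
        rec = self._links.get(key)
        if rec is None:
            rec = self._links[key] = {"active": True}
        else:
            rec["active"] = True  # reactivate a soft-deleted link
        return rec  # calling twice returns the existing link

    def unlink(self, entity_id: str, system: str, external_id: str) -> dict:
        rec = self._links.get((entity_id, system, external_id))
        if rec is not None:
            rec["active"] = False  # soft-delete: mark inactive, keep the row
        return {"status": 200}  # idempotent: already-unlinked still returns 200
```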
- guard_write
Behavior: 4/5
With no annotations provided, the description carries the full burden and succeeds in disclosing key behavioral traits: it clarifies the tool is evaluation-only ('does NOT perform any write'), explains the decision logic ('Returns allowed/blocked with reasons'), and notes the four specific validation checks performed.
Conciseness: 5/5
Three highly efficient sentences, front-loaded with the critical 'Evaluation-only' classification. Every sentence earns its place: the first defines the tool type, the second lists specific checks and return values, and the third provides the essential disclaimer about write operations.
Completeness: 4/5
Given the lack of an output schema and annotations, the description appropriately compensates by describing the conceptual return values ('allowed/blocked with reasons') and behavioral constraints. It adequately covers the 3 simple parameters, though it could improve slightly by mentioning error conditions or explicitly naming the write counterpart.
Parameters: 3/5
Schema description coverage is 100%, establishing a baseline of 3. The description mentions validation checks that loosely map to parameters (confidence threshold, conflicts) but does not add semantic details beyond what the schema already provides (e.g., the 0-1 range for min_confidence or the UUID format for entity_id).
Purpose: 5/5
The description explicitly states the tool performs an 'Evaluation-only pre-write safety check' and lists specific validations (AnchorID existence, confidence threshold, conflict resolution, canonical links). This clearly distinguishes it from sibling tools like ingest_record (actual writes) and guard_write_batch (batch operations).
Usage Guidelines: 4/5
Clearly establishes when to use the tool ('pre-write safety check') and explicitly states it 'does NOT perform any write', implying the caller must use a separate write tool afterward. However, it does not name the specific sibling tool (likely ingest_record) to use for the actual write.
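The checks the description lists can be modeled as a pure decision function that never writes. The reason codes and default threshold below are invented for illustration; only the allowed/blocked-with-reasons shape comes from the reviewed description.

```python
# Illustrative, evaluation-only model of a guard_write decision.
# Reason strings and the 0.8 default threshold are assumptions.

def guard_decision(entity_exists: bool, confidence: float,
                   min_confidence: float = 0.8,
                   has_conflicts: bool = False) -> dict:
    reasons = []
    if not entity_exists:
        reasons.append("anchor_id_not_found")
    if confidence < min_confidence:
        reasons.append("below_confidence_threshold")
    if has_conflicts:
        reasons.append("unresolved_conflicts")
    # Pure evaluation: no state is mutated anywhere in this function.
    return {"allowed": not reasons, "reasons": reasons}
```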
- ingest_record
Behavior: 4/5
With no annotations provided, the description carries the full burden and succeeds well. It discloses critical behavioral details: automatic AnchorID matching, the prerequisite of a registered source system, and the implementation detail that it wraps a batch endpoint. Missing minor details, such as return value format and idempotency guarantees, prevent a 5.
Conciseness: 5/5
Four sentences, zero waste. Front-loaded with purpose ('Ingest a single source record'), followed by behavior (matching), prerequisites (registered source), and implementation context (wrapper). Every sentence earns its place with no redundancy.
Completeness: 4/5
For a 4-parameter write operation with nested objects and no output schema, the description covers the ingestion flow, ID resolution behavior, and prerequisites comprehensively. The only gap is the lack of a description of return values or success indicators, which would be helpful given the absence of an output schema.
Parameters: 3/5
Schema description coverage is 100%, establishing a baseline of 3. The description references a 'registered source (system)', which aligns with the system parameter, and implies the payload contains record fields, but does not add syntax details, format examples, or semantic constraints beyond what the schema already provides.
Purpose: 5/5
Clearly states the specific action (ingest), resource (single source record), and destination (Anchord). It explicitly distinguishes itself from batch operations by noting it 'Wraps POST /ingest/batch with a single-item array', clearly differentiating it from batch siblings like guard_write_batch and resolve_company_batch.
Usage Guidelines: 4/5
Provides clear prerequisites ('Requires a registered source') and implies usage context (single record vs. batch, via the wrapper explanation). However, it lacks explicit guidance on when to use this versus related write operations like guard_write or link_source_record, or when to prefer the batch endpoint directly.
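The 'wraps POST /ingest/batch with a single-item array' detail is a common wrapper pattern, sketched below against a generic post callable. The endpoint path comes from the reviewed description; the request and response field names are assumptions.

```python
# Hypothetical single-record wrapper over a batch ingest endpoint.
# Only the /ingest/batch path is taken from the reviewed description;
# body and response shapes are illustrative.

def ingest_record(post, system: str, external_id: str, payload: dict) -> dict:
    body = {"records": [{
        "system": system,
        "external_id": external_id,
        "payload": payload,
    }]}
    response = post("/ingest/batch", body)
    return response["results"][0]  # unwrap the single-item batch result
```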
- resolve_person_batch
Behavior: 4/5
No annotations are provided, so the description carries the full burden. It discloses the batch limit (200), the correlation requirement, and the crucial ambiguous match behavior ('needs_review with candidate AnchorIDs'). Operational details like rate limits and idempotency are missing, but the key behavioral traits are covered.
Conciseness: 5/5
Three sentences with zero waste: the first defines purpose and limits, the second states input requirements, the third explains edge case behavior. Appropriately front-loaded and dense with actionable information.
Completeness: 4/5
The complex nested input schema with 100% coverage is handled well. No output schema is provided, but the description compensates by explaining both the success case ('Resolve... to AnchorIDs') and the ambiguous case ('needs_review'). It could clarify 'not_found' behavior but covers the primary scenarios.
Parameters: 3/5
Schema description coverage is 100%, establishing a baseline of 3. The description adds conceptual grouping ('at least one identifying field'), which helps interpret the various ID parameters, but the schema already documents the client_ref purpose and array structure.
Purpose: 5/5
Clear, specific verb ('Resolve') plus resource ('people to AnchorIDs') and scope ('multiple', 'max 200'). The batch nature and item limit explicitly distinguish this from the singular sibling tool resolve_person.
Usage Guidelines: 4/5
Clear context for batch use ('in a single call', 'max 200') and input constraints ('Each item needs a client_ref', 'at least one identifying field'). Lacks an explicit mention of the singular alternative resolve_person for single lookups.
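The client_ref requirement exists so batch results can be matched back to inputs regardless of response ordering. A sketch of that correlation pattern, with the 200-item cap enforced client-side; field names other than client_ref are assumptions.

```python
# Illustrative client_ref correlation for a resolve_person_batch-style
# call. The max-200 limit and client_ref field come from the reviewed
# description; everything else is assumed.

def build_batch(items: list[dict]) -> dict:
    if len(items) > 200:
        raise ValueError("max 200 items per batch")
    for i, item in enumerate(items):
        item.setdefault("client_ref", f"item-{i}")  # caller-chosen correlation key
    return {"items": items}

def correlate(items: list[dict], results: list[dict]) -> dict:
    """Map each input's client_ref to its result, tolerating reordering."""
    by_ref = {r["client_ref"]: r for r in results}
    return {item["client_ref"]: by_ref.get(item["client_ref"]) for item in items}
```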
- Behavior4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden effectively. It explicitly states the read-only safety guarantee ('never writes'), describes the return value structure ('per-item allowed/blocked decisions with reasons'), and notes the correlation mechanism. Missing: specific error conditions or rate limiting details that would merit a 5.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, zero waste: (1) scope/limits, (2) required field semantics, (3) output description, (4) safety guarantee. Critical information is front-loaded ('Batch', 'max 200', 'pre-write'). Every sentence earns its place in guiding tool selection and invocation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but the description compensates adequately by describing return values ('allowed/blocked decisions with reasons'). For a safety-critical validation tool with complex nested input (array of objects), the description covers essential behavioral constraints. Minor gap: doesn't describe error response format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, which establishes a baseline score of 3. The description adds value by explaining the purpose of client_ref ('for correlation') rather than just its existence, and emphasizes the operational constraint ('max 200'). It could go further by explaining min_confidence or require_no_conflicts semantics, but it exceeds baseline requirements.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Batch pre-write safety check' provides exact verb (check), resource (AnchorIDs), and scope (batch, max 200). The phrase 'Evaluation-only — never writes' clearly distinguishes it from write-oriented siblings like ingest_record, while 'Batch' differentiates it from guard_write.
Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Strong contextual guidance: 'pre-write safety check' establishes when to use (before writing), and 'Evaluation-only — never writes' clarifies when NOT to use (when actual persistence is needed). However, it doesn't explicitly name guard_write as the alternative for single-item checks or ingest_record for actual writes.
Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses key behavioral traits: the 200-item limit, correlation mechanism via client_ref, validation rule requiring at least one identifying field per item, and the specific return behavior for ambiguous matches (candidate AnchorIDs with needs_review status).
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient two-sentence structure. First sentence establishes purpose and scale limits; second sentence covers input requirements and output behavior. Zero redundancy—every clause conveys essential operational information.
Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a batch resolution tool with complex nested items and no output schema, the description adequately covers the essential contract: input validation rules and return status behaviors (needs_review with candidates). Minor gap in not explicitly listing which fields count as 'identifying fields' (domain, name, identifiers object, etc.).
Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema describes the top-level structure, the description adds critical semantic constraints not captured in the schema: the requirement for 'at least one identifying field' (business logic not enforced by schema validation) and the purpose of client_ref ('for correlation'). It does not fully enumerate what constitutes an 'identifying field' or explain min_confidence, but adds significant value beyond the 100% top-level schema coverage.
Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (resolve), resource (companies), output format (AnchorIDs), and batch nature (max 200). It effectively distinguishes from sibling 'resolve_company' (single item) and 'resolve_person_batch' (different entity type) through explicit scoping.
Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear operational constraints (max 200 items, client_ref requirement, need for identifying fields) and explains ambiguous match handling (needs_review status). Could be improved by explicitly stating when to prefer this over the single-item 'resolve_company' alternative, though this is implicitly clear.
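Pulling the praised traits together (explicit batch scope, the 'max 200' limit, client_ref semantics, and a pointer to the single-item sibling), a well-formed batch tool definition might look like the sketch below. This is a hypothetical illustration in MCP tool-definition shape; the description text and schema fields are assumptions, not Anchord's actual definition.

```python
# Hypothetical MCP tool definition illustrating the scoring criteria above.
# The description text is illustrative, not the server's actual wording.
resolve_company_batch = {
    "name": "resolve_company_batch",
    "description": (
        "Resolve multiple companies to AnchorIDs in a single call (max 200 items). "
        "Each item needs a client_ref for correlation and at least one identifying "
        "field (e.g. name or domain). Ambiguous matches return candidate AnchorIDs "
        "with needs_review status. For a single lookup, use resolve_company instead."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "items": {
                "type": "array",
                "maxItems": 200,  # hard limit surfaced in the description too
                "items": {
                    "type": "object",
                    "properties": {
                        "client_ref": {
                            "type": "string",
                            "description": "Caller-supplied ID echoed back for correlation.",
                        },
                        "name": {"type": "string"},
                        "domain": {"type": "string"},
                    },
                    "required": ["client_ref"],
                },
            }
        },
        "required": ["items"],
    },
}
```

Note how the constraint appears both in the schema (`maxItems`) and in prose, so an agent that reads only the description still learns the limit.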
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
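The weighting above can be sketched numerically. In the sketch below, the per-tool TDQS inputs are illustrative (3.4 matches this server's lowest reported tool score), and the coherence value is the equal-weighted average of the four coherence dimensions scored in this report (4, 4, 5, 4).

```python
def tool_tdqs(purpose, usage, behavior, params, conciseness, completeness):
    """Weighted Tool Definition Quality Score for one tool (1-5 scale)."""
    return (0.25 * purpose + 0.20 * usage + 0.20 * behavior
            + 0.15 * params + 0.10 * conciseness + 0.10 * completeness)

def overall_quality(tool_scores, coherence):
    """Combine per-tool TDQS values with the server coherence score.

    Definition quality = 60% mean + 40% minimum, so a single weak tool
    drags the score down; overall = 70% definition quality + 30% coherence.
    """
    mean_tdqs = sum(tool_scores) / len(tool_scores)
    definition_quality = 0.6 * mean_tdqs + 0.4 * min(tool_scores)
    return 0.7 * definition_quality + 0.3 * coherence

# Coherence dimensions from this report, weighted equally:
coherence = (4 + 4 + 5 + 4) / 4  # 4.25

# Illustrative per-tool TDQS values (3.4 is the reported lowest):
score = overall_quality([4.5, 4.0, 3.4], coherence)
```

With these inputs the overall score lands just under 3.9, which falls in the A tier (≥3.5).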
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/nolenation04/anchord-mcp'
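The same endpoint can be called from any HTTP client. A minimal Python sketch is below; the URL pattern is taken from the curl example above, but the JSON response shape is not documented here, so the fetch helper simply returns the parsed body.

```python
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner: str, name: str) -> str:
    """Build the directory API URL for a given owner/server slug."""
    return f"{API_BASE}/{owner}/{name}"

def fetch_server(owner: str, name: str) -> dict:
    """GET a server's metadata; assumes the endpoint returns JSON."""
    with urllib.request.urlopen(server_url(owner, name)) as resp:
        return json.load(resp)

# Example (performs a live request, so it is commented out here):
# info = fetch_server("nolenation04", "anchord-mcp")
```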
If you have feedback or need assistance with the MCP directory API, please join our Discord server.