Contentrain MCP
Official Server Quality Checklist
- Disambiguation 3/5
While the detailed descriptions clarify intent, several tools overlap in domain: contentrain_apply, contentrain_bulk, and contentrain_content_save all modify content entries with nuanced distinctions (normalization vs. batch vs. individual CRUD) that could confuse selection. Similarly, contentrain_init and contentrain_scaffold both handle project setup, and contentrain_describe versus contentrain_describe_format both return schema information at different scopes.
Naming Consistency 3/5
The tool set mixes naming conventions: roughly half follow a clear resource_action pattern (contentrain_content_save, contentrain_model_delete) while others use standalone verbs (contentrain_apply, contentrain_scan, contentrain_validate) or nouns (contentrain_bulk, contentrain_status). This inconsistency makes it harder to predict tool names from their function, though the contentrain_ prefix provides basic namespacing.
Tool Count 4/5
Fifteen tools is a well-suited count for a content management system covering models, content CRUD, validation, git workflows, and project initialization. While contentrain_init and contentrain_scaffold overlap slightly in setup functionality, and contentrain_apply bundles multiple complex modes, the overall surface feels comprehensive without excessive granularity.
Completeness 3/5
The surface covers model and content lifecycle management well but has notable gaps: there is no dedicated tool to retrieve a single content entry by ID or slug (requiring agents to filter list results), and no explicit model_list operation (though status may provide this). Additionally, contentrain_bulk lacks specificity about which operations it batches, creating uncertainty about its coverage.
Average 4.1/5 across 15 of 15 tools scored. Lowest: 3.2/5.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
This repository includes a glama.json configuration file.
- This server provides 15 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the git auto-commit side effect ('auto-committed to git'), but fails to warn about the destructive nature of delete_entries or explain partial failure behavior for bulk operations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
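Since the server ships no annotations, one remedy is the optional `annotations` block from the MCP tool specification (`readOnlyHint`, `destructiveHint`, `idempotentHint`). The sketch below shows what that disclosure could look like for contentrain_bulk; the hint values and the agent-side check are illustrative assumptions, not the server's actual metadata.

```python
# Hypothetical tools/list entry for contentrain_bulk, with MCP annotation
# hints filled in to disclose the destructive delete_entries mode. The
# specific hint values here are assumed, not taken from the real server.
bulk_tool = {
    "name": "contentrain_bulk",
    "description": "Batch operations on content entries. Changes are auto-committed to git.",
    "annotations": {
        "readOnlyHint": False,    # the tool mutates content files
        "destructiveHint": True,  # delete_entries can remove data
        "idempotentHint": False,  # re-running a batch commits again
    },
}

def discloses_destruction(tool: dict) -> bool:
    """Agent-side check: does the tool admit it can destroy data?"""
    hints = tool.get("annotations", {})
    return hints.get("destructiveHint", False)
```

With hints like these present, an agent no longer depends on the prose description alone to learn that a call may be destructive.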
Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at two sentences. Information is front-loaded with purpose first, followed by behavioral trait. Every sentence earns its place with zero redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a 7-parameter tool with three distinct operation modes and destructive capabilities. Missing: operation mode selection logic, safety warnings for delete_entries, return value description (no output schema exists), and error handling behavior for batch operations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 86% (high), establishing a baseline of 3. The description adds no parameter-specific guidance, but the schema adequately documents most fields. However, the description doesn't clarify the conditional parameter requirements (e.g., confirm required only for delete_entries).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
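The conditional requirement flagged above (confirm is needed only for delete_entries) can be made explicit to agents. A minimal validator sketching that rule is below; the operation names come from this review, while the argument shape is an assumption.

```python
# Sketch of the conditional rule the description leaves implicit:
# `confirm` is required only when operation == "delete_entries".
# Operation names are taken from the review; the rest is assumed.
OPERATIONS = {"copy_locale", "update_status", "delete_entries"}

def check_bulk_args(args: dict) -> list[str]:
    """Return human-readable problems; an empty list means the call is valid."""
    problems = []
    op = args.get("operation")
    if op not in OPERATIONS:
        problems.append(f"unknown operation: {op!r}")
    if op == "delete_entries" and args.get("confirm") is not True:
        problems.append("confirm must be true for delete_entries")
    return problems
```

The same constraint could equally be encoded in the input schema itself (JSON Schema `if`/`then`), so that clients reject malformed calls before they reach the server.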
Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States 'Batch operations on content entries' which clearly identifies the resource and distinguishes from single-entry siblings (contentrain_content_save/delete). However, it doesn't enumerate the specific operation types (copy_locale, update_status, delete_entries) available, leaving the agent to discover these from the schema.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no explicit when-to-use guidance versus alternatives. While 'batch' implies use for multiple entries, there's no guidance on selecting between the three operation modes or when to use this versus single-entry tools like contentrain_content_delete.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
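One way to address the gaps scored above is a redrafted description that enumerates the modes and adds "use X over Y" guidance. The wording below is a hypothetical draft assembled from details in this review, not the server's actual text.

```python
# Hypothetical redraft of the contentrain_bulk description, adding the
# mode enumeration and sibling-tool guidance the review asks for.
IMPROVED_DESCRIPTION = (
    "Batch operations on content entries: copy_locale, update_status, or "
    "delete_entries (requires confirm=true). Use this for many entries at "
    "once; for a single entry use contentrain_content_save or "
    "contentrain_content_delete. Changes are auto-committed to git."
)
```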
- Behavior 3/5
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the critical side effect that 'Changes are auto-committed to git,' which is essential for a setup tool. However, it omits other important behavioral aspects: whether it overwrites existing files, required git state, failure modes, and return value structure.
Conciseness 5/5
Three tightly constructed sentences with zero waste. Purpose is front-loaded ('Template-based project setup'), followed by an enumeration of options and a closing behavioral note. Every sentence earns its place with no redundancy or filler.
Completeness 3/5
Given the simple 3-parameter schema with complete coverage and no output schema, the description adequately covers the essential operational context (templates and git side effects). However, it says nothing about the tool's return value, idempotency guarantees, or filesystem impact, all of which would be expected for a project scaffolding tool.
Parameters 3/5
Schema description coverage is 100%, establishing a baseline of 3. The description mirrors the schema by listing available templates, adding no additional semantic depth for the 'locales' or 'with_sample_content' parameters. It neither adds constraint details beyond the schema nor leaves parameters undocumented.
Purpose 4/5
States a specific verb ('setup') and resource ('project'), clearly identifying this as a project scaffolding tool. Lists the available template options (blog, landing, etc.), which helps distinguish it from sibling content management tools. However, it does not explicitly differentiate itself from 'contentrain_init', which may cause confusion about when to use each.
Usage Guidelines 2/5
Provides no guidance on when to use this tool versus alternatives (particularly 'contentrain_init'), nor does it mention prerequisites like git initialization or existing directory state. The description covers only what the tool does, not when to invoke it.
- Behavior 3/5
No annotations are provided, so the description carries the full burden. It discloses the read-only nature and the data source location (.contentrain/), but lacks details on pagination behavior, error modes when a model doesn't exist, and the structure of resolved relations.
Conciseness 5/5
Two sentences with zero waste. The first establishes purpose and safety profile; the second provides data source context and an operational warning. Efficiently front-loaded.
Completeness 3/5
Given 6 parameters with nested objects and no output schema, the description provides minimum viable context (purpose, read-only status, data source). It is missing a return value description and the detailed filter semantics that would compensate for the lack of an output schema.
Parameters 3/5
Schema coverage is 100%, establishing a baseline of 3. The description does not add parameter-specific semantics (e.g., filter syntax examples, resolve behavior details) beyond what the schema already documents.
Purpose 5/5
States the specific verb 'List' with the resource 'content entries' and includes a 'read-only' parenthetical. The warning about the .contentrain/ directory and manual file modification effectively distinguishes this from write-oriented siblings like contentrain_content_save.
Usage Guidelines 3/5
Provides negative guidance ('do NOT manually create or modify content files'), implying this is the correct read path. However, it lacks explicit when-to-use guidance versus other read operations like contentrain_describe or contentrain_scan.
- Behavior 3/5
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully describes the return value (a 'comprehensive specification' covering file structure, meta files, and locale strategies), but omits safety characteristics (e.g., read-only status) and performance considerations.
Conciseness 5/5
Two efficient sentences with zero waste: the first defines the action and scope, the second details the return value. Information is front-loaded and every sentence earns its place.
Completeness 4/5
Given the absence of input parameters, annotations, and an output schema, the description adequately covers the essentials: what the tool does and what it returns (file structure, conventions, strategies). Sufficient for a simple documentation retrieval tool.
Parameters 4/5
The tool has zero parameters, establishing a baseline score of 4. The description does not need to compensate for parameter documentation and appropriately focuses on the tool's behavior and output rather than non-existent inputs.
Purpose 4/5
Clearly states that the tool 'Describes the Contentrain content file format' with a specific scope ('any language/platform'), providing a concrete verb and resource. However, it does not explicitly differentiate itself from the sibling tool 'contentrain_describe', leaving ambiguity about which description tool to use.
Usage Guidelines 3/5
The description implies its usage context (invoke when you need to understand file structure, JSON formats, and markdown conventions) but lacks explicit 'when to use' or 'when not to use' guidance relative to alternatives like 'contentrain_describe'.
- Behavior 3/5
With no annotations provided, the description carries the full burden. It discloses the git auto-commit side effect and the prohibition against manual file editing, but omits error handling behavior, return value structure, and whether updates are partial or full replacements. That leaves significant behavioral gaps for a structural mutation tool.
Conciseness 5/5
Two sentences with zero waste: the first establishes purpose, the second delivers a critical behavioral warning. The information is front-loaded and appropriately terse for the complexity level.
Completeness 3/5
Given the tool's complexity (nested field definitions with 27 types, 9 parameters) and the lack of annotations or an output schema, the description provides minimal viable coverage. It flags the git integration quirk but fails to describe return values, error conditions, or the create-vs-update determination logic (implied by ID presence but never explained).
Parameters 3/5
Schema description coverage is 100%, with detailed descriptions for all 9 parameters including the complex nested 'fields' object and enum values. The description adds no additional parameter-specific guidance, meeting the baseline expectation when the schema is self-documenting.
Purpose 5/5
Explicitly states 'Create or update a model definition', providing a specific verb pair and resource type. It clearly frames this as a model-level operation (schema definition) rather than a content operation, differentiating it from siblings like contentrain_content_save.
Usage Guidelines 3/5
Provides one critical safety constraint ('do NOT manually edit .contentrain/ files') but lacks explicit guidance on when to use this versus contentrain_model_delete or how to choose between create and update semantics. The workflow context (git auto-commit) is mentioned but not fully contextualized.
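The review infers that create-vs-update is decided by whether an ID is supplied. A sketch of that dispatch follows; the payload shapes and field names below are assumptions, not the server's actual schema.

```python
# Sketch of the create-vs-update determination the review infers from
# ID presence. Payload shapes here are illustrative assumptions.
def save_mode(payload: dict) -> str:
    """Return 'update' when an id is supplied, else 'create'."""
    return "update" if payload.get("id") else "create"

example_create = {"name": "Blog", "fields": [{"name": "title", "type": "text"}]}
example_update = {"id": "blog", "fields": [{"name": "title", "type": "text"}]}
```

Spelling this rule out in the description would remove the guesswork the Completeness score penalizes.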
- Behavior 4/5
With no annotations provided, the description carries the full disclosure burden. It effectively reveals the git auto-commit side effect and the scope of destruction (model + content/meta). However, it omits critical safety context, such as whether deletion is reversible and what return value indicates success.
Conciseness 5/5
Two sentences total: the first establishes purpose and scope, the second delivers the critical behavioral constraint (git auto-commit). No filler, no redundancy, front-loaded with essential information.
Completeness 4/5
Adequately covers the destructive nature and persistence mechanism for a 2-parameter tool. It lacks an explicit description of the return value or success/failure behavior, though no output schema exists to require this. An irreversibility warning would strengthen it, given the destructive nature.
Parameters 3/5
The input schema has 100% description coverage ('Model ID to delete', 'Must be true to confirm deletion'). The description implies the confirm parameter's purpose through the deletion warning but adds no explicit syntax or format details beyond the schema. A baseline 3 is appropriate for complete schema coverage.
Purpose 5/5
Explicitly states 'Delete a model and its content/meta': a specific verb (Delete), resource (model), and scope (content/meta). This clearly distinguishes it from the sibling contentrain_content_delete (which deletes entries, not the model structure itself) and from contentrain_model_save.
Usage Guidelines 3/5
Provides specific operational guidance about git auto-commit and warns against manual editing of .contentrain/ files. However, it lacks explicit guidance on when to use this versus contentrain_content_delete (destroy the model structure vs. just content entries) and prerequisites like backup warnings.
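The disclosed scope of destruction (model + content/meta) could be made concrete for agents. The sketch below enumerates what a deletion might touch, assuming a conventional .contentrain/ layout; every path here is a hypothetical illustration, not the server's documented structure.

```python
# Hypothetical enumeration of what deleting a model removes, per the
# description's "model and its content/meta" scope. All paths are
# assumed for illustration; the real layout is not documented here.
def paths_removed(model_id: str) -> list[str]:
    return [
        f".contentrain/models/{model_id}.json",  # model definition (assumed)
        f".contentrain/content/{model_id}/",     # content entries (assumed)
        f".contentrain/meta/{model_id}.json",    # meta file (assumed)
    ]
```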
- Behavior 4/5
With no annotations provided, the description carries the full burden. It discloses the critical side effect of automatic git commits and warns against manual file manipulation. It is missing idempotency details (what happens if already initialized) and prerequisites (e.g., a git repo must already exist).
Conciseness 5/5
Two sentences total, front-loaded with purpose ('Initialize .contentrain/ structure') followed by the critical behavioral warning. Every sentence earns its place with zero waste.
Completeness 4/5
For a 3-parameter initialization tool with no output schema, the description adequately covers the primary safety concerns (git integration). Minor gap: it doesn't mention prerequisites (git initialization required) or idempotency behavior.
Parameters 3/5
With 100% schema description coverage, the baseline is 3. The description adds no parameter-specific guidance beyond what the schema already documents (stack auto-detection, locale defaults, domain suggestions).
Purpose 5/5
Uses a specific verb ('Initialize') with a specific resource ('.contentrain/ structure'), clearly distinguishing this setup tool from siblings like content_save or model_delete, which manipulate existing data.
Usage Guidelines 4/5
Provides critical negative guidance ('do NOT manually create .contentrain/ files') and discloses the git auto-commit behavior. However, it lacks an explicit comparison to sibling tools like contentrain_scaffold to clarify when to use init versus other setup tools.
- Behavior 3/5
No annotations are provided, so the description carries the full burden. It establishes that this is a write operation ('Push') and clarifies PR handling delegation, but lacks disclosure of failure modes (e.g., push conflicts), idempotency, and authentication requirements.
Conciseness 5/5
Three sentences with zero waste: the first states the purpose, the second defines scope limitations, the third provides operational guidance. Front-loaded and appropriately sized for the tool's complexity.
Completeness 4/5
Given the simple 2-parameter schema with full coverage and no output schema, the description adequately covers the tool's role. It appropriately omits return value details (no output schema exists), though it could mention conflict resolution behavior.
Parameters 3/5
Schema description coverage is 100%, establishing a baseline of 3. The description reinforces the 'contentrain/*' branch naming convention already present in the schema, adding conceptual context but no additional syntax or format details.
Purpose 5/5
Opens with a specific verb ('Push') and resource ('contentrain/* branches'), clearly distinguishing this git operation from sibling content management tools like contentrain_content_save or contentrain_model_delete. The scope ('to remote') is explicit.
Usage Guidelines 5/5
Excellent guidance: it explicitly states the tool is 'push-only', clarifies that 'PR creation is handled by the platform' (defining the boundary), and commands 'Do NOT manually push or create PRs', preventing incorrect manual workflows.
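The 'contentrain/*' branch scope lends itself to a simple pre-flight check on the client side. A sketch, assuming plain glob semantics for the pattern (the server's exact matching rules are not documented here):

```python
import fnmatch

# Pre-flight check for the push tool's 'contentrain/*' branch convention.
# Glob matching is an assumption about how the server interprets the pattern.
def is_pushable(branch: str) -> bool:
    return fnmatch.fnmatch(branch, "contentrain/*")
```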
- Behavior 4/5
With no annotations provided, the description carries the full disclosure burden and does so effectively. It explicitly states the mutating behavior when fix:true ('auto-fixes structural issues') and enumerates the specific modifications (canonical sort, orphan meta, missing locale files). It includes a safety warning about manual file editing, though it omits details on atomicity and idempotency.
Conciseness 5/5
Two sentences with zero waste: the first establishes purpose and validation scope, the second covers fix behavior and the critical safety constraint. Information is front-loaded and dense without redundancy.
Completeness 4/5
Appropriately complete for a 2-parameter validation tool with 100% schema coverage. It describes the validation scope comprehensively and clarifies the auto-fix side effects. Minor gap: no output description (and no output schema exists), though the detailed behavioral description partially mitigates this.
Parameters 4/5
Schema coverage is 100%, establishing a baseline of 3. The description adds value by contextualizing the fix parameter with specific examples of structural issues (canonical sort, orphan meta, missing locale files) and coupling it with the safety warning about manual edits, providing usage context beyond the raw schema.
Purpose 5/5
Opens with the specific verb 'Validate' and the resource 'project content against model schemas'. It lists concrete validation targets (required-field violations, type mismatches, broken relations, secret leaks, i18n parity) that clearly distinguish it from sibling CRUD operations like content_save or model_delete.
Usage Guidelines 3/5
Provides clear conditional guidance for the fix parameter ('If fix:true, auto-fixes...') and safety constraints ('do NOT manually edit .contentrain/ files'). However, it lacks an explicit comparison to sibling tools like contentrain_scan or contentrain_apply to say when validation is the right choice.
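'Canonical sort' is one of the auto-fixes the description names without defining. One plausible reading, sketched below, is stable key ordering so that git diffs stay minimal; the exact rules contentrain_validate applies are not documented, so this is an assumption.

```python
import json

# One interpretation of the "canonical sort" auto-fix: serialize content
# with a stable key order, so semantically equal files are byte-identical.
# This is an assumed reading, not the server's documented algorithm.
def canonicalize(entry: dict) -> str:
    return json.dumps(entry, sort_keys=True, ensure_ascii=False, indent=2)
```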
- Behavior4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and succeeds in disclosing that changes are 'auto-committed to git'—crucial behavioral context for a deletion tool. It also warns against manual file edits. Could improve by stating whether deletions are irreversible or describing error behavior on missing entries.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: sentence 1 states purpose, sentence 2 covers dictionary-specific usage, sentence 3 gives the git/manual-edit warning. Information is front-loaded and every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness4/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 6-parameter destructive operation with no output schema or annotations, the description adequately covers the git side-effect, confirmation requirement, and dictionary nuances. Minor gap: does not describe what happens when required identifiers (id/slug) are missing or the return value on success.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% description coverage (baseline 3), the description adds valuable semantic context that 'keys' is specifically for dictionary partial deletion versus full locale file removal. This adds meaning beyond the schema's technical description of the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Delete content entries'—a specific verb plus clear resource. It distinguishes from the sibling 'contentrain_model_delete' by targeting entries rather than models, and clarifies it handles both collections (via id) and documents (via slug).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines: 4/5
Provides clear guidance on when to use the 'keys' parameter versus omitting it for dictionary operations. Includes a critical warning not to manually edit .contentrain/ files post-invocation. Lacks explicit naming of sibling alternatives (e.g., when to use contentrain_content_save instead), preventing a 5.
Behavior: 4/5
No annotations provided, so the description carries the full burden. Discloses the critical git auto-commit behavior and persistence side effects, and explains ID auto-generation for collections and which fields are ignored per model kind. Missing a return-value description and error-handling behavior for a mutation tool.
Conciseness: 5/5
Four dense sentences with zero waste. Front-loaded purpose statement followed by structured model-kind breakdown using parallel syntax. Critical git warning placed at end as imperative constraint. Every clause delivers essential usage information.
Completeness: 4/5
Comprehensive coverage of input variations for complex polymorphic content models. Addresses git integration consequences. Missing output semantics (what the tool returns on success—generated IDs? confirmation?) which is notable given no output schema exists.
Parameters: 4/5
Schema coverage is 100% establishing baseline of 3. Description adds significant value by explaining polymorphic entry formats across four model kinds—semantic context the schema cannot capture (e.g., 'data keys are the identities' for DICTIONARY, 'body' key convention for DOCUMENT).
Purpose: 5/5
Opens with specific verb 'Save' + resource 'content entries', immediately clarifying the write operation. Distinguishes from sibling 'contentrain_model_save' by focusing on content entries rather than models, and from 'contentrain_content_list/delete' via the save operation.
Usage Guidelines: 4/5
Provides explicit field requirements for four model kinds (DICTIONARY, COLLECTION, DOCUMENT, SINGLETON), including which fields are ignored vs required. Warns against manual git file editing. Lacks explicit differentiation from 'contentrain_bulk' and 'contentrain_apply' siblings for batch operations.
Behavior: 4/5
No annotations provided, so description carries full burden. Discloses read-only nature explicitly and warns against manual file creation workflow. Deduct one point for not describing return format, error conditions, or cache behavior.
Conciseness: 5/5
Two sentences, zero waste. Front-loaded with primary purpose, second sentence provides critical workflow guidance. Every word earns its place.
Completeness: 4/5
Appropriate for a read-only schema inspection tool with simple parameters. Lacks output schema description (what the returned schema object contains), but the read-only warning and alternative tool reference provide sufficient context for safe invocation.
Parameters: 3/5
Schema has 100% description coverage with examples ('blog-post', 'hero') and defaults documented. Description mentions 'single model' aligning with the model parameter but adds no syntax details beyond schema. Baseline 3 appropriate when schema does heavy lifting.
Purpose: 5/5
Clear specific verb ('Get') + resource ('full schema of a single model') with scope limitation. Explicitly distinguishes from content manipulation siblings by mentioning contentrain_content_save as the alternative for creating content.
Usage Guidelines: 5/5
Explicit negative constraint ('Do NOT manually create content files') and names the specific alternative tool to use instead ('use contentrain_content_save instead'). Prevents confusion between schema inspection and content creation.
Behavior: 4/5
No annotations provided, so description carries full burden. Successfully discloses read-only nature, return payload categories (config, models, context), and implies filesystem access to .contentrain/ directory. Missing minor details like idempotency or caching behavior, but sufficient for a status tool.
Conciseness: 5/5
Perfect efficiency: Two sentences, zero waste. First sentence front-loads purpose, scope, and safety (read-only). Second sentence provides critical operational warning. Every word earns its place.
Completeness: 4/5
Adequate for a zero-parameter read tool. Compensates for missing output schema by listing return categories (config, models, context). Minor gap: doesn't specify output format (JSON vs text) or structure details, but 'Returns' implies the presence of data.
Parameters: 4/5
Zero parameters with 100% schema coverage (empty object). Baseline score of 4 applies as there are no parameters requiring semantic clarification beyond the schema.
Purpose: 5/5
Excellent specificity: 'Get full project status' provides clear verb (Get) and resource (project status). The parenthetical '(read-only)' and return value description ('config, models, context') clearly distinguish this from sibling mutation tools like contentrain_apply, contentrain_content_save, and contentrain_model_delete.
Usage Guidelines: 4/5
Provides explicit negative guidance: 'Do NOT manually edit .contentrain/ based on this output' warns against a specific misuse pattern. However, lacks positive guidance on when to choose this over similar inspection tools like contentrain_describe or contentrain_scan.
Behavior: 4/5
No annotations provided, so description carries full burden. Discloses critical safety trait ('Read-only — no changes to disk or git'), determinism ('MCP finds strings deterministically'), and pagination behavior for candidates mode. Deducting one point as it lacks details on error handling (e.g., invalid paths) or performance characteristics for large codebases.
Conciseness: 5/5
Every sentence earns its place: purpose statement, mode definitions, safety guarantee, determinism clarification, and workflow recommendation. Efficiently structured with no redundancy despite covering 8 parameters and 3 distinct behavioral modes. Front-loaded with the core action.
Completeness: 4/5
Given high complexity (3 modes, 8 optional parameters) and lack of annotations/output schema, description successfully covers operational modes, workflow sequencing, and safety constraints. Would benefit from brief mention of output structure (e.g., 'returns JSON with strings array') to achieve full completeness without an output schema.
Parameters: 4/5
Schema coverage is 100% (baseline 3). Description adds significant conceptual meaning beyond schema: explains 'graph' builds import/component graphs for intelligence, 'candidates' performs extraction with pre-filtering, and 'summary' provides stats. Connects 'paginate through candidates' to the limit/offset parameters, adding usage context not in schema definitions.
Purpose: 5/5
Opens with specific verb 'Scan' and resource 'project source code for content strings'. Clearly distinguishes from mutation siblings (contentrain_apply, contentrain_save) by stating read-only nature. Explicitly enumerates the three operational modes (graph, candidates, summary) with their distinct purposes.
Usage Guidelines: 5/5
Provides explicit workflow recommendation: 'start with summary or graph for orientation, then paginate through candidates'. Clarifies decision boundary ('MCP finds strings deterministically; the agent decides what is content'). Implicitly guides when not to use mutation siblings by emphasizing read-only safety.
Behavior: 5/5
With no annotations provided, the description carries full behavioral disclosure burden and excels: specifies dry_run defaults to true, explains that execute mode 'writes files to disk, commits to a branch,' discloses the 'review workflow (never auto-merge)' constraint, and clarifies safety mechanisms (conflict resolution in dry run, branch health checks required).
Conciseness: 4/5
Information-dense with four well-structured sentences covering modes, dry-run behavior, execute behavior, and workflow recommendations. The DRY RUN/EXECUTE labeling aids scanability. Minor quibble: 'normalize operations' should be 'normalization operations,' but overall efficiently packed.
Completeness: 4/5
For a complex tool with nested objects and 5 parameters, the description comprehensively covers mode behaviors, safety mechanisms, and workflow. Minor gap: while it mentions 'returns a full preview,' it doesn't describe the output structure or error scenarios given the lack of output schema, though this is partially mitigated by the detailed behavioral descriptions.
Parameters: 4/5
While schema coverage is 100% (baseline 3), the description adds valuable semantic context: maps 'extractions' parameter to 'extract' mode and 'patches'/'scope' to 'reuse' mode, explains the dry_run parameter's role in the two-phase workflow, and clarifies that entries in extractions mode leave source files untouched while patches modify them.
Purpose: 5/5
The description clearly defines the tool's purpose with specific verbs ('writes', 'patches') and distinguishes two distinct modes ('extract' and 'reuse'). It differentiates from siblings like contentrain_content_save by specifying this is for 'normalize operations' with specific behaviors (source untouched vs source patching).
Usage Guidelines: 5/5
Provides explicit workflow guidance: 'always run dry_run first, review the preview, then call again with dry_run:false to execute.' Clearly distinguishes when to use DRY RUN (validation/preview) versus EXECUTE (writing to disk), including the prerequisite that branch health checks must pass for execution.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions, yielding a per-tool Tool Definition Quality Score (TDQS): Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
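The weighting scheme above can be sketched in a few lines of Python. The weights and tier cutoffs come from the description above; the dimension key names and any sample scores are illustrative, not Glama's actual implementation.

```python
# Sketch of the published quality-score formula.
DIMENSION_WEIGHTS = {
    "purpose": 0.25,        # Purpose Clarity
    "usage_guidelines": 0.20,
    "behavior": 0.20,       # Behavioral Transparency
    "parameters": 0.15,     # Parameter Semantics
    "conciseness": 0.10,    # Conciseness & Structure
    "completeness": 0.10,   # Contextual Completeness
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 score for a single tool across the six dimensions."""
    return sum(scores[dim] * w for dim, w in DIMENSION_WEIGHTS.items())

def server_definition_quality(tool_scores: list) -> float:
    """60% mean TDQS + 40% minimum TDQS: one weak tool drags the score."""
    tdqs = [tool_tdqs(s) for s in tool_scores]
    return 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)

def overall_score(definition_quality: float, coherence: float) -> float:
    """70% Tool Definition Quality + 30% Server Coherence."""
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to a letter tier; B and above is passing."""
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

For example, a tool scored 5/4/4/4/5/4 across the six dimensions yields a TDQS of 4.35, and the 40% minimum term means a server cannot hide one poorly documented tool behind an otherwise strong set.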
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Contentrain/ai'
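For scripted access, the same endpoint can be queried with Python's standard library. A minimal sketch; the structure of the returned JSON is an assumption here, so consult the MCP directory API documentation for the actual response schema.

```python
import json
import urllib.request

# Endpoint taken from the curl example above.
URL = "https://glama.ai/api/mcp/v1/servers/Contentrain/ai"

def fetch_server_profile(url: str = URL) -> dict:
    """Fetch a server's directory entry and parse it as JSON."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```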
If you have feedback or need assistance with the MCP directory API, please join our Discord server.