coremodels
Server Details
Schema modeling in JSON, JSON-LD, and other formats with the CoreModels platform.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 16 of 16 tools scored. Lowest: 2.6/5.
Most tools have distinct purposes targeting specific resources (nodes, mixins, relations) and actions (create, fetch, remove, update). However, some overlap exists: core_models_bulk_create and core_models_create_node both handle node creation, which could cause confusion despite bulk_create's broader scope. Descriptions help clarify, but the overlap is noticeable.
All tools follow a consistent snake_case pattern with a clear 'core_models_' prefix and verb_noun structure (e.g., create_node, fetch_nodes, remove_relation). This uniformity makes the tool set predictable and easy to navigate, with no deviations in naming style.
With 16 tools, the count is slightly high but reasonable for a data modeling system covering nodes, mixins, relations, and imports. It feels comprehensive without being overwhelming, though it borders on the upper limit of typical well-scoped servers (3-15 tools).
The tool set provides complete CRUD/lifecycle coverage for the domain: nodes (create, fetch, update, remove, restore), mixins (create type/value, get info, remove type/value), and relations (create, get info, remove). Additional tools like import_jsonschema and project_content_summary enhance functionality, leaving no obvious gaps for core operations.
Available Tools
16 tools

core_models_bulk_create (A)
Bulk create nodes, relations, and mixin values in one atomic operation.
USE WHEN: Creating multiple nodes, or any creation that includes relations and/or mixins. LIMITATION: Cannot create AttributeMixin mixins - use the create mixin API instead.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Authentication token | |
| bulkCreateDto | Yes | The bulk create payload containing nodes, relations, and mixins | |
| graphProjectId | Yes | CoreModels project ID | |
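The schema does not publish the internal shape of `bulkCreateDto`, so the field names below are illustrative assumptions only. A bulk request might bundle nodes, relations, and mixin values like this:

```python
# Hypothetical bulkCreateDto payload -- field names are illustrative
# assumptions, not confirmed by the tool schema.
bulk_create_dto = {
    "nodes": [
        {"name": "Pump", "nodeType": "Element"},
        {"name": "Valve", "nodeType": "Element"},
    ],
    "relations": [
        # Relations may reference nodes created in this same request
        # or pre-existing node IDs.
        {"fromNode": "Pump", "toNode": "Valve", "relationGroupId": "rg-123"},
    ],
    "mixinValues": [
        # Note the stated limitation: AttributeMixin mixins cannot be
        # created here; use the create-mixin tools instead.
        {"node": "Pump", "mixinId": "mx-456", "values": {"col-1": "50 Hz"}},
    ],
}

# Top-level arguments as documented in the parameter table.
call = {
    "token": "<auth-token>",
    "graphProjectId": "<project-guid>",
    "bulkCreateDto": bulk_create_dto,
}
```

Because the operation is atomic, either every node, relation, and mixin value in the payload is created or none are.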
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: the atomic operation nature (all-or-nothing), the limitation on AttributeMixin creation, and the ability to reference existing or new nodes in relations/mixins (implied through the input schema context). However, it doesn't mention authentication needs, rate limits, or error handling, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose in the first sentence. The 'USE WHEN' and 'LIMITATION' sections are clearly labeled and add essential guidance without redundancy. Every sentence earns its place by providing distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a bulk creation tool with nested objects and no output schema, the description does well by clarifying the atomic operation and usage scenarios. However, without annotations or output schema, it lacks details on response format, error conditions, or side effects, which are important for a mutation tool of this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no parameter semantics beyond what's in the schema; it does not, for example, explain the structure of bulkCreateDto or how the token is used. This meets the baseline of 3 when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('bulk create') and resources ('nodes, relations, and mixin values'), distinguishing it from sibling tools like core_models_create_node or core_models_create_relation. It explicitly mentions the atomic nature of the operation, which is a key differentiator.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'USE WHEN' section explicitly states when to use this tool ('Creating multiple nodes, or any creation that includes relations and/or mixins'), and the 'LIMITATION' section provides a clear exclusion ('Cannot create AttributeMixin mixins') with an alternative ('use the create mixin API instead'), which likely refers to sibling tools like core_models_create_mixin_type or core_models_create_mixin_value.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
core_models_create_mixin_type (A)
Creates a new mixin type for a CoreModels project.
Mixins provide reusable property sets that can be applied to any node across all project spaces. Always check existing mixins first with CoreModels_GetMixinsInfo to avoid duplicates.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Authentication token | |
| mixinTypeDto | Yes | The mixin type definition | |
| graphProjectId | Yes | CoreModels project ID | |
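The check-first guidance above can be sketched as a small workflow. Here `call_tool` is a stand-in for however your MCP client invokes a tool, and the response shape for the mixins-info call is an assumption:

```python
# Sketch of the recommended workflow: check existing mixins before
# creating a new mixin type. `call_tool` is a placeholder for an MCP
# client call; the response shape is an assumption.
def create_mixin_type_if_missing(call_tool, token, project_id, name, columns):
    existing = call_tool("core_models_get_mixins_info", {
        "token": token,
        "graphProjectId": project_id,
    })
    # Assumed response shape: a list of {"name": ...} entries.
    if any(m["name"] == name for m in existing):
        return None  # avoid creating a duplicate mixin type
    return call_tool("core_models_create_mixin_type", {
        "token": token,
        "graphProjectId": project_id,
        "mixinTypeDto": {"name": name, "columns": columns},
    })
```

This mirrors the description's "always check existing mixins first" rule without assuming anything about the create call's return value.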
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly indicates this is a creation/mutation operation ('Creates'), which implies it's not read-only and may have side effects. However, it doesn't mention authentication requirements (though the schema shows a 'token' parameter), permission levels, error conditions, or what happens on success/failure. The description adds some context about mixin purpose but lacks operational details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly sized at three sentences with zero waste. The first sentence states the core purpose, the second provides essential context about what mixins are, and the third gives critical usage guidance. Every sentence earns its place, and the structure is front-loaded with the most important information first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with no annotations and no output schema, the description provides adequate but incomplete coverage. It explains what the tool does and gives important usage guidance, but doesn't describe what the tool returns, error conditions, or authentication requirements. The 100% schema coverage helps, but for a mutation operation, more behavioral context would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema: it doesn't explain the format of 'mixinTypeDto', the purpose of 'graphProjectId', or authentication details. The baseline of 3 is appropriate when the schema does all the parameter documentation work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Creates a new mixin type') and resource ('for a CoreModels project'), with additional context about what mixins are ('reusable property sets that can be applied to any node across all project spaces'). It distinguishes this from sibling tools like 'core_models_create_mixin_value' (which creates values rather than types) and 'core_models_remove_mixin_type' (which removes rather than creates).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Always check existing mixins first with CoreModels_GetMixinsInfo to avoid duplicates.' This gives a clear prerequisite action and names the specific alternative tool to use first, helping the agent understand the proper workflow and avoid redundant creations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
core_models_create_mixin_value (A)
Create a mixin value for a node.
IMPORTANT NOTES:
- MixinId must be retrieved from the GetMixinsInfo tool.
- A node can have multiple mixins.
- Each mixin can have multiple columns.
- When creating or updating a mixin column value, always use the column ID.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Authentication token | |
| graphProjectId | Yes | CoreModels project ID (GUID) | |
| mixinCreateDto | Yes | Mixin value creation payload | |
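The notes above impose two constraints: the mixin ID must come from the get-mixins-info tool, and values are keyed by column ID. A payload sketch, with field names as assumptions:

```python
# Hypothetical mixinCreateDto payload -- field names are illustrative
# assumptions. Per the notes: mixinId comes from the get-mixins-info
# tool, and values are keyed by column ID, never by column name.
mixin_create_dto = {
    "nodeId": "node-001",
    "mixinId": "mx-456",     # retrieved via core_models_get_mixins_info
    "values": {
        "col-volt": "230",   # keys are column IDs, not column names
        "col-freq": "50",
    },
}

call = {
    "token": "<auth-token>",
    "graphProjectId": "<project-guid>",
    "mixinCreateDto": mixin_create_dto,
}
```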
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context: it's a creation tool (implies mutation), specifies that a node can have multiple mixins and each mixin multiple columns, and provides critical constraints (use column ID). However, it lacks details on permissions, error handling, or response format, which are important for a mutation tool with no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose, followed by bullet-pointed important notes that add necessary context without fluff. Every sentence earns its place, but the structure could be slightly improved by integrating notes more seamlessly or adding a brief example.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (mutation tool with nested objects, no annotations, no output schema), the description is moderately complete. It covers purpose, usage guidelines, and some behavioral context, but lacks details on what happens on success/failure, response format, or error cases. For a creation tool in a system with siblings like remove/update operations, more completeness would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema: it emphasizes that MixinId must come from GetMixinsInfo and column IDs should be used, but doesn't explain parameter interactions or provide additional semantics. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a mixin value for a node') with a specific resource ('mixin value'), distinguishing it from siblings like 'core_models_create_mixin_type' (which creates mixin types) and 'core_models_create_node' (which creates nodes). However, it doesn't explicitly differentiate from 'core_models_update_node' which might also handle mixin values, leaving room for slight ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance in the 'IMPORTANT NOTES' section: it specifies when to use (after retrieving MixinId from GetMixinsInfo tool), prerequisites (MixinId must be retrieved from that tool), and alternatives are implied by sibling tools like 'core_models_update_node' for updates. It also includes exclusions (always use column ID, not other identifiers), making it comprehensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
core_models_create_node (A)
Create a new node in a CoreModels project.
IMPORTANT NOTES:
- NodeType can be: Element, Type, Taxonomy, Exemplar, Component, Space, Tag, or Mixin.
- If CheckBeforeCreate is true, the system checks for existing nodes and returns the result.
- SpaceIds determines which spaces the node belongs to:
  - If omitted or empty, the node is created in the default space.
  - When creating a Space node, do NOT pass SpaceIds.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Authentication token | |
| nodeCreateDto | Yes | Node creation payload | |
| graphProjectId | Yes | CoreModels project ID | |
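The notes constrain NodeType to a fixed set and restrict SpaceIds for Space nodes. A payload sketch, with field names as assumptions:

```python
# Hypothetical nodeCreateDto payload -- field names are illustrative
# assumptions. The NodeType value set comes from the tool description.
VALID_NODE_TYPES = {"Element", "Type", "Taxonomy", "Exemplar",
                    "Component", "Space", "Tag", "Mixin"}

node_create_dto = {
    "name": "Centrifugal Pump",
    "nodeType": "Element",      # must be one of VALID_NODE_TYPES
    "checkBeforeCreate": True,  # check for existing nodes and return the result
    "spaceIds": ["space-01"],   # omit entirely when nodeType == "Space"
}
assert node_create_dto["nodeType"] in VALID_NODE_TYPES

call = {
    "token": "<auth-token>",
    "graphProjectId": "<project-guid>",
    "nodeCreateDto": node_create_dto,
}
```

Omitting `spaceIds` (or passing an empty list) would place the node in the default space.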
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context about the CheckBeforeCreate functionality (checking for existing nodes) and SpaceIds behavior (default space handling, restriction for Space nodes), which goes beyond the input schema. However, it doesn't cover important aspects like authentication requirements (token usage), potential side effects, error handling, or what the tool returns, leaving gaps for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear main statement followed by IMPORTANT NOTES in bullet points. Each sentence earns its place by providing essential information. It could be slightly more concise by integrating the notes more seamlessly, but overall it's efficient and front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a mutation tool with 3 parameters, nested objects, no annotations, and no output schema), the description is moderately complete. It covers key behavioral aspects and parameter semantics well, but lacks information about return values, error conditions, authentication implications, and differentiation from sibling tools. For a creation tool without output schema, more detail on what happens after creation would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the baseline is 3. The description adds meaningful semantic context beyond the schema: it explains the purpose of CheckBeforeCreate ('checks for existing nodes and returns the result'), clarifies SpaceIds behavior ('If omitted or empty, the node is created in the default space'), and provides a critical restriction ('When creating a Space node, do NOT pass SpaceIds'). This significantly enhances understanding of how parameters affect behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a new node') and resource ('in a CoreModels project'), making the purpose immediately understandable. However, it doesn't explicitly differentiate this tool from sibling tools like 'core_models_bulk_create' or 'core_models_create_relation', which would be needed for a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some implied usage context through the IMPORTANT NOTES section (e.g., when to omit SpaceIds for Space nodes), but it doesn't explicitly state when to use this tool versus alternatives like 'core_models_bulk_create' for multiple nodes or 'core_models_create_relation' for relationships. The guidance is helpful but incomplete.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
core_models_create_relation (A)
Create a relation between two nodes in a CoreModels project.
IMPORTANT NOTES:
- Relation has a direction: FromNodeId → ToNodeId.
- RelationGroupId is the template for the relation.
- Use GetRelationGroupsInfo to get the list of available relation groups for a project.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Authentication token | |
| relation | Yes | Relation creation payload | |
| graphProjectId | Yes | CoreModels project ID | |
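The directional rule above can be captured in a payload sketch; field names are assumptions drawn from the notes:

```python
# Hypothetical relation payload -- field names are illustrative
# assumptions. Direction matters: the relation points
# FromNodeId -> ToNodeId, and relationGroupId should come from
# core_models_get_relation_groups_info.
relation = {
    "fromNodeId": "node-pump",
    "toNodeId": "node-valve",
    "relationGroupId": "rg-feeds",  # the template for this relation
}

call = {
    "token": "<auth-token>",
    "graphProjectId": "<project-guid>",
    "relation": relation,
}
```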
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It explains the directional nature of relations and references another tool for template discovery, but doesn't disclose critical behavioral aspects like whether this is an idempotent operation, what permissions are required, error conditions, or what happens if nodes don't exist. The description adds some value but leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear purpose statement followed by IMPORTANT NOTES. Every sentence earns its place by providing essential information without redundancy. The formatting with bullet points enhances readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with no annotations and no output schema, the description provides basic directional context and references to other tools, but doesn't address important aspects like what the tool returns, error handling, or prerequisites. Given the complexity of creating graph relations, more completeness would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all parameters thoroughly. The description adds minimal value by clarifying directionality (FromNodeId → ToNodeId) and mentioning RelationGroupId as a template, but doesn't provide additional semantic context beyond what's in the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a relation') and resource ('between two nodes in a CoreModels project'), which distinguishes it from sibling tools like create_node or remove_relation. However, it doesn't explicitly differentiate from bulk_create or other relation tools beyond the basic verb.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The IMPORTANT NOTES section provides clear context about directionality and references GetRelationGroupsInfo for obtaining relation templates, which helps guide usage. It doesn't explicitly state when NOT to use this tool or mention alternatives like bulk_create for multiple relations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
core_models_fetch_nodes (A)
Fetch multiple nodes from a CoreModels project with filtering options. Returns paged data. Requires passing at least one of the following properties: nodeIds, nodeType, or expression.
PAGINATION:
- First request: leave pagingToken null.
- Check the response: if "lastPage" is false, make another request using the fresh "pagingToken" from the response.
- Each response provides a NEW pagingToken; always use the latest one, never reuse old tokens.
- Stop when "lastPage" is true.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Authentication token | |
| nodeIds | No | | |
| nodeType | No | Filter by node type | |
| pageSize | No | Number of nodes per page | |
| sortAttr | No | Attribute to sort results by | |
| sortDesc | No | | |
| spaceIds | No | Comma-separated space IDs to filter by. Use IDs, not space names. | |
| expression | No | Search by partial node name. Do NOT use wildcard characters. | |
| pagingToken | No | Pagination token from previous response. Leave null for first request. Important: Always use the last token from the most recent response. | |
| includeMixins | No | Include node mixins in results | |
| graphProjectId | Yes | CoreModels project ID | |
| includeRelations | No | Include node relations in results | |
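The documented pagination workflow can be sketched as a loop. Here `fetch_page` stands in for a call to core_models_fetch_nodes with the filter arguments already bound; the response keys `lastPage` and `pagingToken` follow the description, while `nodes` as the name of the data field is an assumption:

```python
# Sketch of the documented pagination loop. `fetch_page` is a stand-in
# for a core_models_fetch_nodes call; "lastPage" and "pagingToken"
# follow the tool description, "nodes" is an assumed data field.
def fetch_all_nodes(fetch_page):
    nodes, paging_token = [], None  # first request: pagingToken is null
    while True:
        resp = fetch_page(paging_token)
        nodes.extend(resp.get("nodes", []))
        if resp["lastPage"]:
            break                           # stop when lastPage is true
        paging_token = resp["pagingToken"]  # always use the freshest token
    return nodes
```

Note that each iteration replaces `paging_token` wholesale, which satisfies the "never reuse old tokens" rule.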
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and excels at disclosing behavioral traits. It clearly explains pagination mechanics (first request, token usage, stopping condition), specifies that it 'returns paged data,' and provides detailed pagination workflow that goes beyond what parameters alone would indicate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and front-loaded: first sentence states core purpose, second adds critical constraint, then dedicated PAGINATION section with clear workflow. Every sentence earns its place with zero wasted words, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 12 parameters, no annotations, and no output schema, the description does remarkably well by covering pagination behavior, filtering requirements, and response handling. It misses explaining what the actual returned data structure looks like, but otherwise provides substantial operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is high at 83%, establishing a baseline of 3. The description adds some value by emphasizing the requirement for 'at least one of nodeIds, nodeType, or expression' and providing pagination context for pagingToken, but doesn't significantly enhance understanding of individual parameters beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('fetch multiple nodes') and resource ('from a CoreModels project') with filtering options. It distinguishes from siblings like create/remove/update nodes by focusing on retrieval, but doesn't explicitly contrast with other fetch-like tools since none are listed among siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit usage guidance by stating 'Requires passing at least one of: nodeIds, nodeType, or expression' which indicates when this tool is appropriate. However, it doesn't explicitly state when to use this vs. alternatives or mention any prerequisites beyond the required parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
core_models_get_mixins_info (C)
Get information about all mixins in a CoreModels project. Mixins define structured metadata used for mapping models, system integrations, and other extensions. Use FetchNode tool documentation to understand how mixins are attached to nodes.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Authentication token | |
| graphProjectId | Yes | CoreModels project ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a 'Get' operation, implying read-only behavior, but doesn't cover critical aspects like authentication requirements (though parameters hint at it), rate limits, error conditions, or what the output contains. The reference to external documentation adds some context but is insufficient for standalone transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences: the first states the purpose clearly, and the second provides supplementary context about mixins and references external documentation. It's front-loaded with the core function, though the second sentence could be more directly actionable for tool usage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a tool that retrieves project metadata with no annotations and no output schema, the description is incomplete. It lacks details on return values, error handling, or behavioral constraints, relying too heavily on external references without providing essential usage context for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, clearly documenting both required parameters ('token' for authentication and 'graphProjectId' for project identification). The description adds no additional parameter details beyond what the schema provides, so it meets the baseline score of 3 for high schema coverage without compensating value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get information about') and resource ('all mixins in a CoreModels project'), with additional context about what mixins are. However, it doesn't explicitly differentiate from sibling tools like 'core_models_get_relation_groups_info' or 'core_models_project_content_summary' that might also retrieve project metadata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance by referencing the 'FetchNode tool documentation' for understanding mixin attachment, but it doesn't specify when to use this tool versus alternatives (e.g., for mixin info vs. other project data), nor does it mention prerequisites or exclusions. This leaves usage context largely implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
core_models_get_relation_groups_info (C)
Get information about relation groups for a CoreModels project. Use the relation groups to create a relation between two nodes.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Authentication token | |
| graphProjectId | Yes | CoreModels project ID | |
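To make the two-parameter call shape concrete, the sketch below builds the arguments an agent would send to this tool. The values are placeholders, the helper name is invented here, and the surrounding MCP transport envelope is omitted.

```python
import json

def build_relation_groups_args(token: str, graph_project_id: str) -> str:
    """Build the JSON arguments for core_models_get_relation_groups_info.

    Both parameters are required; the values passed in are placeholders.
    """
    args = {
        "token": token,                      # Authentication token
        "graphProjectId": graph_project_id,  # CoreModels project ID
    }
    return json.dumps(args)

payload = build_relation_groups_args("<auth-token>", "<project-id>")
```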
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it indicates this is a read operation ('Get information'), it doesn't describe what information is returned, whether there are rate limits, authentication requirements beyond the token parameter, or any side effects. The second sentence about creating relations is contextual but doesn't explain the tool's own behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences. The first sentence directly states the purpose, and the second adds useful context about relation groups. There's no wasted verbiage, though the structure could be slightly improved by front-loading the context about creating relations if it's critical usage guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description is incomplete. It doesn't explain what information is returned about relation groups (e.g., list of groups, their properties), nor does it cover behavioral aspects like error conditions or pagination. Given the complexity implied by sibling tools that manipulate relations, more detail is needed about what this read operation provides.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters (token and graphProjectId) clearly documented in the schema. The description adds no additional parameter information beyond what the schema provides, such as format examples or constraints. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description reads 'Get information about relation groups for a CoreModels project', which provides a clear verb ('Get information') and resource ('relation groups'). However, it doesn't differentiate from sibling tools like core_models_get_mixins_info or core_models_fetch_nodes, which also retrieve information about different resources. The second sentence about using relation groups to create relations adds context but doesn't further clarify the tool's specific purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions that relation groups can be used to create relations, but doesn't specify if this tool should be used before core_models_create_relation or how it differs from other information-retrieval siblings like core_models_get_mixins_info. There's no explicit when/when-not guidance or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
core_models_import_jsonschema (Grade: C)
Imports a JSON Schema into a space
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Authentication token | |
| importDto | No | | |
| graphProjectId | Yes | CoreModels project ID | |
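The following sketch shows what a call to this tool might look like. Because importDto is undocumented in the input schema, the shape used here, an embedded JSON Schema document, is an assumption, not a confirmed payload format.

```python
import json

# A small JSON Schema document to import; purely illustrative.
schema_doc = {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "title": "Person",
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer", "minimum": 0},
    },
    "required": ["name"],
}

# Hypothetical arguments for core_models_import_jsonschema. The
# importDto shape is assumed, since the schema leaves it undescribed.
arguments = {
    "token": "<auth-token>",           # required
    "graphProjectId": "<project-id>",  # required
    "importDto": schema_doc,           # optional; assumed payload shape
}

payload = json.dumps(arguments)
```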
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. While 'Imports' implies a write operation, the description doesn't address important behavioral aspects like whether this requires specific permissions, whether it overwrites existing schemas, what happens on failure, or any rate limits. The description is too minimal for a mutation tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at just seven words, front-loading the core purpose without any wasted words. Every word contributes directly to understanding the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 3 parameters, no annotations, and no output schema, the description is inadequate. It doesn't explain what 'importing a JSON Schema' means operationally, what the expected outcome is, or provide any behavioral context. The minimal description leaves too many questions unanswered for proper tool selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 67% (2 of 3 top-level parameters have descriptions), so the schema provides substantial documentation. The description adds no parameter-specific information beyond what's in the schema - it doesn't explain the relationship between parameters or provide usage context. Baseline 3 is appropriate when schema does most of the work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Imports') and resource ('JSON Schema into a space'), making the purpose understandable. However, it doesn't differentiate this tool from sibling tools like 'core_models_create_node' or 'core_models_bulk_create' which also involve data creation/import operations, so it doesn't reach the highest clarity level.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There's no mention of prerequisites, when this tool is appropriate versus other creation/import tools in the sibling list, or any contextual constraints for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
core_models_project_content_summary (Grade: C)
Returns the labels and IDs of all types, elements, and taxonomies in the project.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Authentication token | |
| graphProjectId | Yes | CoreModels project ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states this is a read operation ('Returns'), but lacks details on permissions, rate limits, pagination, error handling, or what format the labels and IDs are returned in. For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It is appropriately sized and front-loaded, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of interacting with a project's models and the lack of annotations and output schema, the description is incomplete. It does not explain the return format, structure, or scope of the summary, leaving the agent uncertain about what data to expect from this read operation in a context with many sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with clear documentation for both required parameters ('token' for authentication, 'graphProjectId' for project identification). The description adds no additional parameter information beyond what the schema provides, so it meets the baseline for adequate but not enhanced parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Returns') and the resources ('labels and IDs of all types, elements, and taxonomies in the project'), making the purpose specific and understandable. However, it does not explicitly distinguish this read-only summary tool from its many sibling tools that perform create, update, or remove operations, which would be needed for a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With 16 sibling tools including various fetch, create, update, and remove operations, the agent receives no explicit or implied context about when this summary tool is appropriate compared to others like 'core_models_fetch_nodes' or 'core_models_get_mixins_info'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
core_models_remove_mixin_type (Grade: A)
Remove a mixin type from a CoreModels project.
WARNING:
This operation permanently removes the mixin type.
All columns belonging to this mixin type will also be deleted.
This action cannot be undone.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Authentication token | |
| mixinTypeId | Yes | ID of the mixin type to permanently delete. WARNING: This removes the mixin type and all its columns irreversibly. | |
| graphProjectId | Yes | CoreModels project ID | |
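Because this deletion is irreversible and cascades to all of the type's columns, an agent harness might gate the call behind an explicit confirmation flag. This is a client-side defensive pattern, not a feature of the tool; the helper name below is invented for illustration.

```python
def build_remove_mixin_type_args(
    token: str,
    graph_project_id: str,
    mixin_type_id: str,
    confirmed: bool = False,
) -> dict:
    """Guarded argument builder for core_models_remove_mixin_type.

    The tool removes the mixin type and all its columns irreversibly,
    so this helper refuses to build the call unless the caller has
    explicitly confirmed. The guard is a client-side convention only.
    """
    if not confirmed:
        raise PermissionError(
            "Refusing irreversible mixin-type removal without confirmed=True"
        )
    return {
        "token": token,
        "graphProjectId": graph_project_id,
        "mixinTypeId": mixin_type_id,  # the type to permanently delete
    }
```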
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explicitly warns about permanent deletion, cascading effects (columns deleted), and irreversibility, which are critical behavioral traits for a destructive operation. This goes beyond what the input schema describes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by a clearly marked warning section. Every sentence adds critical information about the tool's behavior, with no wasted words. The structure efficiently communicates essential details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no annotations and no output schema, the description provides strong behavioral context (permanence, cascading effects) but lacks details on prerequisites, error conditions, or return values. It is mostly complete given the complexity, though some gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description does not add meaning beyond the schema, such as explaining parameter relationships or usage nuances. Baseline 3 is appropriate when the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Remove a mixin type') and the target resource ('from a CoreModels project'), distinguishing it from sibling tools like core_models_remove_mixin_value or core_models_remove_node. It precisely identifies what the tool does without being vague or tautological.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like core_models_remove_mixin_value or core_models_remove_node, nor does it mention prerequisites or context for its use. The warning section discusses consequences but not usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
core_models_remove_mixin_value (Grade: A)
Remove one or more mixin column values from a node.
IMPORTANT NOTES:
mixinId must be retrieved from GetMixinsInfo tool.
mixinColumnsIds must contain the IDs of the mixin columns to remove.
Removing a mixin column value does not delete the mixin definition itself.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Authentication token | |
| graphProjectId | Yes | CoreModels project ID (GUID) | |
| mixinRemoveDto | Yes | Mixin value removal payload | |
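A call might be assembled as sketched below. The mixinId and mixinColumnsIds fields come from the IMPORTANT NOTES above; the nodeId field is an assumption, since values are removed "from a node" but the dto's full shape is not documented here.

```python
import json

# Sketch of a core_models_remove_mixin_value call. nodeId is an
# assumed field; mixinId must be retrieved from GetMixinsInfo.
arguments = {
    "token": "<auth-token>",
    "graphProjectId": "3f2b6c1e-0d4a-4e8b-9c21-7a5f00000000",  # GUID placeholder
    "mixinRemoveDto": {
        "nodeId": "<node-id>",                 # assumed field name
        "mixinId": "<id from GetMixinsInfo>",  # per the tool's notes
        "mixinColumnsIds": ["<col-id-1>", "<col-id-2>"],
    },
}

payload = json.dumps(arguments)
```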
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It clearly indicates this is a destructive mutation (removing values) and specifies important constraints (mixinId source requirement, distinction from definition deletion). However, it doesn't mention authentication requirements, potential side effects, error conditions, or what happens to the node after removal. The behavioral disclosure is adequate but incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured with a clear purpose statement followed by bullet-pointed IMPORTANT NOTES. Every sentence earns its place - the first sentence states the core action, and each bullet addresses a critical constraint or clarification. No wasted words, front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with no annotations and no output schema, the description provides adequate but incomplete context. It covers the core purpose and key constraints, but lacks information about authentication requirements (though token parameter is in schema), error handling, what constitutes successful removal, or what the tool returns. Given the complexity of removing values from nodes, more behavioral context would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all 3 parameters. The description adds minimal value beyond the schema - it reinforces that mixinId must come from GetMixinsInfo and mixinColumnsIds are IDs to remove, but doesn't provide additional semantic context about parameter relationships or usage patterns. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Remove one or more mixin column values'), target resource ('from a node'), and distinguishes from siblings like 'core_models_remove_mixin_type' (which removes definitions) and 'core_models_remove_node' (which removes entire nodes). The verb+resource combination is precise and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The IMPORTANT NOTES section provides explicit prerequisites (mixinId must come from GetMixinsInfo tool) and clarifies what this tool does NOT do (doesn't delete mixin definitions). However, it doesn't explicitly state when to use this versus alternatives like 'core_models_update_node' for modifying values or 'core_models_remove_mixin_type' for removing definitions entirely.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
core_models_remove_node (Grade: A)
Remove (suspend) a node in a CoreModels project. The node can be restored later using the RestoreNode tool.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Authentication token | |
| nodeId | Yes | ID of the node to suspend. Node can be restored later using core_models_restore_node. | |
| graphProjectId | Yes | CoreModels project ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden of behavioral disclosure. While it clarifies the operation is a 'suspend' rather than permanent deletion and mentions restorability, it doesn't address important behavioral aspects like required permissions, side effects on related data, error conditions, or what the tool returns. For a destructive operation with zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that both earn their place: the first states the core purpose, and the second provides crucial behavioral context about restorability. No wasted words or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive operation with no annotations and no output schema, the description does the minimum viable job. It clarifies the 'suspend' nature and mentions restorability, but doesn't address return values, error handling, or system impacts. Given the complexity of node removal in a graph project, more completeness would be expected.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, maintaining the baseline score of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Remove (suspend)') and resource ('a node in a CoreModels project'), providing specific verb+resource pairing. However, it doesn't explicitly differentiate from sibling tools like core_models_remove_relation or core_models_remove_mixin_type, which would be needed for a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by mentioning the node can be restored later using RestoreNode, which implies this is a reversible operation rather than permanent deletion. However, it doesn't explicitly state when to use this versus alternatives like core_models_update_node or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
core_models_remove_relation (Grade: B)
Remove a relation between two nodes in a CoreModels project.
| Name | Required | Description | Default |
|---|---|---|---|
| token | No | Authentication token | |
| relation | No | Relation to remove | |
| graphProjectId | No | CoreModels project ID | |
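A call sketch follows, using the nested field names that the review's parameter-semantics note attributes to this tool's schema (id, fromNodeId, toNodeId, relationGroupId). Which of those nested fields are actually required is not stated, so all are shown with placeholder values.

```python
import json

# Sketch of a core_models_remove_relation call. Nested field names
# follow the schema fields noted in this review; requiredness of each
# is an open question, so all are populated here.
arguments = {
    "token": "<auth-token>",
    "graphProjectId": "<project-id>",
    "relation": {
        "id": "<relation-id>",
        "fromNodeId": "<source-node-id>",
        "toNodeId": "<target-node-id>",
        "relationGroupId": "<id from GetRelationGroupsInfo>",
    },
}

payload = json.dumps(arguments)
```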
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states the action is 'Remove', implying a destructive mutation, but doesn't specify if this requires special permissions, is reversible, has side effects (e.g., cascading deletions), or what happens on success/failure. For a destructive tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without redundancy. It's appropriately sized for a straightforward removal operation and front-loads the key action ('Remove'). Every word earns its place, with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (destructive mutation, 3 parameters with nested objects, no output schema, and no annotations), the description is minimally adequate but incomplete. It covers the basic purpose but lacks behavioral context, usage guidelines, and output expectations. Without annotations or output schema, the agent must rely heavily on the input schema for execution details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters (id, fromNodeId, toNodeId, relationGroupId, token, relation, graphProjectId). The description adds no additional meaning beyond what's in the schema—it doesn't explain parameter relationships, dependencies, or usage nuances. Baseline 3 is appropriate when the schema does all the work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Remove') and target ('a relation between two nodes in a CoreModels project'), which is specific and unambiguous. It doesn't explicitly differentiate from sibling tools like 'core_models_remove_node' or 'core_models_remove_mixin_type', but the focus on 'relation' is clear enough to distinguish it from other removal operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing relation), exclusions, or comparisons to sibling tools like 'core_models_create_relation' or 'core_models_remove_node'. The agent must infer usage solely from the tool name and input schema.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
core_models_restore_node (Grade: B)
Restore a suspended node in a CoreModels project. Use this tool to bring back a previously suspended (soft-deleted) node.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Authentication token | |
| nodeId | Yes | ID of the suspended node to restore | |
| graphProjectId | Yes | CoreModels project ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the node is 'suspended (soft-deleted),' which adds useful context about the node's state. However, it doesn't disclose critical behavioral traits such as required permissions, whether the restoration is reversible, potential side effects (e.g., on related relations or mixins), or rate limits. For a mutation tool with zero annotation coverage, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, and the second sentence adds clarifying context. Both sentences earn their place by providing essential information without redundancy or fluff. It's efficient and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a mutation tool (restoring a node) with no annotations and no output schema, the description is incomplete. It lacks details on behavioral aspects like permissions, reversibility, side effects, and error handling. While the purpose and usage are clear, the description doesn't provide enough context for safe and effective use in a production environment, especially compared to siblings that involve data modification.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters (token, graphProjectId, nodeId) with descriptions. The description doesn't add any parameter-specific semantics beyond what's in the schema, such as format details or usage examples. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't need to given the schema's completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Restore') and resource ('a suspended node in a CoreModels project'), making the purpose specific and understandable. It distinguishes the tool from siblings like 'core_models_remove_node' by focusing on restoration rather than deletion or creation. However, it doesn't explicitly differentiate from all siblings (e.g., 'core_models_update_node' might also modify node states).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: 'to bring back a previously suspended (soft-deleted) node.' This implies it's for nodes that have been soft-deleted, not for creating new nodes or other operations. It doesn't explicitly mention when not to use it or name alternatives, but the context is sufficient to guide usage relative to siblings like 'core_models_create_node' or 'core_models_remove_node'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
core_models_update_node (Grade: C)
Update an existing node in a CoreModels project.
IMPORTANT NOTES:
You can update the node label and/or the spaces the node belongs to.
Fields not provided will remain unchanged.
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | Authentication token | |
| nodeUpdateDto | Yes | Node update payload | |
| graphProjectId | Yes | CoreModels project ID | |
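The partial-update semantics described in the IMPORTANT NOTES can be illustrated as below. Per the notes, only the label and the node's spaces can change, and omitted fields stay untouched; the field names inside nodeUpdateDto (nodeId, label, spaceIds) are assumptions, not a confirmed schema.

```python
import json

# Sketch of a partial update via core_models_update_node. Field names
# inside nodeUpdateDto are assumed. Omitting a field leaves it unchanged.
arguments = {
    "token": "<auth-token>",
    "graphProjectId": "<project-id>",
    "nodeUpdateDto": {
        "nodeId": "<node-id>",    # assumed field name
        "label": "Renamed node",  # new label; omit to keep the old one
        # "spaceIds" omitted -> space membership remains unchanged
    },
}

payload = json.dumps(arguments)
```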
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that only label and spaces can be updated and that unspecified fields remain unchanged, which is helpful. However, it lacks critical details like whether this requires specific permissions, if updates are reversible, potential side effects (e.g., impact on relations), or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences: a clear purpose statement and important notes. It's front-loaded with the core action and wastes no words, though the formatting with all-caps 'IMPORTANT NOTES' could be slightly refined for better flow.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (mutation tool with nested objects, no annotations, no output schema), the description is moderately complete. It covers the basic update scope and partial-update behavior but lacks details on permissions, error handling, return values, or how it differs from sibling tools, leaving gaps for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value by hinting at the updatable fields (label and spaces) and the partial-update behavior, but doesn't provide additional semantics beyond what's in the schema (e.g., format details for spaceIds).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and resource ('an existing node in a CoreModels project'), making the purpose immediately understandable. However, it doesn't differentiate this tool from sibling tools like 'core_models_restore_node' or 'core_models_remove_node' beyond the basic verb, and offers no explicit distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing node ID), compare to sibling tools like 'core_models_create_node' or 'core_models_restore_node', or specify any exclusions (e.g., cannot update certain fields).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
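Before publishing, it can help to sanity-check the file locally. The sketch below validates the two documented fields, assuming (as stated above) that the maintainer email must match the Glama account email; the validation function itself is hypothetical, not part of any Glama tooling:

```python
import json

def validate_glama_manifest(raw: str, account_email: str) -> bool:
    """Check a /.well-known/glama.json payload against the documented shape."""
    data = json.loads(raw)
    maintainers = data.get("maintainers", [])
    # At least one maintainer entry must carry the Glama account email.
    return any(m.get("email") == account_email for m in maintainers)

manifest = json.dumps({
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
})
print(validate_glama_manifest(manifest, "your-email@example.com"))  # True
```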
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.