EOxElements
Server Details
An MCP server that provides coding agents with information about EOxElements custom elements.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: EOX-A/EOxElements
- GitHub Stars: 24
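Because the server speaks Streamable HTTP, any MCP client can connect to it directly. Below is a minimal TypeScript sketch using the official MCP SDK; the endpoint constant is a placeholder (this listing does not show the URL) and the client name is arbitrary.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the server's actual URL from the listing.
const SERVER_URL = "https://example.com/mcp";

const client = new Client({ name: "eoxelements-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(new URL(SERVER_URL)));

// Enumerate the ten tools documented below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```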
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3/5 across 10 of 10 tools scored.
Every tool has a clearly distinct purpose targeting specific aspects of EOxElements custom elements (attributes, CSS parts, properties, details, events, methods, slots, stories, and listing). There is no overlap or ambiguity—each tool name precisely indicates what it retrieves, making misselection unlikely.
All tools follow a consistent verb_noun pattern in snake_case: the nine retrieval tools share the 'get_element_' prefix, while 'list_elements' fits the same convention for enumeration. The naming is uniform and predictable, with no deviations or mixed conventions.
With 10 tools, the count is well-scoped for the server's purpose of retrieving detailed information about EOxElements custom elements. Each tool earns its place by covering different facets (e.g., attributes, events, stories), avoiding redundancy while providing comprehensive coverage.
The tool set is highly complete for retrieval operations, covering all key aspects of custom elements (attributes, properties, methods, events, CSS, slots, stories, and listing). A minor gap is the lack of create, update, or delete tools, but this is reasonable if the server is read-only, and agents can still perform core workflows effectively.
Available Tools
10 tools

get_element_attributes (Grade: C)
Get the attributes for a specific EOxElements custom element.
| Name | Required | Description | Default |
|---|---|---|---|
| tagName | Yes | The tag name of the element. | — |
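As a sketch of how an agent would invoke this tool (continuing the client set up in the connection example above): the only argument is tagName, and 'eox-map' is an assumed example tag, not one taken from this listing; real tag names come from list_elements. Because the server publishes no output schema, the result is printed raw.

```typescript
// "eox-map" is an assumed example tag; discover real tags via list_elements.
const result = await client.callTool({
  name: "get_element_attributes",
  arguments: { tagName: "eox-map" },
});
console.log(result.content); // shape is undocumented; inspect before parsing
```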
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states this is a 'Get' operation, implying read-only behavior, but doesn't clarify permissions, rate limits, error conditions, or what the output contains (e.g., format, data types). This is inadequate for a tool with no annotation support.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It's front-loaded with the core purpose and appropriately sized for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It doesn't explain what 'attributes' entail (e.g., HTML attributes vs. custom properties), the return format, or error handling. For a tool in a family with many siblings, more context is needed to distinguish it adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'tagName' documented in the schema. The description doesn't add any meaning beyond this (e.g., examples, constraints, or context for the tagName). This meets the baseline of 3 when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('attributes for a specific EOxElements custom element'), making the purpose understandable. However, it doesn't explicitly differentiate this tool from its siblings (like get_element_properties or get_element_details), which would be needed for a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With multiple sibling tools (e.g., get_element_properties, get_element_details), there's no indication of what distinguishes attributes from properties or details, leaving the agent without usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_element_css_parts (Grade: C)
Get the CSS shadow parts for a specific EOxElements custom element.
| Name | Required | Description | Default |
|---|---|---|---|
| tagName | Yes | The tag name of the element. | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states what the tool does but fails to describe key traits: whether it's read-only or mutative, what happens with invalid inputs, if there are rate limits, or what the output format looks like. This leaves significant gaps in understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is front-loaded and wastes no space, making it highly concise and well-structured for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete. It does not explain what 'CSS shadow parts' are, provide examples of output, or detail error handling. For a tool with no structured behavioral data, this minimal description fails to offer sufficient context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the tagName parameter documented as 'The tag name of the element.' The description does not add any additional meaning beyond this, such as examples or constraints. Given the high schema coverage, the baseline score of 3 is appropriate, as the schema handles the parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and the resource ('CSS shadow parts for a specific EOxElements custom element'), making the purpose understandable. However, it does not explicitly differentiate this tool from its siblings (e.g., get_element_css_properties, get_element_attributes), which would require a more specific distinction to achieve a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like get_element_css_properties or other sibling tools. It lacks context about prerequisites, such as needing a valid tagName, or any exclusions, leaving the agent without clear usage instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_element_css_properties (Grade: B)
Get the CSS custom properties for a specific EOxElements custom element.
| Name | Required | Description | Default |
|---|---|---|---|
| tagName | Yes | The tag name of the element. | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'gets' data, implying a read-only operation, but does not specify any behavioral traits such as error handling, performance considerations, or what happens if the element is not found. This leaves significant gaps in understanding how the tool behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that directly states the tool's purpose without any unnecessary words or fluff. It is front-loaded and efficiently communicates the core functionality, earning a high score for conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no output schema, no annotations), the description is minimally adequate. It explains what the tool does but lacks details on behavior, usage context, or output format, which would be needed for higher completeness in the absence of annotations or output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'tagName' documented as 'The tag name of the element.' The description does not add any additional semantic context beyond this, such as examples or format specifics, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and the target resource ('CSS custom properties for a specific EOxElements custom element'), making the purpose understandable. However, it does not explicitly differentiate this tool from its siblings (e.g., get_element_css_parts, get_element_properties), which would be needed for a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention any prerequisites, context for usage, or comparisons to sibling tools like get_element_css_parts or get_element_properties, leaving the agent without clear usage instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_element_details (Grade: B)
Get the full details for a specific EOxElements custom element.
| Name | Required | Description | Default |
|---|---|---|---|
| tagName | Yes | The tag name of the element. | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states it 'gets' details, implying a read-only operation, but doesn't clarify aspects like authentication needs, rate limits, error handling, or what 'full details' entails (e.g., format, depth). This leaves significant gaps for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words, making it efficient and easy to parse. It's appropriately sized for a simple tool with one parameter.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no output schema, no annotations), the description is minimally adequate but incomplete. It lacks details on behavioral traits and usage context, which are needed since annotations don't cover them, making it just viable but with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the parameter 'tagName' documented as 'The tag name of the element.' The description adds no additional meaning beyond this, such as examples or constraints, so it meets the baseline for high schema coverage without compensating value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('full details for a specific EOxElements custom element'), making the purpose understandable. However, it doesn't differentiate from siblings like 'get_element_attributes' or 'get_element_properties', which also retrieve specific details about elements, so it lacks sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, context, or exclusions, leaving the agent without usage instructions compared to sibling tools that fetch specific element aspects.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_element_events (Grade: C)
Get the events for a specific EOxElements custom element.
| Name | Required | Description | Default |
|---|---|---|---|
| tagName | Yes | The tag name of the element. | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states what the tool does but lacks critical details: whether this is a read-only operation, if it requires authentication, what format the events are returned in, or any rate limits. This is inadequate for a tool with no annotation support.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words. It's front-loaded with the core purpose and efficiently communicates the tool's function without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It doesn't explain what 'events' are, their structure, or the return format. For a tool retrieving data with undefined output, this leaves significant gaps for an agent to understand the tool's behavior and results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'tagName' fully documented in the schema. The description adds no additional parameter semantics beyond implying it's for a 'specific EOxElements custom element,' which aligns with the schema but doesn't provide extra value. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and target resource ('events for a specific EOxElements custom element'), making the purpose understandable. However, it doesn't differentiate this tool from its siblings (like get_element_details or get_element_properties) which also retrieve specific element data, missing full sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, context, or compare it to sibling tools like get_element_details or list_elements, leaving the agent with no usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_element_methods (Grade: C)
Get the methods for a specific EOxElements custom element.
| Name | Required | Description | Default |
|---|---|---|---|
| tagName | Yes | The tag name of the element. | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the action ('Get') but doesn't clarify if this is a read-only operation, what permissions are needed, how results are formatted, or any error conditions. For a tool with no annotations, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, direct sentence with no wasted words. It front-loads the key information ('Get the methods') and specifies the resource clearly, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what 'methods' include (e.g., function signatures, return types) or how results are structured, which is critical for an agent to use the tool effectively. With no structured data to compensate, the description falls short.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'tagName' documented as 'The tag name of the element.' The description adds no additional meaning beyond this, such as examples or constraints. Since the schema does the heavy lifting, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('methods for a specific EOxElements custom element'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_element_properties' or 'get_element_events', which would require more specificity about what 'methods' entail in this context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. With multiple sibling tools (e.g., get_element_attributes, get_element_properties), the description lacks context on how 'methods' differ from other element features or when this tool is appropriate, leaving the agent to infer usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_element_properties (Grade: C)
Get the properties for a specific EOxElements custom element.
| Name | Required | Description | Default |
|---|---|---|---|
| tagName | Yes | The tag name of the element. | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure but offers limited insight. It implies a read operation ('Get'), but doesn't disclose permissions, rate limits, error conditions, or output format. For a tool with no annotations, this leaves significant gaps in understanding how it behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, direct sentence with zero waste—it states the action and target efficiently. It's appropriately sized for a simple tool and front-loaded with essential information, making it easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It doesn't explain what 'properties' include, the return format, or any behavioral traits. For a tool with 1 parameter and multiple siblings, more context is needed to fully understand its role and output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the parameter 'tagName' is documented in the schema. The description adds no additional meaning beyond implying it's for a 'specific' element, which is redundant with the schema. Baseline 3 is appropriate as the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('properties for a specific EOxElements custom element'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_element_details' or 'get_element_attributes', which likely retrieve different aspects of elements, leaving some ambiguity about scope boundaries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. With multiple sibling tools (e.g., get_element_attributes, get_element_details), the description lacks context about what 'properties' specifically refers to or how it differs from related queries, offering minimal help for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_element_slots (Grade: C)
Get the slots for a specific EOxElements custom element.
| Name | Required | Description | Default |
|---|---|---|---|
| tagName | Yes | The tag name of the element. | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states what the tool does but doesn't describe how it behaves—e.g., whether it returns an error for invalid tag names, if it's a read-only operation, what the output format looks like, or any rate limits. This leaves significant gaps in understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words. It's appropriately sized and front-loaded, directly stating the tool's purpose without unnecessary elaboration, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a tool that retrieves element slots, the lack of annotations and output schema means the description should provide more context. It doesn't explain what 'slots' are in this context, the return format, or error handling, leaving the agent with incomplete information to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the parameter 'tagName' documented as 'The tag name of the element.' The description doesn't add any extra meaning beyond this, such as examples or constraints, so it meets the baseline for high schema coverage without compensating further.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('slots for a specific EOxElements custom element'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_element_attributes' or 'get_element_properties' beyond the resource name, which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_element_details' or 'list_elements'. It lacks context about prerequisites, such as whether the element must exist or be registered, and doesn't mention any exclusions or specific scenarios for its use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_element_stories (Grade: C)
Get the stories (examples/snippets) for a specific EOxElements custom element. This includes descriptions and vanilla JS code snippets.
| Name | Required | Description | Default |
|---|---|---|---|
| tagName | Yes | The tag name of the element. | — |
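Since stories bundle descriptions with vanilla JS snippets, an agent would typically read the returned text blocks. A minimal sketch, assuming the MCP-standard content-block shape (the server publishes no output schema) and the same assumed example tag as above:

```typescript
const stories = await client.callTool({
  name: "get_element_stories",
  arguments: { tagName: "eox-map" }, // assumed example tag
});
// Defensive cast: the response shape is not documented by the server.
const blocks = (stories.content ?? []) as Array<{ type: string; text?: string }>;
for (const block of blocks) {
  if (block.type === "text") console.log(block.text);
}
```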
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves stories, implying a read-only operation, but doesn't clarify permissions, rate limits, error handling, or output format. For a tool with zero annotation coverage, this is a significant gap, as it lacks details on safety, constraints, or response behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, stating the core purpose in the first sentence. It uses two sentences efficiently to clarify what 'stories' include ('descriptions and vanilla JS code snippets'), with no wasted words. However, it could be slightly more structured by explicitly separating purpose from details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no nested objects) and 100% schema coverage, the description is minimally adequate. However, with no annotations and no output schema, it should provide more behavioral context (e.g., output format, error cases). It covers the basic purpose but lacks completeness for safe and effective use by an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no parameter-specific information beyond what the schema provides. With 100% schema description coverage (the 'tagName' parameter is documented in the schema), the baseline is 3. The description doesn't elaborate on the 'tagName' parameter's format, examples, or constraints, so it doesn't compensate or add extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get the stories (examples/snippets) for a specific EOxElements custom element.' It specifies the verb ('Get'), resource ('stories'), and scope ('for a specific EOxElements custom element'), and distinguishes it from siblings by focusing on stories/examples rather than attributes, CSS, details, etc. However, it doesn't explicitly contrast with all siblings (e.g., 'list_elements'), keeping it at 4 instead of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context, or exclusions, nor does it reference sibling tools like 'get_element_details' or 'list_elements' for comparison. This leaves the agent without explicit usage instructions, scoring a 2 for minimal guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_elements (Grade: B)
List all available EOxElements custom elements.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
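This tool takes no arguments, so a call (with the client from the connection sketch above) is a natural first step before any of the get_element_* tools:

```typescript
// No arguments; the result should enumerate the available tag names.
const elements = await client.callTool({ name: "list_elements", arguments: {} });
console.log(elements.content);
```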
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states this is a list operation but doesn't describe what 'available' means (e.g., filtered by permissions or status), whether it's paginated, what format the output takes, or any rate limits. For a tool with zero annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose with zero wasted words. It's appropriately sized and front-loaded, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is minimally adequate but lacks context about output format or behavioral traits. It doesn't explain what 'list' entails (e.g., array of names, objects with metadata), which is a gap for a tool with no structured output documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and schema description coverage is 100%, so there's no need for parameter explanation in the description. The baseline for 0 parameters is 4, as the description appropriately doesn't waste space on non-existent parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('all available EOxElements custom elements'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_element_details' or 'get_element_properties' that might also list elements in some form, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. With multiple sibling tools focused on element details (e.g., 'get_element_details', 'get_element_properties'), the description lacks context about whether this is a high-level overview or when to choose it over more specific queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.