docs-mcp
Server Details
Provides Vaadin documentation and help with development tasks
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: vaadin/vaadin-mcp
- GitHub Stars: 13
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
11 tools

get_component_java_api: Get Component Java API
Returns the Java API documentation for a specific Vaadin component. The component name can be in any format (e.g., 'Button', 'button', 'vaadin-button').
| Name | Required | Description | Default |
|---|---|---|---|
| component_name | Yes | The name of the component (e.g., 'Button', 'button', 'TextField', 'text-field') | |
| vaadin_version | Yes | Vaadin version: "7", "8", "14", "24", "25.0", "25.1", or "25.2". | |
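The parameter table above maps directly onto an MCP `tools/call` request. A minimal sketch, assuming a plain JSON-RPC 2.0 envelope (the Streamable HTTP framing is left to the client library and omitted here):

```python
import json

def make_tool_call(name, arguments, request_id=1):
    """Serialize an MCP `tools/call` request as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# The component name is accepted in any format
# ('Button', 'button', 'vaadin-button').
body = make_tool_call("get_component_java_api", {
    "component_name": "Button",
    "vaadin_version": "24",
})
```

The same envelope shape applies to the sibling tools below; only the `name` and `arguments` fields change.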
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes the return type (Java API documentation) and input flexibility, but lacks details on output format, error handling, rate limits, or authentication needs. For a tool with no annotations, this leaves significant gaps in understanding how it behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and includes a helpful note on input flexibility. Every word contributes value, with no redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 required parameters, no output schema, no annotations), the description is adequate but incomplete. It covers the purpose and input semantics well, but lacks details on output behavior, errors, or usage context, which are needed for full agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds minimal value by noting the component name can be in 'any format', which slightly clarifies the schema's examples. This meets the baseline for high schema coverage without significant enhancement.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Returns') and resource ('Java API documentation for a specific Vaadin component'), distinguishing it from sibling tools like get_component_react_api or get_component_web_component_api by specifying the API type. It also mentions format flexibility for the component name, adding useful context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when Java API documentation is needed for a Vaadin component, but it does not explicitly state when to use this tool versus alternatives like get_component_react_api or get_full_document. No exclusions or prerequisites are provided, leaving the agent to infer context from tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_component_react_api: Get Component React API
Returns the React API documentation for a specific Vaadin component. The component name can be in any format (e.g., 'Button', 'button', 'vaadin-button').
| Name | Required | Description | Default |
|---|---|---|---|
| component_name | Yes | The name of the component (e.g., 'Button', 'button', 'TextField', 'text-field') | |
| vaadin_version | Yes | Vaadin version: "7", "8", "14", "24", "25.0", "25.1", or "25.2". | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but only states what the tool returns without disclosing behavioral traits like error handling, response format, data freshness, or rate limits. It adds some context about component name format acceptance, but lacks details on what the output looks like or potential limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the core purpose and efficiently adds useful detail about component name format. Every word earns its place with no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with no annotations and no output schema, the description adequately covers the basic purpose but lacks completeness regarding behavioral transparency and output expectations. It doesn't compensate for the absence of structured output information, leaving gaps in understanding what the tool actually returns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds minimal value by reiterating component name format examples and implying vaadin_version is required, but doesn't provide additional semantic context beyond what's in the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Returns') and resource ('React API documentation for a specific Vaadin component'), distinguishing it from siblings like get_component_java_api or get_component_web_component_api by specifying the API type (React). It also adds nuance about component name format flexibility.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when React API docs are needed for a Vaadin component, but provides no explicit guidance on when to choose this over alternatives like get_component_java_api or get_component_web_component_api. It mentions component name format flexibility, which is helpful but not a full usage guideline.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_components_by_version: Get Components by Version
Returns a comprehensive list of components available in a specific Vaadin version, including component names, React component names, Java class names, and npm packages.
| Name | Required | Description | Default |
|---|---|---|---|
| version | Yes | The Vaadin version as a minor version (e.g., '24.8', '24.9', '25.0') | |
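Note that, unlike the other tools on this page, `version` here expects a minor version string rather than a bare major. A hypothetical client-side guard for that format (not part of the server itself):

```python
import re

# The schema expects a minor version like '24.8' or '25.0';
# a bare major such as '24' does not match the documented format.
MINOR_VERSION = re.compile(r"\d+\.\d+")

def is_minor_version(version):
    """Return True if `version` looks like 'MAJOR.MINOR'."""
    return MINOR_VERSION.fullmatch(version) is not None

valid = is_minor_version("25.0")
invalid = is_minor_version("25")
```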
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns a 'comprehensive list' but doesn't specify if it's paginated, rate-limited, requires authentication, or what happens on errors. For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior beyond the basic read operation implied by 'Returns.'
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and details the scope of returned information. Every part earns its place without redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is adequate but not complete. It covers the purpose and scope well, but lacks details on behavioral aspects like error handling or output format, which are important for a tool with no annotations or output schema. It meets minimum viability with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'version' parameter well-documented in the schema itself. The description doesn't add any parameter-specific details beyond what the schema provides, such as format examples or constraints. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Returns') and resource ('components available in a specific Vaadin version'), and distinguishes it from siblings by specifying the scope of information returned (names, React component names, Java class names, npm packages). It doesn't just restate the name/title.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'in a specific Vaadin version,' but doesn't explicitly state when to use this tool versus alternatives like 'get_component_java_api' or 'search_vaadin_docs.' It provides basic context but lacks explicit guidance on exclusions or comparisons with sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_component_styling: Get Component Styling
Returns the styling/theming documentation for a specific Vaadin component. Returns both Java and React styling documentation when available. The component name can be in any format (e.g., 'Button', 'button', 'vaadin-button').
| Name | Required | Description | Default |
|---|---|---|---|
| component_name | Yes | The name of the component (e.g., 'Button', 'button', 'TextField', 'text-field') | |
| vaadin_version | Yes | Vaadin version: "7", "8", "14", "24", "25.0", "25.1", or "25.2". | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool returns documentation for both Java and React when available, which adds context beyond a simple read operation. However, it doesn't cover aspects like error handling, rate limits, authentication needs, or response format details, leaving behavioral gaps for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by additional details in a second sentence. Both sentences are necessary and efficient, with no redundant or wasted words, making it appropriately sized and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description provides basic purpose and parameter context but lacks details on return values, error cases, or behavioral traits. For a tool with 2 parameters and 100% schema coverage, it's minimally adequate but incomplete in covering all contextual aspects an agent might need.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the two parameters. The description adds minor context by noting 'The component name can be in any format (e.g., 'Button', 'button', 'vaadin-button'),' which clarifies input flexibility but doesn't significantly enhance meaning beyond the schema. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns the styling/theming documentation for a specific Vaadin component.' It specifies the resource (Vaadin component styling/theming documentation) and the verb (returns). However, it doesn't explicitly differentiate from siblings like 'get_theme_css_properties' or 'get_component_java_api', which might also relate to styling or documentation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by mentioning 'Returns both Java and React styling documentation when available,' suggesting it's for retrieving styling info across frameworks. However, it lacks explicit guidance on when to use this tool versus alternatives like 'get_component_java_api' or 'get_theme_css_properties,' and doesn't specify prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_component_web_component_api: Get Component Web Component (TypeScript) API
Returns the Web Component/TypeScript API documentation for a specific Vaadin component by fetching from external TypeScript API docs. The component name can be in any format (e.g., 'Button', 'button', 'vaadin-button').
| Name | Required | Description | Default |
|---|---|---|---|
| component_name | Yes | The name of the component (e.g., 'Button', 'button', 'TextField', 'text-field') | |
| vaadin_version | Yes | Vaadin version: "7", "8", "14", "24", "25.0", "25.1", or "25.2". | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions fetching from external docs but doesn't describe error handling, rate limits, authentication needs, response format, or what happens with invalid inputs. For a tool with external dependencies and no annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences with zero waste. The first sentence states the core purpose, and the second provides important normalization details about component name formats. Every element earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 2 parameters, 100% schema coverage, but no annotations and no output schema, the description is adequate but incomplete. It covers the basic purpose and parameter context well, but lacks behavioral details about the fetch operation, error handling, and response format that would be needed for full contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing complete parameter documentation. The description adds marginal value by reinforcing the component name format flexibility ('any format') and mentioning 'external TypeScript API docs' context, but doesn't provide additional syntax or format details beyond what the schema already covers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Returns the Web Component/TypeScript API documentation') and resource ('for a specific Vaadin component'), distinguishing it from siblings like get_component_java_api or get_component_react_api by specifying the API type. It also provides useful normalization details about component name formats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'fetching from external TypeScript API docs' and specifying component name formats, but it doesn't explicitly state when to use this tool versus alternatives like get_component_react_api or get_full_document. No exclusions or prerequisites are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_full_document: Get Full Document
Retrieves complete documentation pages for one or more file paths. Use this when you need full context beyond what search results provide. Provide file_paths only (array).
| Name | Required | Description | Default |
|---|---|---|---|
| file_paths | Yes | Array of file paths from search results. Use this to fetch one or more documents in a single call. | |
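Since `file_paths` must always be an array, even when fetching a single document, a small hypothetical helper can normalize the argument before the call (the paths shown are illustrative, not real documentation paths):

```python
def normalize_file_paths(paths):
    """Coerce a single path string into the required array form."""
    if isinstance(paths, str):
        return [paths]
    return list(paths)

# A single path from a search result is wrapped; a list passes through.
single = normalize_file_paths("building-apps/forms.adoc")   # hypothetical path
several = normalize_file_paths(["a.adoc", "b.adoc"])        # hypothetical paths
```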
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool retrieves 'complete documentation pages' and handles 'one or more file paths,' but lacks details on permissions, rate limits, or response format. This is adequate for a read operation but misses richer behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and concise with two sentences that earn their place: the first defines purpose and usage, the second specifies parameter constraints. There is zero waste, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity is low (single parameter, read operation) and no output schema exists, the description is mostly complete. It covers purpose, usage, and parameter constraints, but lacks details on response format or error handling, which would enhance completeness for a retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single parameter 'file_paths' as an array of strings. The description adds minimal value by restating 'Provide file_paths only (array),' which doesn't enhance semantics beyond the schema. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieves complete documentation pages') and resource ('for one or more file paths'), distinguishing it from siblings like search_vaadin_docs by emphasizing full context beyond search results. It explicitly mentions the verb 'retrieves' and the target 'documentation pages,' making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('when you need full context beyond what search results provide'), distinguishing it from search_vaadin_docs. It also specifies constraints ('Provide file_paths only'), offering clear context for usage without exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_latest_vaadin_version: Get Latest Vaadin Version
Returns the latest stable version of Vaadin as a simple JSON object. This is useful when setting up new projects, checking for updates, or when helping with dependency management. Returns: {version, released}.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
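The documented return shape is `{version, released}`. Parsing a hypothetical response body (the values below are illustrative only, not real release data):

```python
import json

# Illustrative payload; the field names follow the documented shape.
raw = '{"version": "24.9.3", "released": "2025-06-01"}'
info = json.loads(raw)
latest, released = info["version"], info["released"]
```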
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the return format ('simple JSON object' with fields 'version, released'), which is helpful. However, it does not mention potential limitations like rate limits, error conditions, or whether the data is cached/real-time. For a read-only tool with zero annotation coverage, this leaves some behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise and front-loaded: the first sentence states the core purpose, the second provides usage context, and the third specifies the return format. Every sentence earns its place with no redundant or vague language, making it highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is largely complete: it explains what the tool does, when to use it, and the return format. However, without annotations or output schema, it could benefit from mentioning any behavioral constraints (e.g., update frequency, error handling) to achieve full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, so schema description coverage is vacuously 100%. The description does not need to explain parameters, so it appropriately focuses on the tool's purpose and output. A baseline of 4 is applied since no parameters exist, and the description adds value by clarifying the return format.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Returns the latest stable version of Vaadin') and resource ('Vaadin'), distinguishing it from sibling tools like 'get_supported_vaadin_versions' (which returns multiple versions) or 'get_components_by_version' (which focuses on components). It uses precise language that leaves no ambiguity about what the tool does.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('setting up new projects, checking for updates, or when helping with dependency management'), which helps differentiate it from sibling tools that retrieve component APIs, documentation, or multiple versions. However, it does not explicitly state when NOT to use it or name specific alternatives, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_supported_vaadin_versions: Get Supported Vaadin Versions
Returns the latest stable release for each supported Vaadin major version (25, 24, 23, 14, 8, 7) with version number, release date, and whether it requires a commercial license. Useful for migration planning and understanding which versions are available.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it returns data for multiple major versions, includes specific fields (version number, release date, license requirement), and is read-only (implied by 'Returns'). It doesn't mention error handling, rate limits, or authentication needs, but for a simple query tool with zero parameters, this is reasonably comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core functionality in the first sentence, followed by a concise usage context. Every sentence adds value: the first defines the output, and the second explains its utility. There is no wasted text or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is largely complete. It clearly states what the tool does, its output fields, and use cases. However, it doesn't specify the exact format of the return data (e.g., JSON structure), which could be helpful for an agent, but this is a minor gap for such a simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the baseline is 4. The description adds no parameter information, which is appropriate since none are needed, and it doesn't detract from the schema's completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Returns the latest stable release for each supported Vaadin major version') and resource ('Vaadin major versions'), distinguishing it from siblings like 'get_latest_vaadin_version' (singular) and 'get_components_by_version' (component-focused). It specifies the exact versions covered (25, 24, 23, 14, 8, 7) and output fields (version number, release date, license requirement).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Useful for migration planning and understanding which versions are available'), which implicitly suggests alternatives like 'get_latest_vaadin_version' for a single version or 'get_components_by_version' for component details. However, it doesn't explicitly state when not to use it or name specific alternatives, keeping it at a 4.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_theme_css_properties: Get Theme CSS Properties
Returns CSS custom properties documentation for a specific Vaadin theme (Aura, Lumo, or Base styles). Use this to look up the correct CSS variables for the theme the application is using. Base style properties (--vaadin-*) are available in all themes.
| Name | Required | Description | Default |
|---|---|---|---|
| theme | Yes | The theme to get CSS custom properties for: "aura" (Vaadin 25+ default), "lumo" (classic theme), or "base" (base styles available in all themes, Vaadin 25+ only). | |
| vaadin_version | Yes | Required. Vaadin version: "7", "8", "14", "24", "25.0", "25.1", or "25.2". |
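As an illustration, a request to this tool over MCP would use the standard JSON-RPC 2.0 `tools/call` envelope; the argument values below are examples, not values from the listing:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_theme_css_properties",
    "arguments": {
      "theme": "aura",
      "vaadin_version": "25.2"
    }
  }
}
```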
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, placing full disclosure burden on the description. While it states the tool 'Returns' documentation (implying read-only), it fails to specify return format (string vs object), safety characteristics, or whether results are cached. The mention of base properties being available in all themes adds some behavioral context, but critical operational details are missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences with zero redundancy: first establishes purpose, second provides usage guidance, third offers critical technical context about base styles. Every sentence earns its place with high information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter structure with complete schema coverage and no output schema, the description adequately covers inputs. It identifies the return type conceptually (CSS properties documentation) but lacks specifics on data structure. For a lookup tool of this complexity, the description is sufficiently complete despite missing return format details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. The description adds valuable semantic context beyond the schema: it maps the enum values to their conceptual meanings (Aura as 'Vaadin 25+ default', Lumo as 'classic'), and highlights the CSS naming pattern (--vaadin-*) for base styles, aiding agent understanding of parameter implications.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states the tool 'Returns CSS custom properties documentation' with specific resource identification (Vaadin themes: Aura, Lumo, Base). It clearly distinguishes from sibling component API tools (get_component_*) by focusing on theme-level CSS variables rather than component-specific documentation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage context: 'Use this to look up the correct CSS variables for the theme the application is using.' This establishes when to invoke the tool. However, it lacks explicit differentiation from get_component_styling, which might also involve CSS, creating potential ambiguity for agents choosing between theme-level vs component-level styling queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_vaadin_primer — Vaadin Primer
IMPORTANT: Always use this tool FIRST before working with Vaadin. Returns a comprehensive primer document with current (2025+) information about modern Vaadin development. This addresses common AI misconceptions about Vaadin and provides up-to-date information about Java vs React development models, project structure, components, and best practices. Essential reading to avoid outdated assumptions. For legacy versions (7, 8, 14), returns guidance on version-specific resources.
| Name | Required | Description | Default |
|---|---|---|---|
| vaadin_version | Yes | Required. Vaadin version: "7", "8", "14", "24", "25.0", "25.1", or "25.2". For legacy versions (7, 8, 14), returns guidance on version-specific resources. |
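Since the tool takes a single enum parameter, a call is minimal. A sketch of the MCP `tools/call` payload (standard JSON-RPC 2.0 envelope; the version value is an example):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_vaadin_primer",
    "arguments": {
      "vaadin_version": "24"
    }
  }
}
```

For legacy values such as "7", "8", or "14", the response contains guidance on version-specific resources rather than the primer itself, per the schema description above.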
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the tool returns a primer document (not raw data), addresses AI misconceptions, provides up-to-date information, and handles legacy versions differently by offering guidance rather than direct content. However, it doesn't specify response format, size, or potential errors, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the most critical information ('IMPORTANT: Always use this tool FIRST'), followed by clear purpose and scope. Every sentence adds value: explaining what it returns, why it's essential, and how it handles legacy cases. There is no wasted text, and it's structured for immediate comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (educational primer with version-specific behavior), no annotations, and no output schema, the description does a good job of covering purpose, usage, and behavioral context. However, it lacks details on the output format (e.g., document structure, length) and error handling, which would be helpful for an agent to interpret results fully.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the single parameter 'vaadin_version' with its enum values and description. The description adds no additional parameter semantics beyond what's in the schema, but it reinforces the legacy version behavior mentioned in the schema description. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns a comprehensive primer document with current (2025+) information about modern Vaadin development.' It specifies the verb ('returns'), resource ('primer document'), and scope ('modern Vaadin development'), and distinguishes it from siblings by emphasizing it should be used FIRST to avoid outdated assumptions, unlike documentation lookup tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Always use this tool FIRST before working with Vaadin' and 'Essential reading to avoid outdated assumptions.' It also distinguishes when to use it versus alternatives by noting that for legacy versions, it returns guidance on version-specific resources, implicitly contrasting with other tools that might provide direct API or component information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_vaadin_docs — Search Vaadin Documentation
Search Vaadin documentation for relevant information about Vaadin development, components, and best practices. Uses hybrid semantic + keyword search. USE THIS TOOL for questions about: Vaadin components (Button, Grid, Dialog, etc.), TestBench, UI testing, unit testing, integration testing, @BrowserCallable, Binder, DataProvider, validation, styling, theming, security, Push, Collaboration Engine, PWA, production builds, Docker, deployment, performance, and any Vaadin-specific topics. When using this tool, try to deduce the correct development model from context: use "java" for Java-based views, "react" for React-based views, or "common" for both. Use get_full_document with file_paths containing the result's file_path when you need complete context.
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | The search query or question about Vaadin. Will be used to query a vector database with hybrid search (semantic + keyword). | |
| max_tokens | No | Maximum number of tokens to return (default: 1500) | |
| max_results | No | Maximum number of results to return (default: 5) | |
| ui_language | No | The UI implementation language: "java" for Java-based views, "react" for React-based views, or "common" for both. If not specified, the agent should try to deduce the correct language from context or ask the user for clarification. | |
| vaadin_version | Yes | Required. Vaadin version: "7", "8", "14", "24", "25.0", "25.1", or "25.2". |
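Putting the parameters together, a hypothetical `tools/call` request combining the required and optional fields might look like this (standard JSON-RPC 2.0 envelope; the question and values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "search_vaadin_docs",
    "arguments": {
      "question": "How do I bind form fields with Binder?",
      "ui_language": "java",
      "max_results": 5,
      "vaadin_version": "25.1"
    }
  }
}
```

The description then suggests passing a result's `file_path` to `get_full_document` when the snippet returned by the search is not enough context.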
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the search methodology ('hybrid semantic + keyword search'), context deduction requirements for ui_language, and the relationship with get_full_document for deeper context. However, it doesn't mention rate limits, authentication needs, or error behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with zero wasted sentences. It front-loads the core purpose, then provides usage guidelines, parameter guidance, and relationship to other tools—all in a compact format where every sentence serves a clear purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search with multiple parameters) and the absence of both annotations and output schema, the description provides excellent contextual completeness. It covers purpose, usage scenarios, parameter guidance, and relationships to other tools, making it sufficiently complete for an AI agent to understand and use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so parameters are well-documented in the schema itself. The description adds some value by explaining the ui_language parameter ('try to deduce the correct development model from context') and mentioning the search methodology, but doesn't provide significant additional semantics beyond what the schema already covers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search Vaadin documentation for relevant information about Vaadin development, components, and best practices.' It specifies the resource (Vaadin documentation) and method (hybrid semantic + keyword search), and distinguishes it from sibling tools like get_full_document by explaining when to use each.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines: 'USE THIS TOOL for questions about:' followed by a comprehensive list of Vaadin-specific topics. It also distinguishes from alternatives by stating 'Use get_full_document with file_paths containing the result's file_path when you need complete context,' giving clear when/when-not guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.