Broadcom Support MCP Server
Server Quality Checklist
- Disambiguation (5/5): The two tools have clearly distinct purposes: one retrieves a specific article by URL, while the other searches the knowledge base with a query. There is no overlap or ambiguity between them.
- Naming Consistency (5/5): Both tools follow a consistent verb_noun pattern (read_documentation, search_documentation), using the same naming convention and clear, descriptive verbs.
- Tool Count (2/5): With only two tools, the server feels thin for a support knowledge base domain. While the tools cover basic retrieval and search, typical support systems might benefit from additional operations like filtering by category, viewing recent articles, or handling user authentication.
- Completeness (3/5): The tools provide core read and search functionality, but there are notable gaps. For example, there is no way to browse articles by category, list trending or recent content, or manage user-specific features like saved articles or subscriptions, which could limit agent workflows.
Average 3.2/5 across 2 of 2 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
Add a LICENSE file by following GitHub's guide.
MCP servers without a LICENSE cannot be installed.
Latest release: v0.1.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
This server provides 2 tools.
No known security issues or vulnerabilities reported.
Are you the author? You can claim the server by authenticating with GitHub (see below).
Add related servers to improve discoverability.
Tool Scores
search_documentation
- Behavior (2/5)
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the return format ('list of relevant articles with titles, URLs, and snippets'), which is helpful, but does not cover other important aspects such as authentication needs, rate limits, error handling, or whether the search is paginated or limited in scope. This leaves significant gaps in understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness (5/5)
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured, consisting of two sentences that efficiently convey the tool's purpose and output. Every sentence adds value without redundancy, making it easy to understand at a glance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness (3/5)
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no annotations, but with an output schema), the description is partially complete. It explains the purpose and return values, but lacks details on behavioral traits and usage guidelines. The output schema likely covers return values, reducing the need for description, but gaps in behavioral transparency and guidelines prevent a higher score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters (3/5)
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not mention any parameters, and the schema description coverage is 0%, so it adds no semantic information beyond what the schema provides. However, with only 2 parameters and an output schema present, the baseline is 3, as the schema handles the parameter documentation adequately without requiring additional explanation in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose (4/5)
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Search') and resource ('Broadcom Support knowledge base'), and specifies what it returns ('list of relevant articles with titles, URLs, and snippets'). However, it does not explicitly differentiate from its sibling tool 'read_documentation', which likely serves a different function like reading specific articles rather than searching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines (2/5)
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as its sibling 'read_documentation'. It mentions the tool's function but lacks explicit instructions on context, prerequisites, or exclusions, leaving the agent to infer usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
read_documentation
- Behavior (2/5)
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes what the tool does ('Retrieve...') and what it returns, but it lacks details on permissions, rate limits, error handling, or other behavioral traits like whether it's read-only or has side effects. This is a significant gap for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness (5/5)
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, consisting of two efficient sentences that directly state the action and return values without any wasted words. Every sentence earns its place by providing essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness (4/5)
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter) and the presence of an output schema (which handles return value documentation), the description is mostly complete. It covers the purpose and parameter semantics adequately, but it lacks usage guidelines and behavioral details, which are needed for full contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters (4/5)
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaning beyond the input schema by specifying that the 'url' parameter should point to a 'Broadcom Support article'. With schema description coverage at 0% and only one parameter, the description effectively compensates by clarifying the parameter's purpose, though it doesn't detail format constraints (e.g., URL structure).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose (4/5)
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Retrieve') and resource ('Broadcom Support article'), and it distinguishes the scope ('full content') from the sibling tool 'search_documentation', which likely searches rather than retrieves full content. However, it doesn't explicitly differentiate the sibling's function, keeping it from a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines (2/5)
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus the sibling 'search_documentation', nor does it mention any prerequisites or context for usage. It implies usage by specifying the resource but lacks explicit alternatives or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
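The recurring gaps above (missing usage guidance, behavioral disclosure, and annotations) could be addressed in the tool definitions themselves. Below is a hypothetical sketch for search_documentation using the MCP tool schema's annotation fields; the behavioral claims in the example description are illustrative assumptions, not statements about this server.

```json
{
  "name": "search_documentation",
  "description": "Search the Broadcom Support knowledge base and return a list of relevant articles with titles, URLs, and snippets. Use this tool when you do not yet have an article URL; once you have one, call read_documentation to fetch the full article. Read-only: performs no writes and has no side effects.",
  "annotations": {
    "title": "Search Broadcom Support documentation",
    "readOnlyHint": true,
    "openWorldHint": true
  }
}
```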
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
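The exact embed snippets are shown on the server page. As a purely hypothetical illustration of what a badge embed in a README looks like (the URL pattern below is an assumption, not the generated snippet):

```markdown
<!-- Hypothetical example: copy the actual snippet from the Glama server page. -->
[![Broadcom Support MCP Server](https://glama.ai/mcp/servers/ttonogai/BroadcomSupportMCP/badge)](https://glama.ai/mcp/servers/ttonogai/BroadcomSupportMCP)
```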
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository:
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
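As a rough illustration using the dimension scores reported above for this server (a sketch only; Glama's exact aggregation and rounding may differ):

```python
# Dimension weights for Tool Definition Quality (TDQS), as described above.
TDQS_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

# Per-tool dimension scores, taken from the Tool Scores section above.
tools = {
    "search_documentation": {"purpose": 4, "usage_guidelines": 2, "behavior": 2,
                             "parameters": 3, "conciseness": 5, "completeness": 3},
    "read_documentation": {"purpose": 4, "usage_guidelines": 2, "behavior": 2,
                           "parameters": 4, "conciseness": 5, "completeness": 4},
}

# Weighted TDQS per tool.
tdqs = {name: sum(TDQS_WEIGHTS[dim] * score for dim, score in scores.items())
        for name, scores in tools.items()}

# Server-level definition quality: 60% mean TDQS + 40% minimum TDQS.
mean_tdqs = sum(tdqs.values()) / len(tdqs)
definition_quality = 0.6 * mean_tdqs + 0.4 * min(tdqs.values())

# Server Coherence: equal-weight mean of the four checklist dimensions above
# (Disambiguation 5, Naming Consistency 5, Tool Count 2, Completeness 3).
coherence = (5 + 5 + 2 + 3) / 4

# Overall score: 70% definition quality + 30% coherence.
overall = 0.7 * definition_quality + 0.3 * coherence

print(f"mean TDQS: {mean_tdqs:.2f}")
print(f"definition quality: {definition_quality:.2f}")
print(f"overall: {overall:.2f}")
```

The mean TDQS of roughly 3.2 is consistent with the "Average 3.2/5 across 2 of 2 tools scored" figure in the checklist above.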
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/ttonogai/BroadcomSupportMCP'
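A minimal Python equivalent of the same request (the response is assumed to be JSON; its exact fields are not documented here):

```python
import json
import urllib.request

# Fetch this server's record from the Glama MCP directory API.
url = "https://glama.ai/api/mcp/v1/servers/ttonogai/BroadcomSupportMCP"
with urllib.request.urlopen(url) as response:
    server = json.load(response)

# Print whatever fields the API returns (the response shape is an assumption).
print(json.dumps(server, indent=2))
```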
If you have feedback or need assistance with the MCP directory API, please join our Discord server.