CompanyScope
Server Quality Checklist
- Disambiguation 4/5: Tools are well-differentiated by data source and scope, with lookup_company serving as a clear comprehensive entry point while specialized tools (get_tech_stack, get_key_people) target specific needs. Minor overlap exists since lookup_company returns similar categories (news, tech stack, people), but descriptions explicitly guide agents to use specific tools when only that data is needed.
- Naming Consistency 4/5: Follows a consistent verb_noun pattern throughout (get_company_news, get_corporate_registry, etc.). Minor deviations: 'lookup_company' instead of 'get_company_profile', and abbreviated 'tech' versus full words elsewhere, but overall highly readable and predictable.
- Tool Count 5/5: Six tools strike an excellent balance for a company intelligence domain, covering news, registry, financials, personnel, technology, and comprehensive profiles without bloat. Each tool addresses a distinct research vertical that justifies its existence.
- Completeness 4/5: Covers the full lifecycle of read-only company research with registry, financial, personnel, news, and technical reconnaissance. Minor gap: no tool handles funding rounds or investment data specifically, and financials are limited to US public companies, but the surface supports core due diligence workflows effectively.
Average 4.4/5 across 6 of 6 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
This repository includes a glama.json configuration file.
- This server provides 6 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
get_company_news
- Behavior 4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the external data sources (Brave Search and NewsAPI), the return format (titles, descriptions, sources, dates), sorting behavior (by recency), and operational constraints (API key configuration dependency).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Four tightly constructed sentences where every clause earns its place: purpose/sources, return structure, input guidance, and operational constraint. Information is front-loaded with the action verb, followed by specific behavioral details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read tool with no output schema, the description appropriately covers the return value structure and sorting behavior inline. It adequately addresses the informational gaps left by missing annotations and output schema, though it could briefly mention error handling (e.g., no results found).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing detailed parameter documentation including examples. The description reinforces the "company name, not domain" constraint but adds no new semantic information beyond what the schema already provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ("Get recent news articles"), the resource ("about a company"), and the data sources (Brave Search and NewsAPI). It distinguishes effectively from siblings like get_financials or get_corporate_registry, which handle different data types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear input guidance ("Use company name, not domain") that prevents incorrect usage. While it doesn't explicitly contrast with sibling tools, the functional distinction (news vs. registry/financials/people) is clear enough from the name and description combined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_key_people
- Behavior 4/5
No annotations provided, so description carries full burden. Discloses critical behavioral details: multi-source scraping methodology (website /about/team pages, Wikipedia, GitHub), return structure (names, titles, sources), and input requirement. Lacks mention of failure modes (blocked scraping) or caching behavior.
- Conciseness 5/5
Five sentences total, front-loaded with purpose. Each sentence earns its place: purpose, methodology, output format, usage context, and prerequisites. No redundancy or generic filler.
- Completeness 4/5
For a single-parameter tool without output schema, description adequately compensates by detailing return values (names, titles, sources) and data provenance. Would benefit from explicit output schema or error condition mention to achieve 5.
- Parameters 3/5
Schema coverage is 100% with clear description and example for 'domain'. Description states 'Requires a domain name' but adds no semantic value beyond schema documentation (format, protocol exclusion already covered). Baseline 3 appropriate when schema does heavy lifting.
- Purpose 5/5
Opens with specific verb 'Find' and clear resource 'key people at a company', immediately distinguishing from siblings like get_financials or get_tech_stack by specifying human targets (founders, C-suite, team members).
- Usage Guidelines 4/5
Explicitly states 'Use this when you need leadership or team information specifically,' providing clear positive context. Lacks explicit 'when not to use' or named sibling alternatives (e.g., doesn't mention lookup_company for general info), preventing a 5.
get_financials
- Behavior 4/5
With no annotations provided, the description carries full burden and effectively discloses: data source (SEC EDGAR), empty-result behavior for invalid inputs ('will return no results'), update frequency ('as companies file'), and detailed return payload structure. Missing only edge case details (rate limits, auth requirements).
- Conciseness 5/5
Four sentences efficiently structured: action+source, return payload enumeration (necessary due to no output schema), scope constraints, and data freshness. Front-loaded with verb, zero redundancy, every sentence earns its place.
- Completeness 5/5
For a single-parameter lookup tool without output schema or annotations, the description is complete: it comprehensively lists return values, documents scope limitations, and explains data temporality. No gaps requiring inference.
- Parameters 3/5
Schema coverage is 100% with detailed description and examples for company_name parameter. The description reinforces the parameter's semantic domain ('US public companies') but does not add syntax guidance beyond what the schema already provides. Baseline 3 is appropriate when schema does heavy lifting.
- Purpose 5/5
Description uses specific verb 'Get' with clear resource 'financial data' and authoritative source 'SEC EDGAR filings'. Explicitly scopes to 'US public companies' which distinguishes it from siblings like get_corporate_registry or lookup_company that likely handle broader entity types.
- Usage Guidelines 4/5
Provides explicit when-not guidance: 'Only works for companies that file with the SEC — private companies and non-US companies will return no results.' This clearly defines the tool's boundaries, though it does not explicitly name which sibling to use for those excluded cases.
get_corporate_registry
- Behavior 4/5
No annotations provided, so description carries full burden. Discloses external data source (OpenCorporates), geographic coverage scope (140+ jurisdictions), and coverage limitations (new/small private companies). Would benefit from mentioning rate limits or error handling behavior.
- Conciseness 5/5
Three well-structured sentences: functionality+data points first, coverage scope second, usage guidance+limitations third. Zero filler content. Every sentence provides distinct value regarding what, where, and how to use.
- Completeness 4/5
No output schema exists, but description compensates by enumerating returned data fields (incorporation date, officers, etc.). Single parameter with 100% schema coverage requires minimal additional context. Could improve by describing response structure (nested vs flat) or pagination behavior.
- Parameters 4/5
Schema coverage is 100%, establishing baseline 3. Description adds valuable semantic guidance beyond schema: specifying 'legal name' requirement and implying formatting needs (Inc/Ltd/GmbH suffixes mentioned in schema context). Elevates understanding of how to populate the parameter.
- Purpose 5/5
Specific verb 'Look up' + resource 'corporate registry data' + source 'OpenCorporates'. Lists concrete data points returned (incorporation date, status, jurisdiction, registered address, officers) which distinguishes from siblings like get_company_news (news), get_financials (financials), and lookup_company (generic).
- Usage Guidelines 4/5
Provides input guidance ('Use the company's legal name for best results') and limitation disclosure ('may return no results for very new or small private companies'). Lacks explicit comparison to sibling 'lookup_company' regarding when to prefer registry data over general company lookup.
get_tech_stack
- Behavior 4/5
Without annotations, the description carries the full burden and succeeds in explaining the data sources (HTTP headers, DNS, GitHub) and return structure (frameworks, languages, hosting, analytics, CDNs). Minor gap: does not explicitly state if this is read-only, rate-limited, or potentially costly due to external scraping behavior.
- Conciseness 5/5
Four sentences each serving distinct purposes: mechanism, output, sibling differentiation, and input constraints. No redundancy or filler. Information is front-loaded with the core action.
- Completeness 5/5
For a single-parameter analysis tool, the description is complete. It compensates for the missing output schema by detailing the categories of returned data. Combined with perfect input schema coverage, no additional description is necessary.
- Parameters 3/5
With 100% schema description coverage, the baseline is 3. The description reinforces the domain requirement and company-name constraint, but does not add semantic details beyond what the schema already provides (format, examples, constraints).
- Purpose 5/5
The description opens with a specific action ('Detect') and clear resource ('technology stack'), detailing the methodology ('analyzing HTTP headers, DNS records, and GitHub repositories'). It effectively distinguishes from siblings by contrasting with lookup_company for technology-specific needs.
- Usage Guidelines 5/5
Explicitly states when to use this tool versus alternatives ('Use this instead of lookup_company when you only need technology information'). Also clarifies input requirements versus unsupported formats ('Requires a domain name — company names are not supported'), preventing common misuse.
lookup_company
- Behavior 4/5
No annotations provided, so description carries full burden. Discloses aggregation behavior across multiple external sources, return format as 'JSON object with confidence scores and source attribution', and the comprehensiveness of returned fields. Could improve by mentioning latency, caching behavior, or rate limit considerations for the external APIs.
- Conciseness 5/5
Four sentences with zero waste: 1) purpose/sources, 2) return contents, 3) usage guidelines/relationship to siblings, 4) input/output format details. Information is front-loaded and each sentence adds distinct value not redundant with schema or siblings list.
- Completeness 5/5
For a complex aggregation tool with no output schema, the description comprehensively compensates by listing all returned data categories (founding year, tech stack, key people, etc.) and output structure (confidence scores, attribution). Adequately addresses the 5 single-purpose sibling tools by explaining its aggregate nature.
- Parameters 4/5
Schema coverage is 100% (single 'query' parameter fully documented in schema), establishing baseline 3. Description elevates this by providing concrete input examples ('stripe.com' vs 'Stripe') and critical semantic guidance that 'Domains produce richer results because they enable website scraping and DNS analysis' - helping users select input format wisely.
- Purpose 5/5
Excellent specificity with verb 'Get', resource 'company profile', and explicit data source enumeration (Wikipedia, GitHub, SEC EDGAR, etc.). Clearly distinguishes from siblings by stating it is the 'primary entry point' that 'calls all other data sources automatically', positioning it as the composite alternative to single-source siblings like get_company_news.
- Usage Guidelines 5/5
Explicit guidance: 'Use this as the primary entry point for any company research'. Implicitly guides against using siblings individually by noting this tool aggregates them 'automatically'. Also provides input strategy guidance favoring domains over names for richer results.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) from 1–5, a weighted average across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
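The weights above can be sketched as a small calculation. This is a sketch reconstructed from the percentages stated here, not Glama's actual implementation, and the per-tool dimension scores in the example are made up for illustration:

```python
# Weights for the six Tool Definition Quality dimensions, as documented.
DIMENSION_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 Tool Definition Quality Score for one tool."""
    return sum(DIMENSION_WEIGHTS[d] * s for d, s in scores.items())

def server_definition_quality(tdqs_list: list) -> float:
    """60% mean + 40% minimum, so one weak tool drags the score down."""
    mean = sum(tdqs_list) / len(tdqs_list)
    return 0.6 * mean + 0.4 * min(tdqs_list)

def overall_score(tdqs_list: list, coherence_dims: list) -> float:
    """70% definition quality + 30% coherence (mean of four dimensions)."""
    coherence = sum(coherence_dims) / len(coherence_dims)
    return 0.7 * server_definition_quality(tdqs_list) + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to its letter tier."""
    for cutoff, label in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return label
    return "F"

# Illustrative example: six identical tools and this report's coherence scores.
tools = [
    {"purpose": 5, "usage_guidelines": 4, "behavior": 4,
     "parameters": 3, "conciseness": 5, "completeness": 4},
] * 6
tdqs = [tool_tdqs(t) for t in tools]
score = overall_score(tdqs, coherence_dims=[4, 4, 5, 4])
print(round(score, 3), tier(score))
```

Note how the 40% minimum-TDQS term works: even if five tools score 5.0, one tool at 2.0 caps the definition quality component well below the mean.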
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Stewyboy1990/companyscope-mcp'
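The same endpoint can be called from any HTTP client. A minimal Python sketch using only the standard library (the response shape is not documented here, so it is returned as an untyped dict):

```python
import json
import urllib.request

BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner: str, repo: str) -> str:
    """Build the per-server endpoint URL shown in the curl example."""
    return f"{BASE}/{owner}/{repo}"

def fetch_server(owner: str, repo: str) -> dict:
    """GET the server record as JSON; requires network access."""
    with urllib.request.urlopen(server_url(owner, repo)) as resp:
        return json.load(resp)

# Example (network call, so commented out):
# profile = fetch_server("Stewyboy1990", "companyscope-mcp")
```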
If you have feedback or need assistance with the MCP directory API, please join our Discord server.