companylens-mcp
Server Quality Checklist
- Disambiguation 5/5
Each tool targets a completely distinct data domain—search/indexing, financial/SEC profiles, government contracts, federal litigation, and sanctions screening. No functional overlap exists between the tools, making selection unambiguous.
- Naming Consistency 4/5
All tools follow the 'company_<resource>' snake_case pattern consistently. The minor deviation is that 'search' is an action/entry point while the others (profile, contracts, court_cases, sanctions_check) imply data retrieval, but the shared prefix and formatting keep it predictable.
- Tool Count 5/5
Five tools is ideal for this focused corporate intelligence domain. The set covers the essential due-diligence workflow (search → profile/contracts/litigation/sanctions) without bloat or trivial fragmentation.
- Completeness 4/5
Covers the core compliance and intelligence lifecycle well: discovery via search, financial vetting via profile, legal risk via court cases, regulatory risk via sanctions, and business relationships via contracts. Minor gaps, such as news/media monitoring and beneficial-ownership chains, prevent a perfect score.
Average 4.1/5 across 5 of 5 tools scored.
See the tool scores section below for per-tool breakdowns.
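The due-diligence workflow the checklist describes (call company_search first, then fan out to the detail tools with the returned entity_id) can be sketched as follows. The `call_tool` stub is a hypothetical stand-in for a real MCP client session; the tool names match this server's, but the response shapes are assumptions.

```python
def call_tool(name: str, args: dict) -> dict:
    """Stub MCP call returning canned responses (shapes are assumed)."""
    if name == "company_search":
        # Entity IDs are documented to start with 'companylens_'.
        return {"results": [{"entity_id": "companylens_example", "name": args["query"]}]}
    return {"tool": name, "entity_id": args["entity_id"]}


def due_diligence(company_name: str) -> dict:
    # Step 1: always start with company_search to resolve an entity_id.
    hits = call_tool("company_search", {"query": company_name})
    entity_id = hits["results"][0]["entity_id"]

    # Step 2: fan out to the detail tools, all keyed by the same entity_id.
    detail_tools = (
        "company_profile",
        "company_contracts",
        "company_court_cases",
        "company_sanctions_check",
    )
    return {tool: call_tool(tool, {"entity_id": entity_id}) for tool in detail_tools}
```

In a real agent session the four detail calls are independent once the entity_id is known, so they could be issued in any order or in parallel.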
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 5 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
company_court_cases
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries the full burden. It successfully discloses the data source ('Powered by CourtListener/RECAP') and hints at return content ('dockets, litigation history, case status'). However, it omits operational details like read-only safety, pagination behavior, or rate limits that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences total, both information-dense. Front-loaded with purpose and scope, followed by prerequisite instruction. No redundant words or filler content. Every phrase earns its place in guiding the LLM.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool without output schema, the description adequately covers purpose, data provenance, return categories, and prerequisites. Minor gap: could explicitly state this retrieves historical litigation data to set expectations for the response structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds valuable workflow context by stating the entity_id comes from company_search, helping the agent understand how to obtain valid parameter values. This meaningfully supplements the schema's technical description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool retrieves 'US federal court cases involving a company' with specific outputs (dockets, litigation history, case status). The verb 'Get' is clear, and the scope (US federal) is specific. However, it does not explicitly differentiate from siblings like company_contracts or company_sanctions_check within the description text itself.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisite guidance: 'Use company_search first to get an entity_id,' establishing the required workflow sequence. This is strong context for when to invoke the tool. Lacks explicit 'when not to use' guidance or alternative tool suggestions for non-federal cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_contracts
- Behavior 3/5
No annotations provided, so the description carries the full burden. It discloses data sources (USAspending.gov, SAM.gov) and data types (awarded contracts, open opportunities) but omits safety declarations (read-only status), error handling, and rate limits.
- Conciseness 5/5
Two efficiently structured sentences with zero waste: the first establishes purpose and data provenance, the second states the critical prerequisite. Information is front-loaded with the action verb.
- Completeness 4/5
For a single-parameter tool, the description adequately covers prerequisites and identifies return data categories (awarded contracts, open opportunities). The absence of an output schema limits completeness regarding return structure, but the description compensates by naming specific data sources.
- Parameters 3/5
Schema coverage is 100% with complete parameter documentation, establishing the baseline. The description references entity_id in workflow context but doesn't add semantic details (format, validation rules) beyond the schema's explanation that it starts with 'companylens_'.
- Purpose 5/5
The description uses the specific verb 'Get' with a clear resource, 'US government contract data', and distinguishes itself from siblings (company_court_cases, company_sanctions_check) by specifying the unique domain (USAspending.gov and SAM.gov data).
- Usage Guidelines 4/5
Provides the explicit prerequisite instruction 'Use company_search first to get an entity_id', establishing a clear workflow sequence, though it lacks explicit 'when-not-to-use' exclusions or alternative tool recommendations.
company_profile
- Behavior 3/5
No annotations provided, so the description carries the full burden. It lists comprehensive return data types (behavioral outcome) but omits operational details: error handling (invalid entity_id), rate limits, caching behavior, and data freshness. Adequate for data-scope disclosure but missing operational safety context.
- Conciseness 5/5
Two sentences, zero waste. The first front-loads specific data categories; the second provides the critical workflow prerequisite. Every word earns its place with no redundancy or generic filler.
- Completeness 4/5
For a single-parameter lookup tool without an output schema, the description adequately compensates by enumerating return data categories. It is missing explicit error scenarios and rate-limit warnings, but covers the essential usage pattern and data scope sufficiently for agent selection.
- Parameters 3/5
Schema coverage is 100% with a detailed parameter description. The description reinforces parameter provenance ('from company_search'), which aids workflow understanding, but does not add technical constraints or format details beyond the schema. A baseline of 3 is appropriate for high schema coverage.
- Purpose 5/5
A specific verb ('Get') plus resource ('corporate profile') with explicit data scope (SEC filings, financials, UK registration, officers). Clearly distinguishes itself from siblings by contrasting comprehensive profile data with specialized data (contracts, court cases, sanctions) and establishing the dependency on company_search.
- Usage Guidelines 4/5
Explicitly states the prerequisite workflow ('Use company_search first to get an entity_id'), establishing a clear sequence. Lacks explicit guidance on when to use sibling tools instead (e.g., for specific data types like contracts), but provides the critical dependency information.
company_search
- Behavior 3/5
With no annotations provided, the description carries the full burden. It discloses data sources (SEC EDGAR, Companies House) and return behavior (entity IDs for chaining), but lacks a safety-profile confirmation (read-only status), rate limits, and error-handling details.
- Conciseness 5/5
Three sentences, each earning its place: core function definition, return-value explanation, and workflow positioning. No redundancy or waste; front-loaded with the critical guidance.
- Completeness 4/5
For a 3-parameter search tool with complete schema coverage, the description adequately compensates for the missing output schema by explaining return values (entity IDs) and provides essential workflow context. It lacks a detailed return-structure specification but is sufficient for correct tool selection and invocation.
- Parameters 3/5
Schema description coverage is 100%, establishing a baseline of 3. The description references 'name or ticker' and 'US/UK registries', which reinforces the query and jurisdiction parameters, but adds no syntax details or constraints beyond what the schema already provides.
- Purpose 5/5
The specific verb 'Search' and resource 'companies' with a clear scope (name/ticker across US SEC EDGAR and UK Companies House registries). Distinguishes itself from the sibling detail tools (profile, contracts, etc.) by identifying this as the broad registry search function.
- Usage Guidelines 5/5
The explicit workflow guidance 'Always start here' clearly positions this as the entry point before using sibling tools. It also explains that the tool returns 'entity IDs that can be used with other tools', establishing the dependency chain with company_profile and the other detail tools.
company_sanctions_check
- Behavior 4/5
No annotations provided, but the description carries the burden well: it discloses the data source (OpenSanctions), the output format ('match confidence scores'), and a critical legal disclaimer ('automated screening, not a legal determination'). Rate limits and error behaviors are missing, but it covers the essential risk context for sanctions screening.
- Conciseness 5/5
Four sentences, zero waste: purpose/source, return value, prerequisite instruction, and legal disclaimer. Front-loaded with a specific action; each sentence delivers unique, essential information appropriate for a high-stakes compliance tool.
- Completeness 4/5
Appropriate for low complexity (one parameter). Compensates for the missing annotations with a legal disclaimer and an output description ('match confidence scores'). No output schema exists, but the description acknowledges the return type. It could mention failure modes (no matches found), but is adequate for the tool's complexity.
- Parameters 4/5
Schema coverage is 100% (entity_id fully documented), establishing a baseline of 3. The description adds valuable workflow context, indicating that the parameter must be obtained from company_search, and hints at the format ('companylens_'), exceeding the baseline by providing usage semantics beyond the static schema definitions.
- Purpose 5/5
Excellent specificity: verb ('Screen'), resource ('company'), scope ('global sanctions lists: OFAC, EU, UN'), and data source ('via OpenSanctions'). Clearly distinguishes itself from sibling tools by specifying sanctions-specific functionality versus contracts, court cases, or the general profile/search.
- Usage Guidelines 4/5
The strong prerequisite instruction ('Use company_search first to get an entity_id') establishes an explicit workflow sequence with a sibling tool. It lacks explicit 'when not to use' guidance and alternative recommendations (e.g., when to use profile vs. sanctions), but the dependency guidance is clear and actionable.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
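As a worked illustration of the formula above, here is a minimal sketch of the scoring arithmetic. The weights and tier thresholds come from the text; the function names are assumptions, not Glama's actual implementation.

```python
# Per-tool dimension weights from the scoring description (sum to 1.0).
TDQS_WEIGHTS = {
    "purpose": 0.25,
    "usage": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}


def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 Tool Definition Quality Score for a single tool."""
    return sum(TDQS_WEIGHTS[dim] * scores[dim] for dim in TDQS_WEIGHTS)


def overall_score(tool_scores: list, coherence_scores: list) -> float:
    tdqs_values = [tool_tdqs(t) for t in tool_scores]
    # Server-level definition quality: 60% mean + 40% minimum, so a single
    # poorly described tool pulls the score down.
    tdq = 0.6 * (sum(tdqs_values) / len(tdqs_values)) + 0.4 * min(tdqs_values)
    # Server Coherence: four dimensions weighted equally.
    coherence = sum(coherence_scores) / len(coherence_scores)
    # Overall: 70% Tool Definition Quality + 30% Server Coherence.
    return 0.7 * tdq + 0.3 * coherence


def tier(score: float) -> str:
    """Map an overall score to its letter tier."""
    for grade, cutoff in (("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)):
        if score >= cutoff:
            return grade
    return "F"
```

Feeding in the per-tool dimension scores and coherence scores reported on this page yields an overall score of roughly 4.16 (tier A), consistent with the 4.1/5 average shown in the checklist above.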
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/diplv/companylens-mcp'
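The same endpoint can be called from Python. This is a hedged sketch: the URL mirrors the curl command above, but the response schema isn't documented on this page, so the JSON is returned as-is.

```python
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1/servers"


def server_url(owner: str, slug: str) -> str:
    """Build the directory API URL for a server, e.g. diplv/companylens-mcp."""
    return f"{API_BASE}/{owner}/{slug}"


def fetch_server(owner: str, slug: str) -> dict:
    # Plain GET, mirroring the curl example; raises on HTTP errors.
    with urllib.request.urlopen(server_url(owner, slug)) as resp:
        return json.load(resp)
```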
If you have feedback or need assistance with the MCP directory API, please join our Discord server.