OpenSOSData
Server Details
Real-time US business entity search across all 53 US jurisdictions: all 50 states, Washington DC, Puerto Rico, and the US Virgin Islands. Search, verify, and check the status of any LLC, corporation, or registered entity. Ideal for KYB, due diligence, and vendor verification.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4/5 across all 6 tools.
Each tool has a clearly distinct purpose with no overlap. batch_search handles multiple entities, check_entity_status verifies status, get_account_balance checks credits, get_jurisdictions_for_region lists jurisdictions by region, list_supported_jurisdictions lists all jurisdictions, and search_business_entity searches single entities. An agent can easily differentiate them.
All tools follow a consistent verb_noun pattern with clear, descriptive names. Examples include batch_search, check_entity_status, get_account_balance, get_jurisdictions_for_region, list_supported_jurisdictions, and search_business_entity. There are no deviations or mixed conventions.
With 6 tools, the set is well-scoped for a business entity search and compliance server. Each tool serves a specific function, from searching and verifying entities to managing account details and jurisdiction information, without being too sparse or bloated.
The tools cover core workflows like searching, status checking, and jurisdiction listing, with good CRUD-like coverage for the domain. A minor gap is the lack of tools for updating or managing saved entities, but agents can work around this for typical compliance and verification tasks.
Available Tools
6 tools

batch_search (Read-only)
Search for multiple business entities simultaneously across different states. Maximum 10 entities per batch. Returns results for each entity in order.
| Name | Required | Description | Default |
|---|---|---|---|
| searches | Yes | List of entities to search (max 10) | |
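As a sketch of how an agent might invoke this tool, the following builds an MCP `tools/call` JSON-RPC payload for batch_search. The field names inside each search item (`state`, `entity_name`) are assumptions inferred from the sibling search_business_entity tool, not confirmed by this listing; check the tool's input schema before relying on them.

```python
import json

# Hypothetical MCP "tools/call" payload for batch_search.
# The per-item fields ("state", "entity_name") are assumed to mirror
# search_business_entity's parameters.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "batch_search",
        "arguments": {
            "searches": [
                {"state": "DE", "entity_name": "Acme Holdings LLC"},
                {"state": "CA", "entity_name": "Acme West Inc"},
            ]
        },
    },
}

# Enforce the documented batch limit of 10 entities per call.
assert len(request["params"]["arguments"]["searches"]) <= 10

print(json.dumps(request, indent=2))
```

Per the description, results come back in the same order as the `searches` list, so a client can zip inputs and outputs together.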
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and destructiveHint=false, indicating a safe read operation. The description adds behavioral context: it specifies a maximum batch size of 10 and that results are returned in order, which are useful details not covered by annotations. However, it does not disclose other traits like rate limits, error handling, or auth needs, so it adds some value but not comprehensive behavioral insight.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose and followed by key constraints (max batch size, result order). Every sentence earns its place by providing essential information without waste, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (batch search with constraints), annotations cover safety, and the schema fully documents inputs; the description adds necessary context like the batch limit and result ordering. However, without an output schema, it could benefit from more detail on return values and error cases, leaving minor gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the schema fully documenting the 'searches' parameter including its structure, max items, and nested fields. The description mentions 'maximum 10 entities per batch' and 'search for multiple business entities simultaneously across different states,' which aligns with but does not add significant meaning beyond the schema. Baseline 3 is appropriate as the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'search' and resource 'multiple business entities simultaneously across different states,' distinguishing it from sibling tools like 'search_business_entity' (singular) and 'check_entity_status' (status check). It specifies the batch nature and scope, making the purpose explicit and distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for batch searches across states, but does not explicitly state when to use this tool versus alternatives like 'search_business_entity' or other siblings. It provides context (multiple entities, different states) but lacks explicit exclusions or named alternatives, limiting guidance to clear context without full differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_entity_status (Read-only)
Verify if a business entity is active, inactive, or dissolved in a specific US jurisdiction. Ideal for KYB compliance, due diligence, and vendor verification workflows.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | Two-letter US state or territory code | |
| entity_name | Yes | The business entity name to check | |
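A minimal call sketch using the two documented parameters. The state code and entity name below are illustrative placeholders, not real registrations:

```python
# Hypothetical MCP "tools/call" payload for check_entity_status.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "check_entity_status",
        "arguments": {
            "state": "DE",  # two-letter US state or territory code
            "entity_name": "Acme Holdings LLC",
        },
    },
}
```

The response would indicate whether the entity is active, inactive, or dissolved in that jurisdiction; the exact response shape is not published in this listing.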
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds useful context about compliance use cases but doesn't disclose behavioral traits like rate limits, authentication requirements, or what happens with invalid inputs. With annotations covering safety, a 3 is appropriate for adding some value without rich behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences with zero waste. The first sentence states the purpose and scope, and the second sentence provides usage guidelines. Every sentence earns its place by adding distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema) and good annotations, the description is mostly complete. It covers purpose, scope, and usage contexts well. However, it lacks details on return values or error handling, which would be helpful since there's no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema, such as format examples or constraints. Baseline 3 is correct when the schema does all the heavy lifting for parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('verify'), resource ('business entity'), and outcome ('active, inactive, or dissolved') with jurisdictional scope ('specific US jurisdiction'). It distinguishes from siblings like 'search_business_entity' by focusing on status verification rather than general search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage contexts ('KYB compliance, due diligence, and vendor verification workflows'), which gives clear guidance on when to use this tool. However, it doesn't specify when NOT to use it or name alternatives among siblings, though the purpose differentiation implies when to choose this over 'search_business_entity'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_account_balance (Read-only)
Check your current OpenSOSData credit balance and account status. Returns remaining lookups and a link to add funds.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, which the description aligns with by describing a read operation ('Check'). The description adds valuable behavioral context beyond annotations by specifying what is returned ('remaining lookups and a link to add funds'), which helps the agent understand the output format and potential actions, though it lacks details on rate limits or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose and return values without any wasted words. It is front-loaded with the main action and resource, making it easy for an agent to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, read-only operation) and lack of an output schema, the description provides sufficient context by explaining what the tool does and what it returns. However, it could be more complete by mentioning potential error states or authentication requirements, though annotations cover safety aspects adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately does not discuss parameters, focusing instead on the tool's purpose and output. This meets the baseline for tools with no parameters, as it avoids unnecessary details while maintaining clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Check'), resource ('OpenSOSData credit balance and account status'), and distinguishes from siblings by focusing on account information rather than entity searches or jurisdiction listings. It explicitly mentions what is returned ('remaining lookups and a link to add funds'), making the purpose distinct and comprehensive.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating 'Check your current... credit balance and account status,' suggesting it should be used when needing to monitor account resources. However, it does not explicitly state when to use this tool versus alternatives like batch_search or check_entity_status, nor does it provide exclusions or prerequisites, leaving some ambiguity in tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_jurisdictions_for_region (Read-only)
Get the list of US jurisdictions in a specific region. Useful for multi-state compliance checks and understanding regional coverage.
| Name | Required | Description | Default |
|---|---|---|---|
| region | Yes | US region to get jurisdictions for | |
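A call sketch for this tool. The schema reportedly defines an enum for `region`; "all" is the only value this listing mentions, so it is used here purely for illustration:

```python
# Hypothetical MCP "tools/call" payload for get_jurisdictions_for_region.
# "region" is enum-constrained per the schema; "all" is the one value
# named in this listing's quality notes.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "get_jurisdictions_for_region",
        "arguments": {"region": "all"},
    },
}
```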
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds context about its usefulness for 'multi-state compliance checks', which hints at practical applications, but does not disclose additional behavioral traits such as rate limits, authentication needs, or what 'jurisdictions' entail (e.g., states, territories). With annotations covering safety, the description adds some value but not rich behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, and the second adds practical context. Both sentences earn their place by providing clarity and utility without redundancy or unnecessary details. It is efficient and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, 100% schema coverage, annotations provided, no output schema), the description is reasonably complete. It explains the tool's purpose and usage context. However, without an output schema, it does not describe return values (e.g., format of the jurisdiction list), which is a minor gap. The annotations help cover safety aspects, making the description adequate but not fully comprehensive for output expectations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the 'region' parameter fully documented in the schema, including an enum list. The description does not add any parameter-specific details beyond what the schema provides, such as explaining the 'all' option or differences between regions. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description does not compensate with extra semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get the list of US jurisdictions in a specific region.' It specifies the verb ('Get') and resource ('list of US jurisdictions'), but does not explicitly differentiate it from sibling tools like 'list_supported_jurisdictions', which might have overlapping functionality. The mention of 'multi-state compliance checks' adds context but doesn't fully distinguish it.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage guidance: 'Useful for multi-state compliance checks and understanding regional coverage.' This suggests when to use the tool, but it does not explicitly state when not to use it or name alternatives like 'list_supported_jurisdictions' for comparison. The guidance is helpful but lacks specificity about exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_supported_jurisdictions (Read-only)
Returns all 53 US jurisdictions with live business entity search coverage, including all 50 states, Washington DC, Puerto Rico, and US Virgin Islands.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds useful context about what specific data is returned (53 US jurisdictions with coverage details), which goes beyond the annotations. However, it doesn't describe format, structure, or any behavioral constraints like rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence efficiently conveys the tool's purpose, scope, and specific content. Every word earns its place, with no redundancy or unnecessary elaboration, and the core functionality is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless read-only tool with good annotations, the description provides sufficient context about what data is returned. However, without an output schema, it could benefit from mentioning the return format or structure. The description covers the essential 'what' but leaves the 'how' of the response unspecified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline would be 4. The description appropriately doesn't discuss parameters since there are none, and instead focuses on what the tool returns, which is the correct emphasis for a parameterless tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Returns') and resource ('all 53 US jurisdictions with live business entity search coverage'), specifying exactly what the tool does. It distinguishes from siblings by focusing on jurisdiction listing rather than searching or checking status, and provides concrete examples of what's included.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'live business entity search coverage,' suggesting this tool should be used when needing to know which jurisdictions support entity searches. However, it doesn't explicitly state when to use this vs. alternatives like 'get_jurisdictions_for_region' or provide explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_business_entity (Read-only)
Search for a US business entity by name in a specific state or territory. Returns entity name, ID, status, type, registered agent, and address. Covers all 53 US jurisdictions including DC, Puerto Rico, and US Virgin Islands.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | Two-letter US state or territory code (e.g. CA, NY, TX, DC, PR, VI) | |
| entity_name | Yes | The business entity name to search for | |
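A call sketch for a single-entity lookup. The state code is one of the examples the schema itself lists (CA, NY, TX, DC, PR, VI); the entity name is a placeholder:

```python
# Hypothetical MCP "tools/call" payload for search_business_entity.
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "search_business_entity",
        "arguments": {
            "state": "CA",  # documented examples: CA, NY, TX, DC, PR, VI
            "entity_name": "Acme West Inc",
        },
    },
}
```

Per the description, the result includes entity name, ID, status, type, registered agent, and address; for more than one entity, batch_search is the sibling tool to reach for.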
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds useful context beyond annotations by specifying the return fields (entity name, ID, status, etc.) and jurisdiction coverage, but it does not disclose behavioral traits like rate limits, authentication needs, or pagination. With annotations covering safety, this provides moderate additional value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the purpose and parameters, and the second specifies return fields and jurisdiction coverage. Every sentence adds essential information without redundancy, making it front-loaded and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 required parameters, no output schema), the description is largely complete. It covers purpose, parameters, return fields, and jurisdiction scope. However, it lacks details on error handling, result limits, or how to interpret the return fields, which could be helpful for an agent. Annotations provide safety context, but some operational gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters ('state' and 'entity_name'). The description adds minimal semantic value beyond the schema by reinforcing the parameter roles ('by name in a specific state or territory') and noting the state code format includes territories, but it does not provide additional syntax, examples, or constraints. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search for a US business entity by name in a specific state or territory'), identifies the resource ('business entity'), and distinguishes it from siblings like 'batch_search' (which likely handles multiple searches) or 'check_entity_status' (which focuses on status verification). It explicitly mentions the scope ('all 53 US jurisdictions') and return fields, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying the search context ('by name in a specific state or territory') and jurisdiction coverage, but it does not explicitly state when to use this tool versus alternatives like 'batch_search' or 'check_entity_status'. No exclusions or prerequisites are mentioned, leaving the agent to infer appropriate scenarios based on the description alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
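Before publishing, a quick local sanity check of the file can catch structural mistakes. This is a minimal sketch using only the fields shown above; the email is the placeholder from the instructions:

```python
import json

# Minimal local validation of a glama.json document before publishing.
doc = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")

# Check the two fields the claiming instructions require.
assert doc["$schema"] == "https://glama.ai/mcp/schemas/connector.json"
assert all("email" in m for m in doc["maintainers"])
```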
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.