Glama

OpenSOSData

Ownership verified

Server Details

Real-time US business entity search across all 53 US jurisdictions: all 50 states, DC, Puerto Rico, and the US Virgin Islands. Search, verify, and check the status of any LLC, corporation, or registered entity. Ideal for KYB, due diligence, and vendor verification.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 6 of 6 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap. batch_search handles multiple entities, check_entity_status verifies status, get_account_balance checks credits, get_jurisdictions_for_region lists jurisdictions by region, list_supported_jurisdictions lists all jurisdictions, and search_business_entity searches single entities. An agent can easily differentiate them.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern with clear, descriptive names. Examples include batch_search, check_entity_status, get_account_balance, get_jurisdictions_for_region, list_supported_jurisdictions, and search_business_entity. There are no deviations or mixed conventions.

Tool Count: 5/5

With 6 tools, the set is well-scoped for a business entity search and compliance server. Each tool serves a specific function, from searching and verifying entities to managing account details and jurisdiction information, without being too sparse or bloated.

Completeness: 4/5

The tools cover core workflows like searching, status checking, and jurisdiction listing, with good CRUD-like coverage for the domain. A minor gap is the lack of tools for updating or managing saved entities, but agents can work around this for typical compliance and verification tasks.

Available Tools

6 tools
check_entity_status: A
Read-only

Verify if a business entity is active, inactive, or dissolved in a specific US jurisdiction. Ideal for KYB compliance, due diligence, and vendor verification workflows.

Parameters (JSON Schema)
- state (required): Two-letter US state or territory code
- entity_name (required): The business entity name to check
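For orientation, a minimal sketch of what a raw MCP `tools/call` request to this tool could look like. The JSON-RPC framing is standard MCP; the state and entity name are made-up example values, not from the server's documentation.

```python
import json

# Hypothetical MCP "tools/call" request for check_entity_status.
# The argument values ("DE", "Acme Holdings LLC") are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_entity_status",
        "arguments": {
            "state": "DE",                       # two-letter state/territory code
            "entity_name": "Acme Holdings LLC",  # entity name to check
        },
    },
}

print(json.dumps(request, indent=2))
```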
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds useful context about compliance use cases but doesn't disclose behavioral traits like rate limits, authentication requirements, or what happens with invalid inputs. With annotations covering safety, a 3 is appropriate for adding some value without rich behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences with zero waste. The first sentence states the purpose and scope, and the second sentence provides usage guidelines. Every sentence earns its place by adding distinct value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema) and good annotations, the description is mostly complete. It covers purpose, scope, and usage contexts well. However, it lacks details on return values or error handling, which would be helpful since there's no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema, such as format examples or constraints. Baseline 3 is correct when the schema does all the heavy lifting for parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('verify'), resource ('business entity'), and outcome ('active, inactive, or dissolved') with jurisdictional scope ('specific US jurisdiction'). It distinguishes from siblings like 'search_business_entity' by focusing on status verification rather than general search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage contexts ('KYB compliance, due diligence, and vendor verification workflows'), which gives clear guidance on when to use this tool. However, it doesn't specify when NOT to use it or name alternatives among siblings, though the purpose differentiation implies when to choose this over 'search_business_entity'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_account_balance: A
Read-only

Check your current OpenSOSData credit balance and account status. Returns remaining lookups and a link to add funds.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true and destructiveHint=false, which the description aligns with by describing a read operation ('Check'). The description adds valuable behavioral context beyond annotations by specifying what is returned ('remaining lookups and a link to add funds'), which helps the agent understand the output format and potential actions, though it lacks details on rate limits or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose and return values without any wasted words. It is front-loaded with the main action and resource, making it easy for an agent to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, read-only operation) and lack of an output schema, the description provides sufficient context by explaining what the tool does and what it returns. However, it could be more complete by mentioning potential error states or authentication requirements, though annotations cover safety aspects adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately does not discuss parameters, focusing instead on the tool's purpose and output. This meets the baseline for tools with no parameters, as it avoids unnecessary details while maintaining clarity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check'), resource ('OpenSOSData credit balance and account status'), and distinguishes from siblings by focusing on account information rather than entity searches or jurisdiction listings. It explicitly mentions what is returned ('remaining lookups and a link to add funds'), making the purpose distinct and comprehensive.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating 'Check your current... credit balance and account status,' suggesting it should be used when needing to monitor account resources. However, it does not explicitly state when to use this tool versus alternatives like batch_search or check_entity_status, nor does it provide exclusions or prerequisites, leaving some ambiguity in tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_jurisdictions_for_region: A
Read-only

Get the list of US jurisdictions in a specific region. Useful for multi-state compliance checks and understanding regional coverage.

Parameters (JSON Schema)
- region (required): US region to get jurisdictions for
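Since the schema documents `region` as an enum, a client might validate the value before building the call. A sketch of that guard follows; the region names below are assumptions for illustration, and the server's actual enum may differ.

```python
# Assumed region values for illustration; the server's real enum may differ.
ASSUMED_REGIONS = {"northeast", "midwest", "south", "west", "territories"}

def build_region_request(region: str) -> dict:
    """Build a hypothetical tools/call payload for get_jurisdictions_for_region,
    rejecting values outside the assumed enum before any network call."""
    if region not in ASSUMED_REGIONS:
        raise ValueError(f"unknown region: {region!r}")
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {
            "name": "get_jurisdictions_for_region",
            "arguments": {"region": region},
        },
    }
```

Failing fast on an out-of-enum value saves a round trip and a credit, assuming the server bills per lookup.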
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds context about its usefulness for 'multi-state compliance checks', which hints at practical applications, but does not disclose additional behavioral traits such as rate limits, authentication needs, or what 'jurisdictions' entail (e.g., states, territories). With annotations covering safety, the description adds some value but not rich behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, and the second adds practical context. Both sentences earn their place by providing clarity and utility without redundancy or unnecessary details. It is efficient and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, 100% schema coverage, annotations provided, no output schema), the description is reasonably complete. It explains the tool's purpose and usage context. However, without an output schema, it does not describe return values (e.g., format of the jurisdiction list), which is a minor gap. The annotations help cover safety aspects, making the description adequate but not fully comprehensive for output expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the 'region' parameter fully documented in the schema, including an enum list. The description does not add any parameter-specific details beyond what the schema provides, such as explaining the 'all' option or differences between regions. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description does not compensate with extra semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get the list of US jurisdictions in a specific region.' It specifies the verb ('Get') and resource ('list of US jurisdictions'), but does not explicitly differentiate it from sibling tools like 'list_supported_jurisdictions', which might have overlapping functionality. The mention of 'multi-state compliance checks' adds context but doesn't fully distinguish it.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance: 'Useful for multi-state compliance checks and understanding regional coverage.' This suggests when to use the tool, but it does not explicitly state when not to use it or name alternatives like 'list_supported_jurisdictions' for comparison. The guidance is helpful but lacks specificity about exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_supported_jurisdictions: A
Read-only

Returns all 53 US jurisdictions with live business entity search coverage, including all 50 states, Washington DC, Puerto Rico, and the US Virgin Islands.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds useful context about what specific data is returned (53 US jurisdictions with coverage details), which goes beyond the annotations. However, it doesn't describe format, structure, or any behavioral constraints like rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that efficiently conveys the tool's purpose, scope, and specific content. Every word earns its place - no redundancy, no unnecessary elaboration. Perfectly front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameterless read-only tool with good annotations, the description provides sufficient context about what data is returned. However, without an output schema, it could benefit from mentioning the return format or structure. The description covers the essential 'what' but leaves the 'how' of the response unspecified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline would be 4. The description appropriately doesn't discuss parameters since there are none, and instead focuses on what the tool returns, which is the correct emphasis for a parameterless tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Returns') and resource ('all 53 US jurisdictions with live business entity search coverage'), specifying exactly what the tool does. It distinguishes from siblings by focusing on jurisdiction listing rather than searching or checking status, and provides concrete examples of what's included.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning 'live business entity search coverage,' suggesting this tool should be used when needing to know which jurisdictions support entity searches. However, it doesn't explicitly state when to use this vs. alternatives like 'get_jurisdictions_for_region' or provide explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_business_entity: A
Read-only

Search for a US business entity by name in a specific state or territory. Returns entity name, ID, status, type, registered agent, and address. Covers all 53 US jurisdictions including DC, Puerto Rico, and US Virgin Islands.

Parameters (JSON Schema)
- state (required): Two-letter US state or territory code (e.g. CA, NY, TX, DC, PR, VI)
- entity_name (required): The business entity name to search for
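Because this tool publishes no output schema, any parsing of its result is guesswork. The sketch below assumes a flat result object with the fields the description names (entity name, ID, status, type, registered agent, address); both the field names and the status vocabulary are hypothetical.

```python
# Hypothetical search result shaped after the fields the description lists.
# The real response structure is undocumented (no output schema), so the
# field names and status values here are assumptions.
sample_result = {
    "entity_name": "Example Widgets Inc",
    "entity_id": "C1234567",
    "status": "Active",
    "entity_type": "Corporation",
    "registered_agent": "Jane Doe",
    "address": "1234 Main St, Sacramento, CA",
}

def is_active(result: dict) -> bool:
    """Treat anything other than an exact (case-insensitive) 'active' as not active."""
    return result.get("status", "").strip().lower() == "active"
```

A conservative check like this fails closed: unknown or missing statuses count as not active, which suits KYB-style verification.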
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds useful context beyond annotations by specifying the return fields (entity name, ID, status, etc.) and jurisdiction coverage, but it does not disclose behavioral traits like rate limits, authentication needs, or pagination. With annotations covering safety, this provides moderate additional value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the purpose and parameters, and the second specifies return fields and jurisdiction coverage. Every sentence adds essential information without redundancy, making it front-loaded and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 required parameters, no output schema), the description is largely complete. It covers purpose, parameters, return fields, and jurisdiction scope. However, it lacks details on error handling, result limits, or how to interpret the return fields, which could be helpful for an agent. Annotations provide safety context, but some operational gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for both parameters ('state' and 'entity_name'). The description adds minimal semantic value beyond the schema by reinforcing the parameter roles ('by name in a specific state or territory') and noting the state code format includes territories, but it does not provide additional syntax, examples, or constraints. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search for a US business entity by name in a specific state or territory'), identifies the resource ('business entity'), and distinguishes it from siblings like 'batch_search' (which likely handles multiple searches) or 'check_entity_status' (which focuses on status verification). It explicitly mentions the scope ('all 53 US jurisdictions') and return fields, making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying the search context ('by name in a specific state or territory') and jurisdiction coverage, but it does not explicitly state when to use this tool versus alternatives like 'batch_search' or 'check_entity_status'. No exclusions or prerequisites are mentioned, leaving the agent to infer appropriate scenarios based on the description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

