Singapore Business Directory
Server Details
Singapore business directory. Search companies, UENs, and SSIC industry classifications.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
4 tools

check_name_availability (rated A)
Check if a proposed business entity name is available for registration in Singapore.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | The proposed entity name to check | |
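Over the Streamable HTTP transport listed above, a call to this tool arrives as a JSON-RPC 2.0 `tools/call` request. A minimal sketch of the request body in Python (the proposed name is a made-up example; a real MCP client SDK would also handle session initialization and transport):

```python
import json

# JSON-RPC 2.0 body for an MCP tools/call invocation of check_name_availability.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_name_availability",
        # 'name' is the tool's single required argument: the proposed entity name.
        "arguments": {"name": "Acme Robotics Pte. Ltd."},
    },
}

print(json.dumps(payload, indent=2))
```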
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; the description carries the full burden but omits safety properties (read-only status), return value structure, and whether the check runs against the ACRA database or a local cache.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single 11-word sentence with zero waste; front-loaded with verb 'Check' and precisely defines the operation's domain and scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a simple single-parameter tool; mentions availability checking purpose but could clarify expected response format since no output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, description adds valuable semantic context by specifying 'business entity' and jurisdiction (Singapore), elaborating beyond the schema's generic 'entity name' description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Check), resource (business entity name), and scope (available for registration in Singapore), clearly distinguishing from sibling search/get tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies pre-registration use case through 'available for registration' phrasing, but lacks explicit when-to-use guidance or differentiation from search_entities/get_entity siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_entity (rated A)
Get detailed information about a specific entity by UEN. If you only have a company name, call search_entities first to obtain the UEN, then call this tool.
| Name | Required | Description | Default |
|---|---|---|---|
| uen | Yes | Unique Entity Number (UEN) of the company. If you only have a company name, use search_entities first to find the UEN. |
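The prerequisite chain in the description (search first, then fetch by UEN) can be sketched as two successive `tools/call` requests; the UEN below is a placeholder, not a real registration:

```python
# Step 1: resolve a company name to a UEN with search_entities.
search_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_entities",
        "arguments": {"query": "Acme Robotics"},
    },
}

# Step 2: once the search result yields a UEN, fetch full details with get_entity.
uen = "202100001A"  # placeholder; in practice parsed from the search response
detail_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_entity",
        "arguments": {"uen": uen},
    },
}

print(detail_request["params"]["arguments"])
```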
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. States 'get' implying read-only access and 'detailed information' suggesting comprehensive data return, but lacks specifics on error handling (invalid UEN), rate limits, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence front-loads purpose (what it does), second sentence provides prerequisite guidance (when to use). Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a simple 1-parameter lookup tool with no output schema. Covers the entity retrieval workflow and prerequisite chain. Missing only output format details and error behavior specifics, which would be necessary for a 5 without annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description mentions 'by UEN' but adds no semantic detail beyond the schema's existing 'Unique Entity Number (UEN) of the company' description. The schema already contains the prerequisite guidance about search_entities.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Get'), resource ('detailed information about a specific entity'), and scope ('by UEN'). Distinguishes from sibling search_entities by specifying UEN-based lookup vs name-based search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent explicit guidance: 'If you only have a company name, call search_entities first to obtain the UEN, then call this tool.' Names the prerequisite workflow and alternative tool directly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_entities (rated C)
Search for entities (companies) in the directory.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return | |
| query | Yes | Search query for company name or UEN | |
| status | No | Filter by entity status (e.g., 'Live', 'Struck Off') | |
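Since `limit` and `status` are optional, an agent should only include them when needed. A sketch of assembling the arguments (values are illustrative, not from a real query):

```python
def build_search_args(query, limit=None, status=None):
    """Assemble search_entities arguments, omitting optional fields left unset."""
    args = {"query": query}  # 'query' is the only required field
    if limit is not None:
        args["limit"] = limit
    if status is not None:
        args["status"] = status  # e.g. 'Live' or 'Struck Off'
    return args

print(build_search_args("logistics", limit=10, status="Live"))
print(build_search_args("202100001A"))  # UEN queries work too, per the schema
```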
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses nothing about: whether search is fuzzy/prefix/exact, case sensitivity, pagination behavior, what the directory represents, or required authentication/permissions. The word 'Search' implies read-only but does not explicitly confirm safety profile.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with verb. No redundant or wasted words. However, brevity is excessive given lack of annotations and output schema; additional sentences explaining behavioral traits or return format would be necessary for a complete tool definition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema and no annotations. For a search tool with three parameters, the description fails to explain return value structure, the domain of 'the directory' (implied to be a business registry given the UEN reference in the schema), or result ordering. This is inadequate for autonomous agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (all three parameters documented). Description adds minimal semantic value beyond schema—only clarifying that 'entities' means companies, which aligns with schema's mention of 'company name or UEN'. Baseline score appropriate when schema carries full documentation load.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb (search) and resource (entities/companies). Parenthetical clarification that entities means companies adds specificity. However, it does not distinguish from sibling `get_entity` (which retrieves specific entities) or `search_ssic` (which searches classification codes), nor does it clarify what 'the directory' refers to.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus `get_entity` (lookup by identifier) or `check_name_availability`. No mention of prerequisites or rate limiting. Agent must infer usage from sibling tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_ssic (rated A)
Search for SSIC (Singapore Standard Industrial Classification) codes by keywords or find codes used by peer entities. Provide keywords (space-separated, AND logic) and/or peers (comma-separated UENs).
| Name | Required | Description | Default |
|---|---|---|---|
| peers | No | Comma-separated list of peer entity UENs to find their SSIC codes. Example: "202100001A,202200002B". | |
| keywords | No | Space-separated keywords to search SSIC descriptions (AND logic: all terms must match). Example: "software development" matches codes whose description contains both words. Use peers for OR-style lookup across multiple entities. | |
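The two query patterns (keyword AND-matching vs. peer-UEN lookup) take different argument shapes; both parameters are optional, but the tool presumably needs at least one. A sketch with illustrative values (the peer UENs are the schema's own example placeholders, and the at-least-one constraint is an assumption):

```python
# Keyword search: space-separated terms, AND logic (every term must match).
by_keywords = {"keywords": "software development"}

# Peer lookup: comma-separated UENs (placeholders from the schema's own example).
by_peers = {"peers": "202100001A,202200002B"}

# Both patterns can be combined in a single call.
combined = {**by_keywords, **by_peers}

def validate_ssic_args(args):
    """Reject calls that supply neither keywords nor peers (assumed constraint)."""
    if not (args.get("keywords") or args.get("peers")):
        raise ValueError("search_ssic needs keywords and/or peers")
    return args

print(validate_ssic_args(combined))
```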
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the query logic (AND for keywords) and input formatting (space-separated, comma-separated), but omits safety characteristics, rate limits, or whether this accesses a live government database versus cached data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence establishes purpose and resource; second provides parameter syntax guidance. The structure front-loads the actionable verb and resource before detailing input formats.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Input parameters are well-documented between schema and description, but without an output schema, the description fails to indicate what the tool returns (e.g., just codes, code-descriptions pairs, or full hierarchical classifications). For a search tool with optional parameters, return value disclosure is necessary for complete context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value by defining the SSIC acronym (domain context for the search target) and succinctly summarizing the parameter relationship ('and/or'), which reinforces that these are alternative filtering strategies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool searches for 'SSIC (Singapore Standard Industrial Classification) codes' using keywords or peer entities. It defines the acronym and distinguishes the domain (industry classification codes) from siblings that handle entity lookups and name availability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it doesn't explicitly name sibling alternatives, it effectively guides usage by describing the two distinct query patterns: keywords (with AND logic) for description matching versus peers (UENs) for entity-based lookups. The 'and/or' construction signals that either or both approaches may be used.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.