Bidda Sovereign Intelligence
Server Details
Search and retrieve cryptographically verified compliance nodes: 3,000+ nodes across 31 pillars, including AI Governance, Banking & Global Finance, Cybersecurity, Medical & Healthcare, Legal & IP Sovereignty, ESG, and more. Zero hallucination: every node traces to primary legal sources, with an average of 7 citations per node.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 3 of 3 tools scored.
Each tool has a clearly distinct purpose: list_pillars discovers domains, search_nodes finds specific nodes by keyword, and get_node retrieves a node by ID. There is no overlap or ambiguity.
All tools follow a consistent verb_noun pattern using snake_case (list_pillars, search_nodes, get_node), making it easy for an agent to predict tool names.
With only 3 tools, the server is on the lower end for a registry of 3,000+ nodes. While the tools cover essential discovery, the surface is minimal and would benefit from additional tools such as list_nodes_by_pillar.
The tool surface covers core use cases: discovering pillars, searching nodes, and retrieving details. Minor gaps exist, such as the inability to list all nodes in a pillar without a keyword, but the design is functional for a read-only registry.
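As a rough sketch of how an agent might chain the three tools, the abbreviated MCP `tools/call` requests below (JSON-RPC envelope omitted) show the typical flow: discover pillars, search by keyword, then fetch a specific node. The argument values are illustrative, taken from the parameter examples in the tool schemas below; in practice the node ID passed to get_node would come from the search_nodes results.

```json
[
  { "name": "list_pillars", "arguments": {} },
  { "name": "search_nodes", "arguments": { "query": "Basel III capital requirements", "limit": 5 } },
  { "name": "get_node", "arguments": { "id": "basel-iii-capital" } }
]
```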
Available Tools
3 tools
get_node
Get a specific compliance node by its ID. Returns the node summary: title, compliance pillar, version, last updated, and BLUF. The full node (machine-executable deterministic workflow, actionable schema, primary legal citations, dependency chain) is available at bidda.com.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Node ID, e.g. "basel-iii-capital", "gdpr-article-5-principles", "fatf-40-recommendations-2023-consolidated", "us-hipaa-privacy-rule" | |
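For reference, a complete MCP `tools/call` request for this tool might look like the sketch below. This is illustrative only: the node ID is one of the examples from the parameter table, and the surrounding envelope follows the standard JSON-RPC wire format used by MCP (the `id` field is an arbitrary request identifier).

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_node",
    "arguments": { "id": "gdpr-article-5-principles" }
  }
}
```

Per the description, the response carries the node summary fields (title, compliance pillar, version, last updated, BLUF); the full workflow, schema, citations, and dependency chain are only available at bidda.com.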
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries the full burden. It states the tool returns a summary and that the full node is available externally, which implies it is not a write operation. However, it lacks details on authentication, rate limits, or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no filler, front-loaded with the core action and results. Efficiently communicates the tool's purpose along with useful supplementary context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple get-by-ID tool with one parameter and no output schema, the description adequately covers what the tool does and what it returns. It mentions where to find additional data, which is helpful context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and already describes the id parameter with examples. The description adds value by specifying that it retrieves a node by ID and listing the returned fields (summary contents), which goes beyond the schema's parameter description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Get), the resource (compliance node by ID), and the content of the return (summary with title, pillar, version, etc.). It distinguishes from siblings by specifying single-node retrieval by ID.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus list_pillars or search_nodes. It does not mention alternatives or conditions for use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_pillars
List all compliance pillars in the Bidda Sovereign Intelligence registry with node counts. Use this first to discover available compliance domains before searching. Bidda has 3,680 cryptographically-verified nodes across 31 pillars including Banking, AI Governance, Cybersecurity, Healthcare, Legal, ESG and more.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
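Since the tool takes no parameters, a call reduces to the tool name with an empty arguments object. A minimal sketch of the standard MCP request shape:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": { "name": "list_pillars", "arguments": {} }
}
```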
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, but the description implies a read-only list operation and adds detail about node and pillar counts. It does not explicitly state idempotence or the absence of side effects, but this is sufficient for a simple listing tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no wasted words. Clearly structured and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, usage context, and domain details (31 pillars, cryptographically verified nodes). Adequate for a parameterless listing tool with no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in the schema (baseline 4), but the description adds meaningful context about the registry and its scale, going beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states "List all compliance pillars... with node counts." and specifies the Bidda Sovereign Intelligence registry. Distinguishes from siblings by positioning as a discovery step before searching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says "Use this first to discover available compliance domains before searching." Provides context on scale (3,000+ nodes, 31 pillars) to guide usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_nodes
Search Bidda compliance nodes by keyword. Returns matching node summaries including a one-sentence BLUF (Bottom Line Up Front) — the exact compliance obligation in plain language. Every node traces to a primary legal source (no hallucination). Examples: "Basel III capital", "GDPR data breach", "AML transaction monitoring", "SOC 2 Type II".
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results, up to 25 | 10 |
| query | Yes | Search terms, e.g. "Basel III capital requirements", "GDPR data breach notification 72 hours", "FATF travel rule" | |
| pillar | No | Optional: filter by pillar name, e.g. "Banking & Global Finance", "Cybersecurity", "AI Governance & Law", "Medical & Healthcare" | |
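A sketch of a request combining all three parameters, again using values drawn from the schema's own examples (the pillar filter and limit are optional, shown here only for illustration):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "search_nodes",
    "arguments": {
      "query": "FATF travel rule",
      "pillar": "Banking & Global Finance",
      "limit": 5
    }
  }
}
```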
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It explicitly states the output (summaries with BLUF) and assures factual grounding from legal sources, adding credibility beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences plus examples, front-loading the main action. Every word adds value, and the structure is clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema or annotations, the description covers the key aspects: search functionality, output content (BLUF), and source reliability. It leaves the optional pillar filter to the schema and does not detail the return structure beyond summaries. Still sufficiently complete for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds limited value for parameters. The examples for 'query' are helpful but not essential given that the schema already provides descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches nodes by keyword and returns summaries with a BLUF. It provides examples and distinguishes itself from siblings (get_node, list_pillars) by focusing on search functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides examples but does not explicitly state when to use this tool versus alternatives. It implies usage for keyword searches but lacks direct guidance on exclusions or when to prefer sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.