worldbank
Server Details
World Bank MCP — wraps the World Bank Data API v2 (free, no auth)
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-worldbank |
| GitHub Stars | 0 |
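The listing notes that this server wraps the free, unauthenticated World Bank Data API v2. As a rough sketch of the upstream request a tool like get_country presumably issues (the exact implementation is an assumption), the v2 API takes a country code in the path and a `format=json` query parameter, and returns a two-element array of pagination metadata plus records:

```python
# Sketch of the upstream call a tool like get_country likely makes.
# The endpoint shape is the public World Bank Data API v2; whether the
# server uses it exactly this way is an assumption.
BASE = "https://api.worldbank.org/v2"

def country_url(country_code: str) -> str:
    """Build the country-metadata URL; format=json selects JSON output."""
    return f"{BASE}/country/{country_code}?format=json"

# A v2 JSON response is a two-element array: [pagination, records].
sample = [
    {"page": 1, "pages": 1, "per_page": "50", "total": 1},
    [{"id": "USA", "name": "United States",
      "region": {"value": "North America"},
      "incomeLevel": {"value": "High income"},
      "capitalCity": "Washington D.C."}],
]
meta, records = sample
print(country_url("US"))          # https://api.worldbank.org/v2/country/US?format=json
print(records[0]["capitalCity"])  # Washington D.C.
```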
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose: get_country retrieves static country metadata, get_gdp and get_population are shortcuts for specific economic and demographic indicators, and get_indicator is the general-purpose tool for any World Bank indicator. There is no overlap or ambiguity in their functions.
All tool names follow a consistent verb_noun pattern with 'get_' prefix, using snake_case uniformly. This predictability makes it easy for agents to understand and select the appropriate tool.
Four tools is well-scoped for a World Bank data server, covering core use cases: country metadata, key indicators (GDP and population), and a flexible general indicator tool. Each tool earns its place without bloat.
The toolset provides robust coverage for retrieving World Bank data, with shortcuts for common indicators and a general method for extensibility. A minor gap is the lack of tools for comparative or multi-country queries, but agents can work around this by iterating calls.
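The multi-country workaround mentioned above can be sketched as follows. The payload shape follows the standard MCP `tools/call` JSON-RPC request; `make_tool_call` is a hypothetical helper standing in for whatever MCP client the agent uses, not part of this server:

```python
# Hypothetical client stub: in practice an MCP client sends this payload
# over the Streamable HTTP transport and returns the tool result.
def make_tool_call(name: str, arguments: dict, request_id: int) -> dict:
    """Build an MCP tools/call request (JSON-RPC 2.0 framing)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Comparing populations across countries by iterating single-country calls:
calls = [
    make_tool_call("get_population", {"country_code": code}, i)
    for i, code in enumerate(["US", "GBR", "IN"], start=1)
]
print(len(calls))  # 3
```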
Available Tools
4 tools
get_country
Get basic information about a country: full name, region, income level, capital city, and coordinates. Use ISO 3166-1 alpha-2 or alpha-3 country codes (e.g., "US", "GBR", "IN").
| Name | Required | Description | Default |
|---|---|---|---|
| country_code | Yes | ISO country code (2 or 3 letters, e.g., "US", "GBR", "CN") | |
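As the description notes, country_code accepts either ISO 3166-1 alpha-2 or alpha-3 codes. A minimal, hypothetical pre-flight check an agent might apply before calling the tool (this validates shape only, not membership in the actual ISO code list):

```python
def looks_like_iso_code(code: str) -> bool:
    """Accept ISO 3166-1 alpha-2 or alpha-3 shapes ("US", "GBR").
    Shape check only; does not verify the code is a real country."""
    return code.isalpha() and len(code) in (2, 3)

print(looks_like_iso_code("GBR"))   # True
print(looks_like_iso_code("USA1"))  # False
```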
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly indicates this is a read operation ('Get') and specifies the input format, but doesn't mention potential limitations like error handling, data freshness, or rate limits. The description adds useful context about what data is returned, compensating partially for the lack of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences: the first states the purpose and data returned, the second specifies the input format with examples. Every word earns its place with zero redundancy, and the information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with one parameter and no output schema, the description is reasonably complete. It covers the purpose, data returned, and input requirements. The main gap is the lack of output format details, but given the tool's simplicity and the clear data fields listed, this is a minor omission.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents the single parameter thoroughly. The description adds minimal value beyond the schema by repeating the ISO code format examples, but doesn't provide additional semantic context about parameter behavior or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('basic information about a country'), listing key data fields (full name, region, income level, capital city, coordinates). It distinguishes from siblings by focusing on general country info rather than specific metrics like GDP or population.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (to retrieve basic country information) and specifies the required input format (ISO 3166-1 codes). However, it doesn't explicitly state when not to use it or name alternatives among sibling tools, though the distinction is implied by the different data focus.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_gdp
Get GDP (current USD) over time for a country. Shortcut for get_indicator with NY.GDP.MKTP.CD.
| Name | Required | Description | Default |
|---|---|---|---|
| country_code | Yes | ISO country code (e.g., "US", "GBR", "CN") | |
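Since get_gdp is documented as a shortcut for get_indicator with NY.GDP.MKTP.CD, the two calls should be interchangeable. The equivalence is stated by the tool description; the exact argument mapping below is a sketch:

```python
# Indicator code taken from the get_gdp description.
GDP_INDICATOR = "NY.GDP.MKTP.CD"

def gdp_as_indicator_call(country_code: str) -> dict:
    """Arguments that get_gdp implies when expressed via get_indicator."""
    return {"indicator": GDP_INDICATOR, "country_code": country_code}

print(gdp_as_indicator_call("IN"))
# {'indicator': 'NY.GDP.MKTP.CD', 'country_code': 'IN'}
```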
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it states what the tool does, it doesn't describe important behavioral aspects like whether this is a read-only operation, what format the data returns in, whether there are rate limits, or what happens with invalid country codes. For a tool with no annotation coverage, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place: the first states the core purpose, the second provides crucial sibling differentiation. There's zero wasted language and it's effectively front-loaded with the most important information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read tool with 100% schema coverage, the description provides adequate context about purpose and sibling relationships. However, with no output schema and no annotations, it doesn't describe what format the GDP data returns in (time series? single value? error handling?), which leaves gaps in understanding the tool's complete behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the single parameter (country_code with ISO code format). The description doesn't add any parameter-specific information beyond what's in the schema, so the baseline score of 3 is appropriate when the schema does all the parameter documentation work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get GDP'), resource ('current USD over time for a country'), and scope ('shortcut for get_indicator with NY.GDP.MKTP.CD'). It precisely distinguishes this tool from its sibling 'get_indicator' by specifying it's a specialized version for GDP data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('shortcut for get_indicator with NY.GDP.MKTP.CD'), providing clear context and an alternative (using get_indicator directly). This gives the agent perfect guidance on when this specialized tool is appropriate versus the more general sibling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_indicator
Get time-series values for a World Bank indicator for a specific country. Common indicators: NY.GDP.MKTP.CD (GDP), SP.POP.TOTL (population), EN.ATM.CO2E.KT (CO2 emissions), SE.ADT.LITR.ZS (literacy rate).
| Name | Required | Description | Default |
|---|---|---|---|
| indicator | Yes | World Bank indicator code (e.g., "NY.GDP.MKTP.CD", "SP.POP.TOTL") | |
| date_range | No | Year range in format "start:end" (default: 2015:2024). Example: "2000:2023" | |
| country_code | Yes | ISO country code (e.g., "US", "GBR", "CN") | |
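The date_range parameter uses a "start:end" year syntax with a default of 2015:2024. A sketch of how the server might parse it and forward it upstream; the World Bank v2 API's own date query parameter happens to use the same "start:end" syntax, though passing it through verbatim is an assumption about the implementation:

```python
BASE = "https://api.worldbank.org/v2"

def parse_date_range(date_range: str = "2015:2024") -> tuple[int, int]:
    """Split the 'start:end' year range; the default matches the schema."""
    start, end = date_range.split(":")
    return int(start), int(end)

def indicator_url(country_code: str, indicator: str,
                  date_range: str = "2015:2024") -> str:
    """Sketch of the upstream v2 request for a time series; the API's
    date parameter uses the same 'start:end' syntax."""
    return (f"{BASE}/country/{country_code}/indicator/{indicator}"
            f"?date={date_range}&format=json")

print(parse_date_range("2000:2023"))  # (2000, 2023)
print(indicator_url("US", "SP.POP.TOTL"))
```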
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions retrieving time-series values but lacks details on permissions, rate limits, data freshness, or error handling. This is a significant gap for a data-fetching tool with no structured safety hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with a clear purpose in the first sentence and efficient examples in the second. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It covers the basic purpose and parameters but lacks details on return values, error cases, or behavioral constraints, which are needed for a tool fetching time-series data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description adds value by providing common indicator examples (e.g., NY.GDP.MKTP.CD) and implying time-series output, but it does not explain parameter interactions or formats beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get time-series values') for a specific resource ('World Bank indicator for a specific country'), distinguishing it from siblings like get_gdp or get_population by indicating it handles multiple indicators through codes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by listing common indicator examples (e.g., GDP, population), which helps guide usage, but it does not explicitly state when to use this tool versus alternatives like get_gdp or get_population, nor does it mention exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_population
Get total population over time for a country. Shortcut for get_indicator with SP.POP.TOTL.
| Name | Required | Description | Default |
|---|---|---|---|
| country_code | Yes | ISO country code (e.g., "US", "GBR", "CN") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. While it mentions this is a 'shortcut' for another tool, it doesn't disclose key behavioral traits like whether this is a read-only operation, what format the 'over time' data returns (e.g., time series), potential rate limits, or error conditions. The description adds minimal behavioral context beyond the basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences: the first states the purpose, and the second provides crucial usage guidance. Every word earns its place, and it's front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single parameter, no annotations, no output schema), the description is adequate but has gaps. It explains the purpose and sibling relationship well, but without annotations or output schema, it should ideally mention more about the return format (e.g., time series data) or behavioral constraints to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the single parameter 'country_code' with its type, requirement, and format example. The description doesn't add any parameter-specific information beyond what the schema provides, maintaining the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get total population over time') and resource ('for a country'), and explicitly distinguishes this tool from its sibling 'get_indicator' by calling it a 'shortcut' for that specific indicator code. This provides excellent differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool versus alternatives: it's a 'shortcut for get_indicator with SP.POP.TOTL.' This tells the agent precisely when to choose this tool (for population data) versus the more general 'get_indicator' tool or other siblings like 'get_country' or 'get_gdp'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.