Glama
Ownership verified

Server Details

Real-time data API for AI Agents. 10 MCP tools covering A-share stock quotes, market overview, fund data, web search, news, weather, logistics tracking, and IP geolocation.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Tool Descriptions: Grade B

Average 3.6/5 across 10 of 10 tools scored.

Server Coherence: Grade A

Disambiguation: 4/5

Tools are mostly distinct with clear purposes: finance tools cover different asset types (funds, stocks, market overview) and the info/life utilities serve separate domains. Minor overlap exists between finance_fund's ranking mode and finance_market's include='funds' option, which could cause momentary confusion about which to use for fund rankings.

Naming Consistency: 4/5

Strong consistent pattern using category prefixes (finance_, info_, life_) with snake_case throughout. Most tools use category_resource format (finance_fund, finance_stock), but finance_stock_screen breaks this by appending an action verb, creating a slight structural inconsistency compared to the multi-mode parameter-based design of other finance tools.

Tool Count: 4/5

Ten tools is a reasonable count for this scope. The set covers three distinct domains (financial data, information retrieval, life utilities) without being overwhelming. While the mix of deep financial functionality with basic utilities like IP lookup and weather is slightly eclectic, the count remains appropriate for a general-purpose data API server.

Completeness: 4/5

Good coverage for a read-only data API. Finance domain includes search, detail, screening, and market overview capabilities. Information tools cover news, search, and scraping. Life utilities provide common lookup functions. Minor gaps exist (no dedicated fund screener separate from general ranking, no weather forecast without current conditions), but core workflows are well-supported.

Available Tools

10 tools
finance_fund — Grade B

Query fund data: search, detail, or ranking.

  • Search: finance_fund(keyword="沪深300")

  • Detail: finance_fund(code="110011")

  • Ranking: finance_fund(sort_by="return_1y", limit=20)
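
Over the server's Streamable HTTP transport, call shapes like the ones above become JSON-RPC 2.0 `tools/call` requests. A minimal sketch of building such a request body (the helper name is ours, not part of the server's API):

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call request body for an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# The ranking example from the description above.
body = build_tool_call("finance_fund", {"sort_by": "return_1y", "limit": 20})
```

The body would be POSTed to the server's MCP endpoint; the Streamable HTTP transport expects an `Accept` header covering both `application/json` and `text/event-stream`.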

Parameters (JSON Schema)

  • code (optional)
  • limit (optional)
  • order (optional, default: desc)
  • keyword (optional)
  • sort_by (optional, default: perf_ytd)
  • fund_type (optional)

Output Schema

No output parameters.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full disclosure burden. While 'Query' implies read-only behavior, the description fails to state safety characteristics explicitly, explain default behaviors when no parameters are provided, or describe error conditions (e.g., invalid fund codes).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a one-line summary followed by three bulleted examples. Every sentence earns its place. However, given the lack of schema documentation, the extreme brevity leaves critical gaps in parameter coverage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the three primary workflows but leaves gaps regarding parameter interactions (can search and ranking be combined?) and the two undocumented parameters. Since an output schema exists, return values need not be explained, but behavioral constraints and complete parameter semantics are missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description partially compensates by illustrating four parameters (keyword, code, sort_by, limit) through concrete examples. However, it completely omits documentation for 'order' and 'fund_type' parameters and fails to enumerate valid values for sort_by (only showing 'return_1y' when 'perf_ytd' is the default).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool queries fund data (specific resource) and enumerates three distinct operations: search, detail, and ranking. It distinguishes from siblings like finance_stock and finance_market by focusing specifically on funds (mutual funds/ETFs).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The three examples implicitly guide usage by showing when to use keyword (search) vs code (detail) vs sort_by (ranking). However, it lacks explicit guidance on when NOT to use this versus siblings (e.g., finance_stock_screen) or whether parameters can be combined.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

finance_market — Grade B

Get market data. Supports multiple modes:

  • Market overview: finance_market() — indices, breadth, volume, top sectors, macro

  • With sectors: finance_market(include="sectors") — add sector ranking

  • With funds: finance_market(include="funds") — add fund ranking

  • With valuation: finance_market(include="valuation") — add industry valuation map

  • With macro: finance_market(include="macro") — add macro indicators

  • All extras: finance_market(include="sectors,funds,valuation,macro")

  • Sector detail: finance_market(sector="半导体") — specific sector with constituents
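
The `include` parameter evidently accepts a comma-separated list of extras. A client-side validation sketch, with the allowed set inferred from the examples above (the server's actual handling of unknown values is undocumented):

```python
# Inferred from the documented examples; not an official list.
ALLOWED_INCLUDES = {"sectors", "funds", "valuation", "macro"}

def parse_include(value: str) -> list[str]:
    """Split a comma-separated include string and reject unknown extras."""
    parts = [p.strip() for p in value.split(",") if p.strip()]
    unknown = [p for p in parts if p not in ALLOWED_INCLUDES]
    if unknown:
        raise ValueError(f"unknown include values: {unknown}")
    return parts
```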

Parameters (JSON Schema)

  • date (optional)
  • type (optional, default: industry)
  • limit (optional)
  • sector (optional)
  • include (optional)
  • sort_by (optional, default: change_pct)

Output Schema

No output parameters.

Behavior: 3/5

With no annotations provided, the description carries full burden. It successfully discloses behavioral variations (what data is returned in each mode: indices, breadth, volume, industry valuation map), but lacks safety info like rate limits, caching, or auth requirements. 'Get' implies read-only behavior.

Conciseness: 4/5

Well-structured with a clear opening statement followed by bulleted examples. Each line efficiently demonstrates a specific usage pattern. The examples are necessary given the lack of schema descriptions, and there's minimal redundant text.

Completeness: 3/5

Adequate coverage of primary use cases given the tool's multi-mode complexity and presence of output schema. However, gaps remain for undocumented parameters (date, type, limit, sort_by) and lack of cross-tool guidance, making it minimally viable rather than comprehensive.

Parameters: 3/5

Schema has 0% description coverage. The description compensates by demonstrating usage of 'include' and 'sector' parameters through concrete examples, but leaves 4/6 parameters (date, type, limit, sort_by) completely undocumented with no semantic guidance.

Purpose: 4/5

The description clearly states 'Get market data' with specific resource scope (indices, sectors, macro, funds). It implicitly distinguishes from siblings like finance_fund and finance_stock by emphasizing broad market overview capabilities versus specific instrument queries.

Usage Guidelines: 3/5

Provides clear usage patterns for different modes (when to use include='sectors' vs 'funds' vs sector='name'), but lacks explicit guidance on when to use sibling tools (finance_fund vs include='funds') or when not to use this tool.

finance_stock — Grade A

Query A-share stock data. Supports multiple modes:

  • Search: finance_stock(keyword="茅台") — find stocks by name or code

  • Latest quote: finance_stock(symbol="600519") — current price, PE, PB, dividend yield

  • Specific date: finance_stock(symbol="600519", date="2026-03-28")

  • History: finance_stock(symbol="600519", days=60) — last N trading days

  • With technicals: finance_stock(symbol="600519", days=60, include="technicals")

  • With fundamentals: finance_stock(symbol="600519", include="fundamental")

Either keyword or symbol is required.

Parameters (JSON Schema)

  • date (optional)
  • days (optional)
  • limit (optional)
  • symbol (optional)
  • include (optional)
  • keyword (optional)
  • indicators (optional, default: ma,macd,rsi)

Output Schema

No output parameters.

Behavior: 4/5

Discloses the critical behavioral constraint that 'either keyword or symbol is required', which compensates for the schema marking zero parameters as required, and explains the different data returned by each mode (current price, PE, PB for quotes; trading days for history).

Conciseness: 5/5

Efficiently structured with a single summary sentence followed by concrete examples in bullet points and a constraint note; every line provides actionable guidance without redundancy.

Completeness: 4/5

Comprehensive coverage of the tool's multi-mode functionality and parameter interactions given the complexity and lack of schema documentation; adequately complete since output schema handles return value documentation.

Parameters: 4/5

With 0% schema description coverage, the description effectively compensates by providing usage examples that explain the semantics of keyword, symbol, date, days, and include parameters, though it omits explanation of the limit parameter.

Purpose: 5/5

States 'Query A-share stock data' with specific verb and resource, clearly distinguishing from sibling tools like finance_fund (funds) and finance_stock_screen (screening) by focusing on individual stock queries and historical data retrieval.

Usage Guidelines: 4/5

Provides excellent internal usage guidance through explicit examples showing when to use search mode (keyword) versus quote mode (symbol) versus historical mode (days), though it does not explicitly name sibling tools as alternatives.

finance_stock_screen — Grade A

Screen stocks by criteria or get top/bottom rankings.

  • Ranking: finance_stock_screen(sort_by="change_pct", limit=10) — today's top gainers

  • Filter: finance_stock_screen(industry="半导体", pe_max=30) — semiconductor stocks with PE < 30

All filter params are optional. With no filters, returns a simple ranking.
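
Since every filter is optional, a caller typically builds the argument object by dropping unset filters rather than sending nulls. A small sketch (the helper is ours; parameter names come from the tool's schema):

```python
def build_screen_args(**filters) -> dict:
    """Keep only the filters the caller actually set (all are optional)."""
    return {k: v for k, v in filters.items() if v is not None}

args = build_screen_args(industry="半导体", pe_max=30, pe_min=None)
# args == {"industry": "半导体", "pe_max": 30}
```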

Parameters (JSON Schema)

  • limit (optional)
  • order (optional, default: desc)
  • pb_max (optional)
  • pe_max (optional)
  • pe_min (optional)
  • sort_by (optional, default: change_pct)
  • industry (optional)
  • max_market_cap (optional)
  • min_market_cap (optional)
  • min_dividend_yield (optional)

Output Schema

No output parameters.

Behavior: 3/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully explains the optional nature of filters and distinguishes ranking vs. filtering modes. However, it lacks disclosure of default sorting behavior (only shown in example), rate limits, data freshness guarantees, or what constitutes the 'simple ranking' beyond the example given.

Conciseness: 5/5

Excellent structure with zero waste. Front-loaded purpose statement followed immediately by two concrete, contrasting examples that demonstrate the dual usage patterns. Final sentence clarifies optional parameter behavior. Every sentence earns its place with high information density.

Completeness: 3/5

Given the presence of an output schema, the description appropriately omits return value details. However, with 10 parameters and 0% schema coverage, the description is incomplete: it demonstrates only 40% of parameters via examples and provides no guidance on valid values for enums (e.g., sort_by options, order values). Adequate for basic usage but insufficient for the tool's full capability.

Parameters: 2/5

Critical gap: schema description coverage is 0% for all 10 parameters. The description only illustrates 4 parameters through examples (sort_by, limit, industry, pe_max) while leaving 6 parameters completely undocumented (order, pb_max, pe_min, max_market_cap, min_market_cap, min_dividend_yield). With zero schema coverage, the description fails to compensate adequately for the undocumented parameters.

Purpose: 4/5

The description clearly states the dual purpose: 'Screen stocks by criteria or get top/bottom rankings.' It uses specific verbs (screen, get) and identifies the resource (stocks). While it distinguishes implicitly from sibling tools like finance_stock (likely for specific stock lookup) through the screening/ranking focus, it doesn't explicitly contrast with alternatives.

Usage Guidelines: 4/5

Provides clear usage patterns through concrete examples showing the two primary modes: ranking (top gainers) versus filtering (semiconductor stocks with PE constraints). Explicitly states 'All filter params are optional' and explains behavior 'With no filters, returns a simple ranking,' giving agents clear guidance on when to use which pattern.

info_news — Grade B

Get latest news headlines. category: finance/general/tech/sports/... (default: finance). limit: number of articles (1-50).
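
The documented limit range (1-50) can be enforced client-side rather than relying on server errors. A hedged sketch that clamps out-of-range values (the server's actual behavior for out-of-range input is not documented, so clamping is an assumption):

```python
def clamp_news_limit(limit: int) -> int:
    """Clamp an info_news limit into the documented 1-50 range."""
    return max(1, min(50, limit))
```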

Parameters (JSON Schema)

  • limit (optional)
  • category (optional, default: finance)

Output Schema

No output parameters.

Behavior: 3/5

With no annotations provided, the description carries full burden. It adds parameter constraints (category options, limit range 1-50) and notes 'latest' implies temporal freshness, but omits safety profile (read-only/destructive), rate limits, or data sources.

Conciseness: 5/5

Highly efficient two-sentence structure. Front-loaded with purpose, followed by parameter specifications. No redundant or wasteful text.

Completeness: 3/5

Adequate for a simple 2-parameter tool with output schema present (no return value explanation needed). Parameter documentation is complete, but usage context regarding sibling differentiation is missing.

Parameters: 4/5

Excellent compensation for 0% schema description coverage. Documents valid category values (finance/general/tech/sports/...), limit bounds (1-50), and default values for both parameters, which the schema lacks entirely.

Purpose: 4/5

States specific action ('Get') and resource ('latest news headlines'), but fails to distinguish from sibling tools like info_search or info_scrape which likely also retrieve information.

Usage Guidelines: 2/5

Provides no guidance on when to use this tool versus alternatives (e.g., info_search for general queries, finance_stock for specific financial data). No prerequisites or exclusions mentioned.

info_scrape — Grade A

Read a webpage and return its content as markdown. url: the webpage URL to scrape.

Parameters (JSON Schema)

  • url (required)

Output Schema

No output parameters.

Behavior: 3/5

With no annotations provided, the description carries the full burden. It successfully discloses the output format conversion (markdown), but omits other behavioral traits such as JavaScript rendering capabilities, timeout behavior, redirect handling, or error responses (404s, paywalls).

Conciseness: 5/5

The description is optimally concise with two efficient sentences: the first establishes the operation and output format, the second documents the parameter. Zero redundancy and well-structured for quick comprehension.

Completeness: 4/5

For a single-parameter tool with an existing output schema, the description covers the essential contract: input (URL), operation (scrape), and output format (markdown). It appropriately delegates detailed return value documentation to the output schema, though it could mention error handling scenarios.

Parameters: 4/5

Given 0% schema description coverage, the description compensates by explicitly documenting the 'url' parameter as 'the webpage URL to scrape.' This adds necessary meaning beyond the schema's bare 'type: string,' though it lacks format examples or validation constraints.

Purpose: 5/5

The description clearly states the specific action (read/scrape), resource (webpage), and output format (markdown). It effectively distinguishes this tool from siblings like info_search (which likely searches) and info_news (which likely retrieves news) by specifying direct URL-based scraping.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like info_search or info_news. It lacks explicit when-to-use criteria, prerequisites (e.g., valid URL format), or exclusions.

life_ip — Grade B

Get IP geolocation info. address: IP address (defaults to caller IP if omitted).

Parameters (JSON Schema)

  • address (optional)

Output Schema

No output parameters.

Behavior: 3/5

No annotations provided, so description carries full burden. It successfully discloses the default behavior (defaults to caller IP when omitted), but lacks other behavioral traits like caching, privacy implications, or rate limiting.

Conciseness: 5/5

Extremely concise two-sentence structure: first states purpose, second documents the parameter. No redundant words; every sentence earns its place.

Completeness: 3/5

Adequate for a single-parameter tool with an output schema (which handles return value documentation). However, with 0% schema coverage and no annotations, the description could strengthen contextual completeness by mentioning data freshness or privacy considerations.

Parameters: 4/5

Schema description coverage is 0%, requiring description to compensate. It successfully documents the single parameter 'address' as an 'IP address' and clarifies its default behavior, adding essential semantics absent from the schema.

Purpose: 4/5

States specific action ('Get') and resource ('IP geolocation info'). While clear, it does not explicitly differentiate from sibling tools like life_weather or life_logistics, though the distinction is implicit via the tool name.

Usage Guidelines: 2/5

Provides no guidance on when to use this tool versus alternatives (e.g., when to prefer this over info_search for IP data). No prerequisites, rate limit warnings, or exclusion criteria mentioned.

life_logistics — Grade A

Track a courier package. number: tracking number. company: courier company code (auto-detected if omitted).

Parameters (JSON Schema)

  • number (required)
  • company (optional)

Output Schema

No output parameters.

Behavior: 3/5

With no annotations provided, the description carries the full behavioral burden. It discloses the auto-detection behavior for the company parameter, but fails to indicate whether this is a safe read-only operation, mention rate limits, or describe error handling for invalid tracking numbers.

Conciseness: 5/5

Extremely concise with three efficient segments: purpose statement followed by inline parameter definitions. No wasted words; every clause provides necessary information not found in the schema.

Completeness: 4/5

Adequate for a simple 2-parameter tool with an output schema present. The description covers the essential functionality and parameters, though it could briefly mention what tracking information is returned (e.g., location, status history).

Parameters: 5/5

Excellent compensation for 0% schema description coverage. The description explicitly defines 'number' as the tracking number and 'company' as the courier company code, including the behavioral note that company is 'auto-detected if omitted'.

Purpose: 4/5

States a clear verb-resource combination ('Track a courier package') that distinguishes from finance and info siblings, though it could be more precise about whether it retrieves current status vs. initiates tracking.

Usage Guidelines: 2/5

Provides no guidance on when to select this tool versus alternatives (e.g., when to use this versus other life tools or when tracking is appropriate). Only includes parameter-level guidance about auto-detection.

life_weather — Grade B

Get weather data: current conditions and optional 7-day forecast. city: city name (e.g. '北京'). location: lat,lng. forecast: include 7-day forecast.

Parameters (JSON Schema)

  • city (optional)
  • forecast (optional)
  • location (optional)

Output Schema

No output parameters.

Behavior: 3/5

With no annotations provided, the description carries the full burden. It discloses the optional nature of the 7-day forecast (behavioral trait) but omits other behavioral details such as units of measurement, handling of invalid locations, or whether the operation is read-only (though implied by 'Get').

Conciseness: 4/5

The description is efficiently structured with the purpose front-loaded in the first sentence, followed by parameter documentation. It contains no redundant text, though the parameter documentation is compressed into a single block rather than structured as distinct fields.

Completeness: 3/5

For a tool with three simple parameters and an output schema (which absolves the description from detailing return values), the description is nearly adequate. However, it has a clear gap in failing to specify that at least one location parameter (city or location) must be provided despite both having default values in the schema.

Parameters: 4/5

Given 0% schema description coverage, the description compensates effectively by documenting all three parameters: providing an example for city ('北京'), format hint for location ('lat,lng'), and semantic meaning for forecast ('include 7-day forecast'). It does not fully compensate for missing constraint logic (e.g., mutual exclusivity or requirement of at least one location parameter).

Purpose: 4/5

The description clearly states the tool 'Get[s] weather data: current conditions and optional 7-day forecast,' providing a specific verb and resource. While it does not explicitly contrast with sibling tools (finance, news, logistics), the domain is sufficiently distinct that the purpose is unambiguous.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives, nor does it specify prerequisites such as requiring at least one location identifier (city or location) since both parameters technically have default values of empty string in the schema.

Verify Ownership

Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [
    {
      "email": "your-email@example.com"
    }
  ]
}

The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
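
Before publishing, the manifest can be sanity-checked locally. A minimal sketch that validates the structure shown above (`validate_manifest` is a hypothetical helper, not a Glama tool; it only checks shape, not that the email matches your account):

```python
def validate_manifest(manifest: dict) -> bool:
    """Check the /.well-known/glama.json shape: $schema plus maintainer emails."""
    if manifest.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        return False
    maintainers = manifest.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        return False
    return all(
        isinstance(m, dict) and "@" in m.get("email", "") for m in maintainers
    )

example = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}
```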
