
Server Details

TED MCP Server: real-time access to EU public tenders. https://www.lexsocket.ai/

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: lexsocket/mcp-ted
GitHub Stars: 0

Tool Definition Quality: score not yet calculated.

Available Tools

23 tools
browse_by_deadline (Grade A)

Browse tenders by submission deadline.

Returns tenders sorted by deadline (ascending). If no date range is specified, defaults to deadlines from today onwards.

Args:
- after: Show deadlines after this date (ISO format; default: today)
- before: Show deadlines before this date (ISO format; optional)
- country: Filter by country code (optional)
- cpv_code: Filter by CPV code (optional)
- nuts_code: Filter by NUTS code prefix (optional)
- status: Filter by status (optional)
- source: Filter by data source (optional)
- limit: Maximum number of results (default: 20)

Returns: Tenders sorted by deadline ascending
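For illustration, an MCP `tools/call` request for this tool might look as follows. This is a sketch: the argument values are invented, and the exact envelope depends on the client library used.

```python
import json

# Hypothetical MCP "tools/call" JSON-RPC request for browse_by_deadline.
# The argument values below are illustrative, not taken from the server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "browse_by_deadline",
        "arguments": {
            "after": "2025-06-01",  # ISO date; server defaults to today if omitted
            "country": "FR",        # optional country-code filter
            "limit": 5,             # server default is 20
        },
    },
}
print(json.dumps(request, indent=2))
```

Omitted optional arguments (before, status, source, nuts_code, cpv_code) simply fall back to the documented defaults.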

Parameters (JSON Schema): after, before, country, cpv_code, nuts_code, source, status, limit (all optional). The schema provides no property descriptions or defaults.

Output Schema (JSON Schema): no output parameters.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full responsibility for behavioral disclosure. It successfully documents the sorting behavior (ascending by deadline) and date range defaults, but omits operational details such as rate limits, pagination method (cursor vs offset), or explicit confirmation that this is a read-only operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The docstring format with explicit Args and Returns sections is highly readable and front-loaded with the core purpose. Minor redundancy exists in repeating '(optional)' for most parameters, but overall structure is efficient and appropriately sized for the parameter count.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 8 parameters and zero schema coverage, the description achieves completeness through the detailed Args section. The brief return value statement is acceptable since the context signals indicate an output schema exists. The only significant gap is the lack of differentiation from sibling tools in a crowded namespace.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage (no property descriptions in the JSON schema), the Args section comprehensively documents all 8 parameters including data formats (ISO format), purpose (e.g., 'Filter by country code'), and default values. This fully compensates for the schema deficiency.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (browse) and resource (tenders) with the distinguishing filter (submission deadline). However, it fails to differentiate from the nearly identical sibling tool 'browse_tenders_by_deadline', leaving ambiguity about which to choose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains default behaviors (deadlines from today onwards, ascending sort order, default limit) which imply usage patterns. However, it lacks explicit guidance on when to use this tool versus the sibling 'browse_tenders_by_deadline' or the various 'search_' alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

browse_tenders_by_deadline (Grade A)

Browse TED tenders by submission deadline.

Returns tenders sorted by deadline (ascending). If no date range is specified, defaults to deadlines from today onwards.

Args:
- after: Show deadlines after this date (ISO format; default: today)
- before: Show deadlines before this date (ISO format; optional)
- country: Filter by country code (optional)
- cpv_code: Filter by CPV code (optional)
- nuts_code: Filter by NUTS code prefix (optional)
- procurement_type: Filter by procurement type (optional)
- limit: Maximum number of results (default: 20)

Returns: Tenders sorted by deadline ascending
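Since the date filters are documented as ISO format, with `after` defaulting to today, a client can build a deadline window like this (a sketch; the `procurement_type` value is invented):

```python
from datetime import date, timedelta

# Build an "after"/"before" window covering the next 30 days.
today = date.today()
args = {
    "after": today.isoformat(),                         # e.g. "2025-06-01"
    "before": (today + timedelta(days=30)).isoformat(),
    "procurement_type": "services",                     # illustrative value
    "limit": 20,                                        # matches the documented default
}
```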

Parameters (JSON Schema): after, before, country, cpv_code, nuts_code, procurement_type, limit (all optional). The schema provides no property descriptions or defaults.

Output Schema (JSON Schema): no output parameters.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses default temporal behavior ('defaults to deadlines from today onwards') and sorting order ('ascending'), but omits other behavioral traits like read-only status, pagination behavior, or rate limits that would help an agent understand operational constraints.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Uses a clear docstring structure (Args/Returns) that is appropriately front-loaded with the core purpose in the first sentence. The Args list is necessarily verbose given schema deficiencies, though the Returns section slightly duplicates the opening behavior description.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive coverage of input parameters and filtering capabilities given the lack of schema metadata. Since an output schema exists (per context signals), the brief return description is sufficient. The only gap is the missing sibling differentiation, which would complete the contextual picture for tool selection.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Excellent compensation for 0% schema description coverage. The description documents all 7 parameters with precise semantics: data formats (ISO), default values (today, 20), and business meaning (CPV code, NUTS code prefix, procurement type). Without this Args section, the parameters would be opaque.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific action (browse) on specific resource (TED tenders) with clear scope (by submission deadline). However, it fails to differentiate from the similar sibling tool `browse_by_deadline` or explain how deadline browsing differs from general `search_ted`.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to select this tool versus the 20+ sibling search/browse tools (e.g., `search_by_cpv`, `browse_by_deadline`, `search_ted_fts`). While it notes the default date behavior, it lacks explicit when-to-use or when-not-to-use guidance.

find_similar_ted_tenders (Grade A)

Find TED tenders similar to a given notice using its stored embedding vector.

No external API call needed — uses the vector already stored in the database.

Args:
- notice_id: The notice UUID to find similar tenders for
- limit: Maximum number of results (default: 10)
- country: Filter by country code (optional)
- cpv_code: Filter by CPV code (optional)
- nuts_code: Filter by NUTS code prefix (optional)

Returns: Similar tenders ranked by cosine similarity
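The ranking metric named in the Returns line, cosine similarity, can be modeled in a few lines. The server computes it internally against stored vectors; this sketch only illustrates the metric itself.

```python
import math

# Cosine similarity between two embedding vectors: the dot product
# divided by the product of the vector norms. Ranges from -1 to 1;
# 1.0 means the vectors point in the same direction.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction → 1.0
```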

Parameters (JSON Schema): notice_id (required); limit, country, cpv_code, nuts_code (optional). The schema provides no property descriptions or defaults.

Output Schema (JSON Schema): no output parameters.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the algorithm (cosine similarity) and data locality (stored vector, no external API), but omits safety/disposition information (e.g., read-only status) and error handling (e.g., behavior if notice_id not found).

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose statement followed by implementation context, then Args/Returns sections. Every sentence earns its place; there is no redundancy or fluff.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists (per context signals), the brief Returns section is appropriate. The description adequately covers the tool's mechanism and filtering capabilities. It could be improved by mentioning error cases (e.g., invalid notice_id), but is otherwise complete for its complexity.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Given 0% schema description coverage, the Args section compensates effectively by documenting all 5 parameters (notice_id, limit, country, cpv_code, nuts_code) with brief semantic meanings. It loses a point for lacking format specifications (e.g., ISO country codes, UUID format for notice_id).

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (Find), resource (TED tenders), and mechanism (using stored embedding vector). It distinguishes itself from siblings like 'find_similar_tenders' (generic) and 'search_ted_semantic' (likely live computation) by emphasizing the use of pre-stored vectors.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implicit guidance by noting 'No external API call needed' and 'uses the vector already stored in the database,' which hints at prerequisites (notice must exist in DB). However, it lacks explicit when-to-use guidance versus alternatives like 'search_ted_semantic' or 'find_similar_tenders.'

find_similar_tenders (Grade A)

Find tenders similar to a given tender using its stored embedding vector.

No external API call needed — uses the vector already stored in the database.

Args:
- tender_id: The tender ID to find similar tenders for
- limit: Maximum number of results (default: 10)
- country: Filter by country code (optional)
- cpv_code: Filter by CPV code (optional)
- nuts_code: Filter by NUTS code prefix (optional)

Returns: Similar tenders ranked by cosine similarity

Parameters (JSON Schema): tender_id (required); limit, country, cpv_code, nuts_code (optional). The schema provides no property descriptions or defaults.

Output Schema (JSON Schema): no output parameters.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Excellent disclosure of implementation details given no annotations: explicitly states it uses 'stored embedding vector', performs 'cosine similarity' ranking, and clarifies 'No external API call needed' indicating local database operation. Could improve by mentioning error behavior when tender_id is not found.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear Main description → Implementation note → Args → Returns sections. Every sentence adds value; no repetition of tool name or tautology. Front-loaded with the core purpose.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for the tool's standalone function: explains the embedding mechanism, documents all parameters, and notes the return ranking method. Given the output schema exists, return format doesn't need elaboration. Minor gap: doesn't explain how multiple filters interact (AND vs OR).

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Completely compensates for 0% schema description coverage by documenting all five parameters in the Args block with clear semantics: tender_id as reference, limit with default noted, and optional filters (country, cpv_code, nuts_code) with type hints.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool finds similar tenders using stored embedding vectors and cosine similarity. However, it fails to distinguish from sibling `find_similar_ted_tenders` or clarify whether this operates on the same TED dataset or a different corpus.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this embedding-based approach versus alternatives like `search_ted_semantic`, `search_tenders`, or the sibling `find_similar_ted_tenders`. Also omits prerequisites (e.g., that the tender_id must already exist in the database with a stored embedding).

get_open_opportunities (Grade A)

Get currently open/active procurement tenders. Useful for bid/no-bid analysis.

Args:
- query: Optional keyword search
- country: Country filter — 'FR', 'GB'/'UK', 'DE', 'ES', 'IT', 'NL', 'IE', 'PT', 'DK', 'PL', 'AT', or None for all
- cpv_code: Optional CPV code filter
- limit: Maximum results (default: 10)

Returns: Active tenders matching the criteria
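Because the description enumerates the accepted country codes, a client can validate arguments before calling. A sketch, assuming the server accepts exactly the listed codes; `build_args` is a hypothetical helper:

```python
# Country codes listed in the tool description (assumption: this set
# is exhaustive; the server may accept more).
SUPPORTED = {"FR", "GB", "UK", "DE", "ES", "IT", "NL", "IE", "PT", "DK", "PL", "AT"}

def build_args(query=None, country=None, cpv_code=None, limit=10):
    if country is not None and country.upper() not in SUPPORTED:
        raise ValueError(f"unsupported country code: {country}")
    # None means "all countries", matching the documented default.
    return {"query": query, "country": country, "cpv_code": cpv_code, "limit": limit}

build_args(query="cloud hosting", country="FR")
```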

Parameters (JSON Schema): query, country, cpv_code, limit (all optional). The schema provides no property descriptions or defaults.

Output Schema (JSON Schema): no output parameters.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It clarifies the 'open/active' status filter but omits other behavioral traits: read-only safety (implied by 'Get' but not stated), pagination behavior, data freshness, and auth requirements.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear Args/Returns sections. Front-loaded purpose statement. Slightly informal docstring style ('Args:') but every sentence provides value without redundancy.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 4-parameter read tool with output schema present. Parameter documentation covers the gap from zero schema descriptions. Could mention pagination or result ordering given 'limit' parameter exists.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Excellent compensation for 0% schema coverage. Documents all 4 parameters: 'query' as keyword search, 'country' with explicit valid code list (FR, GB/UK, DE, etc.), 'cpv_code' as CPV filter, and 'limit' with default context.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb 'Get' and resource 'open/active procurement tenders', plus use case 'bid/no-bid analysis'. However, fails to distinguish from siblings like 'search_tenders' or 'browse_tenders_by_deadline' which may also access tender data.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides use case hint ('bid/no-bid analysis') but lacks explicit guidance on when to use this specific status-filtered endpoint versus the 15+ sibling search/browse tools available.

get_stats (Grade B)

Get statistics about the national tenders database.

Returns: Database statistics including row count, storage info, and embedding service status

Parameters (JSON Schema): none.

Output Schema (JSON Schema): no output parameters.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the return content (row count, storage info, embedding service status), which adds necessary context, but omits other behavioral traits like caching, cost, rate limits, or permissions required.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two distinct blocks: a clear purpose statement and a 'Returns' section detailing specific output components. No redundant text or filler; every sentence earns its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While adequate for a parameterless tool and acknowledging the output schema exists, the description leaves a significant gap by not clarifying the distinction between 'national tenders' statistics and the TED-specific or tender-specific statistics offered by sibling tools.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters. Per scoring rules, 0 parameters warrants a baseline score of 4.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb ('Get') and resource ('statistics about the national tenders database'), identifying the target domain. However, it fails to explicitly differentiate from siblings like 'get_ted_statistics' or 'get_tender_statistics', which appear to offer similar functionality for different scopes.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to invoke this versus the numerous sibling statistics tools (get_ted_stats, get_tender_statistics, etc.) or what prerequisites might exist. No 'when-not' or alternative recommendations are present.

get_ted_notice (Grade A)

Get a specific TED notice by its notice ID.

Args:
- notice_id: The notice UUID

Returns: Full notice content including XML or error if not found
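Since `notice_id` is documented as a UUID, a client-side check with Python's standard `uuid` module can catch malformed identifiers before a round trip; `is_valid_notice_id` is a hypothetical helper, not part of the server:

```python
import uuid

# Reject identifiers that cannot be parsed as a UUID before calling
# get_ted_notice, avoiding a guaranteed "not found" error response.
def is_valid_notice_id(notice_id: str) -> bool:
    try:
        uuid.UUID(notice_id)
        return True
    except ValueError:
        return False

print(is_valid_notice_id("123e4567-e89b-12d3-a456-426614174000"))  # True
print(is_valid_notice_id("not-a-uuid"))                            # False
```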

Parameters (JSON Schema): notice_id (required). The schema provides no property description.

Output Schema (JSON Schema): no output parameters.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses that the tool returns XML content and explicitly mentions the error condition (not found). However, it lacks information about rate limits, authentication requirements, or whether this is a safe read-only operation.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description uses a clean docstring format with explicit 'Args:' and 'Returns:' sections. Every line provides essential information about parameters, return format, or error handling with no extraneous content.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter retrieval tool with an output schema, the description is appropriately complete. It documents the parameter semantics and return format adequately, though clarifying the distinction from 'get_tender' would improve completeness given the sibling toolset.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Given 0% schema description coverage, the description compensates effectively by documenting that 'notice_id' is 'The notice UUID', providing critical format information (UUID) that the schema only types as 'string'. This prevents the agent from passing arbitrary strings.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (Get) and resource (TED notice) with the identification method (by its notice ID). However, it does not distinguish from the sibling 'get_tender' tool, leaving ambiguity about the relationship between notices and tenders in this domain.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrasing 'by its notice ID' implies this tool requires a specific identifier, suggesting use when the ID is known versus searching. However, there is no explicit guidance contrasting this with 'search_ted' or 'get_tender', nor any mention of prerequisites or when-not-to-use conditions.

get_ted_statistics (Grade B)

Get aggregated statistics about TED notices.

Args:
- group_by: Field to group by — one of: 'procurement_type', 'currency', 'nuts_code_main', 'country'
- country: Filter by country code before aggregating (optional)
- limit: Maximum number of groups to return (default: 20)

Returns: Counts grouped by the specified field
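With `group_by` restricted to four enum values and defaulting to 'procurement_type', a small client-side guard catches typos early. A sketch; `stats_args` is a hypothetical helper, not part of the server:

```python
# Enum values taken from the tool description above.
GROUP_BY_FIELDS = {"procurement_type", "currency", "nuts_code_main", "country"}

def stats_args(group_by="procurement_type", country=None, limit=20):
    if group_by not in GROUP_BY_FIELDS:
        raise ValueError(f"group_by must be one of {sorted(GROUP_BY_FIELDS)}")
    args = {"group_by": group_by, "limit": limit}
    if country is not None:
        args["country"] = country  # filter applied before aggregation
    return args

stats_args(group_by="country", country="DE", limit=10)
```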

Parameters (JSON Schema): group_by (optional; default: procurement_type), country (optional), limit (optional). The schema provides no property descriptions.

Output Schema (JSON Schema): no output parameters.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It explains the aggregation behavior (grouping and counting) but omits operational traits like whether the operation is read-only, rate limits, data freshness, or what happens when limits are exceeded.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Uses structured Args/Returns format that is scannable and efficient. Two brief prose paragraphs followed by structured documentation. Slightly unconventional for MCP descriptions but every sentence earns its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a simple aggregation tool with an output schema (which handles return value documentation), but lacks critical context regarding how it relates to sibling TED/tender statistics tools, which is essential given the crowded tool namespace.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Excellent compensation for 0% schema description coverage. The Args section documents all three parameters with clear semantics: `group_by` includes explicit enum values, `country` explains the filtering context, and `limit` clarifies it constrains groups not records.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it retrieves aggregated statistics about TED notices (specific verb+resource), but fails to distinguish from similar siblings like `get_ted_stats`, `get_stats`, and `get_tender_statistics` which appear to serve similar functions.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the numerous similar statistical tools (e.g., `get_ted_stats`, `get_tender_statistics`), nor any prerequisites or filters explained beyond the Args section.

get_ted_stats (Grade B)

Get statistics about the TED notices database.

Returns: Database statistics including row count, storage info, and embedding service status

Parameters (JSON Schema): none.

Output Schema (JSON Schema): no output parameters.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full disclosure burden. It successfully describes the return payload ('row count, storage info, and embedding service status'), but omits operational characteristics like execution cost, caching behavior, or authentication requirements.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient two-sentence structure. The first sentence states purpose; the second states return values. No filler words or redundant information.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the return value description compensates somewhat for the unseen output schema, the tool suffers from high contextual ambiguity due to the presence of similarly-named siblings (get_ted_statistics). The description fails to clarify the distinction between database statistics versus tender statistics, leaving the agent's selection criteria incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so description coverage is trivially 100%. As per guidelines, this warrants the baseline score of 4 since there are no parameter semantics to clarify beyond what the empty schema already communicates.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly specifies the verb ('Get'), resource ('statistics'), and scope ('TED notices database'). However, it does not differentiate from the nearly-identical sibling tool 'get_ted_statistics' or 'get_stats', leaving ambiguity about which statistics tool to use.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus the numerous similar alternatives (get_ted_statistics, get_stats, get_tender_statistics). No prerequisites, exclusions, or contextual triggers are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_tender (A)

Get a specific tender by its ID.

Args: tender_id: The tender identifier (e.g. 'FR-12345' or 'GB-abc-def')

Returns: Full tender record including raw JSON, or error if not found
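
The example IDs suggest a country-prefixed identifier. A minimal client-side sanity check can be sketched in Python under that assumption; the exact pattern below is inferred from the two examples in the docstring, not documented by the server:

```python
import re

# Assumed pattern: two-letter country code, a hyphen, then an
# alphanumeric/hyphenated suffix. Inferred from 'FR-12345' and
# 'GB-abc-def'; not a documented contract.
TENDER_ID = re.compile(r"^[A-Z]{2}-[A-Za-z0-9][A-Za-z0-9-]*$")

def looks_like_tender_id(value: str) -> bool:
    """Cheap local sanity check before calling get_tender."""
    return bool(TENDER_ID.fullmatch(value))

print(looks_like_tender_id("FR-12345"))    # True
print(looks_like_tender_id("GB-abc-def"))  # True
print(looks_like_tender_id("12345"))       # False
```

Such a pre-check avoids a round trip when an agent has mangled the identifier, but the server remains the source of truth for validity.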

Parameters (JSON Schema)

Name       Required  Description  Default
tender_id  Yes       -            -

Output Schema

Parameters (JSON Schema)

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the return format (Full tender record including raw JSON) and error condition (error if not found), which is valuable. However, it omits other behavioral traits like rate limits, authentication requirements, or whether the tender data is cached.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description uses a clear three-part structure (purpose, Args, Returns) that is front-loaded and efficient. The Args section directly addresses the schema coverage gap. The Returns line is slightly dense but functional.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single required parameter) and presence of an output schema, the description is sufficiently complete. It documents the undocumented parameter and mentions error handling, covering the essential gaps without needing to describe output fields.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing semantic meaning ('The tender identifier') and concrete format examples ('FR-12345' or 'GB-abc-def') that reveal the expected ID pattern, adding clarity beyond the bare 'string' type in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (Get) and resource (tender) with specific scope (by its ID). While it implies singular retrieval versus the sibling search/browse tools, it does not explicitly distinguish this from 'find_similar_tenders' or contrast with search workflows.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'by its ID' implies prerequisite knowledge of the identifier, suggesting use for direct lookup rather than discovery. However, it lacks explicit guidance such as 'Use this when you have a specific ID from a previous search; use search_tenders for discovery' or warnings about invalid ID formats.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_tender_statistics (A)

Get aggregated statistics about national tenders.

Args:
  group_by: Field to group by — one of: 'country', 'status', 'source', 'currency', 'nuts_code_main'
  country: Filter by country code before aggregating (optional)
  status: Filter by status before aggregating (optional)
  limit: Maximum number of groups to return (default: 20)

Returns: Counts grouped by the specified field
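
The filter-then-aggregate behavior the docstring describes can be sketched with a Counter; the tender records and field names below are hypothetical stand-ins for whatever the server stores:

```python
from collections import Counter

def aggregate(tenders, group_by="country", country=None, status=None, limit=20):
    """Filter first, then count tenders per value of the group_by field.
    Sketch only; the server's actual storage and ordering may differ."""
    rows = [
        t for t in tenders
        if (country is None or t.get("country") == country)
        and (status is None or t.get("status") == status)
    ]
    counts = Counter(r.get(group_by) for r in rows)
    return counts.most_common(limit)

tenders = [
    {"country": "FR", "status": "open"},
    {"country": "FR", "status": "closed"},
    {"country": "DE", "status": "open"},
]
print(aggregate(tenders))                     # [('FR', 2), ('DE', 1)]
print(aggregate(tenders, group_by="status"))  # [('open', 2), ('closed', 1)]
```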

Parameters (JSON Schema)

Name      Required  Description  Default
limit     No        -            -
status    No        -            -
country   No        -            -
group_by  No        -            country

Output Schema

Parameters (JSON Schema)

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Explains aggregation behavior and return format ('Counts grouped by specified field'), but omits safety confirmation (read-only), performance characteristics, or error behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with Args/Returns sections that front-load critical information. No wasted words, though slightly more verbose than minimal sentence form due to structured headers.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive coverage of inputs (via Args) and outputs (via Returns) given zero schema descriptions. Adequate for an aggregation read-tool, though explicit differentiation from sibling 'get_stats' would further improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Excellent compensation for 0% schema description coverage. Documents all 4 parameters in Args section, including critical enum values for 'group_by' and default for 'limit' that are absent from the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb 'Get aggregated statistics' and resource 'national tenders'. Effectively distinguishes from sibling TED tools (e.g., get_ted_statistics) by scoping to 'national' domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides domain scoping ('national tenders') which implicitly distinguishes from TED/EU tenders in siblings, but lacks explicit guidance on when to use this vs get_stats or get_ted_statistics, and no exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_by_buyer (A)

Search tenders by contracting authority / buyer name.

Uses full-text search on the buyer_name field.

Args:
  buyer_name: Name (or partial name) of the buyer / contracting authority
  country: Filter by country code (optional)
  status: Filter by status (optional)
  limit: Maximum number of results (default: 10)

Returns: Tenders matching the buyer name
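
"Full-text search on the buyer_name field" implies token-level matching rather than exact equality. A rough stand-in is sketched below; the real server presumably uses a proper FTS index (e.g., BM25 with stemming), so this naive token match is illustrative only:

```python
def buyer_matches(buyer_name: str, query: str) -> bool:
    """Naive token match: every query token must appear as a whole
    word in the buyer name. A real FTS engine would also handle
    stemming, ranking, and accents."""
    haystack = buyer_name.lower().split()
    return all(tok.lower() in haystack for tok in query.split())

print(buyer_matches("Ville de Paris", "paris"))       # True
print(buyer_matches("Ville de Paris", "paris lyon"))  # False
```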

Parameters (JSON Schema)

Name        Required  Description  Default
limit       No        -            -
status      No        -            -
country     No        -            -
buyer_name  Yes       -            -

Output Schema

Parameters (JSON Schema)

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully discloses that the search uses full-text matching and accepts 'partial name' inputs, which is valuable behavioral context. However, it lacks information on result ordering, case sensitivity, rate limits, or cache behavior that would help an agent predict performance.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description uses a clear structure with Args and Returns sections that front-load the critical information. It is appropriately sized with minimal waste, though the Returns clause ('Tenders matching the buyer name') is somewhat redundant given the output schema exists and the first sentence already states the purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description documents all parameters and leverages the existing output schema (so return values need not be detailed). However, given the presence of nearly 20 sibling search tools with overlapping concerns (TED vs general tenders), the description should clarify the specific dataset scope and search algorithm characteristics, which it does not.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage, but the description fully compensates by documenting all four parameters in the Args section: buyer_name (accepts partial names), country ('country code'), status (filter), and limit (default 10). This provides rich semantic meaning beyond the raw types in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Search tenders by contracting authority / buyer name' with a specific verb and resource. It also clarifies the search mechanism uses 'full-text search on the buyer_name field'. However, it fails to distinguish from the sibling tool 'search_ted_by_buyer', leaving ambiguity about whether this searches TED-specific data or general tender databases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'search_ted_by_buyer' or the general 'search_tenders'. There are no explicit when-to-use conditions, prerequisites, or exclusion criteria provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_by_cpv (B)

Search tenders by CPV (Common Procurement Vocabulary) code.

Supports exact codes (e.g., '45000000'). Optionally combine with a text query for hybrid search.

Args:
  cpv_code: CPV code (e.g., '45000000')
  query: Optional text query to combine with CPV filter (enables hybrid search)
  country: Filter by country code (optional)
  status: Filter by status (optional)
  limit: Maximum number of results (default: 10)

Returns: Tenders matching the CPV code

Parameters (JSON Schema)

Name      Required  Description  Default
limit     No        -            -
query     No        -            -
status    No        -            -
country   No        -            -
cpv_code  Yes       -            -

Output Schema

Parameters (JSON Schema)

No output parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. Mentions exact code matching and hybrid search mode, but omits: data source scope (TED? National?), authorization requirements, rate limits, error behaviors, and whether results are paginated. With zero annotation coverage, this is insufficient behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Uses structured Args/Returns sections which is readable, but somewhat verbose given the information density. The Args section is necessary due to empty schema, though the 'Returns' line adds minimal value given the output schema exists. Could be more front-loaded with critical distinctions first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately documents parameters given zero schema coverage. With an output schema present, a minimal return description is acceptable. However, it lacks domain context (what the CPV classification system is) and the relationship to TED-specific variants in the sibling list. Sufficient for basic invocation but incomplete for complex selection scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, description compensates effectively via Args section documenting all 5 parameters. Provides example values ('45000000'), explains hybrid search semantics for 'query', and notes defaults for 'limit'. Meets baseline requirements for undocumented schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb 'Search' with resource 'tenders' and key identifier 'CPV code'. Expands acronym 'Common Procurement Vocabulary' and mentions hybrid search capability. However, fails to distinguish from sibling tool 'search_ted_by_cpv' or other search variants.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage via 'Supports exact codes' and 'Optionally combine with a text query', but offers no explicit guidance on when to use this versus 'search_ted_by_cpv', 'search_tenders', or other sibling search tools. No prerequisites or exclusion criteria stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_by_nuts (A)

Search tenders by NUTS (Nomenclature of Territorial Units for Statistics) code.

Supports prefix matching: 'FR' matches all of France, 'FR10' matches Ile-de-France, etc.

Args:
  nuts_code: NUTS code prefix (e.g., 'FR', 'FR10', 'DE300')
  query: Optional text query to combine with NUTS filter (enables hybrid search)
  country: Filter by country code (optional)
  status: Filter by status (optional)
  limit: Maximum number of results (default: 10)

Returns: Tenders matching the NUTS code
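
Because NUTS codes are hierarchical (each extra character narrows the region), the prefix matching the docstring describes reduces to a plain string-prefix test. A sketch, with hypothetical code values:

```python
def matches_nuts(tender_nuts: str, prefix: str) -> bool:
    # 'FR' matches every French region; 'FR10' narrows to Ile-de-France.
    return tender_nuts.startswith(prefix)

codes = ["FR101", "FR102", "FR21", "DE300"]
print([c for c in codes if matches_nuts(c, "FR")])    # ['FR101', 'FR102', 'FR21']
print([c for c in codes if matches_nuts(c, "FR10")])  # ['FR101', 'FR102']
```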

Parameters (JSON Schema)

Name       Required  Description  Default
limit      No        -            -
query      No        -            -
status     No        -            -
country    No        -            -
nuts_code  Yes       -            -

Output Schema

Parameters (JSON Schema)

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure, partially succeeding by explaining the prefix matching logic and 'hybrid search' capability when combining NUTS codes with text queries. However, it omits operational details such as rate limits, pagination behavior, authorization requirements, or whether results are cached.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description uses a clear structured format with Args and Returns sections, and the critical prefix matching explanation is efficiently front-loaded. Minor redundancy exists in the Returns section and repeated 'optional' labels, but these are necessary and justified given the schema's lack of descriptions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description adequately covers parameter semantics given the schema limitations, but lacks critical context regarding the distinction between general tenders and TED tenders (highly relevant given the `search_ted_by_nuts` sibling). Since an output schema exists, the minimal return value description ('Tenders matching the NUTS code') is acceptable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Given the schema has 0% description coverage, the Args section comprehensively documents all five parameters including the required `nuts_code` with concrete examples ('FR', 'FR10', 'DE300'), explains optionality for filters, and specifies the default value for `limit`. This fully compensates for the complete lack of schema property descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Search tenders by NUTS (Nomenclature of Territorial Units for Statistics) code' with specific verb and resource identification. It effectively explains the prefix matching mechanism (e.g., 'FR' matches all of France). However, it fails to distinguish this tool from the sibling `search_ted_by_nuts`, which appears to offer similar functionality for TED-specific tenders.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description explains prefix matching behavior, it provides no explicit guidance on when to use this tool versus alternatives like `search_ted_by_nuts` or other search methods in the sibling list. There are no exclusions, prerequisites, or conditions specified for proper usage selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_by_value_range (B)

Search tenders by estimated contract value range.

Args:
  min_value: Minimum estimated value (optional)
  max_value: Maximum estimated value (optional)
  currency: Currency code (default: 'EUR')
  query: Optional text query to combine with value filter (enables hybrid search)
  country: Filter by country code (optional)
  status: Filter by status (optional)
  limit: Maximum number of results (default: 10)

Returns: Tenders within the specified value range
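
The description does not state whether bounds are inclusive. The sketch below assumes inclusive bounds and hypothetical record fields; the server may treat bounds, missing values, or currency mismatches differently:

```python
def in_range(tender, min_value=None, max_value=None, currency="EUR"):
    """Inclusive-bounds filter on a tender's estimated value.
    Field names ('estimated_value', 'currency') are assumed."""
    if tender.get("currency") != currency:
        return False
    v = tender.get("estimated_value")
    if v is None:
        return False
    if min_value is not None and v < min_value:
        return False
    if max_value is not None and v > max_value:
        return False
    return True

t = {"estimated_value": 250_000, "currency": "EUR"}
print(in_range(t, min_value=100_000, max_value=500_000))  # True
print(in_range(t, min_value=300_000))                     # False
```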

Parameters (JSON Schema)

Name       Required  Description  Default
limit      No        -            -
query      No        -            -
status     No        -            -
country    No        -            -
currency   No        -            EUR
max_value  No        -            -
min_value  No        -            -

Output Schema

Parameters (JSON Schema)

No output parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden but offers minimal behavioral context beyond the return type. It omits range boundary semantics (inclusive/exclusive), sorting order, pagination behavior, and whether searches are exact or approximate matches.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Uses a clear structured format with Args and Returns sections. Each parameter gets a concise, single-line description. There is no redundant prose; though the Args block bulks up the description, it is necessary given the lack of schema descriptions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for basic invocation with all parameters documented, but incomplete regarding domain constraints (allowed currency codes, status values) and sibling differentiation. Output schema existence excuses the brief Returns section.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Excellent compensation for 0% schema coverage: the Args section documents all 7 parameters with meaningful descriptions, including domain-specific context like 'enables hybrid search' for the query parameter and default values. Minor gap: lacks format specifications for country codes or status values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The opening line 'Search tenders by estimated contract value range' provides a specific verb, resource, and scope. However, it fails to distinguish this tool from the sibling 'search_ted_by_value_range', leaving ambiguity about which tender dataset this queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the numerous sibling search tools (e.g., search_ted_by_value_range, search_tenders, search_by_buyer) or when value-range filtering is preferable to other criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_ted (A)

Search TED procurement notices using hybrid, full-text, or vector search.

Args:
  query: The search query string
  mode: Search mode - 'hybrid' (default), 'fts' (full-text/BM25), or 'vector' (semantic)
  limit: Maximum number of results (default: 10)
  fts_weight: Weight for full-text search in hybrid mode (0-1)
  vector_weight: Weight for vector search in hybrid mode (0-1)
  procurement_type: Filter by procurement type (e.g., 'open', 'restricted')
  country: Filter by country code (e.g., 'POL', 'FRA')
  cpv_code: Filter by CPV code (exact match in cpv_codes array)
  nuts_code: Filter by NUTS code prefix (e.g., 'FR', 'FR10', 'FR101')
  min_value: Minimum estimated value
  max_value: Maximum estimated value
  currency: Filter by currency code (e.g., 'EUR', 'GBP')
  published_after: Filter notices published after this date (ISO format, e.g., '2024-01-01')
  published_before: Filter notices published before this date (ISO format)
  deadline_after: Filter by submission deadline after this date (ISO format)
  deadline_before: Filter by submission deadline before this date (ISO format)

Returns: Search results with notice metadata and relevance scores
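
How fts_weight and vector_weight combine is not specified. A common convention for hybrid retrieval is a weighted sum of normalized per-mode scores, sketched below as an assumption rather than the server's actual fusion method:

```python
def hybrid_score(fts_score, vector_score, fts_weight=0.5, vector_weight=0.5):
    """Weighted linear fusion of per-mode relevance scores, both
    assumed normalized to [0, 1]. The server's real formula (linear
    fusion, reciprocal rank fusion, etc.) is not documented."""
    return fts_weight * fts_score + vector_weight * vector_score

# Leaning on semantic similarity over keyword overlap:
print(hybrid_score(0.2, 0.9, fts_weight=0.3, vector_weight=0.7))
```

If the two weights are meant to trade off against each other, keeping them summing to 1 keeps scores comparable across calls.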

Parameters (JSON Schema)

Name              Required  Description  Default
mode              No        -            hybrid
limit             No        -            -
query             Yes       -            -
country           No        -            -
cpv_code          No        -            -
currency          No        -            -
max_value         No        -            -
min_value         No        -            -
nuts_code         No        -            -
fts_weight        No        -            -
vector_weight     No        -            -
deadline_after    No        -            -
deadline_before   No        -            -
published_after   No        -            -
procurement_type  No        -            -
published_before  No        -            -

Output Schema

Parameters (JSON Schema)

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adequately explains the hybrid scoring mechanism (fts_weight/vector_weight) and mentions relevance scores in returns, but lacks critical operational context such as rate limits, authentication requirements, or read-only status.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The docstring-style format (Args/Returns) is well-structured and front-loaded with the core purpose. While lengthy due to documenting 16 parameters, every line earns its place given the complete lack of schema field descriptions. No redundant or wasted sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high parameter count (16), complex filtering capabilities (CPV/NUTS codes, date ranges, value ranges), and 0% schema coverage, the description provides sufficient coverage of all parameters with appropriate examples. The mention of 'relevance scores' in returns is sufficient given that an output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the Args section comprehensively documents all 16 parameters with semantic meaning, examples (e.g., 'POL', 'FRA' for country; 'FR10' for NUTS), and constraints (0-1 for weights, 'exact match' vs 'prefix' logic). This fully compensates for the schema's lack of descriptions and significantly aids correct invocation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches TED procurement notices and specifies the three available search modes (hybrid, full-text, vector). However, it does not explicitly distinguish when to use this general search tool versus specialized siblings like 'search_ted_by_cpv' or 'search_ted_fts', which could cause confusion given the overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no explicit guidance on when to use this tool versus the numerous specialized search siblings (e.g., search_ted_by_buyer, search_ted_fts). With 16 available search-related tools, the description fails to provide 'when-to-use' or 'when-not-to-use' criteria, leaving the agent to guess which search tool is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_ted_by_buyer (B)

Search TED notices by contracting authority / buyer name.

Uses full-text search on the contracting_party_name field.

Args:
  buyer_name: Name (or partial name) of the buyer / contracting authority
  country: Filter by country code (optional)
  limit: Maximum number of results (default: 10)

Returns: Notices matching the buyer name

Parameters (JSON Schema)

Name        Required  Description  Default
limit       No        -            -
country     No        -            -
buyer_name  Yes       -            -

Output Schema

Parameters (JSON Schema)

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses useful behavioral details: it uses 'full-text search' on the specific field 'contracting_party_name'. However, with no annotations provided, the description misses opportunity to clarify rate limits, authentication requirements, or partial matching behavior (only noting that partial names are accepted).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Uses structured Args/Returns format that efficiently organizes information. The content is front-loaded with the core purpose, and parameter descriptions are concise. The Returns section is minimal but acceptable given the output schema exists.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for basic invocation given the parameter documentation and existing output schema. However, given the high number of similar search siblings (search_by_buyer, search_ted, etc.), the description fails to provide necessary selection context to help the agent choose the correct tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description successfully compensates by documenting all three parameters in the Args section: buyer_name (noting partial names accepted), country (noting optional), and limit (noting default). This prevents the parameters from being undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the action (search), resource (TED notices), and specific field (contracting authority/buyer name). However, it does not explicitly distinguish from sibling tool 'search_by_buyer', leaving potential ambiguity about which to use.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like 'search_by_buyer' or 'search_ted_fts'. No prerequisites or exclusion criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_ted_by_cpv (Grade: A)

Search TED notices by CPV (Common Procurement Vocabulary) code.

Supports both exact codes (e.g., '45000000') and prefix matching (e.g., '45'). Optionally combine with a text query for hybrid search.

Args:
- cpv_code: CPV code — exact (e.g., '45000000') or prefix (e.g., '45')
- query: Optional text query to combine with CPV filter (enables hybrid search)
- country: Filter by country code (optional)
- limit: Maximum number of results (default: 10)

Returns: Notices matching the CPV code
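The exact-versus-prefix behavior described above can be sketched as a small local check. This is an illustrative assumption inferred from the docstring (full CPV codes are 8 digits), not the server's actual implementation:

```python
# Hypothetical sketch of the documented CPV matching semantics:
# a full 8-digit code matches exactly, a shorter string acts as a prefix.
def cpv_matches(notice_cpv: str, cpv_code: str) -> bool:
    if len(cpv_code) == 8:                      # full CPV code: exact match
        return notice_cpv == cpv_code
    return notice_cpv.startswith(cpv_code)      # shorter input: prefix match

print(cpv_matches("45000000", "45000000"))  # True  (exact)
print(cpv_matches("45233140", "45"))        # True  (prefix: all of division 45)
print(cpv_matches("50000000", "45"))        # False
```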

Parameters (JSON Schema):
- limit (optional)
- query (optional)
- country (optional)
- cpv_code (required)

Output Schema: no output parameters.

Behavior: 3/5

With no annotations provided, the description carries the full disclosure burden. It adds valuable behavioral context about prefix vs exact code matching and hybrid search capability. However, it omits auth requirements, rate limits, and safety characteristics that annotations would typically cover.

Conciseness: 4/5

Uses a standard Args/Returns structure that is appropriately front-loaded. While the Args block is lengthy, it is necessary given the 0% schema coverage. The Returns section is concise, which is appropriate since an output schema exists.

Completeness: 4/5

Given 4 parameters with zero schema coverage, the description adequately documents all inputs and acknowledges output existence. However, with numerous sibling search tools available, completeness would benefit from explicit differentiation criteria to aid tool selection.

Parameters: 5/5

Schema description coverage is 0%, requiring the description to compensate fully. The Args section provides detailed semantics for all four parameters: cpv_code (with exact/prefix examples), query (hybrid search context), country (filter semantics), and limit (default value), effectively documenting what the schema lacks.

Purpose: 5/5

The description opens with 'Search TED notices by CPV (Common Procurement Vocabulary) code', providing a specific verb (Search), clear resource (TED notices), and distinct method (CPV code) that differentiates it from siblings like search_ted_by_buyer or find_similar_ted_tenders.

Usage Guidelines: 3/5

It provides usage context for the query parameter ('Optionally combine with a text query for hybrid search'), but lacks explicit guidance on when to use this tool versus siblings like search_by_cpv or search_ted_fts, or when to prefer exact vs prefix matching.

search_ted_by_nuts (Grade: A)

Search TED notices by NUTS (Nomenclature of Territorial Units for Statistics) code.

Supports prefix matching: 'FR' matches all of France, 'FR10' matches Ile-de-France, etc.

Args:
- nuts_code: NUTS code prefix (e.g., 'FR', 'FR10', 'DE300')
- query: Optional text query to combine with NUTS filter (enables hybrid search)
- country: Filter by country code (optional)
- limit: Maximum number of results (default: 10)

Returns: Notices matching the NUTS code
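As a hypothetical illustration, a client might assemble argument payloads like these (parameter names come from the Args section above; how the call is transported over MCP is not shown):

```python
# Hypothetical argument payloads for search_ted_by_nuts, built from the
# documented Args. The prefix rule means 'FR' covers every French code,
# while 'FR10' narrows the same search to Ile-de-France.
whole_country = {"nuts_code": "FR", "limit": 10}
one_region = {"nuts_code": "FR10", "query": "hospital construction",
              "limit": 10}  # NUTS filter plus text query: hybrid search

# Every notice matched by the narrower prefix is also matched by the wider one:
print(one_region["nuts_code"].startswith(whole_country["nuts_code"]))  # True
```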

Parameters (JSON Schema):
- limit (optional)
- query (optional)
- country (optional)
- nuts_code (required)

Output Schema: no output parameters.

Behavior: 3/5

With no annotations provided, the description carries the full burden. It successfully discloses the prefix matching behavior and hybrid search functionality, but omits other behavioral traits like rate limits, pagination behavior, result sorting, or maximum allowed limits. It implies read-only safety through 'Search' but does not explicitly confirm this.

Conciseness: 5/5

The description uses a clean docstring structure with clear Args and Returns sections. Every sentence serves a purpose: the opening establishes scope, the second explains critical prefix matching behavior, and the Args section provides essential parameter semantics without redundancy. The length is appropriate for the complexity.

Completeness: 4/5

For a tool with four parameters and an existing output schema, the description is adequately complete. It leverages the presence of the output schema (noted in context signals) to keep the Returns section minimal while focusing descriptive effort on the input parameters where the schema lacks descriptions. It successfully explains the domain-specific NUTS concept for users unfamiliar with the acronym.

Parameters: 4/5

Given 0% schema description coverage, the description effectively compensates by documenting all four parameters in the Args section. It provides helpful examples for nuts_code ('FR', 'FR10'), explains the hybrid search purpose of query, and notes the default for limit. It could be improved by specifying the country code format (ISO 3166-1 alpha-2) and query syntax rules.

Purpose: 5/5

The description clearly states the specific action ('Search'), resource ('TED notices'), and filtering mechanism ('NUTS code'). It distinguishes itself from siblings like 'search_by_nuts' by explicitly targeting 'TED notices' and from other 'search_ted_by_*' tools via the specific NUTS geographic focus.

Usage Guidelines: 3/5

The description explains the prefix matching behavior ('FR' vs 'FR10') which helps users understand how to construct queries, and mentions hybrid search capability with the query parameter. However, it lacks explicit guidance on when to use this versus siblings like 'search_by_nuts' or 'search_ted_fts', or when to combine with the 'country' parameter versus using NUTS codes alone.

search_ted_by_value_range (Grade: B)

Search TED notices by estimated contract value range.

Args:
- min_value: Minimum estimated value (optional)
- max_value: Maximum estimated value (optional)
- currency: Currency code (default: 'EUR')
- query: Optional text query to combine with value filter (enables hybrid search)
- country: Filter by country code (optional)
- limit: Maximum number of results (default: 10)

Returns: Notices within the specified value range
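The review below flags that the description never says whether the value bounds are inclusive. A defensive client could post-filter results locally; this sketch assumes inclusive bounds, which is an assumption, not documented server behavior:

```python
# Hypothetical local post-filter mirroring min_value/max_value semantics.
# Inclusive bounds are assumed here; the tool description does not say.
def in_value_range(value: float, min_value=None, max_value=None) -> bool:
    if min_value is not None and value < min_value:
        return False
    if max_value is not None and value > max_value:
        return False
    return True

print(in_value_range(250_000, min_value=100_000, max_value=500_000))  # True
print(in_value_range(100_000, min_value=100_000))  # True under the inclusive assumption
print(in_value_range(750_000, max_value=500_000))  # False
```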

Parameters (JSON Schema):
- limit (optional)
- query (optional)
- country (optional)
- currency (optional, default: EUR)
- max_value (optional)
- min_value (optional)

Output Schema: no output parameters.

Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'hybrid search' for the query parameter (useful), but fails to disclose authentication requirements, rate limits, pagination behavior beyond the limit parameter, whether value bounds are inclusive/exclusive, or the structure/format of returned notices. The Returns section is tautological ('Notices within...').

Conciseness: 4/5

The docstring format with Args/Returns headers is structured and scannable. Each sentence earns its place by conveying param meanings or the core function. It avoids narrative fluff while remaining readable. The Returns section is minimal but not wasteful.

Completeness: 3/5

The description adequately covers the parameter semantics given the schema's lack of descriptions, but leaves significant gaps regarding the TED domain (assuming the agent knows what TED is), the output schema details (despite the flag indicating one exists), and crucially, its relationship to the numerous sibling search tools. For a tool with 6 optional parameters in a complex search ecosystem, it needs more contextual framing.

Parameters: 4/5

Given 0% schema description coverage, the description successfully compensates by documenting all 6 parameters with meaningful semantics (e.g., 'currency code' clarifies the string type, 'hybrid search' explains the query interaction with value filters). It loses a point for omitting format constraints (e.g., ISO 4217 for currency, ISO 3166-1 alpha-2 for country codes) and value constraints (e.g., whether min_value must be less than max_value).

Purpose: 4/5

The opening sentence provides a specific verb (Search), resource (TED notices), and filter mechanism (estimated contract value range). However, it loses a point because sibling tools like 'search_by_value_range' (without the _ted prefix) and 'search_ted' create ambiguity about when to use this specific variant versus the generic search or other TED-specific filters.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus the alternatives (e.g., 'search_ted' for general queries, 'search_ted_by_buyer' for buyer filtering, or 'search_by_value_range' which appears to be a sibling without the TED designation). There are no prerequisites, constraints, or 'when not to use' conditions specified despite the crowded tool namespace.

search_ted_fts (Grade: A)

Search TED notices using full-text search (BM25 via Tantivy).

Best for keyword matching and exact phrase searches.

Args:
- query: The search query string
- limit: Maximum number of results (default: 10)
- procurement_type: Filter by procurement type (optional)

Returns: Search results ranked by BM25 text relevance

Parameters (JSON Schema):
- limit (optional)
- query (required)
- procurement_type (optional)

Output Schema: no output parameters.

Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses the BM25 ranking algorithm and text relevance sorting, but omits scope (date range of notices), performance characteristics, rate limits, or permissions required.

Conciseness: 5/5

Clean docstring format with clear sections (description, usage hint, Args, Returns). No extraneous text; every line provides actionable information. Appropriate front-loading of purpose before parameters.

Completeness: 4/5

Comprehensive for a search tool: covers inputs, usage context, and return behavior (citing BM25 ranking). Output schema exists, so detailed return structure documentation isn't required. Minor gap: could clarify procurement_type valid values.

Parameters: 5/5

Critical compensation: schema has 0% description coverage, yet the Args section documents all 3 parameters (query, limit, procurement_type) with types, defaults, and optionality, fully explaining their semantic purpose.

Purpose: 5/5

Excellent specificity: states exact action ('Search TED notices'), mechanism ('full-text search (BM25 via Tantivy)'), and distinguishes from sibling semantic/search tools via 'keyword matching and exact phrase searches'.

Usage Guidelines: 4/5

Provides positive guidance ('Best for keyword matching and exact phrase searches'), implying appropriate use cases. Lacks explicit negative guidance or named sibling alternatives (e.g., when to use search_ted_semantic instead).

search_ted_semantic (Grade: A)

Search TED notices using semantic vector search.

Best for conceptual queries and finding related tenders.

Args:
- query: The search query (natural language)
- limit: Maximum number of results (default: 10)
- country: Filter by country code (optional)

Returns: Search results ranked by semantic similarity
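The guidance hints in these two tools ('Best for keyword matching and exact phrase searches' vs. 'Best for conceptual queries') can be turned into a routing heuristic. The rules and thresholds below are illustrative assumptions, not anything the server defines:

```python
# Hypothetical routing heuristic between the two TED search modes the
# reviews compare: phrase/keyword queries go to BM25, longer
# natural-language queries go to semantic vector search.
def pick_search_tool(query: str) -> str:
    q = query.strip()
    if q.startswith('"') and q.endswith('"'):
        return "search_ted_fts"       # exact phrase: BM25 keyword search
    if len(q.split()) <= 2:
        return "search_ted_fts"       # short keyword query
    return "search_ted_semantic"      # conceptual, natural-language query

print(pick_search_tool('"framework agreement"'))  # search_ted_fts
print(pick_search_tool("IT services"))            # search_ted_fts
print(pick_search_tool("tenders related to renewable energy grid upgrades"))
# search_ted_semantic
```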

Parameters (JSON Schema):
- limit (optional)
- query (required)
- country (optional)

Output Schema: no output parameters.

Behavior: 3/5

Effectively discloses the ranking behavior ('ranked by semantic similarity') and search methodology (semantic vector), which is crucial given no annotations exist. However, it omits other expected behavioral details for a search API: rate limiting, timeout behavior, or data coverage scope.

Conciseness: 5/5

Uses clean Args/Returns structure with zero waste. Front-loaded purpose statement followed by specific parameter documentation and return value description. All 5 lines earn their place.

Completeness: 4/5

Appropriately complete given the output schema exists (description need not detail return structure). With no annotations and 0% schema coverage, the description successfully covers parameter semantics and basic behavioral context, though could disclose auth requirements or performance characteristics.

Parameters: 4/5

Excellent compensation for 0% schema description coverage. Documents all 3 parameters, notably specifying that 'query' expects 'natural language' (critical for semantic search distinction) and that 'country' expects a country code. Minor gap: does not specify the country code format (e.g., ISO 3166-1 alpha-2).

Purpose: 4/5

The description clearly identifies the action (search), resource (TED notices), and specific method (semantic vector search). The phrase 'Best for conceptual queries' effectively distinguishes this from sibling keyword search tools (e.g., search_ted_fts), though it does not explicitly name those alternatives.

Usage Guidelines: 3/5

Provides implicit guidance through 'Best for conceptual queries and finding related tenders,' indicating appropriate use cases. However, it lacks explicit when-not-to-use guidance or named alternatives from the extensive sibling tool list (15+ search/browse tools).

search_tenders (Grade: A)

Search below-threshold procurement tenders across FR, GB, DE, ES, IT, NL, IE, PT, DK, PL, and AT.

Uses hybrid search combining full-text (BM25) and vector (semantic) search with RRF.

Args:
- query: Search query string
- mode: Search mode — 'hybrid' (default), 'fts' (full-text/BM25), or 'vector' (semantic)
- limit: Maximum results (default: 10)
- fts_weight: Weight for full-text search in hybrid mode (0-1)
- vector_weight: Weight for vector search in hybrid mode (0-1)
- country: Filter by country — 'FR', 'GB'/'UK', 'DE', 'ES', 'IT', 'NL', 'IE', 'PT', 'DK', 'PL', 'AT'
- status: Filter by status — 'active', 'awarded', 'cancelled'
- cpv_code: Filter by CPV code (exact match in cpv_codes array)
- nuts_code: Filter by NUTS code prefix (e.g., 'FR', 'FR10', 'DE300')
- source: Filter by data source — 'BOAMP', 'Contracts Finder', 'oeffentlichevergabe.de', 'PLACSP', 'ANAC', 'TenderNed', 'eTenders', 'BASE', 'udbud.dk', 'ezamowienia.gov.pl', 'ausschreibungen.usp.gv.at'
- min_value: Minimum estimated value
- max_value: Maximum estimated value
- currency: Filter by currency code (e.g., 'EUR', 'GBP')
- published_after: Filter by publication date after this date (ISO format, e.g., '2025-01-01')
- published_before: Filter by publication date before this date (ISO format)
- deadline_after: Filter by submission deadline after this date (ISO format)
- deadline_before: Filter by submission deadline before this date (ISO format)

Returns: Search results with tender metadata and relevance scores
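The description names Reciprocal Rank Fusion (RRF) as the method for combining the BM25 and vector rankings. A minimal textbook sketch of RRF follows; k=60 is the conventional constant from the literature, and the server's actual constant and any weighting by fts_weight/vector_weight are unknown:

```python
# Minimal sketch of Reciprocal Rank Fusion: each document scores the sum
# of 1/(k + rank) over every ranking it appears in; higher is better.
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=lambda d: scores[d], reverse=True)

fts_hits = ["t1", "t2", "t3"]      # BM25 order
vector_hits = ["t2", "t3", "t4"]   # semantic order
print(rrf_fuse([fts_hits, vector_hits]))  # ['t2', 't3', 't1', 't4']
```

Documents ranked highly by both lists (t2, t3) float above documents that appear in only one, which is the point of fusing the two search modes.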

Parameters (JSON Schema):
- mode (optional, default: hybrid)
- limit (optional)
- query (required)
- source (optional)
- status (optional)
- country (optional)
- cpv_code (optional)
- currency (optional)
- max_value (optional)
- min_value (optional)
- nuts_code (optional)
- fts_weight (optional)
- vector_weight (optional)
- deadline_after (optional)
- deadline_before (optional)
- published_after (optional)
- published_before (optional)

Output Schema: no output parameters.

Behavior: 4/5

With no annotations provided, the description carries the full disclosure burden and explains the search algorithm in detail ('hybrid search combining full-text (BM25) and vector (semantic) search with RRF'). However, it omits operational details like rate limits, pagination behavior, or confirming the read-only nature of the operation.

Conciseness: 4/5

The description is lengthy due to the inline parameter documentation necessitated by zero schema coverage, but it is well-structured and front-loaded: purpose first, algorithm second, then detailed arguments and returns. Every sentence earns its place given the schema gaps.

Completeness: 4/5

The description successfully documents all 17 parameters comprehensively. The returns section is minimal ('Search results with tender metadata') but acceptable since an output schema exists. It lacks only details on pagination mechanics or result scoring interpretation to be fully complete.

Parameters: 5/5

Given 0% schema description coverage, the description fully compensates with an 'Args:' section documenting all 17 parameters. It provides rich semantics including valid country codes ('FR', 'GB'/'UK'), date formats ('ISO format'), weight ranges ('0-1'), and exact data sources, effectively serving as the schema documentation.

Purpose: 5/5

The description opens with a precise scope: 'Search below-threshold procurement tenders across [11 specific countries]'. It identifies the exact resource type, uses a specific verb, and distinguishes itself from sibling TED tools (search_ted, etc.) by explicitly stating 'below-threshold' (national tenders) versus the EU-wide TED database.

Usage Guidelines: 3/5

The description implies the scope (below-threshold vs TED) but provides no explicit guidance on when to choose this over siblings like browse_by_deadline, search_by_cpv, or find_similar_tenders. It does not state prerequisites, exclusions, or selection criteria.
