Glama
Ownership verified

Server Details

Access to datasets from ADEME (the French ecological transition agency): data on energy, environment, waste, transport, and housing

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

7 tools
aggregate_data: Aggregate data from a dataset (Grade: A)
Read-only

Aggregate dataset rows by 1-3 columns with optional metrics (sum, avg, min, max, count). Defaults to counting rows per group. Use for grouped counts or grouped metrics (e.g., average salary per city). For a single global metric without grouping, use calculate_metric instead.

Parameters (JSON Schema)
bbox (optional): Geographic bounding box filter (only for geolocalized datasets). Format: "lonMin,latMin,lonMax,latMax". Example: "-2.5,43,3,47".
sort (optional): Sort order for aggregation results. Use special keys: "count" or "-count" (by row count asc/desc), "key" or "-key" (by column value asc/desc), "metric" or "-metric" (by metric value asc/desc). Default: sorts by metric desc (if metric specified), then count desc. Example: "-count" to sort by most frequent values first.
metric (optional): Optional metric to compute ON EACH GROUP. If not provided, defaults to counting rows per group.
filters (optional): Column filters as key-value pairs. Key format: column_key + suffix (see server instructions for available suffixes). All values must be strings, even for numbers/dates. If a column key has underscores (e.g., code_postal), just append the suffix: code_postal_eq. Example: { "nom_search": "Jean", "age_lte": "30", "ville_eq": "Paris" }
datasetId (required): The exact dataset ID from the "id" field in search_datasets results. Do not use the title or slug.
dateMatch (optional): Temporal filter (only for temporal datasets with date fields). Accepts a single date "YYYY-MM-DD" to match that day, or a date range "YYYY-MM-DD,YYYY-MM-DD" to match an overlapping period. ISO datetimes also accepted. Example: "2023-11-21" or "2023-01-01,2023-12-31".
geoDistance (optional): Geographic proximity filter (only for geolocalized datasets). Restricts results to within a distance from a point. Format: "lon,lat,distance". Example: "2.35,48.85,10km". Use distance "0" for point-in-polygon containment.
groupByColumns (required): Columns to GROUP BY (like SQL GROUP BY). These define the categories/buckets, NOT the column to compute metrics on. Use column keys from describe_dataset (min 1, max 3).

Output Schema

Parameters (JSON Schema)
total (required): The total number of rows in the dataset
datasetId (required): The dataset ID that was aggregated
requestUrl (required): Direct URL to API results in JSON format (must be included in responses for citation and direct access to aggregated view)
aggregations (required): Array of aggregation results for each specified column (limited to 20 rows)
nonRepresented (required): Number of rows NOT included in the returned aggregations (only the top 20 groups are returned). Add this to the sum of all group totals to reconstruct the dataset total.
totalAggregated (required): The total number of different values aggregated across all specified columns
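The nonRepresented field implies a consistency check: the sum of the returned group counts plus nonRepresented should reconstruct the dataset total. A minimal sketch, assuming each aggregation row carries a "total" count per group (the row shape is an assumption; only the top-level field names come from the output schema):

```python
def totals_consistent(result: dict) -> bool:
    # Only the top 20 groups are returned; nonRepresented counts the
    # rows left out of those groups, per the output schema.
    grouped = sum(row["total"] for row in result["aggregations"])
    return grouped + result["nonRepresented"] == result["total"]

example = {
    "total": 100,
    "nonRepresented": 15,
    "aggregations": [{"key": "Paris", "total": 60},
                     {"key": "Lyon", "total": 25}],
}
```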
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With readOnlyHint=true in annotations, the description correctly doesn't belabor safety. It adds valuable behavioral context: the default counting behavior, the 1-3 column grouping limit, and the specific metric types available. It clarifies that 'count' counts rows per group (not non-null values), addressing a common SQL confusion. Does not mention performance characteristics or result limits, but covers primary operational semantics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly structured in three sentences: (1) core functionality with defaults, (2) use case with example, (3) alternative tool reference. Zero redundancy. Every sentence advances understanding—no filler, no tautology, no excessive length despite handling a complex 8-parameter tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given high schema coverage (100%) and presence of output schema, the description appropriately focuses on core concepts rather than exhaustively listing all 8 parameters. It captures the essential mental model (grouping + metrics) and critical constraints (max 3 groups). Does not mention specialized filters (bbox, geoDistance, dateMatch), but these are well-documented in schema and represent advanced optional features rather than core functionality gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline clarity. The description adds conceptual framework beyond raw schema: it explains the relationship between grouping columns and metric columns through the 'average salary per city' example, helping users understand which parameter serves which purpose. It explicitly notes the '1-3 columns' constraint (matching minItems/maxItems) and default counting behavior, reinforcing parameter optionality.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: states exact action ('Aggregate'), resource ('dataset rows'), constraint ('by 1-3 columns'), available operations ('sum, avg, min, max, count'), and default behavior ('Defaults to counting rows per group'). Explicitly distinguishes from sibling 'calculate_metric' by contrasting grouped vs ungrouped operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance ('Use for grouped counts or grouped metrics') with concrete example ('average salary per city'). Critically, it explicitly names the alternative tool for different use cases ('use calculate_metric instead' for global metrics without grouping), which prevents tool selection errors.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_metric: Calculate a metric on a dataset column (Grade: A)
Read-only

Calculate a single metric (avg, sum, min, max, stats, value_count, cardinality, percentiles) on a dataset column. Supports filters to restrict the calculation to a subset of rows. Use for a single statistic on the whole dataset or a filtered subset. For per-group breakdowns, use aggregate_data instead.

Parameters (JSON Schema)
bbox (optional): Geographic bounding box filter (only for geolocalized datasets). Format: "lonMin,latMin,lonMax,latMax". Example: "-2.5,43,3,47".
metric (required): Metric to calculate. Available: avg, sum, min, max (for numbers); min, max, cardinality, value_count (for strings); value_count (for others); stats returns count/min/max/avg/sum; percentiles returns distribution.
filters (optional): Column filters as key-value pairs. Key format: column_key + suffix (see server instructions for available suffixes). All values must be strings, even for numbers/dates. If a column key has underscores (e.g., code_postal), just append the suffix: code_postal_eq. Example: { "nom_search": "Jean", "age_lte": "30", "ville_eq": "Paris" }
fieldKey (required): The column key to calculate the metric on (use keys from describe_dataset)
percents (optional): Comma-separated percentages for percentiles metric (default: "1,5,25,50,75,95,99"). Only used when metric is "percentiles".
datasetId (required): The exact dataset ID from the "id" field in search_datasets results. Do not use the title or slug.
dateMatch (optional): Temporal filter (only for temporal datasets with date fields). Accepts a single date "YYYY-MM-DD" to match that day, or a date range "YYYY-MM-DD,YYYY-MM-DD" to match an overlapping period. ISO datetimes also accepted. Example: "2023-11-21" or "2023-01-01,2023-12-31".
geoDistance (optional): Geographic proximity filter (only for geolocalized datasets). Restricts results to within a distance from a point. Format: "lon,lat,distance". Example: "2.35,48.85,10km". Use distance "0" for point-in-polygon containment.

Output Schema

Parameters (JSON Schema)
total (required): Total number of rows included in the calculation
value (optional): The calculated metric value. For avg/sum/min/max/value_count/cardinality: a single number. For stats: an object {count, min, max, avg, sum}. For percentiles: an object mapping percentage strings to values, e.g. {"25": 30000, "50": 42000, "75": 55000}.
metric (required): The metric that was calculated
fieldKey (required): The column key that was queried
datasetId (required): The dataset ID that was queried
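Since the value field changes shape with the metric (a scalar for avg/sum/min/max/value_count/cardinality, an object for stats and percentiles), a consumer has to branch on type. A minimal sketch using the schema's own percentiles example values:

```python
def read_value(result: dict):
    # Scalar metrics return a number; stats and percentiles return an
    # object, per the output schema above.
    value = result.get("value")
    if isinstance(value, dict):
        # Sort percentile keys numerically for stable presentation.
        return sorted(value.items(), key=lambda kv: float(kv[0]))
    return value

percentiles_result = {"metric": "percentiles", "fieldKey": "salaire",
                      "datasetId": "d", "total": 1200,
                      "value": {"75": 55000, "25": 30000, "50": 42000}}
```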
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations confirm readOnlyHint=true. Description adds calculation scope (single metric vs aggregates) and mentions filter support ('restrict the calculation to a subset of rows'). Does not elaborate on performance, error handling, or empty result behavior, but output schema exists to cover return structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste. First sentence establishes capability with specific metric list, second notes filtering, third differentiates from sibling. Front-loaded with essential action and metric types.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a calculation tool: covers metrics, filtering, dataset identification, and sibling differentiation. Has output schema and 100% param coverage. Minor gap: no mention of empty result behavior or specific data type constraints beyond what's in schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline 3. Description enumerates metric types in opening sentence which mirrors schema enum but provides quick readable reference. No additional syntax or constraint details beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Calculate' with clear resource 'metric on a dataset column'. Explicitly lists available metrics (avg, sum, min, max, etc.) and distinguishes from sibling tool aggregate_data by stating this is for 'single statistic' vs 'per-group breakdowns'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit when-to-use guidance: 'Use for a single statistic on the whole dataset or a filtered subset.' Explicit alternative named: 'For per-group breakdowns, use aggregate_data instead.' Also references describe_dataset for fieldKey resolution.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

describe_dataset: Describe Dataset (Grade: A)
Read-only

Get detailed metadata for a dataset: column schema, sample rows, license, spatial/temporal coverage.

Parameters (JSON Schema)
datasetId (required): The exact dataset ID from the "id" field in search_datasets results. Do not use the title or slug.

Output Schema

Parameters (JSON Schema)
id (required): Unique dataset Id (required for search_data tools)
bbox (optional): Spatial bounding box of the dataset: [lonMin, latMin, lonMax, latMax]. Present only for geolocalized datasets.
link (required): Link to the dataset page (must be included in responses as citation source)
slug (optional): Human-readable unique identifier for the dataset, used in URLs
count (required): Total number of data rows in the dataset
title (required): Dataset title
origin (optional): Source or provider of the dataset
schema (required): Dataset column schema with types and metadata
topics (optional): Topics/categories the dataset belongs to
license (optional): Dataset license information (must be included in responses)
spatial (optional): Spatial coverage information
summary (optional): A brief summary of the dataset content
keywords (optional): Keywords associated with the dataset
temporal (optional): Temporal coverage information
frequency (optional): Update frequency of the dataset
timePeriod (optional): Temporal coverage of the dataset data. Present only for temporal datasets.
description (optional): A markdown description of the dataset content
sampleLines (required): Array of 3 sample data rows showing real values from the dataset. Use these examples to understand exact formatting, casing, and typical values for _eq and _search filters.
geolocalized (optional): Whether this dataset has geographic data. When true, geo filters (bbox, geoDistance) are available in search_data, aggregate_data, and calculate_metric.
temporalDataset (optional): Whether this dataset has temporal data (date fields). When true, the dateMatch filter is available in search_data, aggregate_data, and calculate_metric.
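The geolocalized and temporalDataset flags determine which optional filters are valid on the data tools. A small sketch of that mapping (field names come from the output schema; the helper itself is hypothetical):

```python
def optional_filters(meta: dict) -> list[str]:
    # Column filters are always available; geo and temporal filters are
    # gated by the dataset capability flags, per the schema notes above.
    available = ["filters"]
    if meta.get("geolocalized"):
        available += ["bbox", "geoDistance"]
    if meta.get("temporalDataset"):
        available += ["dateMatch"]
    return available

flags = optional_filters({"geolocalized": True, "temporalDataset": False})
```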
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The readOnlyHint annotation confirms this is a safe read operation, and the description adds valuable context about what specific metadata is returned (schema, samples, license, coverage) without contradicting the annotation. It could improve by mentioning if samples are limited or if this requires specific dataset access permissions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with zero waste. The colon-separated list efficiently communicates the four metadata categories without verbosity. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and simple single-parameter input, the description appropriately enumerates the key metadata categories returned. It is complete enough for tool selection, though noting whether this retrieves live statistics or cached metadata would provide additional value.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents the datasetId parameter including its source from search_datasets. The description adds no parameter-specific details, meeting the baseline expectation when the schema carries the full burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('detailed metadata'), then enumerates exact metadata types returned (column schema, sample rows, license, spatial/temporal coverage). This clearly distinguishes it from sibling 'search_datasets' (which finds datasets) and analysis tools like 'aggregate_data'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the schema parameter description references 'search_datasets results' implying a workflow, the tool description itself lacks explicit when-to-use guidance or alternatives. It does not clarify when to choose this over 'get_field_values' or 'search_data' for understanding data content.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

geocode_address: Geocode French Address (Grade: A)
Read-only

Convert a French address or place name into geographic coordinates using the IGN Géoplateforme geocoding service. Returns matching locations with coordinates, postal code, city, and relevance score.

Parameters (JSON Schema)
q (required): Address or place name to search for in France. Examples: "20 avenue de Segur, Paris", "Mairie de Bordeaux", "33000"
limit (optional): Maximum number of results to return (default: 5)

Output Schema

Parameters (JSON Schema)
count (required): Number of results returned
results (required): Geocoding results ordered by relevance
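Because results are documented as ordered by relevance, a client can take the first entry as the best match. The sketch below assumes hypothetical per-result fields (city, score) based on the tool description; only count and results come from the output schema.

```python
def best_match(geocode_result: dict):
    # "results" is ordered by relevance, so index 0 is the best
    # candidate; returns None when nothing matched.
    results = geocode_result.get("results", [])
    return results[0] if results else None

example = {"count": 2,
           "results": [{"city": "Bordeaux", "score": 0.92},
                       {"city": "Bordeaux-Saint-Clair", "score": 0.41}]}
```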
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint=true, the description adds valuable context: the external service dependency (IGN Géoplateforme) and specific return value fields (coordinates, postal code, city, relevance score). It does not mention rate limits or caching behavior, but provides essential behavioral details beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero waste. The first sentence establishes the core function and service provider; the second discloses return value structure. Information is front-loaded and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input schema (2 parameters, 100% coverage), existing output schema, and readOnly annotations, the description is complete. It covers the service provider, geographic scope, operation type, and return value preview without needing to replicate output schema details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents both parameters (q and limit) including examples and constraints. The description aligns with the schema by mentioning 'address or place name' but does not add additional semantic meaning, syntax details, or validation rules beyond the structured schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Convert') with clear resource ('French address or place name') and output ('geographic coordinates'). It clearly distinguishes from siblings (data analysis/statistics tools) by specifying geocoding functionality and geographic scope (France).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context by restricting usage to 'French' addresses and specifying the IGN Géoplateforme service, implying geographic limitations. However, it lacks explicit 'when not to use' guidance or named alternatives for non-French addresses.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_field_values: Get distinct values of a dataset column (Grade: A)
Read-only

List distinct values of a specific column. Useful to discover what values exist before filtering, or to populate a filter list. Always call this before using _eq or _in filters to get exact values and avoid case-sensitivity errors.

Parameters (JSON Schema)
size (optional): Number of values to return (default: 10, max: 1000)
sort (optional): Sort order for the values (default: asc)
query (optional): Optional text to filter values (prefix/substring match within this column)
fieldKey (required): The column key to get values for (use keys from describe_dataset)
datasetId (required): The exact dataset ID from the "id" field in search_datasets results. Do not use the title or slug.

Output Schema

Parameters (JSON Schema)
values (required): Array of distinct values for the specified column
fieldKey (required): The column key that was queried
datasetId (required): The dataset ID that was queried
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, establishing safety. The description adds critical behavioral context: the case-sensitivity warning for filter operations and the workflow pattern (discovery before filtering). It could optionally mention pagination limits or caching, but the case-sensitivity disclosure provides genuine value beyond structured data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: purpose (sentence 1), general utility (sentence 2), and critical technical constraint/warning (sentence 3). Well front-loaded with the core action first. No filler words or redundant restatements of the title or schema properties.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage, presence of output schema (per context signals), readOnly annotations, and 5 simple parameters, the description is appropriately complete. It covers the tool's purpose, specific usage patterns, and operational risks without needing to describe return values (handled by output schema) or exhaustively document parameters (handled by schema descriptions).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (size, sort, query, fieldKey, datasetId all fully documented). With complete schema coverage, baseline is 3. The description does not need to repeat parameter specifics; it focuses on workflow. The schema adequately conveys parameter semantics without redundancy from the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with 'List distinct values of a specific column' providing a precise verb (list) and resource (distinct values). It clearly distinguishes from siblings: unlike search_data (returns records), describe_dataset (returns schema), or aggregate_data (returns calculations), this tool is specifically for value discovery to support filtering.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance: 'Always call this before using _eq or _in filters' and explains the specific risk prevented ('avoid case-sensitivity errors'). It also identifies two use cases ('discover what values exist before filtering' and 'populate a filter list'), giving clear context for selection over alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_data: Search data from a dataset (Grade: A)
Read-only

Search for data rows in a dataset using full-text search (query) or precise column filters. Returns matching rows and a filtered view URL. Use to retrieve individual rows. Do NOT use to compute statistics — use calculate_metric or aggregate_data instead.

Parameters (JSON Schema)
bbox (optional): Geographic bounding box filter (only for geolocalized datasets). Format: "lonMin,latMin,lonMax,latMax". Example: "-2.5,43,3,47".
next (optional): URL from a previous search_data response to fetch the next page of results. When provided, all other parameters (query, filters, select, sort, size) are ignored since the URL already contains them.
size (optional): Number of rows to return per page (default: 10, max: 50). Increase when you know you need more results upfront to avoid multiple pagination round-trips.
sort (optional): Sort order for results. Comma-separated list of column keys. Prefix with - for descending order. Special keys: _score (relevance), _i (index order), _updatedAt, _rand (random), _geo_distance:lon:lat (distance from point, for geolocalized datasets). Examples: "population" (ascending), "-population" (descending), "_geo_distance:2.35:48.85" (closest first)
query (optional): French keywords for full-text search across all dataset columns (simple keywords, not sentences). Can be combined with filters, but prefer filters alone when criteria target specific columns. Use query for broad keyword matching across all columns. Examples: "Jean Dupont", "Paris", "2025"
select (optional): Optional comma-separated list of column keys to include in the results. Useful when the dataset has many columns to reduce output size. If not provided, all columns are returned. Use column keys from describe_dataset. Format: column1,column2,column3 (no spaces after commas). Example: "nom,age,ville"
filters (optional): Column filters as key-value pairs. Key format: column_key + suffix (see server instructions for available suffixes). All values must be strings, even for numbers/dates. If a column key has underscores (e.g., code_postal), just append the suffix: code_postal_eq. Example: { "nom_search": "Jean", "age_lte": "30", "ville_eq": "Paris" }
datasetId (required): The exact dataset ID from the "id" field in search_datasets results. Do not use the title or slug.
dateMatch (optional): Temporal filter (only for temporal datasets with date fields). Accepts a single date "YYYY-MM-DD" to match that day, or a date range "YYYY-MM-DD,YYYY-MM-DD" to match an overlapping period. ISO datetimes also accepted. Example: "2023-11-21" or "2023-01-01,2023-12-31".
geoDistance (optional): Geographic proximity filter (only for geolocalized datasets). Restricts results to within a distance from a point. Format: "lon,lat,distance". Example: "2.35,48.85,10km". Use distance "0" for point-in-polygon containment.

Output Schema

Parameters (JSON Schema)
next (optional): URL to fetch the next page of results. Absent when there are no more results. Pass this value as the next input parameter to get the next page.
count (required): Number of data rows matching the search criteria and filters
lines (required): An array of data rows matching the search criteria (up to the requested size).
datasetId (required): The dataset ID that was searched
filteredViewUrl (required): Link to view the filtered dataset results in table format (must be included in responses for citation and direct access to filtered view)
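The next field supports a simple pagination loop: pass it back verbatim and send no other parameters, since the URL already encodes them. A sketch with a stubbed fetch function (any real transport is out of scope here; the page contents are invented):

```python
def collect_all_rows(fetch, first_payload: dict) -> list:
    # fetch is a stand-in for one search_data call returning its result
    # dict. When "next" is present, it replaces every other parameter
    # on the follow-up call, per the parameter docs.
    rows, payload = [], first_payload
    while True:
        result = fetch(payload)
        rows.extend(result["lines"])
        next_url = result.get("next")
        if not next_url:
            return rows
        payload = {"next": next_url}

# Stubbed two-page response for illustration.
pages = {None: {"lines": ["row1", "row2"], "next": "https://example/page2"},
         "https://example/page2": {"lines": ["row3"]}}

def fake_fetch(payload):
    return pages[payload.get("next")]

rows = collect_all_rows(fake_fetch, {"datasetId": "dataset-id-placeholder"})
```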
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, confirming the safe read-only nature. The description adds valuable behavioral context that the tool 'Returns matching rows and a filtered view URL'—information not present in annotations. It does not contradict the read-only annotation. Minor gap: it doesn't explicitly mention pagination behavior, though the schema's 'next' parameter implies this.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste: (1) core functionality, (2) return values, (3) positive usage, (4) negative usage + alternatives. Information is front-loaded with the specific action before constraints. No redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high complexity (10 parameters including geospatial/temporal filters, nested objects) and existence of an output schema, the description appropriately covers purpose, usage constraints, and return format. It omits explicit mention of pagination limits or default behaviors, but the schema compensates for these operational details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds semantic value by conceptually grouping parameters into 'full-text search (query)' versus 'precise column filters,' helping the agent understand the distinction between the query and filters parameters beyond their technical schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
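The query-versus-filters distinction the description draws can be sketched with two hypothetical argument payloads. The parameter names "query" and "filters" come from the review; the dataset id and column names are invented for illustration:

```python
# Two hypothetical search_data argument payloads. Only "query" and
# "filters" are taken from the review; everything else is assumed.
fulltext_call = {
    "dataset": "dpe-logements",       # assumed dataset id
    "query": "maison individuelle",   # full-text search across all columns
}
filtered_call = {
    "dataset": "dpe-logements",
    "filters": {                      # precise, column-level matches
        "code_postal": "75011",
        "etiquette_dpe": "G",
    },
}
```

The semantic grouping the review credits is exactly this: one free-text field versus a map of exact column constraints.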

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Search[es] for data rows in a dataset' (specific verb + resource) and clarifies the two search modalities (full-text vs column filters). It clearly distinguishes from sibling tools search_datasets (finding datasets vs searching within one) and contrasts with calculate_metric/aggregate_data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit positive guidance ('Use to retrieve individual rows') and negative guidance ('Do NOT use to compute statistics'). Critically, it names specific alternatives ('use calculate_metric or aggregate_data instead'), making sibling selection unambiguous.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
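The "use X instead of Y when Z" pattern the review praises can be made concrete as a small routing rule over this server's tool names; the rule itself is a hypothetical sketch, not part of the server:

```python
def pick_tool(wants_rows: bool, grouped: bool = False) -> str:
    """Route a request following the description's guidance:
    individual rows -> search_data; statistics -> calculate_metric
    (single global value) or aggregate_data (per-group values)."""
    if wants_rows:
        return "search_data"
    return "aggregate_data" if grouped else "calculate_metric"
```

For example, "average salary per city" is a grouped statistic, so `pick_tool(False, grouped=True)` routes to aggregate_data, matching the aggregate_data description shown above.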

search_datasets: Search Datasets (grade B)
Read-only

Full-text search for datasets by French keywords. Returns matching datasets with ID, title, summary, and page link.

Parameters (JSON Schema)

query (required, no default): French keywords for full-text search (simple terms, not sentences). If 0 results, try synonyms or broader terms. Examples: "élus", "DPE", "entreprises", "logement social"

Output Schema

count (required): Number of datasets matching the full-text search criteria.
datasets (required): An array of the top 20 datasets matching the full-text search criteria.
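Based on the two required output fields above, a minimal response-shape check might look like the following; the sample payload values are invented, and only the field names 'count' and 'datasets' come from the schema:

```python
def valid_search_datasets_response(resp: dict) -> bool:
    """Check the two required fields declared in the output schema:
    'count' (an integer) and 'datasets' (an array of at most 20 items)."""
    return (
        isinstance(resp.get("count"), int)
        and isinstance(resp.get("datasets"), list)
        and len(resp["datasets"]) <= 20
    )

sample = {  # invented example payload for illustration
    "count": 1,
    "datasets": [{"id": "d-001", "title": "Logement social",
                  "summary": "...", "page": "https://example.invalid/d-001"}],
}
```

Note that 'count' reports all matches, so it may exceed the 20 entries returned in 'datasets'.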

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations declare readOnlyHint=true, confirming this is a safe read operation. The description adds value by disclosing the specific fields returned (ID, title, summary, page link), providing useful context about the output structure. However, it omits behavioral details such as pagination behavior, result limits, or whether the full-text search supports fuzzy matching.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero waste. It front-loads the core action ('Full-text search for datasets') and immediately follows with the return value specification. Every word earns its place; there is no redundant or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single parameter, 100% schema coverage, output schema exists), the description is sufficiently complete. It appropriately summarizes the return values without needing to replicate the full output schema. A minor gap is the lack of mention regarding result pagination or maximum result limits, which would be helpful for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the parameter is already well-documented in the schema (including examples like 'élus', 'DPE'). The description reinforces the 'French keywords' constraint but does not add semantic meaning beyond what the schema already provides, warranting the baseline score for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('Full-text search') and resource ('datasets'), and distinguishes the tool by specifying 'French keywords' and the specific return fields (ID, title, summary, page link). However, it does not explicitly differentiate from the sibling tool 'search_data', which could confuse agents about whether to search dataset metadata or actual data records.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus siblings like 'search_data' or 'describe_dataset'. While the parameter description suggests trying synonyms if zero results are found, there is no tool-level guidance on prerequisites, when-not-to-use, or alternative approaches for different use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
