Opendata Ademe
Server Details
Access to ADEME (the French ecological transition agency) open datasets: data on energy, environment, waste, transport, and housing.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
7 tools

aggregate_data: Aggregate data from a dataset (Quality: A, Read-only)
Aggregate dataset rows by 1-3 columns with optional metrics (sum, avg, min, max, count). Defaults to counting rows per group. Use for grouped counts or grouped metrics (e.g., average salary per city). For a single global metric without grouping, use calculate_metric instead.
| Name | Required | Description | Default |
|---|---|---|---|
| bbox | No | Geographic bounding box filter (only for geolocalized datasets). Format: "lonMin,latMin,lonMax,latMax". Example: "-2.5,43,3,47". | |
| sort | No | Sort order for aggregation results. Use special keys: "count" or "-count" (by row count asc/desc), "key" or "-key" (by column value asc/desc), "metric" or "-metric" (by metric value asc/desc). Default: sorts by metric desc (if metric specified), then count desc. Example: "-count" to sort by most frequent values first | |
| metric | No | Optional metric to compute ON EACH GROUP. If not provided, defaults to counting rows per group. | |
| filters | No | Column filters as key-value pairs. Key format: column_key + suffix (see server instructions for available suffixes). All values must be strings, even for numbers/dates. If a column key has underscores (e.g., code_postal), just append the suffix: code_postal_eq. Example: { "nom_search": "Jean", "age_lte": "30", "ville_eq": "Paris" } | |
| datasetId | Yes | The exact dataset ID from the "id" field in search_datasets results. Do not use the title or slug. | |
| dateMatch | No | Temporal filter (only for temporal datasets with date fields). Accepts a single date "YYYY-MM-DD" to match that day, or a date range "YYYY-MM-DD,YYYY-MM-DD" to match an overlapping period. ISO datetimes also accepted. Example: "2023-11-21" or "2023-01-01,2023-12-31". | |
| geoDistance | No | Geographic proximity filter (only for geolocalized datasets). Restricts results to within a distance from a point. Format: "lon,lat,distance". Example: "2.35,48.85,10km". Use distance "0" for point-in-polygon containment. | |
| groupByColumns | Yes | Columns to GROUP BY (like SQL GROUP BY). These define the categories/buckets, NOT the column to compute metrics on. Use column keys from describe_dataset (min 1, max 3). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| total | Yes | The total number of rows in the dataset |
| datasetId | Yes | The dataset ID that was aggregated |
| requestUrl | Yes | Direct URL to API results in JSON format (must be included in responses for citation and direct access to aggregated view) |
| aggregations | Yes | Array of aggregation results for each specified column (limited to 20 rows) |
| nonRepresented | Yes | Number of rows NOT included in the returned aggregations (only the top 20 groups are returned). Add this to the sum of all group totals to reconstruct the dataset total. |
| totalAggregated | Yes | The total number of different values aggregated across all specified columns |
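The `nonRepresented` field makes the truncated top-20 view additive. A minimal Python sketch, using a simulated response and an assumed per-group row shape (`value`/`total`, which the output schema does not specify), of reconstructing the dataset total:

```python
# Simulated aggregate_data response. Only the top-level keys come from the
# documented output schema; the per-group row shape is an assumption.
response = {
    "total": 1000,
    "datasetId": "dpe-logements",  # invented dataset ID
    "aggregations": [
        {"value": "Paris", "total": 600},
        {"value": "Lyon", "total": 250},
    ],
    "nonRepresented": 150,  # rows outside the returned top-20 groups
}

# Sum of returned groups plus nonRepresented reconstructs the dataset total:
grand_total = sum(g["total"] for g in response["aggregations"]) + response["nonRepresented"]
```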
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With readOnlyHint=true in annotations, the description correctly doesn't belabor safety. It adds valuable behavioral context: the default counting behavior, the 1-3 column grouping limit, and the specific metric types available. It clarifies that 'count' counts rows per group (not non-null values), addressing a common SQL confusion. Does not mention performance characteristics or result limits, but covers primary operational semantics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured in three sentences: (1) core functionality with defaults, (2) use case with example, (3) alternative tool reference. Zero redundancy. Every sentence advances understanding—no filler, no tautology, no excessive length despite handling a complex 8-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given high schema coverage (100%) and presence of output schema, the description appropriately focuses on core concepts rather than exhaustively listing all 8 parameters. It captures the essential mental model (grouping + metrics) and critical constraints (max 3 groups). Does not mention specialized filters (bbox, geoDistance, dateMatch), but these are well-documented in schema and represent advanced optional features rather than core functionality gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline clarity. The description adds conceptual framework beyond raw schema: it explains the relationship between grouping columns and metric columns through the 'average salary per city' example, helping users understand which parameter serves which purpose. It explicitly notes the '1-3 columns' constraint (matching minItems/maxItems) and default counting behavior, reinforcing parameter optionality.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: states exact action ('Aggregate'), resource ('dataset rows'), constraint ('by 1-3 columns'), available operations ('sum, avg, min, max, count'), and default behavior ('Defaults to counting rows per group'). Explicitly distinguishes from sibling 'calculate_metric' by contrasting grouped vs ungrouped operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('Use for grouped counts or grouped metrics') with concrete example ('average salary per city'). Critically, it explicitly names the alternative tool for different use cases ('use calculate_metric instead' for global metrics without grouping), which prevents tool selection errors.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calculate_metric: Calculate a metric on a dataset column (Quality: A, Read-only)
Calculate a single metric (avg, sum, min, max, stats, value_count, cardinality, percentiles) on a dataset column. Supports filters to restrict the calculation to a subset of rows. Use for a single statistic on the whole dataset or a filtered subset. For per-group breakdowns, use aggregate_data instead.
| Name | Required | Description | Default |
|---|---|---|---|
| bbox | No | Geographic bounding box filter (only for geolocalized datasets). Format: "lonMin,latMin,lonMax,latMax". Example: "-2.5,43,3,47". | |
| metric | Yes | Metric to calculate. Available: avg, sum, min, max (for numbers); min, max, cardinality, value_count (for strings); value_count (for others); stats returns count/min/max/avg/sum; percentiles returns distribution. | |
| filters | No | Column filters as key-value pairs. Key format: column_key + suffix (see server instructions for available suffixes). All values must be strings, even for numbers/dates. If a column key has underscores (e.g., code_postal), just append the suffix: code_postal_eq. Example: { "nom_search": "Jean", "age_lte": "30", "ville_eq": "Paris" } | |
| fieldKey | Yes | The column key to calculate the metric on (use keys from describe_dataset) | |
| percents | No | Comma-separated percentages for percentiles metric (default: "1,5,25,50,75,95,99"). Only used when metric is "percentiles". | |
| datasetId | Yes | The exact dataset ID from the "id" field in search_datasets results. Do not use the title or slug. | |
| dateMatch | No | Temporal filter (only for temporal datasets with date fields). Accepts a single date "YYYY-MM-DD" to match that day, or a date range "YYYY-MM-DD,YYYY-MM-DD" to match an overlapping period. ISO datetimes also accepted. Example: "2023-11-21" or "2023-01-01,2023-12-31". | |
| geoDistance | No | Geographic proximity filter (only for geolocalized datasets). Restricts results to within a distance from a point. Format: "lon,lat,distance". Example: "2.35,48.85,10km". Use distance "0" for point-in-polygon containment. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| total | Yes | Total number of rows included in the calculation |
| value | No | The calculated metric value. For avg/sum/min/max/value_count/cardinality: a single number. For stats: an object {count, min, max, avg, sum}. For percentiles: an object mapping percentage strings to values, e.g. {"25": 30000, "50": 42000, "75": 55000}. |
| metric | Yes | The metric that was calculated |
| fieldKey | Yes | The column key that was queried |
| datasetId | Yes | The dataset ID that was queried |
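For the `percentiles` metric, `value` is an object mapping percentage strings to values. A short sketch with hypothetical arguments (the dataset ID and column key are invented) and the example result shape from the output schema:

```python
# Hypothetical calculate_metric arguments for a percentile distribution.
args = {
    "datasetId": "dpe-logements",  # invented dataset ID
    "fieldKey": "surface",         # invented column key (from describe_dataset)
    "metric": "percentiles",
    "percents": "25,50,75",        # overrides the default "1,5,25,50,75,95,99"
}

# Per the output schema, percentiles map percentage strings to values
# (example values taken from the schema documentation):
value = {"25": 30000, "50": 42000, "75": 55000}
median = value["50"]
iqr = value["75"] - value["25"]  # interquartile range
```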
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations confirm readOnlyHint=true. Description adds calculation scope (single metric vs aggregates) and mentions filter support ('restrict the calculation to a subset of rows'). Does not elaborate on performance, error handling, or empty result behavior, but output schema exists to cover return structure.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste. First sentence establishes capability with specific metric list, second notes filtering, third differentiates from sibling. Front-loaded with essential action and metric types.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a calculation tool: covers metrics, filtering, dataset identification, and sibling differentiation. Has output schema and 100% param coverage. Minor gap: no mention of empty result behavior or specific data type constraints beyond what's in schema.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline score of 3. The description enumerates the metric types in its opening sentence, which mirrors the schema enum but provides a quick, readable reference. It adds no syntax or constraint details beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Calculate' with clear resource 'metric on a dataset column'. Explicitly lists available metrics (avg, sum, min, max, etc.) and distinguishes from sibling tool aggregate_data by stating this is for 'single statistic' vs 'per-group breakdowns'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit when-to-use guidance: 'Use for a single statistic on the whole dataset or a filtered subset.' Explicit alternative named: 'For per-group breakdowns, use aggregate_data instead.' Also references describe_dataset for fieldKey resolution.
describe_dataset: Describe Dataset (Quality: A, Read-only)
Get detailed metadata for a dataset: column schema, sample rows, license, spatial/temporal coverage.
| Name | Required | Description | Default |
|---|---|---|---|
| datasetId | Yes | The exact dataset ID from the "id" field in search_datasets results. Do not use the title or slug. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | Unique dataset Id (required for search_data tools) |
| bbox | No | Spatial bounding box of the dataset: [lonMin, latMin, lonMax, latMax]. Present only for geolocalized datasets. |
| link | Yes | Link to the dataset page (must be included in responses as citation source) |
| slug | No | Human-readable unique identifier for the dataset, used in URLs |
| count | Yes | Total number of data rows in the dataset |
| title | Yes | Dataset title |
| origin | No | Source or provider of the dataset |
| schema | Yes | Dataset column schema with types and metadata |
| topics | No | Topics/categories the dataset belongs to |
| license | No | Dataset license information (must be included in responses) |
| spatial | No | Spatial coverage information |
| summary | No | A brief summary of the dataset content |
| keywords | No | Keywords associated with the dataset |
| temporal | No | Temporal coverage information |
| frequency | No | Update frequency of the dataset |
| timePeriod | No | Temporal coverage of the dataset data. Present only for temporal datasets. |
| description | No | A markdown description of the dataset content |
| sampleLines | Yes | Array of 3 sample data rows showing real values from the dataset. Use these examples to understand exact formatting, casing, and typical values for _eq and _search filters. |
| geolocalized | No | Whether this dataset has geographic data. When true, geo filters (bbox, geoDistance) are available in search_data, aggregate_data, and calculate_metric. |
| temporalDataset | No | Whether this dataset has temporal data (date fields). When true, the dateMatch filter is available in search_data, aggregate_data, and calculate_metric. |
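The `sampleLines` rows are meant to be copied verbatim into `_eq`/`_search` filters. A minimal sketch with simulated sample rows and invented column names:

```python
# Simulated describe_dataset output fragment: sampleLines holds 3 real rows.
# Column names and values here are invented for illustration.
sample_lines = [
    {"ville": "Paris", "code_postal": "75001"},
    {"ville": "Lyon", "code_postal": "69002"},
    {"ville": "Paris", "code_postal": "75012"},
]

# Copy exact formatting and casing from the samples when building _eq filters,
# rather than guessing (e.g. "paris" vs "Paris"):
filters = {"ville_eq": sample_lines[0]["ville"]}
```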
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The readOnlyHint annotation confirms this is a safe read operation, and the description adds valuable context about what specific metadata is returned (schema, samples, license, coverage) without contradicting the annotation. It could improve by mentioning if samples are limited or if this requires specific dataset access permissions.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with zero waste. The colon-separated list efficiently communicates the four metadata categories without verbosity. Every word earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and simple single-parameter input, the description appropriately enumerates the key metadata categories returned. It is complete enough for tool selection, though noting whether this retrieves live statistics or cached metadata would provide additional value.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents the datasetId parameter including its source from search_datasets. The description adds no parameter-specific details, meeting the baseline expectation when the schema carries the full burden.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('detailed metadata'), then enumerates exact metadata types returned (column schema, sample rows, license, spatial/temporal coverage). This clearly distinguishes it from sibling 'search_datasets' (which finds datasets) and analysis tools like 'aggregate_data'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the schema parameter description references 'search_datasets results' implying a workflow, the tool description itself lacks explicit when-to-use guidance or alternatives. It does not clarify when to choose this over 'get_field_values' or 'search_data' for understanding data content.
geocode_address: Geocode French Address (Quality: A, Read-only)
Convert a French address or place name into geographic coordinates using the IGN Géoplateforme geocoding service. Returns matching locations with coordinates, postal code, city, and relevance score.
| Name | Required | Description | Default |
|---|---|---|---|
| q | Yes | Address or place name to search for in France. Examples: "20 avenue de Segur, Paris", "Mairie de Bordeaux", "33000" | |
| limit | No | Maximum number of results to return (default: 5) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | Number of results returned |
| results | Yes | Geocoding results ordered by relevance |
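Geocoding output pairs naturally with the `geoDistance` filter format ("lon,lat,distance") accepted by the data tools. A sketch assuming hypothetical result field names (`lon`, `lat`, `city`, `score`), which the output schema does not enumerate:

```python
# Simulated geocode_address result; the field names are assumptions,
# not taken from the documented output schema.
result = {"lon": 2.35, "lat": 48.85, "city": "Paris", "score": 0.97}

# Build the "lon,lat,distance" string expected by the geoDistance
# parameter of search_data, aggregate_data, and calculate_metric:
geo_distance = f"{result['lon']},{result['lat']},10km"
```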
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnlyHint=true, the description adds valuable context: the external service dependency (IGN Géoplateforme) and specific return value fields (coordinates, postal code, city, relevance score). It does not mention rate limits or caching behavior, but provides essential behavioral details beyond the annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. The first sentence establishes the core function and service provider; the second discloses return value structure. Information is front-loaded and appropriately sized for the tool's complexity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (2 parameters, 100% coverage), existing output schema, and readOnly annotations, the description is complete. It covers the service provider, geographic scope, operation type, and return value preview without needing to replicate output schema details.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents both parameters (q and limit) including examples and constraints. The description aligns with the schema by mentioning 'address or place name' but does not add additional semantic meaning, syntax details, or validation rules beyond the structured schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Convert') with clear resource ('French address or place name') and output ('geographic coordinates'). It clearly distinguishes from siblings (data analysis/statistics tools) by specifying geocoding functionality and geographic scope (France).
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by restricting usage to 'French' addresses and specifying the IGN Géoplateforme service, implying geographic limitations. However, it lacks explicit 'when not to use' guidance or named alternatives for non-French addresses.
get_field_values: Get distinct values of a dataset column (Quality: A, Read-only)
List distinct values of a specific column. Useful to discover what values exist before filtering, or to populate a filter list. Always call this before using _eq or _in filters to get exact values and avoid case-sensitivity errors.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Number of values to return (default: 10, max: 1000) | |
| sort | No | Sort order for the values (default: asc) | |
| query | No | Optional text to filter values (prefix/substring match within this column) | |
| fieldKey | Yes | The column key to get values for (use keys from describe_dataset) | |
| datasetId | Yes | The exact dataset ID from the "id" field in search_datasets results. Do not use the title or slug. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| values | Yes | Array of distinct values for the specified column |
| fieldKey | Yes | The column key that was queried |
| datasetId | Yes | The dataset ID that was queried |
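A minimal sketch of the recommended workflow: take a returned value verbatim and build an `_eq` filter from it (the response data and column name here are simulated):

```python
# Simulated get_field_values response; the column key and values are invented.
response = {
    "values": ["Maison", "Appartement"],
    "fieldKey": "type_batiment",
    "datasetId": "dpe-logements",
}

# Use a returned value verbatim in an _eq filter to avoid
# case-sensitivity mismatches:
filters = {f"{response['fieldKey']}_eq": response["values"][0]}
```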
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, establishing safety. The description adds critical behavioral context: the case-sensitivity warning for filter operations and the workflow pattern (discovery before filtering). It could optionally mention pagination limits or caching, but the case-sensitivity disclosure provides genuine value beyond structured data.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose (sentence 1), general utility (sentence 2), and critical technical constraint/warning (sentence 3). Well front-loaded with the core action first. No filler words or redundant restatements of the title or schema properties.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage, presence of output schema (per context signals), readOnly annotations, and 5 simple parameters, the description is appropriately complete. It covers the tool's purpose, specific usage patterns, and operational risks without needing to describe return values (handled by output schema) or exhaustively document parameters (handled by schema descriptions).
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (size, sort, query, fieldKey, datasetId all fully documented). With complete schema coverage, baseline is 3. The description does not need to repeat parameter specifics; it focuses on workflow. The schema adequately conveys parameter semantics without redundancy from the description.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with 'List distinct values of a specific column' providing a precise verb (list) and resource (distinct values). It clearly distinguishes from siblings: unlike search_data (returns records), describe_dataset (returns schema), or aggregate_data (returns calculations), this tool is specifically for value discovery to support filtering.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance: 'Always call this before using _eq or _in filters' and explains the specific risk prevented ('avoid case-sensitivity errors'). It also identifies two use cases ('discover what values exist before filtering' and 'populate a filter list'), giving clear context for selection over alternatives.
search_data: Search data from a dataset (Quality: A, Read-only)
Search for data rows in a dataset using full-text search (query) or precise column filters. Returns matching rows and a filtered view URL. Use to retrieve individual rows. Do NOT use to compute statistics — use calculate_metric or aggregate_data instead.
| Name | Required | Description | Default |
|---|---|---|---|
| bbox | No | Geographic bounding box filter (only for geolocalized datasets). Format: "lonMin,latMin,lonMax,latMax". Example: "-2.5,43,3,47". | |
| next | No | URL from a previous search_data response to fetch the next page of results. When provided, all other parameters (query, filters, select, sort, size) are ignored since the URL already contains them. | |
| size | No | Number of rows to return per page (default: 10, max: 50). Increase when you know you need more results upfront to avoid multiple pagination round-trips. | |
| sort | No | Sort order for results. Comma-separated list of column keys. Prefix with - for descending order. Special keys: _score (relevance), _i (index order), _updatedAt, _rand (random), _geo_distance:lon:lat (distance from point, for geolocalized datasets). Examples: "population" (ascending), "-population" (descending), "_geo_distance:2.35:48.85" (closest first) | |
| query | No | French keywords for full-text search across all dataset columns (simple keywords, not sentences). Can be combined with filters, but prefer filters alone when criteria target specific columns. Use query for broad keyword matching across all columns. Examples: "Jean Dupont", "Paris", "2025" | |
| select | No | Optional comma-separated list of column keys to include in the results. Useful when the dataset has many columns to reduce output size. If not provided, all columns are returned. Use column keys from describe_dataset. Format: column1,column2,column3 (No spaces after commas). Example: "nom,age,ville" | |
| filters | No | Column filters as key-value pairs. Key format: column_key + suffix (see server instructions for available suffixes). All values must be strings, even for numbers/dates. If a column key has underscores (e.g., code_postal), just append the suffix: code_postal_eq. Example: { "nom_search": "Jean", "age_lte": "30", "ville_eq": "Paris" } | |
| datasetId | Yes | The exact dataset ID from the "id" field in search_datasets results. Do not use the title or slug. | |
| dateMatch | No | Temporal filter (only for temporal datasets with date fields). Accepts a single date "YYYY-MM-DD" to match that day, or a date range "YYYY-MM-DD,YYYY-MM-DD" to match an overlapping period. ISO datetimes also accepted. Example: "2023-11-21" or "2023-01-01,2023-12-31". | |
| geoDistance | No | Geographic proximity filter (only for geolocalized datasets). Restricts results to within a distance from a point. Format: "lon,lat,distance". Example: "2.35,48.85,10km". Use distance "0" for point-in-polygon containment. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| next | No | URL to fetch the next page of results. Absent when there are no more results. Pass this value as the next input parameter to get the next page. |
| count | Yes | Number of data rows matching the search criteria and filters |
| lines | Yes | An array of data rows matching the search criteria (up to the requested size). |
| datasetId | Yes | The dataset ID that was searched |
| filteredViewUrl | Yes | Link to view the filtered dataset results in table format (must be included in responses for citation and direct access to filtered view) |
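Pagination works by passing the `next` URL back unchanged; all other parameters are then ignored. A sketch of a paging loop, with `fetch` standing in for whatever MCP client performs the tool call, and the pages simulated:

```python
# Paging loop sketch: keep calling search_data, feeding each response's
# "next" URL back as the next call's only argument. fetch is a stand-in
# for a real MCP client call.
def collect_rows(fetch, first_args):
    rows, args = [], dict(first_args)
    while True:
        resp = fetch(args)
        rows.extend(resp["lines"])
        if "next" not in resp:  # absent when there are no more results
            return rows
        args = {"next": resp["next"]}  # other parameters are ignored now

# Simulated two-page result set:
pages = [
    {"lines": [1, 2], "next": "https://example.invalid/page2", "count": 3, "datasetId": "d"},
    {"lines": [3], "count": 3, "datasetId": "d"},
]
it = iter(pages)
rows = collect_rows(lambda args: next(it), {"datasetId": "d", "size": 2})
```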
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, confirming the safe read-only nature. The description adds valuable behavioral context that the tool 'Returns matching rows and a filtered view URL'—information not present in annotations. It does not contradict the read-only annotation. Minor gap: it doesn't explicitly mention pagination behavior, though the schema's 'next' parameter implies this.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: (1) core functionality, (2) return values, (3) positive usage, (4) negative usage + alternatives. Information is front-loaded with the specific action before constraints. No redundant or filler text.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (10 parameters including geospatial/temporal filters, nested objects) and existence of an output schema, the description appropriately covers purpose, usage constraints, and return format. It omits explicit mention of pagination limits or default behaviors, but the schema compensates for these operational details.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds semantic value by conceptually grouping parameters into 'full-text search (query)' versus 'precise column filters,' helping the agent understand the distinction between the query and filters parameters beyond their technical schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Search[es] for data rows in a dataset' (specific verb + resource) and clarifies the two search modalities (full-text vs column filters). It clearly distinguishes from sibling tools search_datasets (finding datasets vs searching within one) and contrasts with calculate_metric/aggregate_data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit positive guidance ('Use to retrieve individual rows') and negative guidance ('Do NOT use to compute statistics'). Critically, it names specific alternatives ('use calculate_metric or aggregate_data instead'), making sibling selection unambiguous.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
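The review above hinges on the distinction between the full-text `query` parameter and precise column filters with key suffixes (`_eq`, `_lte`, `_search`). A minimal sketch of how an agent-side client might assemble a search_data payload under that convention; the dataset ID and helper name are hypothetical, and the string-coercion rule follows the filters description quoted earlier.

```python
# Hypothetical helper for assembling a search_data tool call, assuming the
# column-filter suffix convention (_eq, _lte, _search) from the server docs.
def build_search_payload(dataset_id, query=None, **filters):
    """Build the arguments dict for a search_data call.

    The server requires all filter values to be strings, even for
    numbers and dates, so values are coerced here before sending.
    """
    payload = {"datasetId": dataset_id}
    if query:
        payload["query"] = query  # full-text search across columns
    if filters:
        payload["filters"] = {k: str(v) for k, v in filters.items()}
    return payload

payload = build_search_payload(
    "dpe-logements",      # hypothetical dataset ID
    ville_eq="Paris",     # exact match on the 'ville' column
    age_lte=30,           # numeric bound, coerced to the string "30"
)
```

Note that the helper omits the `filters` key entirely when no column filters are given, matching the "full-text only" usage the description allows.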
search_datasets — Search Datasets (Grade: B, Read-only)
Full-text search for datasets by French keywords. Returns matching datasets with ID, title, summary, and page link.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | French keywords for full-text search (simple terms, not sentences). If 0 results, try synonyms or broader terms. Examples: "élus", "DPE", "entreprises", "logement social" | |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | Yes | Number of datasets matching the full-text search criteria |
| datasets | Yes | An array of the top 20 datasets matching the full-text search criteria. |
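The `query` parameter advises retrying with synonyms or broader terms when a search returns zero results. A hedged sketch of that retry loop; `call_tool` stands in for a real MCP client call and is purely hypothetical, as is the fake client used for illustration.

```python
# Sketch of the "retry with synonyms or broader terms" advice from the
# query parameter. `call_tool` is a stand-in for an MCP client invocation.
def search_with_fallback(call_tool, terms):
    """Try each candidate keyword in order until a search returns results."""
    for term in terms:
        result = call_tool("search_datasets", {"query": term})
        if result.get("count", 0) > 0:
            return result
    return {"count": 0, "datasets": []}

# Fake client for illustration only: pretend just the broadest term matches.
def fake_call(tool, args):
    canned = {"logement": {"count": 2, "datasets": ["a", "b"]}}
    return canned.get(args["query"], {"count": 0, "datasets": []})

hit = search_with_fallback(fake_call, ["HLM", "logement social", "logement"])
```

Ordering the candidate terms from narrow to broad keeps the first hit as specific as possible while still guaranteeing a fallback.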
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations declare readOnlyHint=true, confirming this is a safe read operation. The description adds value by disclosing the specific fields returned (ID, title, summary, page link), providing useful context about the output structure. However, it omits behavioral details such as pagination behavior, result limits, or whether the full-text search supports fuzzy matching.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. It front-loads the core action ('Full-text search for datasets') and immediately follows with the return value specification. Every word earns its place; there is no redundant or filler content.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single parameter, 100% schema coverage, output schema exists), the description is sufficiently complete. It appropriately summarizes the return values without needing to replicate the full output schema. A minor gap is the lack of mention regarding result pagination or maximum result limits, which would be helpful for a search tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameter is already well-documented in the schema (including examples like 'élus', 'DPE'). The description reinforces the 'French keywords' constraint but does not add semantic meaning beyond what the schema already provides, warranting the baseline score for high-coverage schemas.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Full-text search') and resource ('datasets'), and distinguishes the tool by specifying 'French keywords' and the specific return fields (ID, title, summary, page link). However, it does not explicitly differentiate from the sibling tool 'search_data', which could confuse agents about whether to search dataset metadata or actual data records.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers no guidance on when to use this tool versus siblings like 'search_data' or 'describe_dataset'. While the parameter description suggests trying synonyms if zero results are found, there is no tool-level guidance on prerequisites, when-not-to-use, or alternative approaches for different use cases.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
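Before publishing, it can help to sanity-check the claim file locally. A minimal validation sketch covering only the two fields shown in the example above; the function name is hypothetical and this is not an official Glama validator.

```python
import json

# Minimal sanity check for a /.well-known/glama.json claim file,
# covering only the fields shown in the example structure above.
def validate_claim(text):
    doc = json.loads(text)  # raises ValueError on malformed JSON
    maintainers = doc.get("maintainers", [])
    assert isinstance(maintainers, list) and maintainers, "maintainers required"
    for m in maintainers:
        assert "@" in m.get("email", ""), "each maintainer needs an email"
    return doc

sample = """{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}"""
doc = validate_claim(sample)
```

Remember that verification will only succeed if the maintainer email matches the one on your Glama account, which a local check cannot confirm.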
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.