Server Details
It connects to the Smart Data Models database of more than 1,000 openly licensed data models, helping you build applications and systems that interoperate with existing standards and real deployments.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Tool Definition Quality
Average 4.3/5 across 15 of 15 tools scored.
Tools are generally well-differentiated with distinct purposes, though list_models_by_domain and get_full_catalog overlap significantly (both retrieve models within a domain, differing only in metadata depth), and search_data_models and fuzzy_find_model serve similar discovery functions that could confuse agents despite the nuances in their descriptions.
Strong adherence to snake_case verb_noun conventions throughout (get_*, list_*, validate_*, export_*, find_*). Minor deviations include fuzzy_find_model (adjective-verb ordering inconsistent with the verb-first pattern) and inconsistent use of prepositions (by_model_name, for_model) across the set.
Fifteen tools appropriately cover the full workflow for Smart Data Models interaction: domain discovery, model search/retrieval, attribute inspection, NGSI-LD context resolution, validation, and export. Neither bloated nor incomplete for the stated purpose.
Comprehensive read-only surface covering model discovery (fuzzy and exact), schema retrieval, attribute drilling, NGSI-LD context generation, and payload validation. Minor gap: no direct subject listing capability (only reverse lookup via find_subject_by_model_name), though domain-level discovery suffices for most workflows.
Available Tools
15 tools

export_model_attributes
Export the attributes of a data model as a flat delimited string, with caller-selected meta-fields per attribute. Use this when a data engineer needs a CSV-ready or pipe-delimited listing for database column mapping, ontology alignment, or spreadsheet import. Available meta-fields: property, type, dataModel, repoName, description, typeNGSI, modelTags, format, units, model. Example: export_model_attributes({"model_name": "WeatherObserved", "separator": ",", "fields": ["property", "type", "typeNGSI", "units"]})
| Name | Required | Description | Default |
|---|---|---|---|
| fields | No | List of meta-fields to include per attribute. Valid values: property, type, dataModel, repoName, description, typeNGSI, modelTags, format, units, model. Default: ["property", "type", "typeNGSI", "units", "description"]. | |
| separator | No | Delimiter between fields per attribute row. Default: ','. | , |
| model_name | Yes | The exact data model name — e.g., 'WeatherObserved'. | |
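The flat string is straightforward to post-process. A minimal sketch, assuming the row format the description implies (one attribute per line, fields joined by the separator) — the sample payload below is illustrative, not real server output:

```python
# Hypothetical sample of the flat delimited string returned by
# export_model_attributes with fields ["property", "type", "typeNGSI", "units"]
# and separator "," — the payload is illustrative, not real output.
sample = (
    "temperature,number,Property,CEL\n"
    "windSpeed,number,Property,MTS"
)

fields = ["property", "type", "typeNGSI", "units"]

# Split each row on the chosen separator and zip against the requested
# fields to recover per-attribute dicts for column mapping or CSV import.
rows = [dict(zip(fields, line.split(","))) for line in sample.splitlines()]
```

The same pattern works for pipe-delimited output by passing `separator: "|"` and splitting on `"|"` instead.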
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the output format (flat delimited string) and available meta-fields, but fails to mention error handling (e.g., invalid model_name), whether the operation is read-only/safe, or any rate limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure: opens with core functionality, follows with use case, lists available options, and closes with a concrete example. Every sentence serves a distinct purpose without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter tool with 100% schema coverage and no output schema, the description is complete. It explains the return format (flat string), enumerates the valid meta-fields matching the schema enum, and provides an invocation example compensating for the missing output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% providing a baseline of 3. The description adds value through the concrete JSON example showing proper field array construction, and contextualizes the 'separator' parameter by mentioning both CSV (comma) and pipe-delimited use cases in the usage guidelines.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (export), resource (attributes of a data model), and output format (flat delimited string). It inherently distinguishes from siblings like get_attributes_for_model by emphasizing the CSV/pipe-delimited export aspect rather than structured data retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'when a data engineer needs a CSV-ready or pipe-delimited listing for database column mapping, ontology alignment, or spreadsheet import.' Lacks explicit 'when not to use' or named sibling alternatives, but the use case specificity provides clear selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_subject_by_model_name
Find which SDM subject (repository group, always prefixed with 'dataModel.') a data model belongs to, given only the model name. Use this when you have an entity type from an incoming NGSI-LD payload but don't know its subject — for example, if a payload contains 'type: WeatherObserved' and you need to resolve its subject before calling get_data_model or validate_data. Example: find_subject_by_model_name({"model_name": "WeatherObserved"})
| Name | Required | Description | Default |
|---|---|---|---|
| model_name | Yes | The entity type or data model name to look up — e.g., 'WeatherObserved', 'OffStreetParking' | |
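The lookup the description outlines can be sketched with a local stand-in; the subject mapping below is illustrative, and the real tool resolves it against the SDM repositories:

```python
# A hedged local stand-in for find_subject_by_model_name: map an entity
# type to its SDM subject, which is always prefixed with "dataModel.".
# The mapping is illustrative; the real tool queries the SDM catalog.
SUBJECTS = {
    "WeatherObserved": "dataModel.Weather",
    "OffStreetParking": "dataModel.Parking",
}

def find_subject(model_name):
    subject = SUBJECTS.get(model_name)
    if subject is not None:
        # Invariant stated in the tool description.
        assert subject.startswith("dataModel.")
    return subject

subject = find_subject("WeatherObserved")
```

An agent would feed the returned subject into get_data_model or validate_data, as the description suggests.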
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden and successfully discloses key behavioral traits: subjects are 'always prefixed with dataModel.' and the tool performs a lookup/mapping operation. Minor gap: does not specify error behavior if model_name is not found or describe the return format.
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure with purpose front-loaded, followed by usage context, workflow positioning, and example. Every sentence earns its place; no redundancy or filler. Length is appropriate for a single-parameter lookup tool.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple lookup tool with 1 parameter and no output schema, the description is nearly complete. It covers purpose, usage context, sibling relationships, and provides an example. Minor deduction for not describing the return value format (e.g., does it return the full subject string or an object?).
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. The description includes an example call with 'WeatherObserved', but the schema already documents this example in the property description. No additional semantic details (e.g., case sensitivity, valid character sets) are added beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the verb ('Find'), resource ('SDM subject'), and specific constraint ('given only the model name'). It distinguishes from siblings by noting this resolves the subject required 'before calling get_data_model or validate_data', clearly positioning it as a prerequisite step.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use scenario ('when you have an entity type from an incoming NGSI-LD payload but don't know its subject') and concrete example ('payload contains type: WeatherObserved'). Names specific alternative tools (get_data_model, validate_data) that require this tool's output, establishing clear workflow precedence.
fuzzy_find_model
Search for a data model by approximate or misspelled name using fuzzy matching. Use this as the recovery step whenever get_data_model returns MODEL_NOT_FOUND — it finds the closest real model names even when the spelling is off. Returns ranked candidates with similarity scores. Example: fuzzy_find_model({"model_name": "WeatherFora", "threshold": 80})
| Name | Required | Description | Default |
|---|---|---|---|
| threshold | No | Minimum similarity score 0–100. Higher = stricter. Default: 70. | |
| model_name | Yes | The approximate or misspelled model name to search for. | |
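The described behaviour can be approximated locally with Python's difflib; the candidate list and exact scoring below are assumptions — the server may use a different similarity metric:

```python
import difflib

# A minimal local sketch of the fuzzy matching fuzzy_find_model describes:
# rank candidate model names by similarity (0-100) against a misspelled
# query and keep those at or above the threshold.
def fuzzy_find(query, candidates, threshold=70):
    scored = [
        (name, round(difflib.SequenceMatcher(
            None, query.lower(), name.lower()).ratio() * 100))
        for name in candidates
    ]
    # Higher threshold = stricter, mirroring the tool's parameter semantics.
    return sorted(
        [(name, score) for name, score in scored if score >= threshold],
        key=lambda pair: pair[1],
        reverse=True,
    )

models = ["WeatherObserved", "WeatherForecast", "AirQualityObserved"]
matches = fuzzy_find("WeatherFora", models, threshold=70)
```

With the misspelling "WeatherFora", the closest candidate ("WeatherForecast") ranks first, matching the recovery-step workflow the description names.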
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully explains the fuzzy matching behavior and return format ('ranked candidates with similarity scores'), but does not explicitly state whether the operation is read-only or what occurs when no matches exceed the threshold.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, each earning its place: purpose, usage guideline, return value description, and working example. Information is front-loaded with no filler or redundancy, making it immediately scannable for an agent evaluating multiple tools.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but the description compensates by explaining the return value structure ('ranked candidates with similarity scores'). Given the tool's narrow scope (2 parameters, simple fuzzy search), the description is complete, though it could optionally mention the behavior when zero matches are found.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema coverage is 100% (baseline 3), the description adds valuable semantic context through the concrete JSON example showing exact syntax and realistic values ('WeatherFora', threshold: 80), which helps the agent understand the expected parameter structure beyond the raw schema definitions.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Search') plus resource ('data model') and mechanism ('fuzzy matching'), clearly distinguishing it from exact-match siblings like get_data_model. It explicitly identifies the target input ('approximate or misspelled name'), leaving no ambiguity about its distinct purpose.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('Use this as the recovery step whenever get_data_model returns MODEL_NOT_FOUND') and implicitly defines when-not-to-use (when you have the exact name, use get_data_model instead). Directly names the sibling tool, creating a clear decision tree for the agent.
get_attribute_details
Get full details for a single attribute within a Smart Data Model — including its description, NGSI type (Property/GeoProperty/Relationship), data type, recommended units, format constraints, and reference model URL. Use this when the user asks what type a specific field is, what units it uses, or how to populate it. Example: get_attribute_details({"model_name": "WeatherObserved", "attribute": "temperature"})
| Name | Required | Description | Default |
|---|---|---|---|
| attribute | Yes | The exact attribute (field) name to look up — e.g., 'temperature', 'location', 'refDevice' | |
| model_name | Yes | The data model entity name — e.g., 'WeatherObserved' | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of disclosure. It compensates partially by detailing the return payload structure (description, NGSI type, data type, etc.), but omits operational characteristics like error handling when attributes don't exist, rate limits, or authentication requirements.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three components efficiently structured: definition (what it does), usage guidelines (when to use), and example invocation. Zero wasted words; every sentence provides distinct value. Front-loaded with the core action and resource.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter lookup tool without output schema or annotations, the description is complete. It enumerates all returned fields (compensating for missing output schema), provides usage context, and includes a working example. No gaps remain for this complexity level.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema description coverage (baseline 3), the description adds value through a concrete JSON example showing valid parameter values ('WeatherObserved', 'temperature'), which illustrates the expected format and semantics better than the schema descriptions alone.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and resource ('full details for a single attribute within a Smart Data Model'), clearly distinguishing it from sibling tools like 'get_attributes_for_model' by emphasizing 'single attribute' and enumerating the specific metadata returned (NGSI type, units, constraints).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('Use this when the user asks what type a specific field is...') with three concrete use cases, giving the agent clear signals for invocation. Lacks explicit when-not-to-use or named alternative tools, preventing a perfect score.
get_attributes_for_model
List all attributes (properties) of a specific Smart Data Model, including each attribute's NGSI type (Property, GeoProperty, or Relationship), data type, description, recommended units, and reference model URL. Use this after get_data_model when the user wants to understand what fields a model has, what values they accept, or how to construct a valid NGSI-LD payload. Example: get_attributes_for_model({"model_name": "WeatherObserved"})
| Name | Required | Description | Default |
|---|---|---|---|
| model_name | Yes | The exact data model name — e.g., 'WeatherObserved', 'AirQualityObserved' | |
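The description's main use case — constructing a valid NGSI-LD payload from the attribute listing — can be sketched as follows. The attribute records and the skeleton shape are illustrative assumptions; real attribute data comes from the tool call:

```python
# A hedged sketch of turning get_attributes_for_model output into a
# skeleton NGSI-LD entity. The records below are illustrative.
attributes = [
    {"property": "temperature", "typeNGSI": "Property"},
    {"property": "location", "typeNGSI": "GeoProperty"},
    {"property": "refDevice", "typeNGSI": "Relationship"},
]

def skeleton_entity(model_name, attrs):
    entity = {"id": f"urn:ngsi-ld:{model_name}:001", "type": model_name}
    for a in attrs:
        if a["typeNGSI"] == "Relationship":
            # Relationships point at another entity via "object".
            entity[a["property"]] = {"type": "Relationship", "object": None}
        else:
            # Property and GeoProperty both carry a "value".
            entity[a["property"]] = {"type": a["typeNGSI"], "value": None}
    return entity

entity = skeleton_entity("WeatherObserved", attributes)
```

The agent would then fill in the `value`/`object` placeholders and attach the model's @context before sending the entity to a broker.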
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden and discloses rich return structure (NGSI types, recommended units, reference URLs). Implies read-only operation via 'List'. Minor gap: doesn't mention rate limits or pagination behavior if applicable.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three components efficiently structured: capability statement (sentence 1), usage guidance (sentence 2), concrete example (JSON). No redundant information; every clause adds distinct value about functionality, sequencing, or syntax.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but description comprehensively details returned data structure (NGSI type, data type, description, units, reference URL). For a single-parameter lookup tool, this fully compensates for missing output documentation and enables effective agent usage.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. Description adds concrete examples ('WeatherObserved', 'AirQualityObserved') and reinforces that the parameter must be the exact data model name, going beyond the schema's basic type definition.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'List' and resource 'attributes (properties) of a specific Smart Data Model'. Distinguishes from siblings by detailing specific return fields (NGSI type, data type, units) and mentioning workflow relationship to get_data_model.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use this after get_data_model' providing clear sequencing. Defines specific user intents triggering use: 'understand what fields a model has, what values they accept, or how to construct a valid NGSI-LD payload'. Provides clear alternative context to sibling tools.
get_data_model
Retrieve the complete JSON Schema and metadata for a specific Smart Data Model. Use this when the user already knows the model name and wants its full definition, required fields, attribute types, or documentation URL. Example: get_data_model({"model_name": "WeatherObserved", "include_examples": true})
| Name | Required | Description | Default |
|---|---|---|---|
| model_name | Yes | The exact name of the data model entity — e.g., 'WeatherObserved', 'OffStreetParking', 'AirQualityObserved'. Use search_data_models first if unsure of the exact name. | |
| include_examples | No | Include NGSI-LD payload examples in the response (default: true) | true |
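The exact-lookup-then-fuzzy-recovery flow that this tool and fuzzy_find_model jointly describe can be sketched with a stubbed client. The `call_tool` function below stands in for a real MCP client and its return shapes are entirely hypothetical:

```python
# Sketch of the recovery flow the descriptions suggest: try the exact
# lookup first, fall back to fuzzy search on MODEL_NOT_FOUND.
# KNOWN and call_tool are hypothetical stand-ins for the real server.
KNOWN = {"WeatherObserved": {"title": "WeatherObserved schema"}}

def call_tool(name, args):
    if name == "get_data_model":
        model = KNOWN.get(args["model_name"])
        return model if model else {"error": "MODEL_NOT_FOUND"}
    if name == "fuzzy_find_model":
        # Pretend the server returns ranked candidates for the misspelling.
        return {"candidates": [{"name": "WeatherObserved", "score": 85}]}

def get_model_with_recovery(model_name):
    result = call_tool("get_data_model", {"model_name": model_name})
    if isinstance(result, dict) and result.get("error") == "MODEL_NOT_FOUND":
        # Recover: resolve the closest real name, then retry the lookup.
        best = call_tool("fuzzy_find_model",
                         {"model_name": model_name})["candidates"][0]["name"]
        result = call_tool("get_data_model", {"model_name": best})
    return result

schema = get_model_with_recovery("WeatherObservd")
```

The design choice here mirrors the tool descriptions: exact lookup is the default path, and fuzzy search is only the recovery step, never the first call.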
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adequately describes what is returned (schema, metadata, examples, documentation URL) but omits details about error handling (e.g., model not found), authentication requirements, or rate limits. It confirms this is a read operation via the verb 'Retrieve.'
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: first states purpose, second states usage condition, third provides concrete example. Information is front-loaded and every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter retrieval tool with no output schema, the description compensates well by listing the specific content returned (fields, types, examples, URLs). It is complete enough for an agent to understand the tool's value proposition, though explicit error behavior would improve it further.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage (baseline 3), the description adds value by providing a concrete JSON example call showing exact syntax and valid values ('WeatherObserved', true), which helps the agent understand how to structure the input object beyond the individual field descriptions.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific action ('Retrieve'), the exact resource ('complete JSON Schema and metadata'), and scope ('for a specific Smart Data Model'). It effectively distinguishes itself from sibling search tools by specifying this is for when the user 'already knows the model name.'
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use ('when the user already knows the model name and wants its full definition...'), which implicitly excludes cases where the name is unknown. However, it does not explicitly name the alternative tool (e.g., search_data_models) in the description text itself, though the schema does.
get_full_catalog
Return the complete list of all data models within a specific domain, including names, subjects, descriptions, and repository links. Use this when the user needs a comprehensive structured overview of an entire domain for bulk processing, analysis, or selection. A domain filter is mandatory to prevent returning the entire catalog in one call. Example: get_full_catalog({"domain": "SmartEnvironment"})
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | The SDM domain to return catalog entries for. Must be one of the 13 valid domain values. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Explains mandatory domain filter rationale ('to prevent returning the entire catalog') and enumerates returned fields. Lacks disclosure on pagination, payload size limits, or error behavior for invalid domains (though enum constrains input).
Is the description appropriately sized, front-loaded, and free of redundancy?
Four well-structured sentences: purpose statement, usage condition, constraint explanation, and working example. Front-loaded with action verb, zero redundancy, every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but description compensates by listing returned fields (names, subjects, descriptions, links). Given single-parameter input with 100% schema coverage and clear enumeration of return data, description provides sufficient context for invocation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with complete enum documentation. Description adds value via concrete JSON example ('get_full_catalog({"domain": "SmartEnvironment"})') and emphasizes mandatory nature of filter, aiding correct invocation syntax.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Return') + resource ('complete list of all data models') + scope ('within a specific domain') + payload details ('names, subjects, descriptions, and repository links'). Clearly distinguishes from sibling 'list_models_by_domain' by emphasizing comprehensiveness and structured overview.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit 'Use this when' guidance targets 'comprehensive structured overview' and 'bulk processing' scenarios. Implies distinction from search tools but does not explicitly name alternatives like 'search_data_models' for targeted queries.
get_model_context
Return the NGSI-LD @context URL and context content for a specific data model. This is required when building valid NGSI-LD linked-data payloads — without the correct @context, the payload is not valid linked data and will be rejected by NGSI-LD brokers. Use this before constructing or validating a normalized payload. Example: get_model_context({"model_name": "WeatherObserved"})
| Name | Required | Description | Default |
|---|---|---|---|
| model_name | Yes | The exact data model name — e.g., 'WeatherObserved', 'AirQualityObserved'. | |
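The role of the @context in a normalized payload can be shown in a few lines. The context URL and attribute values below are illustrative placeholders, not the real resolved values for WeatherObserved:

```python
# A minimal sketch of why get_model_context matters: an NGSI-LD normalized
# payload must carry the model's @context to be valid linked data.
# The context URL here is an illustrative placeholder.
context_url = "https://example.org/context/WeatherObserved-context.jsonld"

payload = {
    "id": "urn:ngsi-ld:WeatherObserved:001",
    "type": "WeatherObserved",
    "temperature": {"type": "Property", "value": 21.3},
    # Without this key, NGSI-LD brokers reject the entity as the
    # description warns.
    "@context": context_url,
}
```

In practice the agent would call get_model_context first and substitute the returned URL (or inline context content) for the placeholder.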
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable behavioral context about failure modes ('will be rejected by NGSI-LD brokers'), but lacks details on return format structure, caching behavior, or rate limiting that would be expected given zero annotation coverage.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four well-structured sentences with zero waste: (1) core function, (2) importance/why, (3) when to use, (4) example. Information is front-loaded and every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having no output schema, the description adequately explains the return values ('@context URL and context content') and workflow integration. For a single-parameter retrieval tool, this is sufficient, though explicit mention of the return format (e.g., JSON-LD) would improve it further.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds an explicit usage example (get_model_context({'model_name': 'WeatherObserved'})) that clarifies exact syntax and expected input format, adding value beyond the schema definition.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns 'NGSI-LD @context URL and context content' using specific verbs and resources. It distinguishes itself from siblings like get_data_model or validate_data by emphasizing the specific @context retrieval for linked-data payload validity.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit temporal guidance: 'Use this before constructing or validating a normalized payload.' Establishes clear workflow context (prerequisite step), though it doesn't explicitly name alternative tools for comparison.
get_model_repo_url
Return the GitHub repository URL(s) for a specific data model by name. Use this when the user needs a direct link to the schema source, examples, or documentation on GitHub. Example: get_model_repo_url({"model_name": "WeatherObserved"})
| Name | Required | Description | Default |
|---|---|---|---|
| model_name | Yes | The exact data model name — e.g., 'WeatherObserved', 'OffStreetParking'. | |
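For orientation, calling this tool over Streamable HTTP ultimately means sending an MCP `tools/call` request. A minimal sketch of the JSON-RPC envelope an agent or client would construct, assuming a generic MCP client (the `build_tool_call` helper is hypothetical, not part of the server):

```python
import json

def build_tool_call(tool_name, arguments, request_id=1):
    """Build an MCP tools/call request envelope (JSON-RPC 2.0)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# The exact invocation from the description above:
request = build_tool_call("get_model_repo_url", {"model_name": "WeatherObserved"})
```

The same envelope shape applies to every tool on this server; only `name` and `arguments` change.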
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds useful context that URLs contain 'schema source, examples, or documentation' and hints at cardinality with 'URL(s)'. However, lacks disclosure of error behavior (e.g., unknown model name), return format structure, or rate limiting.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences with zero waste: sentence 1 states purpose, sentence 2 states usage context, sentence 3 provides concrete example. Appropriately front-loaded with the most critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool with 100% schema coverage and no output schema, description is adequate. Mentions return value is URL(s), though could specify data structure (string vs array). Missing error handling documentation, but acceptable given low complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline of 3. Description includes example invocation (get_model_repo_url({'model_name': 'WeatherObserved'})) which reinforces usage but does not add semantic meaning beyond the schema's existing description and examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Return'), resource ('GitHub repository URL(s)'), and scope ('for a specific data model by name') clearly stated. Distinguishes from siblings like get_data_model or get_model_context by specifying GitHub repository links specifically, not model content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit when-to-use guidance provided: 'Use this when the user needs a direct link to the schema source, examples, or documentation on GitHub.' Lacks explicit named alternatives or exclusions, but the GitHub-specific context effectively differentiates from other model retrieval tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_subject_repo_url
Return the GitHub repository URL for an SDM subject (e.g., 'dataModel.Weather'). Use this when the user wants to browse the source files, open issues, or cite the authoritative schema location for a subject group. Example: get_subject_repo_url({"subject": "dataModel.Weather"})
| Name | Required | Description | Default |
|---|---|---|---|
| subject | Yes | The SDM subject name with 'dataModel.' prefix — e.g., 'dataModel.Weather', 'dataModel.Parking'. | |
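Since the schema requires the 'dataModel.' prefix, a client can guard against the most likely first-attempt failure with a tiny normalizer. A sketch (`normalize_subject` is a hypothetical client-side helper, not part of the server):

```python
def normalize_subject(name):
    """Ensure the SDM subject carries the required 'dataModel.' prefix."""
    return name if name.startswith("dataModel.") else f"dataModel.{name}"

# Both bare and prefixed inputs yield a valid subject argument:
args = {"subject": normalize_subject("Weather")}
```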
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It clarifies the return type (GitHub repository URL) and authority ('authoritative schema location'), but omits error handling behavior (what happens if subject doesn't exist), caching, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences with zero waste: sentence 1 defines purpose, sentence 2 provides usage context, sentence 3 gives a concrete example. Information is front-loaded and dense.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool without output schema, the description adequately covers the return value (GitHub URL) and provides sufficient context to distinguish subject-level from model-level operations. Minor gap in not describing error states or exact return format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, documenting the 'dataModel.' prefix requirement. The description adds a concrete JSON usage example that reinforces the expected format, but does not significantly expand on parameter semantics beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Return[s] the GitHub repository URL for an SDM subject' with specific verb, resource, and scope. It distinguishes itself from sibling get_model_repo_url by focusing on 'subject group' level (e.g., 'dataModel.Weather') rather than individual models.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'when the user wants to browse the source files, open issues, or cite the authoritative schema location.' This provides clear context for selecting this tool over alternatives like get_model_repo_url or get_data_model.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_domains
List all available SDM domains (top-level industry categories) with the count of data models in each. Use this as the entry point when the user wants an overview of what sectors are covered, or before calling list_models_by_domain. No parameters required. Example: list_domains({})
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It successfully discloses the return structure (domains with counts) and the hierarchical concept (top-level categories), but does not mention performance characteristics or error behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently structured: first defines the action, second provides usage context, third gives technical details. Every sentence earns its place with no redundancy; information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description adequately explains the return values (domains + counts). It covers the domain concept, usage workflow, and parameter requirements, making it complete for a simple listing tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters, the baseline is 4. The description adds value by explicitly stating 'No parameters required' and providing the example invocation syntax 'list_domains({})', which clarifies the expected empty object input.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('List') with clear resource ('SDM domains') and scope ('all available'). It defines 'domains' as 'top-level industry categories' and specifies the return includes 'count of data models in each', distinguishing it from generic listing tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use ('entry point when the user wants an overview') and specifically names the sibling tool sequence ('before calling list_models_by_domain'), establishing clear workflow context relative to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_models_by_domain
List all data model names available within a specific SDM domain. Use this when the user wants to browse all models in a sector such as SmartCities or SmartHealth. Call list_domains first to confirm valid domain names. Example: list_models_by_domain({"domain": "SmartEnergy"})
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | The SDM domain name to list models from. Must exactly match one of the values returned by list_domains. | |
| limit | No | Maximum number of results. | 50 |
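The recommended sequence (list_domains first, then list_models_by_domain) can be sketched as a small client-side guard. Here `call(tool_name, arguments)` is a stand-in for whatever MCP invoker the agent uses, and the assumption that list_domains returns a mapping of domain name to model count is inferred from the description, not confirmed:

```python
def pick_domain_and_list(call, domain, limit=50):
    """Confirm the domain via list_domains, then list its models.

    `call` is a hypothetical MCP tool invoker; assumes list_domains
    returns a mapping of domain name -> model count.
    """
    domains = call("list_domains", {})
    if domain not in domains:
        raise ValueError(f"Unknown domain {domain!r}; valid: {sorted(domains)}")
    return call("list_models_by_domain", {"domain": domain, "limit": limit})
```

This also sidesteps the 'List all' vs limit=50 ambiguity noted above: an agent that wants everything should pass an explicit, generous limit.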
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to describe the return format, pagination behavior, or the fact that 'all' is constrained by the limit parameter (default 50). Does not mention read-only nature, error conditions, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four well-structured sentences: purpose statement, usage context, prerequisite warning, and concrete example. Every sentence earns its place with zero redundancy. Information is front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple 2-parameter tool, but gaps remain given the lack of output schema or annotations. The description should clarify what data structure is returned and resolve the ambiguity between 'List all' and the limit parameter's truncation behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds value by providing a concrete JSON example ('Example: list_models_by_domain({...})') and reinforcing the relationship between the domain parameter and the list_domains tool ('Must exactly match values returned by list_domains').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb ('List') with clear resource ('data model names') and scope ('within a specific SDM domain'). It distinguishes from sibling tools by emphasizing the 'browse all models' use case versus searching or retrieving specific models.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance ('when the user wants to browse all models in a sector') and critical workflow prerequisite ('Call list_domains first to confirm valid domain names'). Could be improved by explicitly stating when NOT to use it (e.g., versus search_data_models).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_data_models
Search the Smart Data Models (SDM) catalog by keyword, domain name, or partial model name. Use this when the user wants to discover which data models exist for a topic (e.g., 'parking', 'weather', 'energy meter'). Returns a list of matching model names and their subjects. Example: search_data_models({"query": "air quality", "domain": "SmartEnvironment"})
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Keyword or phrase to search for (e.g., 'parking', 'water leak', 'noise level') | |
| domain | No | Filter results to a specific SDM domain. Valid values: SmartCities, SmartAgrifood, SmartWater, SmartEnergy, SmartEnvironment, SmartRobotics, SmartSensoring, CrossSector, SmartAeronautics, SmartDestinations, SmartHealth, SmartManufacturing, SmartLogistics | |
| limit | No | Maximum number of results to return. | 10 |
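Because the domain filter accepts only the thirteen enumerated values, an agent can fail fast before the network round-trip. A sketch, with `build_search_args` as a hypothetical client-side helper:

```python
# The thirteen valid domain values enumerated in the schema.
VALID_SDM_DOMAINS = {
    "SmartCities", "SmartAgrifood", "SmartWater", "SmartEnergy",
    "SmartEnvironment", "SmartRobotics", "SmartSensoring", "CrossSector",
    "SmartAeronautics", "SmartDestinations", "SmartHealth",
    "SmartManufacturing", "SmartLogistics",
}

def build_search_args(query, domain=None, limit=10):
    """Assemble search_data_models arguments, rejecting invalid domains early."""
    if domain is not None and domain not in VALID_SDM_DOMAINS:
        raise ValueError(f"Invalid SDM domain: {domain!r}")
    args = {"query": query, "limit": limit}
    if domain is not None:
        args["domain"] = domain
    return args
```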
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It effectively compensates for the missing output schema by disclosing the return format: 'Returns a list of matching model names and their subjects.' This is exactly the behavioral transparency needed for a discovery tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured across four sentences: purpose definition, usage guidance, return value disclosure, and working example. Every sentence earns its place with zero redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of both annotations and output schema, the description successfully explains the tool's discovery purpose and return structure. It could marginally improve by noting the limit parameter's role in pagination, but the coverage is strong for a 3-parameter search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value by clarifying that the query parameter accepts 'partial model name' (extending beyond the schema's 'Keyword or phrase') and provides a concrete JSON example showing how query and domain parameters interact.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search') and resource ('Smart Data Models catalog'), and enumerates the search methods ('by keyword, domain name, or partial model name'). This effectively distinguishes it from sibling retrieval tools like get_data_model or list_models_by_domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool: 'Use this when the user wants to discover which data models exist for a topic.' While it provides clear context and examples, it does not explicitly name alternative tools for when this one is inappropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_data
Validate a JSON payload against the schema of a specific Smart Data Model. Use this when the user has an IoT payload or NGSI-LD entity and wants to check if it conforms to the standard. Returns a structured result with pass/fail and a list of specific validation errors if any. Example: validate_data({"model_name": "WeatherObserved", "data": {"id": "urn:ngsi-ld:WeatherObserved:001", "type": "WeatherObserved", "temperature": {"type": "Property", "value": 22.3}}})
| Name | Required | Description | Default |
|---|---|---|---|
| model_name | Yes | The exact data model name to validate against — e.g., 'WeatherObserved' | |
| data | Yes | The JSON object to validate. Accepts both NGSI-LD normalized format (with type/value wrappers) and key-values format. | |
| strict | No | Strict validation mode — disallows additional properties. | false |
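The data parameter's dual acceptance (normalized versus key-values) can be illustrated with a simplified converter. This sketch collapses `{"type": "Property", "value": ...}` wrappers only; NGSI-LD Relationship members (which use "object" instead of "value") are deliberately out of scope here:

```python
def to_key_values(normalized):
    """Collapse an NGSI-LD normalized entity into key-values form.

    Simplified: unwraps dict members carrying a 'value' key; 'id' and
    'type' strings, and Relationship members, pass through unchanged.
    """
    out = {}
    for key, val in normalized.items():
        if isinstance(val, dict) and "value" in val:
            out[key] = val["value"]
        else:
            out[key] = val
    return out

# The normalized example from the tool description above:
entity = {
    "id": "urn:ngsi-ld:WeatherObserved:001",
    "type": "WeatherObserved",
    "temperature": {"type": "Property", "value": 22.3},
}
```

Either `entity` or `to_key_values(entity)` should be a valid value for the data argument, per the schema's description.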
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses return structure ('structured result with pass/fail and a list of specific validation errors'), which compensates for missing output schema. Could explicitly state this is a safe, non-destructive operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently structured: purpose → usage context → return value → concrete example. No filler text; every sentence provides actionable information. Example is appropriately detailed without being verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, description adequately explains return values (pass/fail status + error list). With 100% schema coverage and nested object example provided, the description is complete for this complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds concrete example demonstrating NGSI-LD normalized format syntax ('type': 'Property' wrappers), which significantly clarifies expected input structure beyond schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Validate) + resource (JSON payload) + scope (against Smart Data Model schema). Clearly distinguishes from sibling retrieval/search tools by focusing on conformance checking of actual data payloads.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'when the user has an IoT payload or NGSI-LD entity and wants to check if it conforms to the standard.' Lacks explicit 'when not to use' or named alternatives, but the context sufficiently narrows applicability.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Verify Ownership
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [
{
"email": "your-email@example.com"
}
]
}
The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
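Before publishing, the manifest can be sanity-checked locally. A sketch that assumes only the structure shown above (`check_glama_manifest` is a hypothetical helper, not a Glama tool):

```python
import json
import re

def check_glama_manifest(raw):
    """Flag common problems in a /.well-known/glama.json manifest (sketch)."""
    problems = []
    doc = json.loads(raw)
    if doc.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        problems.append("unexpected or missing $schema")
    maintainers = doc.get("maintainers") or []
    if not maintainers:
        problems.append("maintainers list is empty")
    for m in maintainers:
        email = m.get("email", "")
        if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
            problems.append(f"invalid email: {email!r}")
    return problems
```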
Sign in to verify ownership to:
- Control your server's listing on Glama, including description and metadata
- Receive usage reports showing how your server is being used
- Get monitoring and health status updates for your server
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!