dannet
Server Details
DanNet - Danish WordNet with rich lexical relationships and SPARQL access.
- Status: Healthy
- Transport: Streamable HTTP
- Repository: kuhumcst/DanNet
- GitHub Stars: 24
Tool Definition Quality
Average 4.4/5 across 16 of 16 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes, but there is some overlap between get_entity_info and the specific entity tools (get_synset_info, get_word_info, get_sense_info). The descriptions help clarify that get_entity_info is a general-purpose tool for any entity, while the others are specialized, but an agent might still be confused about when to use which. Tools like analyze_namespace_usage and extract_semantic_data also have somewhat overlapping debugging/analysis roles.
Tool names follow a highly consistent verb_noun pattern throughout (e.g., get_synset_info, fetch_ddo_definition, autocomplete_danish_word). All names use snake_case with clear, descriptive verbs like get, fetch, analyze, switch, validate, etc. There are no deviations in naming conventions.
With 16 tools, this is well-scoped for a comprehensive Danish lexical database server. Each tool serves a clear purpose, from core data retrieval (get_synset_info, get_word_info) to advanced features like SPARQL queries, autocomplete, and server management. The count supports rich functionality without being overwhelming.
The tool set provides complete coverage for interacting with DanNet. It includes core CRUD-like operations (fetching entities), advanced querying (SPARQL, autocomplete), debugging tools (analyze_namespace_usage, validate_synset_structure), and utility functions (server switching, cache stats). There are no obvious gaps; agents can perform all expected tasks from basic lookups to complex semantic analysis.
Available Tools
16 tools

analyze_namespace_usage (B)
Analyze namespace usage and provide resolution for prefixed properties.
This debugging tool helps understand how namespaces are used in DanNet JSON-LD data and resolves prefixed URIs to full forms.
Args: entity_data: Any DanNet JSON-LD entity data
Returns: Dict with namespace analysis and URI resolution
| Name | Required | Description | Default |
|---|---|---|---|
| entity_data | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states this is a 'debugging tool', which suggests read-only analysis, but it doesn't clarify whether the tool modifies data, requires specific permissions, or has rate limits, nor what happens with invalid input. The description mentions 'provide resolution' but doesn't explain what that entails behaviorally.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with four sentences. It front-loads the purpose, adds context, then documents parameters and returns. The Args/Returns sections are clear but slightly redundant with the opening statement. Every sentence adds value, though it could be more tightly integrated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given one parameter with low schema coverage, the description adequately explains the input. An output schema exists (implied by 'Has output schema: true'), so the description doesn't need to detail return values beyond 'Dict with namespace analysis and URI resolution'. For a debugging tool with no annotations, it provides sufficient context, though more behavioral details would help.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful context for the single parameter: 'entity_data: Any DanNet JSON-LD entity data' clarifies the expected content type. With 0% schema description coverage (schema only says 'object' with 'additionalProperties: true'), this compensates well by specifying the data domain. No other parameters exist to document.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Analyze namespace usage and provide resolution for prefixed properties' and specifies it's for 'DanNet JSON-LD data'. It distinguishes itself from sibling tools by focusing on namespace analysis rather than data retrieval or querying. However, it doesn't explicitly contrast with specific alternatives like 'validate_synset_structure' which might also handle JSON-LD data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some context: 'This debugging tool helps understand how namespaces are used...' implying it should be used for debugging namespace issues in DanNet JSON-LD. However, it doesn't specify when to use it versus alternatives like 'validate_synset_structure' or 'extract_semantic_data', nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
autocomplete_danish_word (A)
Get autocomplete suggestions for Danish word prefixes.
Useful for discovering Danish vocabulary or finding the correct spelling of words. Returns lemma forms (dictionary forms) of words.
Args:
- prefix: The beginning of a Danish word (minimum 3 characters required)
- max_results: Maximum number of suggestions to return (default: 10)
Returns: Comma-separated string of word completions in alphabetical order
Note: Autocomplete requires at least 3 characters to prevent excessive results.
Example:
suggestions = autocomplete_danish_word("hyg", 5)
# Returns: "hygge, hyggelig, hygiejne"
| Name | Required | Description | Default |
|---|---|---|---|
| prefix | Yes | | |
| max_results | No | | 10 |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it specifies the minimum character requirement ('minimum 3 characters required'), describes the return format ('comma-separated string of word completions in alphabetical order'), and mentions the default value for max_results. However, it doesn't cover potential errors or rate limits.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by usage context, parameter details, return format, and an example, all in well-structured paragraphs. Every sentence adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no annotations, but with output schema), the description is complete: it covers purpose, usage, parameters, returns, constraints, and includes an example. The output schema handles return structure, so the description appropriately focuses on semantics rather than repeating schema details.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate—and it does excellently. It fully explains both parameters: 'prefix' ('The beginning of a Danish word' with minimum length constraint) and 'max_results' ('Maximum number of suggestions to return' with default value), adding crucial meaning beyond the bare schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb ('Get autocomplete suggestions') and resource ('for Danish word prefixes'), distinguishing it from siblings like 'get_word_info' or 'fetch_ddo_definition' by focusing on prefix-based completion rather than full-word lookup or dictionary definitions.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('useful for discovering Danish vocabulary or finding the correct spelling of words'), but does not explicitly state when not to use it or name specific alternatives among the sibling tools (e.g., when to use 'get_word_info' instead).
extract_semantic_data (C)
Extract and normalize semantic data from any DanNet JSON-LD entity.
This tool provides a unified way to extract semantic information from synsets, words, or senses, handling different JSON-LD structures consistently.
Args: entity_data: Any DanNet entity JSON-LD data
Returns: Dict with normalized semantic information
| Name | Required | Description | Default |
|---|---|---|---|
| entity_data | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool extracts and normalizes data, implying a read-only transformation, but doesn't address critical aspects like error handling, performance characteristics (e.g., rate limits), authentication needs, or what 'normalize' entails (e.g., data formatting, deduplication). This leaves significant gaps in understanding the tool's operational behavior.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately concise. It opens with a clear purpose statement, follows with context on handling different structures, and includes separate 'Args' and 'Returns' sections. Each sentence adds value without redundancy, though the 'Args' and 'Returns' labels are slightly redundant given the structured fields, keeping it from a perfect score.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (processing JSON-LD entities), lack of annotations, and presence of an output schema (which covers return values), the description is minimally adequate. It explains what the tool does and its parameter, but misses behavioral details like error cases or normalization specifics. The output schema relieves some burden, but more context on usage and behavior would enhance completeness for effective agent invocation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description includes an 'Args' section that documents the single parameter 'entity_data' as 'Any DanNet entity JSON-LD data,' adding meaning beyond the input schema's generic 'object' type. However, with 0% schema description coverage and only one parameter, this provides basic but incomplete context—it doesn't detail expected JSON-LD structure or validation rules. The baseline is 3 due to the single parameter, but more specificity would improve clarity.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Extract and normalize semantic data from any DanNet JSON-LD entity.' It specifies the verb ('extract and normalize'), resource ('semantic data'), and source ('DanNet JSON-LD entity'). However, it doesn't explicitly differentiate from sibling tools like 'get_entity_info' or 'get_synset_info' that might also process DanNet entities, leaving some ambiguity about when to use this versus those alternatives.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance. It mentions handling 'different JSON-LD structures consistently' and unifying extraction from 'synsets, words, or senses,' which implies a broad applicability. However, it lacks explicit when-to-use rules, prerequisites, or comparisons to sibling tools like 'get_entity_info' or 'validate_synset_structure,' leaving the agent to infer context without clear direction.
fetch_ddo_definition (A)
Fetch the full, untruncated definition from DDO (Den Danske Ordbog) for a synset.
This tool addresses the issue that DanNet synset definitions (:skos/definition) may be capped at a certain length. It retrieves the complete definition from the authoritative DDO source by following sense source URLs.
WORKFLOW:
1. Get synset information to find associated senses
2. Extract DDO source URLs from sense data (dns:source)
3. Fetch DDO HTML pages and parse for definitions
4. Find elements with class "definitionBox selected" and extract span.definition content
IMPORTANT NOTES:
- Looks for CSS classes "definitionBox selected" and child span.definition
- DDO and DanNet have diverged over time, so source URLs may not always work
- This implementation uses httpx for web requests and regex-based HTML parsing
Args: synset_id: Synset identifier (e.g., "synset-1876" or just "1876")
Returns: Dict containing:
- synset_id: The queried synset ID
- ddo_definitions: List of definitions found from DDO pages
- source_urls: List of DDO URLs that were attempted
- success_urls: List of URLs that successfully returned definitions
- errors: List of any errors encountered
- truncated_definition: The original DanNet definition for comparison
Example:
result = fetch_ddo_definition("synset-3047")
# Check result['ddo_definitions'] for full DDO definitions
# Compare with result['truncated_definition'] from DanNet
| Name | Required | Description | Default |
|---|---|---|---|
| synset_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's workflow, important notes about CSS classes and potential URL failures, and implementation details (httpx, regex parsing). However, it lacks explicit information on rate limits, authentication needs, or error handling specifics beyond general 'errors' in returns.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, workflow, notes, args, returns, example) and front-loaded key information. Some details like implementation specifics (httpx, regex) could be trimmed for brevity, but overall it's efficient and each sentence adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (web scraping, parsing), no annotations, and an output schema exists, the description is complete. It explains the purpose, workflow, limitations, parameters, and return structure in detail. The example aids understanding, and the output schema covers return values, so no need to duplicate that information.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaningful semantics: the parameter 'synset_id' is explained as 'Synset identifier (e.g., "synset-1876" or just "1876")', providing examples and clarifying format. This goes beyond the schema's basic title, though it doesn't detail validation rules or constraints.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Fetch the full, untruncated definition from DDO (Den Danske Ordbog) for a synset.' It specifies the verb ('fetch'), resource ('definition from DDO'), and distinguishes it from sibling tools by addressing a specific limitation of DanNet definitions being truncated, unlike general info tools like get_synset_info.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'This tool addresses the issue that DanNet synset definitions may be capped at a certain length.' It provides context on why to use it (to get untruncated definitions) and implies when not to use it (if the original DanNet definition is sufficient). The example further clarifies usage by comparing results.
get_cache_stats (A)
Return statistics about the session-scoped resource cache.
Useful for verifying that caching is working: call get_synset_info (or similar) twice for the same ID and check that cache_size grows by 1 on the first call but not on the second, and that cached_keys contains the expected IDs.
Returns: Dict with:
- cache_size: Total number of cached entries
- cached_keys: List of (base_url, resource_id) pairs currently cached
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and discloses key behavioral traits: it's a read-only operation (implied by 'Return statistics'), session-scoped nature, and practical use for cache verification. It doesn't mention rate limits or error handling, but covers essential behavior.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured with three focused paragraphs: purpose statement, usage guidance with example, and return value specification. Every sentence adds value with zero wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, output schema exists), the description is complete: it explains what the tool does, when to use it, and what it returns. The output schema handles return value details, so the description appropriately focuses on context.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema coverage, so no parameter documentation is needed. The description appropriately focuses on purpose and output rather than inputs, earning a baseline 4 for this scenario.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Return') and resource ('statistics about the session-scoped resource cache'), distinguishing it from sibling tools that focus on linguistic data queries rather than cache monitoring.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly provides when to use this tool ('Useful for verifying that caching is working') and includes a concrete example with get_synset_info, offering clear guidance on application context without alternatives needed.
get_current_dannet_server (A)
Get information about the currently active DanNet server.
Returns: Dict with current server information:
- server_url: The base URL of the current DanNet server
- server_type: "local", "remote", or "custom"
- status: Connection status information
Example:
info = get_current_dannet_server()
# Returns: {"server_url": "https://wordnet.dk", "server_type": "remote", "status": "active"}
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It specifies that the tool returns a dictionary with server information, including connection status, which adds useful context. However, it does not mention potential errors, performance characteristics, or authentication needs, leaving gaps in behavioral transparency for a tool that likely involves network interaction.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by a clear returns section and an example. Every sentence adds value: the first states what it does, the second details the output structure, and the third provides a concrete example. There is no wasted text, making it highly efficient.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there is an output schema (implied by 'Has output schema: true'), the description does not need to fully explain return values, and it provides a good overview with an example. However, for a tool that interacts with a server, additional context like error handling or prerequisites could enhance completeness, though the current level is adequate for a read-only operation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately does not discuss parameters, focusing instead on the return value. This meets the baseline of 4 for tools with no parameters, as it avoids unnecessary repetition.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'information about the currently active DanNet server', making the purpose specific and unambiguous. It distinguishes this tool from siblings like 'switch_dannet_server' (which changes servers) or 'get_cache_stats' (which focuses on cache metrics).
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when information about the current DanNet server is needed, such as checking connection status or server type. However, it does not explicitly state when to use this tool versus alternatives like 'switch_dannet_server' for server management or other 'get_' tools for specific data queries, leaving some guidance implicit rather than explicit.
get_entity_info (A)
Get comprehensive RDF data for any entity in the DanNet database.
Supports both DanNet entities and external vocabulary entities loaded into the triplestore from various schemas and datasets.
UNDERSTANDING THE DATA MODEL: The DanNet database contains entities from multiple sources:
- DanNet entities (namespace="dn"): synsets, words, senses, and other resources
- External entities (other namespaces): OntoLex vocabulary, Inter-Lingual Index, etc.
All entities follow RDF patterns with namespace prefixes for properties and relationships.
NAVIGATION TIPS:
- DanNet synsets have rich semantic relationships (wn:hypernym, wn:hyponym, etc.)
- External entities provide vocabulary definitions and cross-references
- Use parse_resource_id() on URI references to get clean IDs
- Check @type to understand what kind of entity you're working with
Args:
- identifier: Entity identifier (e.g., "synset-3047", "word-11021628", "LexicalConcept", "i76470")
- namespace: Namespace for the entity (default: "dn" for DanNet entities)
  - "dn": DanNet entities via /dannet/data/ endpoint
  - Other values: External entities via /dannet/external/{namespace}/ endpoint
  - Common external namespaces: "ontolex", "ili", "wn", "lexinfo", etc.
Returns: Dict containing JSON-LD format with:
- @context → namespace mappings (if applicable)
- @id → entity identifier
- @type → entity type
- All RDF properties with namespace prefixes (e.g., wn:hypernym, ontolex:evokes)
- For DanNet synsets: dns:ontologicalType and dns:sentiment (if applicable)
- Entity-specific convenience fields (synset_id, resource_id, etc.)
Examples:
# DanNet entities
get_entity_info("synset-3047")     # DanNet synset
get_entity_info("word-11021628")   # DanNet word
get_entity_info("sense-21033604")  # DanNet sense
# External vocabulary entities
get_entity_info("LexicalConcept", namespace="ontolex") # OntoLex class definition
get_entity_info("i76470", namespace="ili") # Inter-Lingual Index entry
get_entity_info("noun", namespace="lexinfo")         # Lexinfo part-of-speech

| Name | Required | Description | Default |
|---|---|---|---|
| namespace | No | | dn |
| identifier | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by explaining the data model, namespace handling, endpoint routing differences, and return format details. It discloses that DanNet entities use '/dannet/data/' while external entities use '/dannet/external/{namespace}/', and provides concrete examples of common namespaces. However, it doesn't mention rate limits, authentication requirements, or error conditions.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, data model, navigation tips, args, returns, examples) and front-loaded with the core functionality. While comprehensive, some sections like 'UNDERSTANDING THE DATA MODEL' and 'NAVIGATION TIPS' could potentially be condensed as they contain information that might be inferred from the examples and parameter explanations.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (handling multiple namespaces and entity types), no annotations, 0% schema coverage, but with an output schema present, the description is remarkably complete. It explains the data model, provides usage guidance, documents parameters thoroughly, describes the return format in detail, and gives comprehensive examples covering both DanNet and external entities.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing rich semantic context for both parameters. For 'identifier', it explains the format with multiple examples across different entity types. For 'namespace', it documents the default value ('dn'), explains what different values mean, lists common external namespaces, and describes how namespace affects endpoint routing.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Get comprehensive RDF data for any entity in the DanNet database' with specific verb ('Get'), resource ('RDF data'), and scope ('any entity in the DanNet database'). It clearly distinguishes from sibling tools like get_synset_info, get_word_info, and get_sense_info by covering all entity types across multiple namespaces, not just specific DanNet entity types.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: it explains this is for 'any entity' across multiple namespaces, while sibling tools like get_synset_info, get_word_info, and get_sense_info appear to be specialized for specific DanNet entity types. The 'UNDERSTANDING THE DATA MODEL' and 'NAVIGATION TIPS' sections further clarify appropriate usage contexts.
get_sense_info (A)
Get comprehensive RDF data for a DanNet sense (lexical sense).
UNDERSTANDING THE DATA MODEL: Senses are ontolex:LexicalSense instances connecting words to synsets. They represent specific meanings of words with examples and definitions.
KEY RELATIONSHIPS:
LEXICAL CONNECTIONS:
- ontolex:isSenseOf → word this sense belongs to
- ontolex:isLexicalizedSenseOf → synset this sense represents
SEMANTIC INFORMATION:
- lexinfo:senseExample → usage examples in context
- rdfs:label → sense label (e.g., "hund_1§1")
REGISTER AND STYLISTIC INFORMATION:
- lexinfo:register → formal register classification (e.g., ":lexinfo/slangRegister")
- lexinfo:usageNote → human-readable usage notes (e.g., "slang", "formal")
SOURCE INFORMATION:
- dns:source → source URL for this sense entry
DDO CONNECTION (Den Danske Ordbog): DanNet senses are derived from DDO (ordnet.dk), the authoritative modern Danish dictionary.
SENSE LABELS: The format "word_entry§definition" connects to DDO structure:
- "hund_1§1" = word "hund", entry 1, definition 1 in DDO
- "forlygte_§2" = word "forlygte", definition 2 in DDO
- The § notation directly corresponds to DDO's definition numbering
SOURCE TRACEABILITY: The dns:source URLs link back to specific DDO entries:
- Format: https://ordnet.dk/ddo/ordbog?entry_id=X&def_id=Y&query=word
- Note: Some DDO URLs may not resolve correctly if IDs have changed since import
- If the DDO page loads correctly, the relevant definition has CSS class "selected"
METADATA ORIGINS: Usage examples, register information, and definitions flow from DDO's corpus-based lexicographic data, providing authoritative linguistic information.
NAVIGATION TIPS:
- Follow ontolex:isSenseOf to find the parent word
- Follow ontolex:isLexicalizedSenseOf to find the synset
- Check lexinfo:senseExample for usage examples from DDO corpus
- Check lexinfo:register and lexinfo:usageNote for stylistic information
- Use dns:source to attempt tracing back to original DDO definition (with caveats)
- Use parse_resource_id() on URI references to get clean IDs
Args: sense_id: Sense identifier (e.g., "sense-21033604" or just "21033604")
Returns: Dict containing:
- All RDF properties with namespace prefixes (e.g., ontolex:isSenseOf)
- resource_id → clean identifier for convenience
- All sense properties and relationships
Example:
info = get_sense_info("sense-21033604")  # "hund_1§1" sense
# Check info['ontolex:isSenseOf'] for parent word
# Check info['ontolex:isLexicalizedSenseOf'] for synset
# Check info['lexinfo:senseExample'] for usage examples from DDO
# Check info['lexinfo:register'] for register classification
# Check info['lexinfo:usageNote'] for usage notes like "slang"
# Check info['dns:source'] for DDO source URL (may not always work)
| Name | Required | Description | Default |
|---|---|---|---|
| sense_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does this well by explaining the data model, source traceability (including caveats about DDO URLs), and metadata origins. However, it doesn't explicitly mention performance characteristics like rate limits or error handling, which keeps it from a perfect score.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (Understanding, Key Relationships, Navigation Tips), but it's quite lengthy with multiple paragraphs. While all content is relevant, it could be more front-loaded; the core purpose appears in the first sentence, but detailed explanations follow extensively. Some sentences in the metadata origins section could be condensed.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of the tool (RDF data model, DDO connections) and the presence of an output schema, the description is remarkably complete. It explains the data model, relationships, source traceability, and provides a detailed example of the return structure. The output schema handles return values, so the description appropriately focuses on semantics and usage.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage for its single parameter, but the description compensates fully. It explains the 'sense_id' parameter with examples ('sense-21033604' or just '21033604'), connects it to the DDO structure ('hund_1§1'), and shows how to use it in the example. This adds substantial meaning beyond the bare schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get comprehensive RDF data for a DanNet sense (lexical sense).' It specifies the verb ('Get'), resource ('RDF data'), and target ('DanNet sense'), and distinguishes it from siblings like get_word_info or get_synset_info by focusing specifically on lexical senses.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool through the 'NAVIGATION TIPS' section, which lists specific scenarios (e.g., 'Follow ontolex:isSenseOf to find the parent word'). It also implicitly distinguishes from siblings by focusing on sense-level data rather than word or synset information, and references related tools like parse_resource_id() for clean IDs.
get_synset_info (A)
Get comprehensive RDF data for a DanNet synset (lexical concept).
UNDERSTANDING THE DATA MODEL: Synsets are ontolex:LexicalConcept instances representing word meanings. They connect to words via ontolex:isEvokedBy and have rich semantic relations.
KEY RELATIONSHIPS (by importance):
TAXONOMIC (most fundamental):
- wn:hypernym → broader concept (e.g., "hund" → "pattedyr")
- wn:hyponym → narrower concepts (e.g., "hund" → "puddel", "schæfer")
- dns:orthogonalHypernym → cross-cutting categories [Danish: ortogonalt hyperonym]
LEXICAL CONNECTIONS:
- ontolex:isEvokedBy → words expressing this concept [Danish: fremkaldes af]
- ontolex:lexicalizedSense → sense instances [Danish: leksikaliseret betydning]
- wn:similar → related but distinct concepts
PART-WHOLE RELATIONS:
- wn:mero_part/wn:holo_part → component relationships [English: meronym/holonym part]
- wn:mero_substance/wn:holo_substance → material composition
- wn:mero_member/wn:holo_member → membership relations
SEMANTIC PROPERTIES:
- dns:ontologicalType → semantic classification with @set array of dnc: types. Common types: dnc:Animal, dnc:Human, dnc:Object, dnc:Physical, dnc:Dynamic (events/actions), dnc:Static (states)
- dns:sentiment → emotional polarity with marl:hasPolarity and marl:polarityValue
- wn:lexfile → semantic domain (e.g., "noun.food", "verb.motion")
- skos:definition → synset definition (may be truncated for length)
CROSS-LINGUISTIC:
- wn:ili → Interlingual Index for cross-language mapping
- wn:eq_synonym → Open English WordNet equivalent
DDO CONNECTION FOR FULLER DEFINITIONS: DanNet synset definitions (skos:definition) may be truncated (ending with "…"). For complete definitions, use the fetch_ddo_definition() tool which automatically retrieves full DDO text, or manually examine sense source URLs via get_sense_info().
NAVIGATION TIPS:
- Follow wn:hypernym chains to find semantic categories
- Check dns:inherited for properties from parent synsets
- Use parse_resource_id() on URI references to get clean IDs
- For fuller definitions, examine individual sense source URLs via get_sense_info()
Args: synset_id: Synset identifier (e.g., "synset-1876" or just "1876")
Returns: Dict containing JSON-LD format with:
- @context → namespace mappings
- @id → entity identifier (e.g., "dn:synset-1876")
- @type → "ontolex:LexicalConcept"
- All RDF properties with namespace prefixes (e.g., wn:hypernym)
- dns:ontologicalType → {"@set": ["dnc:Animal", ...]} (if applicable)
- dns:sentiment → {"marl:hasPolarity": "marl:Positive", "marl:polarityValue": "3"} (if applicable)
- synset_id → clean identifier for convenience
Example:
info = get_synset_info("synset-52")  # cake synset
# Check info['wn:hypernym'] for parent concepts
# Check info['dns:ontologicalType']['@set'] for semantic types
# Check info['dns:sentiment']['marl:hasPolarity'] for sentiment
| Name | Required | Description | Default |
|---|---|---|---|
| synset_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure and does so comprehensively. It explains the data model, key relationships by importance, semantic properties, cross-linguistic mappings, and navigation tips. It also discloses that definitions may be truncated and provides workarounds, adding significant context beyond basic functionality.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (UNDERSTANDING THE DATA MODEL, KEY RELATIONSHIPS, etc.) and front-loads the core purpose. While comprehensive, some sections could be more concise (e.g., the detailed relationship explanations), but overall it maintains focus with minimal wasted text.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of the tool (RDF data retrieval with rich semantic relationships), no annotations, and 0% schema coverage, the description provides exceptional completeness. It explains the data model, relationships, properties, cross-linguistic aspects, navigation tips, and includes a detailed example. The output schema exists, so the description appropriately focuses on interpretation rather than repeating return structure.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description provides detailed parameter semantics despite 0% schema description coverage. It explains that synset_id is a 'Synset identifier' with examples ('synset-1876' or just '1876'), clarifies what it represents, and shows usage in the example. This fully compensates for the lack of schema documentation.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get comprehensive RDF data for a DanNet synset (lexical concept).' It specifies the exact resource (DanNet synset) and action (get RDF data), distinguishing it from sibling tools like get_word_info or get_sense_info that focus on different entities.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives. It specifically mentions using fetch_ddo_definition() for complete definitions when DanNet definitions are truncated, and get_sense_info() for examining individual sense source URLs. It also distinguishes this tool's focus on synsets from other lexical tools.
get_word_info (A)
Get comprehensive RDF data for a DanNet word (lexical entry).
UNDERSTANDING THE DATA MODEL: Words are ontolex:LexicalEntry instances representing lexical forms. They connect to synsets via senses and have morphological information.
KEY RELATIONSHIPS:
LEXICAL CONNECTIONS:
- ontolex:evokes → synsets this word can express
- ontolex:sense → sense instances connecting word to synsets
- ontolex:canonicalForm → canonical form with written representation
MORPHOLOGICAL PROPERTIES:
- lexinfo:partOfSpeech → part of speech classification
- wn:partOfSpeech → WordNet part of speech
- ontolex:canonicalForm/ontolex:writtenRep → written form
CROSS-REFERENCES:
- owl:sameAs → equivalent resources in other datasets
- dns:source → source URL for this word entry
NAVIGATION TIPS:
- Follow ontolex:evokes to find synsets this word expresses
- Check ontolex:sense for detailed sense information
- Use parse_resource_id() on URI references to get clean IDs
Args: word_id: Word identifier (e.g., "word-11021628" or just "11021628")
Returns: Dict containing:
- All RDF properties with namespace prefixes (e.g., ontolex:evokes)
- resource_id → clean identifier for convenience
- All linguistic properties and relationships
Example:
info = get_word_info("word-11021628")  # "hund" word
# Check info['ontolex:evokes'] for synsets this word can express
# Check info['ontolex:sense'] for senses
| Name | Required | Description | Default |
|---|---|---|---|
| word_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by detailing the data model, relationships, and return structure. It explains that the tool fetches RDF properties, includes a convenience field (resource_id), and provides navigation tips. It doesn't cover potential errors, rate limits, or authentication needs, but offers substantial behavioral insight.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with sections like 'UNDERSTANDING THE DATA MODEL' and 'NAVIGATION TIPS', but it includes extensive explanatory content (e.g., RDF relationships) that might be verbose for a tool description. Some details could be streamlined, though they add value for understanding the tool's output.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (RDF data retrieval), no annotations, and an output schema present, the description is highly complete. It thoroughly explains the data model, key relationships, return format, and provides an example, making it self-sufficient for an agent to understand and use the tool effectively.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining the word_id parameter with examples ('word-11021628' or '11021628') and context (e.g., for the word 'hund'). This clarifies the parameter's format and usage beyond the bare schema, though it could detail validation rules or constraints.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get comprehensive RDF data for a DanNet word (lexical entry).' It specifies the verb ('Get'), resource ('RDF data'), and domain ('DanNet word'), distinguishing it from siblings like get_word_overview or get_word_synsets by emphasizing comprehensive data retrieval.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit usage context through 'NAVIGATION TIPS' and the example, suggesting when to use this tool (e.g., to explore word-synset relationships). However, it lacks explicit guidance on when to choose this over alternatives like get_word_overview or get_word_synsets, which might offer simpler or more focused data.
get_word_overview (A)
Get a complete overview of all senses for a Danish word in a single call.
Replaces the common pattern of calling get_word_synsets → get_synset_info per result → get_word_synonyms, collapsing 5-15 HTTP round-trips into one SPARQL query.
Only returns synsets where the word is a primary lexical member (i.e. the word itself has a direct sense in the synset), excluding multi-word expressions that merely contain the word as a component.
Args: word: The Danish word to look up
Returns: List of dicts, one per synset, each containing:
- synset_id: Clean synset identifier (e.g. "synset-3047")
- label: Human-readable synset label
- definition: Synset definition (may be truncated with "…")
- ontological_types: List of dnc: type URIs
- synonyms: List of co-member lemmas (true synonyms only)
- hypernym: Dict with synset_id and label of the immediate broader concept, or null
- lexfile: WordNet lexicographer file name (e.g. "noun.animal"), or null if absent
Example:
overview = get_word_overview("hund")
# Returns list of 4 synsets, the first being:
# {"synset_id": "synset-3047",
#  "label": "{hund_1§1; køter_§1; vovhund_§1; vovse_§1}",
#  "definition": "pattedyr som har god lugtesans ...",
#  "ontological_types": ["dnc:Animal", "dnc:Object"],
#  "synonyms": ["køter", "vovhund", "vovse"],
#  "lexfile": "noun.animal"}

# Pass synset_id to get_synset_info() for full JSON-LD data on any result:
# full_data = get_synset_info(overview[0]["synset_id"])

| Name | Required | Description | Default |
|---|---|---|---|
| word | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it collapses multiple HTTP round-trips into one SPARQL query (performance benefit), specifies inclusion/exclusion criteria for synsets, and notes that definitions may be truncated. However, it lacks details on error handling, rate limits, or authentication needs, leaving some behavioral aspects uncovered.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded, starting with a clear purpose statement. Every sentence adds value: it explains the efficiency gain, details filtering criteria, documents parameters and returns, and includes a practical example. There is no wasted text, and the information is organized logically for easy comprehension.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (aggregating data from multiple sources) and the presence of an output schema (implied by the detailed return description), the description is complete. It covers purpose, usage, behavioral traits, parameter semantics, and return structure with an example. The output schema reduces the need to explain return values in depth, and the description fills all gaps effectively.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, so the description must fully compensate. It adds significant meaning beyond the schema: it explains that the 'word' parameter is 'The Danish word to look up,' clarifies it returns synsets only where the word is a primary lexical member, and excludes multi-word expressions. This provides essential context not present in the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Get[s] a complete overview of all senses for a Danish word in a single call,' clearly specifying the verb ('Get'), resource ('overview of all senses'), and target ('Danish word'). It distinguishes from siblings like get_word_synsets and get_synset_info by explaining it replaces the pattern of calling those tools sequentially, making its unique purpose evident.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines: it states when to use this tool ('Replaces the common pattern of calling get_word_synsets → get_synset_info per result → get_word_synonyms') and when not to use it ('Only returns synsets where the word is a primary lexical member, excluding multi-word expressions'). It also mentions alternatives like get_synset_info for full data, offering clear context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_word_synonymsAInspect
Find synonyms for a Danish word through shared synsets (word senses).
SYNONYM TYPES IN DANNET:
True synonyms: Words sharing the exact same synset
Context-specific: Different synonyms for different word senses
Note: Near-synonyms via wn:similar relations are not currently included
The function returns all words that share synsets with the input word, effectively finding lexical alternatives that express the same concepts.
Args: word: The Danish word to find synonyms for
Returns: Comma-separated string of synonymous words (aggregated across all word senses)
Example:
synonyms = get_word_synonyms("hund")
# Returns: "køter, vovhund, vovse"
Note: Check synset definitions to understand which synonyms apply to which meaning (polysemy is common in Danish).
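A minimal parsing sketch (assuming a Python client that surfaces the tool as a plain function; the splitting logic is illustrative, not part of the server):
raw = get_word_synonyms("hund")  # e.g. "køter, vovhund, vovse"
synonyms = [s.strip() for s in raw.split(",") if s.strip()]
# ["køter", "vovhund", "vovse"]; aggregated across all senses, so check
# synset definitions before treating any pair as interchangeable.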
| Name | Required | Description | Default |
|---|---|---|---|
| word | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it explains the synonym types included (true synonyms, context-specific) and excluded (near-synonyms via wn:similar relations), clarifies that results are aggregated across all word senses, and specifies the return format (comma-separated string). It also notes polysemy is common in Danish, which is useful context. The description doesn't mention performance characteristics like rate limits or error conditions, but provides substantial operational clarity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized. It begins with a clear purpose statement, then provides important context about synonym types and limitations, explains what the function returns, documents parameters and return values, and gives a concrete example. Every sentence adds value without redundancy, and the information is front-loaded with the most important details first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, specific linguistic function), no annotations, and the presence of an output schema (implied by 'Returns' section), the description is complete enough. It covers purpose, usage context, behavioral details, parameter semantics, return format, and includes an example. The output schema existence means the description doesn't need to fully document return structure, and it provides sufficient guidance for effective tool use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, so the description must compensate. It clearly explains the single parameter: 'word: The Danish word to find synonyms for.' This adds essential semantic meaning beyond the schema's basic string type. While it doesn't provide format constraints or examples beyond the general example, it adequately defines the parameter's purpose and usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Find synonyms for a Danish word through shared synsets (word senses).' It specifies the verb ('find'), resource ('synonyms'), language ('Danish'), and methodology ('through shared synsets'), distinguishing it from sibling tools like get_word_info or get_word_overview that provide different types of word information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: for finding lexical alternatives that express the same concepts. It mentions checking synset definitions to understand which synonyms apply to which meaning, which helps guide usage. However, it doesn't explicitly state when NOT to use it or name specific alternatives among siblings like get_word_synsets, which provides more detailed synset information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_word_synsetsAInspect
Get synsets (word meanings) for a Danish word, returning a sorted list of lexical concepts.
DanNet follows the OntoLex-Lemon model where:
Words (ontolex:LexicalEntry) evoke concepts through senses
Synsets (ontolex:LexicalConcept) represent units of meaning
Multiple words can share the same synset (synonyms)
One word can have multiple synsets (polysemy)
This function returns all synsets associated with a word, effectively giving you all the different meanings/senses that word can have. Each synset represents a distinct semantic concept with its own definition and semantic relationships.
Common patterns in Danish:
Nouns often have multiple senses (e.g., "kage" = cake/lump)
Verbs distinguish motion vs. state (e.g., "løbe" = run/flow)
Check synset's dns:ontologicalType for semantic classification
DDO CONNECTION AND SYNSET LABELS: Synset labels are compositions of DDO-derived sense labels, showing all words that express the same meaning. For example:
"{hund_1§1; køter_§1; vovhund_§1; vovse_§1}" = all words meaning "domestic dog"
"{forlygte_§2; babs_§1; bryst_§2; patte_1§1a}" = all words meaning "female breast"
Each individual sense label follows DDO structure:
"hund_1§1" = word "hund", entry 1, definition 1 in DDO (ordnet.dk)
"patte_1§1a" = word "patte", entry 1, definition 1, subdefinition a
The § notation connects directly to DDO's definition numbering system
This composition reveals the semantic relationships between Danish words and their shared meanings, all traceable back to authoritative DDO lexicographic data.
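As an illustration, the label composition above can be unpacked mechanically. The following is a hedged sketch; the parse_synset_label helper is hypothetical and not part of the server:
def parse_synset_label(label: str) -> list[dict]:
    """Split a composed synset label like "{hund_1§1; vovse_§1}" into sense records."""
    senses = []
    for part in label.strip("{}").split(";"):
        word_part, _, ddo_def = part.strip().partition("§")    # "§" precedes the DDO definition number
        word, _, entry = word_part.rstrip("_").partition("_")  # entry number may be absent
        senses.append({"word": word, "entry": entry or None, "ddo_definition": ddo_def})
    return senses

parse_synset_label("{hund_1§1; vovse_§1}")
# => [{'word': 'hund', 'entry': '1', 'ddo_definition': '1'},
#     {'word': 'vovse', 'entry': None, 'ddo_definition': '1'}]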
RETURN BEHAVIOR: This function has two possible return modes depending on search results:
MULTIPLE RESULTS: Returns List[SearchResult] with basic information for each synset
SINGLE RESULT (redirect): Returns full synset data Dict when DanNet automatically redirects to a single synset. This provides immediate access to all semantic relationships, ontological types, sentiment data, and other rich information without requiring a separate get_synset_info() call.
The single-result case is equivalent to calling get_synset_info() on the synset, providing the same comprehensive RDF data structure with all semantic relations.
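Because the return type changes with the number of hits, callers should branch on it; a minimal sketch, assuming a Python client that exposes the tool as a plain function returning the types described above:
result = get_word_synsets("hund")
if isinstance(result, list):
    # Multiple results: basic SearchResult info per synset
    for r in result:
        print(r["synset_id"], r["label"])
else:
    # Single result (redirect): full synset data, equivalent to get_synset_info()
    print(result.get("synset_id"), result.get("wn:hypernym"))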
Args: query: The Danish word or phrase to search for
language: Language for labels and definitions in results (default: "da" for Danish, "en" for English when available)
Note: Only Danish words can be searched regardless of this parameter
Returns: MULTIPLE RESULTS: List of SearchResult objects with:
- word: The lexical form
- synset_id: Unique synset identifier (format: synset-NNNNN)
- label: Human-readable synset label (e.g., "{kage_1§1}")
- definition: Brief semantic definition (may be truncated with "...")
SINGLE RESULT: Dict with complete synset data including:
- All RDF properties with namespace prefixes (e.g., wn:hypernym)
- dns:ontologicalType → semantic types with @set array
- dns:sentiment → parsed sentiment (if present)
- synset_id → clean identifier for convenience
- All semantic relationships and linguistic properties
Examples:
# Multiple results case
results = get_word_synsets("hund")
# Returns list of search result dictionaries for all meanings of "hund"
# => [{"word": "hund", "synset_id": "synset-3047", ...}, ...]
# Single result case (redirect)
result = get_word_synsets("svinkeærinde")
# Returns complete synset data for unique word
# => {'wn:hypernym': 'dn:synset-11677', 'dns:sentiment': {...}, ...}
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | ||
| language | No | da |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure and excels. It details the two return modes (multiple results vs. single result redirect), explains the DDO connection and synset label structure, describes the composition of labels showing semantic relationships, and clarifies that the single-result case provides comprehensive RDF data equivalent to get_synset_info(). This goes beyond basic functionality to explain how the tool behaves in different scenarios.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (DanNet model explanation, common patterns, DDO connection, return behavior, args, returns, examples). While comprehensive, some sections (like the detailed DDO label explanation) are quite lengthy. Every sentence adds value, but it could be more front-loaded; the core purpose is clear early, but the detailed behavioral nuances come later.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of the tool (semantic search with dual return modes), no annotations, and an output schema that likely documents the return structure, the description is exceptionally complete. It explains the underlying data model (OntoLex-Lemon), provides practical examples of Danish word patterns, details the DDO connection, thoroughly describes the two return behaviors with examples, and clarifies parameter semantics. This provides all necessary context for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by explaining both parameters. For 'query', it specifies 'The Danish word or phrase to search for' and notes 'Only Danish words can be searched regardless of this parameter.' For 'language', it explains 'Language for labels and definitions in results (default: "da" for Danish, "en" for English when available).' This adds crucial semantic context not present in the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get synsets (word meanings) for a Danish word, returning a sorted list of lexical concepts.' It specifies the verb ('get'), resource ('synsets'), and scope ('Danish word'), and distinguishes it from siblings like get_word_info, get_word_overview, and get_word_synonyms by focusing on lexical concepts/meanings rather than general word data or synonym lists.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: for retrieving all meanings/senses of a Danish word, with examples of common patterns (nouns with multiple senses, verbs distinguishing motion vs. state). It mentions checking synset's dns:ontologicalType for semantic classification. However, it doesn't explicitly state when NOT to use it or name specific alternatives among siblings, though it implies get_synset_info is for detailed data on a known synset ID.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sparql_queryAInspect
Execute a SPARQL SELECT query against the DanNet triplestore.
This tool provides direct access to DanNet's RDF data through SPARQL queries. The query is automatically prepended with common namespace prefix declarations, so you can use short prefixes instead of full URIs in your queries.
============================================================
CRITICAL PERFORMANCE RULES (read before writing any query):
ALWAYS start from a known entity URI or a word lookup — never scan the whole graph. FAST: dn:synset-3047 wn:hypernym ?x . SLOW: ?x wn:hypernym ?y . (scans every synset)
ALWAYS use DISTINCT for SELECT queries to avoid duplicate rows.
NEVER use FILTER(CONTAINS(...)) on labels across the whole graph. SLOW: ?s rdfs:label ?l . FILTER(CONTAINS(?l, "hund")) FAST: Use get_word_synsets("hund") first, then query specific synset URIs.
NEVER create cartesian products — every triple pattern must share a variable with at least one other pattern. SLOW: ?x a ontolex:LexicalConcept . ?y a ontolex:LexicalEntry . (cross join!)
ALWAYS add LIMIT (even if max_results caps it server-side, explicit LIMIT lets the query engine optimize).
Use property paths for multi-hop traversals: FAST: dn:synset-3047 wn:hypernym+ ?ancestor . (transitive closure) FAST: ?entry ontolex:canonicalForm/ontolex:writtenRep "hund"@da . (path)
Prefer VALUES over FILTER for matching multiple known entities: FAST: VALUES ?synset { dn:synset-3047 dn:synset-3048 } ?synset rdfs:label ?l . SLOW: ?synset rdfs:label ?l . FILTER(?synset = dn:synset-3047 || ?synset = dn:synset-3048)
The triplestore contains BOTH DanNet (Danish, dn: namespace) AND the Open English WordNet (en: namespace). Unanchored queries will scan both. To restrict to Danish data, anchor on dn: URIs or use @da language tags.
============================================
FAST QUERY TEMPLATES (copy and adapt these):
TEMPLATE 1: Find synsets for a Danish word (via word lookup)
SELECT DISTINCT ?synset ?label ?def WHERE { ?entry ontolex:canonicalForm/ontolex:writtenRep "WORD"@da . ?entry ontolex:sense/ontolex:isLexicalizedSenseOf ?synset . ?synset rdfs:label ?label . OPTIONAL { ?synset skos:definition ?def } }
TEMPLATE 2: Get all properties of a known synset
SELECT ?p ?o WHERE { dn:synset-NNNN ?p ?o . } LIMIT 50
TEMPLATE 3: Find hypernyms (broader concepts) of a known synset
SELECT DISTINCT ?hypernym ?label WHERE { dn:synset-NNNN wn:hypernym ?hypernym . ?hypernym rdfs:label ?label . }
TEMPLATE 4: Find hyponyms (narrower concepts) of a known synset
SELECT DISTINCT ?hyponym ?label WHERE { ?hyponym wn:hypernym dn:synset-NNNN . ?hyponym rdfs:label ?label . }
TEMPLATE 5: Trace full hypernym chain (taxonomic ancestors)
SELECT DISTINCT ?ancestor ?label WHERE { dn:synset-NNNN wn:hypernym+ ?ancestor . ?ancestor rdfs:label ?label . }
TEMPLATE 6: Find all relationships OF a known synset
SELECT DISTINCT ?rel ?target ?targetLabel WHERE { dn:synset-NNNN ?rel ?target . ?target rdfs:label ?targetLabel . FILTER(isURI(?target)) } LIMIT 50
TEMPLATE 7: Find all relationships TO a known synset
SELECT DISTINCT ?source ?rel ?sourceLabel WHERE { ?source ?rel dn:synset-NNNN . ?source rdfs:label ?sourceLabel . FILTER(isURI(?source)) } LIMIT 50
TEMPLATE 8: Query multiple known synsets at once
SELECT DISTINCT ?synset ?label ?def WHERE { VALUES ?synset { dn:synset-3047 dn:synset-3048 dn:synset-6524 } ?synset rdfs:label ?label . OPTIONAL { ?synset skos:definition ?def } }
TEMPLATE 9: Find functional relations for a specific synset
SELECT DISTINCT ?rel ?target ?targetLabel WHERE { dn:synset-NNNN ?rel ?target . ?target rdfs:label ?targetLabel . VALUES ?rel { dns:usedFor dns:usedForObject wn:agent wn:instrument wn:causes } }
TEMPLATE 10: Find ontological type of a synset (stored as RDF Bag)
SELECT ?type WHERE { dn:synset-NNNN dns:ontologicalType ?bag . ?bag ?pos ?type . FILTER(STRSTARTS(STR(?pos), STR(rdf:_))) }
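A hedged example of adapting one of these templates through the tool; the synset ID is the one shown in the get_word_synsets example above, and the Python call form assumes a client that exposes the tool as a function:
# Adapting TEMPLATE 3: hypernyms of the "hund" synset
res = sparql_query(
    """SELECT DISTINCT ?hypernym ?label WHERE {
  dn:synset-3047 wn:hypernym ?hypernym .
  ?hypernym rdfs:label ?label .
} LIMIT 10"""
)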
============================================
KNOWN PREFIXES (automatically declared):
dn: (DanNet data), dns: (DanNet schema), dnc: (DanNet concepts), wn: (WordNet relations), ontolex: (lexical model), skos: (definitions), rdfs: (labels), rdf: (types), owl: (ontology), lexinfo: (morphology), marl: (sentiment), dc: (metadata), ili: (interlingual index), en: (English WordNet), enl: (English lemmas), cor: (Danish register)
Args:
query: SPARQL SELECT query string (prefixes will be automatically added)
timeout: Query timeout in milliseconds (default: 8000, max: 15000)
max_results: Maximum number of results to return (default: 100, max: 100)
distinct: Auto-apply DISTINCT to SELECT queries (default: True). Set to False when you need duplicate rows, e.g. for frequency counts.
inference: Control model selection for query execution (default: None).
- None = auto-detect: tries the base model first, retries with inference if SELECT results are empty (best for most queries).
- True = force inference model: needed for inverse relations like wn:hyponym, wn:holonym, etc. that are derived by OWL reasoning.
- False = force base model only, no retry.
Returns: Dict containing SPARQL results in standard JSON format:
- head: Query metadata with variable names
- results: Bindings array with variable-value mappings
Each value includes type (uri/literal) and language information when applicable.
Note: Only SELECT queries are supported. The query is validated before execution.
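Continuing from the TEMPLATE 3 call sketched above, a minimal way to consume the head/results shape (field names follow the standard SPARQL 1.1 JSON results format):
# Flatten each binding into a plain {variable: value} row
for binding in res["results"]["bindings"]:
    row = {var: binding[var]["value"] for var in res["head"]["vars"] if var in binding}
    print(row)  # e.g. {'hypernym': '.../synset-...', 'label': '{...}'}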
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | ||
| timeout | No | ||
| distinct | No | ||
| inference | No | ||
| max_results | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure and does so comprehensively. It explains performance constraints (8 critical rules), automatic prefix handling, timeout and result limits, inference model behavior, query validation, and return format. It also warns about scanning both Danish and English data without proper anchoring.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, rules, templates, prefixes, parameters, returns), but it's quite long due to the complexity of the tool. Every section adds value, though some template examples could potentially be referenced rather than fully included. The critical information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex query tool with 5 parameters, 0% schema coverage, no annotations, but with an output schema, the description provides exceptional completeness. It covers purpose, usage guidelines, performance constraints, parameter semantics, return format, limitations (SELECT only), and includes practical templates. The output schema handles return value documentation, allowing the description to focus on usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description must compensate for the schema's lack of parameter documentation. It thoroughly explains all 5 parameters: query (with extensive examples), timeout (default and max), max_results (default and max), distinct (default and when to disable), and inference (three modes with detailed behavior explanations).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Execute a SPARQL SELECT query against the DanNet triplestore.' It specifies the exact action (execute), resource (DanNet triplestore), and query type (SPARQL SELECT), distinguishing it from sibling tools like get_word_synsets or get_synset_info that provide higher-level abstractions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives. It states that queries should 'start from a known entity URI or a word lookup' and recommends using get_word_synsets() first for word-based queries. It also distinguishes this low-level query tool from higher-level sibling tools by emphasizing direct SPARQL access.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
switch_dannet_serverAInspect
Switch between local and remote DanNet servers on the fly.
This tool allows you to change the DanNet server endpoint during runtime without restarting the MCP server. Useful for switching between development (local) and production (remote) servers.
Args: server: Server to switch to. Options:
- "local": Use localhost:3456 (development server)
- "remote": Use wordnet.dk (production server)
- Custom URL: Any valid URL starting with http:// or https://
Returns: Dict with status information:
- status: "success" or "error"
- message: Description of the operation
- previous_url: The URL that was previously active
- current_url: The URL that is now active
Example:
# Switch to local development server
result = switch_dannet_server("local")
# Switch to production server
result = switch_dannet_server("remote")
# Switch to custom server
result = switch_dannet_server("https://my-custom-dannet.example.com")| Name | Required | Description | Default |
|---|---|---|---|
| server | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It effectively discloses that this is a runtime configuration change tool with no destructive implications, describes the three server options, and explains what information is returned. It doesn't mention potential side effects like session persistence or authentication requirements, but covers core behavior well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured and appropriately sized. The description is front-loaded with the core purpose, followed by usage context, parameter details, return format, and examples. Every sentence adds value with zero waste or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a configuration tool with no annotations. The description covers purpose, usage context, parameter semantics, and return values. With an output schema present, it appropriately doesn't need to explain the return structure in detail beyond the summary provided. The examples further clarify usage patterns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, so the description fully compensates. It provides detailed semantics for the single parameter: explains it's the 'server to switch to', lists three specific options with explanations ('local' = localhost:3456 development, 'remote' = wordnet.dk production, custom URLs), and includes format requirements ('valid URL starting with http:// or https://').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('switch between local and remote DanNet servers') and the resource ('DanNet server endpoint'). It distinguishes this tool from sibling tools like 'get_current_dannet_server' by focusing on changing rather than querying the server state.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool: 'during runtime without restarting the MCP server' and 'useful for switching between development (local) and production (remote) servers.' It provides clear context for its application without needing to reference alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_synset_structureBInspect
Validate and analyze the structure of synset JSON-LD data.
This enhanced tool helps debug and understand synset data structure, providing validation and insights into the JSON-LD format.
Args: synset_data: Synset data returned from get_synset_info()
Returns: Dict with validation results and structural analysis
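A hedged sketch of the intended chaining; the synset ID format follows the earlier examples, and the exact argument form of get_synset_info is assumed:
synset = get_synset_info("synset-3047")      # fetch the full JSON-LD data first
report = validate_synset_structure(synset)   # then validate/analyze its structure
print(report)                                # dict of validation results and structural analysis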
| Name | Required | Description | Default |
|---|---|---|---|
| synset_data | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions 'validation and insights' but doesn't specify what validation checks are performed, what format the analysis takes, whether this is a read-only operation, potential error conditions, or performance characteristics. The description is too vague about the tool's actual behavior beyond high-level purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is reasonably concise with four sentences that each serve a purpose: stating the core function, elaborating on its utility, specifying the input, and describing the return. It's front-loaded with the main purpose. However, the 'Args:' and 'Returns:' sections could be integrated more smoothly rather than appearing as separate documentation blocks.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return value documentation), one parameter, and no annotations, the description provides adequate context for a validation/analysis tool. It specifies the input source and general purpose, though more detail about validation criteria and analysis output would be beneficial. The existence of an output schema reduces the burden on the description for return value documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides some parameter semantics by specifying that 'synset_data' should be 'Synset data returned from get_synset_info()', which adds context about the expected data provenance. However, it doesn't describe the structure, format, or constraints of the synset_data object beyond this reference, leaving significant gaps in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Validate and analyze the structure of synset JSON-LD data.' It specifies both validation and analysis functions, though it doesn't explicitly differentiate from sibling tools like 'get_synset_info' or 'extract_semantic_data' which might have overlapping domains. The description avoids tautology by providing more detail than just the tool name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through 'helps debug and understand synset data structure' and specifies that input should be 'Synset data returned from get_synset_info()'. However, it doesn't provide explicit guidance on when to choose this tool versus alternatives like 'extract_semantic_data' or 'get_synset_info', nor does it mention any prerequisites or exclusions beyond the input requirement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.