DanNet
Server Details
DanNet - Danish WordNet with rich lexical relationships and SPARQL access.
- Status
- Healthy
- Last Tested
- Transport
- Streamable HTTP
- URL
- Repository
- kuhumcst/DanNet
- GitHub Stars
- 24
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
16 tools

analyze_namespace_usage
Analyze namespace usage and provide resolution for prefixed properties.
This debugging tool helps understand how namespaces are used in DanNet JSON-LD data and resolves prefixed URIs to full forms.
Args: entity_data: Any DanNet JSON-LD entity data
Returns: Dict with namespace analysis and URI resolution
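To illustrate what prefixed-URI resolution looks like, here is a minimal sketch. The prefix map below is an assumption inferred from the namespaces listed elsewhere on this page, not the tool's actual internal table:

```python
# Illustrative only: these prefix-to-URI mappings are assumptions, not the
# server's real namespace table.
PREFIXES = {
    "dn": "https://wordnet.dk/dannet/data/",
    "dns": "https://wordnet.dk/dannet/schema/",
    "wn": "https://globalwordnet.github.io/schemas/wn#",
    "ontolex": "http://www.w3.org/ns/lemon/ontolex#",
}

def resolve_prefixed(uri: str) -> str:
    """Expand a prefixed name like 'wn:hypernym' into a full URI."""
    prefix, _, local = uri.partition(":")
    if local and prefix in PREFIXES:
        return PREFIXES[prefix] + local
    return uri  # already a full URI, or an unknown prefix
```

Passing every property key of an entity through such a function is essentially what this tool's namespace analysis reports.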
| Name | Required | Description | Default |
|---|---|---|---|
| entity_data | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
autocomplete_danish_word
Get autocomplete suggestions for Danish word prefixes.
Useful for discovering Danish vocabulary or finding the correct spelling of words. Returns lemma forms (dictionary forms) of words.
Args:
- prefix: The beginning of a Danish word (minimum 3 characters required)
- max_results: Maximum number of suggestions to return (default: 10)
Returns: Comma-separated string of word completions in alphabetical order
Note: Autocomplete requires at least 3 characters to prevent excessive results.
Example: suggestions = autocomplete_danish_word("hyg", 5)
# Returns: "hygge, hyggelig, hygiejne"
| Name | Required | Description | Default |
|---|---|---|---|
| prefix | Yes | | |
| max_results | No | | 10 |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
extract_semantic_data
Extract and normalize semantic data from any DanNet JSON-LD entity.
This tool provides a unified way to extract semantic information from synsets, words, or senses, handling different JSON-LD structures consistently.
Args: entity_data: Any DanNet entity JSON-LD data
Returns: Dict with normalized semantic information
| Name | Required | Description | Default |
|---|---|---|---|
| entity_data | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
fetch_ddo_definition
Fetch the full, untruncated definition from DDO (Den Danske Ordbog) for a synset.
This tool addresses the issue that DanNet synset definitions (:skos/definition) may be capped at a certain length. It retrieves the complete definition from the authoritative DDO source by following sense source URLs.
WORKFLOW:
1. Get synset information to find associated senses
2. Extract DDO source URLs from sense data (dns:source)
3. Fetch DDO HTML pages and parse for definitions
4. Find elements with class "definitionBox selected" and extract span.definition content
IMPORTANT NOTES:
Looks for CSS classes "definitionBox selected" and child span.definition
DDO and DanNet have diverged over time, so source URLs may not always work
This implementation uses httpx for web requests and regex-based HTML parsing
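The regex-based extraction step can be sketched as follows. The sample HTML and the exact pattern are assumptions about DDO's markup, which, as noted above, may have diverged over time:

```python
import re

# Assumed DDO markup shape; the real pages may differ.
sample_html = """
<div class="definitionBox selected">
  <span class="definition">pattedyr som har god lugtesans</span>
</div>
"""

def extract_ddo_definitions(html: str) -> list[str]:
    # Find each selected definition box, then capture its span.definition text.
    boxes = re.findall(
        r'class="definitionBox selected".*?<span class="definition">(.*?)</span>',
        html,
        flags=re.DOTALL,
    )
    return [b.strip() for b in boxes]
```

Regex parsing of HTML is fragile by design here; a tolerant pattern plus an errors list (as this tool returns) is a pragmatic trade-off for a single known page structure.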
Args: synset_id: Synset identifier (e.g., "synset-1876" or just "1876")
Returns: Dict containing:
- synset_id: The queried synset ID
- ddo_definitions: List of definitions found from DDO pages
- source_urls: List of DDO URLs that were attempted
- success_urls: List of URLs that successfully returned definitions
- errors: List of any errors encountered
- truncated_definition: The original DanNet definition for comparison
Example: result = fetch_ddo_definition("synset-3047")
# Check result['ddo_definitions'] for full DDO definitions
# Compare with result['truncated_definition'] from DanNet
| Name | Required | Description | Default |
|---|---|---|---|
| synset_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
get_cache_stats
Return statistics about the session-scoped resource cache.
Useful for verifying that caching is working: call get_synset_info (or similar) twice for the same ID and check that cache_size grows by 1 on the first call but not on the second, and that cached_keys contains the expected IDs.
Returns: Dict with:
- cache_size: Total number of cached entries
- cached_keys: List of (base_url, resource_id) pairs currently cached
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
get_current_dannet_server
Get information about the currently active DanNet server.
Returns: Dict with current server information:
- server_url: The base URL of the current DanNet server
- server_type: "local", "remote", or "custom"
- status: Connection status information
Example: info = get_current_dannet_server()
# Returns: {"server_url": "https://wordnet.dk", "server_type": "remote", "status": "active"}
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
get_entity_info
Get comprehensive RDF data for any entity in the DanNet database.
Supports both DanNet entities and external vocabulary entities loaded into the triplestore from various schemas and datasets.
UNDERSTANDING THE DATA MODEL: The DanNet database contains entities from multiple sources:
DanNet entities (namespace="dn"): synsets, words, senses, and other resources
External entities (other namespaces): OntoLex vocabulary, Inter-Lingual Index, etc.
All entities follow RDF patterns with namespace prefixes for properties and relationships.
NAVIGATION TIPS:
DanNet synsets have rich semantic relationships (wn:hypernym, wn:hyponym, etc.)
External entities provide vocabulary definitions and cross-references
Use parse_resource_id() on URI references to get clean IDs
Check @type to understand what kind of entity you're working with
Args:
- identifier: Entity identifier (e.g., "synset-3047", "word-11021628", "LexicalConcept", "i76470")
- namespace: Namespace for the entity (default: "dn" for DanNet entities)
  - "dn": DanNet entities via /dannet/data/ endpoint
  - Other values: External entities via /dannet/external/{namespace}/ endpoint
  - Common external namespaces: "ontolex", "ili", "wn", "lexinfo", etc.
Returns: Dict containing JSON-LD format with:
- @context → namespace mappings (if applicable)
- @id → entity identifier
- @type → entity type
- All RDF properties with namespace prefixes (e.g., wn:hypernym, ontolex:evokes)
- For DanNet synsets: dns:ontologicalType and dns:sentiment (if applicable)
- Entity-specific convenience fields (synset_id, resource_id, etc.)
Examples:
# DanNet entities
get_entity_info("synset-3047")    # DanNet synset
get_entity_info("word-11021628")  # DanNet word
get_entity_info("sense-21033604") # DanNet sense
# External vocabulary entities
get_entity_info("LexicalConcept", namespace="ontolex") # OntoLex class definition
get_entity_info("i76470", namespace="ili") # Inter-Lingual Index entry
get_entity_info("noun", namespace="lexinfo") # Lexinfo part-of-speech

| Name | Required | Description | Default |
|---|---|---|---|
| namespace | No | | dn |
| identifier | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
get_sense_info
Get comprehensive RDF data for a DanNet sense (lexical sense).
UNDERSTANDING THE DATA MODEL: Senses are ontolex:LexicalSense instances connecting words to synsets. They represent specific meanings of words with examples and definitions.
KEY RELATIONSHIPS:
LEXICAL CONNECTIONS:
ontolex:isSenseOf → word this sense belongs to
ontolex:isLexicalizedSenseOf → synset this sense represents
SEMANTIC INFORMATION:
lexinfo:senseExample → usage examples in context
rdfs:label → sense label (e.g., "hund_1§1")
REGISTER AND STYLISTIC INFORMATION:
lexinfo:register → formal register classification (e.g., ":lexinfo/slangRegister")
lexinfo:usageNote → human-readable usage notes (e.g., "slang", "formal")
SOURCE INFORMATION:
dns:source → source URL for this sense entry
DDO CONNECTION (Den Danske Ordbog): DanNet senses are derived from DDO (ordnet.dk), the authoritative modern Danish dictionary.
SENSE LABELS: The format "word_entry§definition" connects to DDO structure:
"hund_1§1" = word "hund", entry 1, definition 1 in DDO
"forlygte_§2" = word "forlygte", definition 2 in DDO
The § notation directly corresponds to DDO's definition numbering
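The label structure described above can be decomposed mechanically. This is a sketch, not server code; the pattern is inferred from the examples "hund_1§1", "forlygte_§2", and "patte_1§1a":

```python
import re

# Pattern inferred from documented examples; an assumption, not a spec.
LABEL_RE = re.compile(r"^(?P<word>.+?)_(?P<entry>\d*)§(?P<definition>\d+)(?P<sub>[a-z]?)$")

def parse_sense_label(label: str) -> dict:
    """Split a DDO-style sense label into word, entry, definition, subdefinition."""
    m = LABEL_RE.match(label)
    if m is None:
        raise ValueError(f"not a DDO-style sense label: {label!r}")
    return {
        "word": m["word"],
        "entry": int(m["entry"]) if m["entry"] else None,  # empty entry, e.g. "forlygte_§2"
        "definition": int(m["definition"]),
        "subdefinition": m["sub"] or None,                 # e.g. "a" in "patte_1§1a"
    }
```

The parsed fields map directly onto DDO's entry and definition numbering, which is what makes the dns:source URLs below traceable.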
SOURCE TRACEABILITY: The dns:source URLs link back to specific DDO entries:
Format: https://ordnet.dk/ddo/ordbog?entry_id=X&def_id=Y&query=word
Note: Some DDO URLs may not resolve correctly if IDs have changed since import
If the DDO page loads correctly, the relevant definition has CSS class "selected"
METADATA ORIGINS: Usage examples, register information, and definitions flow from DDO's corpus-based lexicographic data, providing authoritative linguistic information.
NAVIGATION TIPS:
Follow ontolex:isSenseOf to find the parent word
Follow ontolex:isLexicalizedSenseOf to find the synset
Check lexinfo:senseExample for usage examples from DDO corpus
Check lexinfo:register and lexinfo:usageNote for stylistic information
Use dns:source to attempt tracing back to original DDO definition (with caveats)
Use parse_resource_id() on URI references to get clean IDs
Args: sense_id: Sense identifier (e.g., "sense-21033604" or just "21033604")
Returns: Dict containing:
- All RDF properties with namespace prefixes (e.g., ontolex:isSenseOf)
- resource_id → clean identifier for convenience
- All sense properties and relationships
Example: info = get_sense_info("sense-21033604")  # "hund_1§1" sense
# Check info['ontolex:isSenseOf'] for parent word
# Check info['ontolex:isLexicalizedSenseOf'] for synset
# Check info['lexinfo:senseExample'] for usage examples from DDO
# Check info['lexinfo:register'] for register classification
# Check info['lexinfo:usageNote'] for usage notes like "slang"
# Check info['dns:source'] for DDO source URL (may not always work)
| Name | Required | Description | Default |
|---|---|---|---|
| sense_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
get_synset_info
Get comprehensive RDF data for a DanNet synset (lexical concept).
UNDERSTANDING THE DATA MODEL: Synsets are ontolex:LexicalConcept instances representing word meanings. They connect to words via ontolex:isEvokedBy and have rich semantic relations.
KEY RELATIONSHIPS (by importance):
TAXONOMIC (most fundamental):
wn:hypernym → broader concept (e.g., "hund" → "pattedyr")
wn:hyponym → narrower concepts (e.g., "hund" → "puddel", "schæfer")
dns:orthogonalHypernym → cross-cutting categories [Danish: ortogonalt hyperonym]
LEXICAL CONNECTIONS:
ontolex:isEvokedBy → words expressing this concept [Danish: fremkaldes af]
ontolex:lexicalizedSense → sense instances [Danish: leksikaliseret betydning]
wn:similar → related but distinct concepts
PART-WHOLE RELATIONS:
wn:mero_part/wn:holo_part → component relationships [English: meronym/holonym part]
wn:mero_substance/wn:holo_substance → material composition
wn:mero_member/wn:holo_member → membership relations
SEMANTIC PROPERTIES:
dns:ontologicalType → semantic classification with @set array of dnc: types. Common types: dnc:Animal, dnc:Human, dnc:Object, dnc:Physical, dnc:Dynamic (events/actions), dnc:Static (states)
dns:sentiment → emotional polarity with marl:hasPolarity and marl:polarityValue
wn:lexfile → semantic domain (e.g., "noun.food", "verb.motion")
skos:definition → synset definition (may be truncated for length)
CROSS-LINGUISTIC:
wn:ili → Interlingual Index for cross-language mapping
wn:eq_synonym → Open English WordNet equivalent
DDO CONNECTION FOR FULLER DEFINITIONS: DanNet synset definitions (skos:definition) may be truncated (ending with "…"). For complete definitions, use the fetch_ddo_definition() tool which automatically retrieves full DDO text, or manually examine sense source URLs via get_sense_info().
NAVIGATION TIPS:
Follow wn:hypernym chains to find semantic categories
Check dns:inherited for properties from parent synsets
Use parse_resource_id() on URI references to get clean IDs
For fuller definitions, examine individual sense source URLs via get_sense_info()
Args: synset_id: Synset identifier (e.g., "synset-1876" or just "1876")
Returns: Dict containing JSON-LD format with:
- @context → namespace mappings
- @id → entity identifier (e.g., "dn:synset-1876")
- @type → "ontolex:LexicalConcept"
- All RDF properties with namespace prefixes (e.g., wn:hypernym)
- dns:ontologicalType → {"@set": ["dnc:Animal", ...]} (if applicable)
- dns:sentiment → {"marl:hasPolarity": "marl:Positive", "marl:polarityValue": "3"} (if applicable)
- synset_id → clean identifier for convenience
Example: info = get_synset_info("synset-52")  # cake synset
# Check info['wn:hypernym'] for parent concepts
# Check info['dns:ontologicalType']['@set'] for semantic types
# Check info['dns:sentiment']['marl:hasPolarity'] for sentiment
| Name | Required | Description | Default |
|---|---|---|---|
| synset_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
get_word_info
Get comprehensive RDF data for a DanNet word (lexical entry).
UNDERSTANDING THE DATA MODEL: Words are ontolex:LexicalEntry instances representing lexical forms. They connect to synsets via senses and have morphological information.
KEY RELATIONSHIPS:
LEXICAL CONNECTIONS:
ontolex:evokes → synsets this word can express
ontolex:sense → sense instances connecting word to synsets
ontolex:canonicalForm → canonical form with written representation
MORPHOLOGICAL PROPERTIES:
lexinfo:partOfSpeech → part of speech classification
wn:partOfSpeech → WordNet part of speech
ontolex:canonicalForm/ontolex:writtenRep → written form
CROSS-REFERENCES:
owl:sameAs → equivalent resources in other datasets
dns:source → source URL for this word entry
NAVIGATION TIPS:
Follow ontolex:evokes to find synsets this word expresses
Check ontolex:sense for detailed sense information
Use parse_resource_id() on URI references to get clean IDs
Args: word_id: Word identifier (e.g., "word-11021628" or just "11021628")
Returns: Dict containing:
- All RDF properties with namespace prefixes (e.g., ontolex:evokes)
- resource_id → clean identifier for convenience
- All linguistic properties and relationships
Example: info = get_word_info("word-11021628")  # "hund" word
# Check info['ontolex:evokes'] for synsets this word can express
# Check info['ontolex:sense'] for senses
| Name | Required | Description | Default |
|---|---|---|---|
| word_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
get_word_overview
Get a complete overview of all senses for a Danish word in a single call.
Replaces the common pattern of calling get_word_synsets → get_synset_info per result → get_word_synonyms, collapsing 5-15 HTTP round-trips into one SPARQL query.
Only returns synsets where the word is a primary lexical member (i.e. the word itself has a direct sense in the synset), excluding multi-word expressions that merely contain the word as a component.
Args: word: The Danish word to look up
Returns: List of dicts, one per synset, each containing:
- synset_id: Clean synset identifier (e.g. "synset-3047")
- label: Human-readable synset label
- definition: Synset definition (may be truncated with "…")
- ontological_types: List of dnc: type URIs
- synonyms: List of co-member lemmas (true synonyms only)
- hypernym: Dict with synset_id and label of the immediate broader concept, or null
- lexfile: WordNet lexicographer file name (e.g. "noun.animal"), or null if absent
Example: overview = get_word_overview("hund")
# Returns list of 4 synsets, the first being:
# {"synset_id": "synset-3047",
#  "label": "{hund_1§1; køter_§1; vovhund_§1; vovse_§1}",
#  "definition": "pattedyr som har god lugtesans ...",
#  "ontological_types": ["dnc:Animal", "dnc:Object"],
#  "synonyms": ["køter", "vovhund", "vovse"],
#  "lexfile": "noun.animal"}
# Pass synset_id to get_synset_info() for full JSON-LD data on any result:
# full_data = get_synset_info(overview[0]["synset_id"])

| Name | Required | Description | Default |
|---|---|---|---|
| word | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
get_word_synonyms
Find synonyms for a Danish word through shared synsets (word senses).
SYNONYM TYPES IN DANNET:
True synonyms: Words sharing the exact same synset
Context-specific: Different synonyms for different word senses
Note: Near-synonyms via wn:similar relations are not currently included
The function returns all words that share synsets with the input word, effectively finding lexical alternatives that express the same concepts.
Args: word: The Danish word to find synonyms for
Returns: Comma-separated string of synonymous words (aggregated across all word senses)
Example: synonyms = get_word_synonyms("hund")
# Returns: "køter, vovhund, vovse"
Note: Check synset definitions to understand which synonyms apply to which meaning (polysemy is common in Danish).
| Name | Required | Description | Default |
|---|---|---|---|
| word | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
get_word_synsets
Get synsets (word meanings) for a Danish word, returning a sorted list of lexical concepts.
DanNet follows the OntoLex-Lemon model where:
Words (ontolex:LexicalEntry) evoke concepts through senses
Synsets (ontolex:LexicalConcept) represent units of meaning
Multiple words can share the same synset (synonyms)
One word can have multiple synsets (polysemy)
This function returns all synsets associated with a word, effectively giving you all the different meanings/senses that word can have. Each synset represents a distinct semantic concept with its own definition and semantic relationships.
Common patterns in Danish:
Nouns often have multiple senses (e.g., "kage" = cake/lump)
Verbs distinguish motion vs. state (e.g., "løbe" = run/flow)
Check synset's dns:ontologicalType for semantic classification
DDO CONNECTION AND SYNSET LABELS: Synset labels are compositions of DDO-derived sense labels, showing all words that express the same meaning. For example:
"{hund_1§1; køter_§1; vovhund_§1; vovse_§1}" = all words meaning "domestic dog"
"{forlygte_§2; babs_§1; bryst_§2; patte_1§1a}" = all words meaning "female breast"
Each individual sense label follows DDO structure:
"hund_1§1" = word "hund", entry 1, definition 1 in DDO (ordnet.dk)
"patte_1§1a" = word "patte", entry 1, definition 1, subdefinition a
The § notation connects directly to DDO's definition numbering system
This composition reveals the semantic relationships between Danish words and their shared meanings, all traceable back to authoritative DDO lexicographic data.
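A composed synset label of the form shown above can be split back into its member lemmas with a few lines of string handling. This is an illustrative sketch, not part of the server:

```python
def synset_label_lemmas(label: str) -> list[str]:
    """Extract member lemmas from a composed label like '{hund_1§1; køter_§1}'."""
    inner = label.strip().strip("{}")
    sense_labels = [part.strip() for part in inner.split(";")]
    # Drop the DDO entry/definition suffix ("hund_1§1" -> "hund").
    return [s.split("_")[0] for s in sense_labels]
```

This is handy when you only need the synonym set and not the full DDO provenance carried by each sense label.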
RETURN BEHAVIOR: This function has two possible return modes depending on search results:
MULTIPLE RESULTS: Returns List[SearchResult] with basic information for each synset
SINGLE RESULT (redirect): Returns full synset data Dict when DanNet automatically redirects to a single synset. This provides immediate access to all semantic relationships, ontological types, sentiment data, and other rich information without requiring a separate get_synset_info() call.
The single-result case is equivalent to calling get_synset_info() on the synset, providing the same comprehensive RDF data structure with all semantic relations.
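Callers can absorb both return modes with a small normalizing wrapper. A sketch against made-up payloads shaped like the descriptions above (the real SearchResult objects may carry more fields):

```python
def normalize_synset_results(result) -> list[str]:
    """Return synset IDs regardless of which return mode the tool used."""
    if isinstance(result, dict):
        # Single-result redirect: one dict of full synset data.
        return [result["synset_id"]]
    # Multiple results: a list of search hits, each with a synset_id.
    return [r["synset_id"] for r in result]
```

Branching on the result type once, at the call boundary, keeps downstream code free of mode-specific handling.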
Args:
- query: The Danish word or phrase to search for
- language: Language for labels and definitions in results (default: "da" for Danish, "en" for English when available)
Note: Only Danish words can be searched regardless of this parameter.
Returns: MULTIPLE RESULTS: List of SearchResult objects with:
- word: The lexical form
- synset_id: Unique synset identifier (format: synset-NNNNN)
- label: Human-readable synset label (e.g., "{kage_1§1}")
- definition: Brief semantic definition (may be truncated with "...")
SINGLE RESULT: Dict with complete synset data including:
- All RDF properties with namespace prefixes (e.g., wn:hypernym)
- dns:ontologicalType → semantic types with @set array
- dns:sentiment → parsed sentiment (if present)
- synset_id → clean identifier for convenience
- All semantic relationships and linguistic properties
Examples:
# Multiple results case
results = get_word_synsets("hund")
# Returns list of search result dictionaries for all meanings of "hund"
# => [{"word": "hund", "synset_id": "synset-3047", ...}, ...]
# Single result case (redirect)
result = get_word_synsets("svinkeærinde")
# Returns complete synset data for unique word
# => {'wn:hypernym': 'dn:synset-11677', 'dns:sentiment': {...}, ...}

| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| language | No | | da |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
sparql_query
Execute a SPARQL SELECT query against the DanNet triplestore.
This tool provides direct access to DanNet's RDF data through SPARQL queries. The query is automatically prepended with common namespace prefix declarations, so you can use short prefixes instead of full URIs in your queries.
============================================================
CRITICAL PERFORMANCE RULES (read before writing any query):

1. ALWAYS start from a known entity URI or a word lookup — never scan the whole graph.
   FAST: dn:synset-3047 wn:hypernym ?x .
   SLOW: ?x wn:hypernym ?y . (scans every synset)
2. ALWAYS use DISTINCT for SELECT queries to avoid duplicate rows.
3. NEVER use FILTER(CONTAINS(...)) on labels across the whole graph.
   SLOW: ?s rdfs:label ?l . FILTER(CONTAINS(?l, "hund"))
   FAST: Use get_word_synsets("hund") first, then query specific synset URIs.
4. NEVER create cartesian products — every triple pattern must share a variable with at least one other pattern.
   SLOW: ?x a ontolex:LexicalConcept . ?y a ontolex:LexicalEntry . (cross join!)
5. ALWAYS add LIMIT (even if max_results caps it server-side, explicit LIMIT lets the query engine optimize).
6. Use property paths for multi-hop traversals:
   FAST: dn:synset-3047 wn:hypernym+ ?ancestor . (transitive closure)
   FAST: ?entry ontolex:canonicalForm/ontolex:writtenRep "hund"@da . (path)
7. Prefer VALUES over FILTER for matching multiple known entities:
   FAST: VALUES ?synset { dn:synset-3047 dn:synset-3048 } ?synset rdfs:label ?l .
   SLOW: ?synset rdfs:label ?l . FILTER(?synset = dn:synset-3047 || ?synset = dn:synset-3048)
The triplestore contains BOTH DanNet (Danish, dn: namespace) AND the Open English WordNet (en: namespace). Unanchored queries will scan both. To restrict to Danish data, anchor on dn: URIs or use @da language tags.
============================================
FAST QUERY TEMPLATES (copy and adapt these):
TEMPLATE 1: Find synsets for a Danish word (via word lookup)
SELECT DISTINCT ?synset ?label ?def WHERE { ?entry ontolex:canonicalForm/ontolex:writtenRep "WORD"@da . ?entry ontolex:sense/ontolex:isLexicalizedSenseOf ?synset . ?synset rdfs:label ?label . OPTIONAL { ?synset skos:definition ?def } }
TEMPLATE 2: Get all properties of a known synset
SELECT ?p ?o WHERE { dn:synset-NNNN ?p ?o . } LIMIT 50
TEMPLATE 3: Find hypernyms (broader concepts) of a known synset
SELECT DISTINCT ?hypernym ?label WHERE { dn:synset-NNNN wn:hypernym ?hypernym . ?hypernym rdfs:label ?label . }
TEMPLATE 4: Find hyponyms (narrower concepts) of a known synset
SELECT DISTINCT ?hyponym ?label WHERE { ?hyponym wn:hypernym dn:synset-NNNN . ?hyponym rdfs:label ?label . }
TEMPLATE 5: Trace full hypernym chain (taxonomic ancestors)
SELECT DISTINCT ?ancestor ?label WHERE { dn:synset-NNNN wn:hypernym+ ?ancestor . ?ancestor rdfs:label ?label . }
TEMPLATE 6: Find all relationships OF a known synset
SELECT DISTINCT ?rel ?target ?targetLabel WHERE { dn:synset-NNNN ?rel ?target . ?target rdfs:label ?targetLabel . FILTER(isURI(?target)) } LIMIT 50
TEMPLATE 7: Find all relationships TO a known synset
SELECT DISTINCT ?source ?rel ?sourceLabel WHERE { ?source ?rel dn:synset-NNNN . ?source rdfs:label ?sourceLabel . FILTER(isURI(?source)) } LIMIT 50
TEMPLATE 8: Query multiple known synsets at once
SELECT DISTINCT ?synset ?label ?def WHERE { VALUES ?synset { dn:synset-3047 dn:synset-3048 dn:synset-6524 } ?synset rdfs:label ?label . OPTIONAL { ?synset skos:definition ?def } }
TEMPLATE 9: Find functional relations for a specific synset
SELECT DISTINCT ?rel ?target ?targetLabel WHERE { dn:synset-NNNN ?rel ?target . ?target rdfs:label ?targetLabel . VALUES ?rel { dns:usedFor dns:usedForObject wn:agent wn:instrument wn:causes } }
TEMPLATE 10: Find ontological type of a synset (stored as RDF Bag)
SELECT ?type WHERE { dn:synset-NNNN dns:ontologicalType ?bag . ?bag ?pos ?type . FILTER(STRSTARTS(STR(?pos), STR(rdf:_))) }
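Adapting a template before handing it to sparql_query() is plain string substitution. A small sketch for TEMPLATE 3; the helper name and signature are illustrative, not part of the server:

```python
def hypernym_query(synset_num: int, limit: int = 50) -> str:
    """Build TEMPLATE 3 (hypernyms of a known synset) for a given synset number."""
    return (
        "SELECT DISTINCT ?hypernym ?label WHERE { "
        f"dn:synset-{synset_num} wn:hypernym ?hypernym . "
        "?hypernym rdfs:label ?label . } "
        f"LIMIT {limit}"
    )
```

The prefixes (dn:, wn:, rdfs:) can be left undeclared because the tool prepends them automatically, and the explicit LIMIT follows performance rule 5 above.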
============================================
KNOWN PREFIXES (automatically declared):
dn: (DanNet data), dns: (DanNet schema), dnc: (DanNet concepts), wn: (WordNet relations), ontolex: (lexical model), skos: (definitions), rdfs: (labels), rdf: (types), owl: (ontology), lexinfo: (morphology), marl: (sentiment), dc: (metadata), ili: (interlingual index), en: (English WordNet), enl: (English lemmas), cor: (Danish register)
Args:
- query: SPARQL SELECT query string (prefixes will be automatically added)
- timeout: Query timeout in milliseconds (default: 8000, max: 15000)
- max_results: Maximum number of results to return (default: 100, max: 100)
- distinct: Auto-apply DISTINCT to SELECT queries (default: True). Set to False when you need duplicate rows, e.g. for frequency counts.
- inference: Control model selection for query execution (default: None).
  - None = auto-detect: tries the base model first, retries with inference if SELECT results are empty (best for most queries).
  - True = force inference model: needed for inverse relations like wn:hyponym, wn:holonym, etc. that are derived by OWL reasoning.
  - False = force base model only, no retry.
Returns: Dict containing SPARQL results in standard JSON format:
- head: Query metadata with variable names
- results: Bindings array with variable-value mappings
Each value includes type (uri/literal) and language information when applicable
Note: Only SELECT queries are supported. The query is validated before execution.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| timeout | No | | 8000 |
| distinct | No | | True |
| inference | No | | None |
| max_results | No | | 100 |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
switch_dannet_server
Switch between local and remote DanNet servers on the fly.
This tool allows you to change the DanNet server endpoint during runtime without restarting the MCP server. Useful for switching between development (local) and production (remote) servers.
Args:
- server: Server to switch to. Options:
  - "local": Use localhost:3456 (development server)
  - "remote": Use wordnet.dk (production server)
  - Custom URL: Any valid URL starting with http:// or https://
Returns: Dict with status information:
- status: "success" or "error"
- message: Description of the operation
- previous_url: The URL that was previously active
- current_url: The URL that is now active
Example:
# Switch to local development server
result = switch_dannet_server("local")
# Switch to production server
result = switch_dannet_server("remote")
# Switch to custom server
result = switch_dannet_server("https://my-custom-dannet.example.com")

| Name | Required | Description | Default |
|---|---|---|---|
| server | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
validate_synset_structure
Validate and analyze the structure of synset JSON-LD data.
This enhanced tool helps debug and understand synset data structure, providing validation and insights into the JSON-LD format.
Args: synset_data: Synset data returned from get_synset_info()
Returns: Dict with validation results and structural analysis
| Name | Required | Description | Default |
|---|---|---|---|
| synset_data | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!