Glama

Server Details

Search and retrieve articles from the Sovereign AI Blog. A practical engineering log of self-hosted AI on NVIDIA DGX Spark with articles covering SGLang, Mistral, Voxtral, OpenClaw. Tools: search_blog, get_article, list_tags, diagnose_sglang.

Endpoint URL: https://mcp.sovgrid.org/self-hosted-ai?ref=smithery

Transport: streamable-http (or "HTTP Streaming")

Tags/Categories: knowledge-base, search, self-hosted-ai, sovereign

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server
Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.4/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a unique purpose: diagnose_sglang handles configuration validation, while the other three cover blog article retrieval, tag listing, and search. No functional overlap exists.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern: diagnose_sglang, get_article, list_tags, search_blog. No mixing of styles.

Tool Count: 4/5

4 tools is a compact but reasonable set for a blog server. The diagnostic tool adds a distinct function, keeping the count from feeling too thin.

Completeness: 4/5

The blog tools cover search, retrieval, and tag browsing, which suffices for reading. The inclusion of a diagnostic tool is slightly tangential but does not leave major gaps.

Available Tools

4 tools
diagnose_sglang: A
Read-only · Idempotent

Validate an SGLang configuration for NVIDIA DGX Spark (GB10/SM121A).

Pure pattern-matching against known failure modes documented in the Sovereign AI Blog. No inference, no external calls. Returns critical issues, non-fatal warnings, and a recommended baseline config.

All parameters are optional; supply only what you have. With no inputs you get the recommended config and an 'unknown' verdict.

Parameters (JSON Schema)
- hardware (optional): Hardware description (e.g. 'GB10', 'DGX Spark', 'SM121A'). Empty = skip GB10-specific rules.
- image_tag (optional): Docker image tag in use (e.g. 'lmsysorg/sglang:latest', 'lmsysorg/sglang:v0.4.0'). Empty = skip.
- mem_fraction (optional): SGLang --mem-fraction-static value (e.g. 0.88). 0.0 = skip this check.
- error_message (optional): Paste error log output here for pattern matching against known failure modes.
- attention_backend (optional): SGLang --attention-backend value (e.g. 'flashinfer', 'triton'). Empty string = skip this check.
- cuda_graph_max_bs (optional): SGLang --cuda-graph-max-bs value. 0 = skip this check.

Output Schema (JSON Schema)
- issues (required): Critical issues that will prevent SGLang from running correctly
- verdict (required): Overall verdict. 'unknown' = no inputs provided.
- warnings (required): Non-fatal warnings (suboptimal but non-blocking)
- recommended_config (required): Verified-good baseline config for GB10/SM121A
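To make the call shape concrete, here is a hedged sketch of assembling an MCP `tools/call` request for diagnose_sglang. The JSON-RPC envelope follows the MCP wire format; the `build_diagnose_call` helper and the choice of arguments are illustrative assumptions, and actual transport/session handling for streamable HTTP is omitted.

```python
import json

def build_diagnose_call(hardware="", error_message="", request_id=1):
    """Hypothetical helper: build a JSON-RPC 2.0 'tools/call' request
    for diagnose_sglang. All tool arguments are optional; per the
    schema, empty strings (or 0 for numeric fields) mean 'skip'."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "diagnose_sglang",
            "arguments": {
                "hardware": hardware,
                "error_message": error_message,
            },
        },
    }

payload = build_diagnose_call(hardware="DGX Spark")
body = json.dumps(payload)  # what would go over the wire
```

Calling with no arguments at all is also valid and, per the description above, yields the recommended baseline config with an 'unknown' verdict.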
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds valuable context: 'Pure pattern-matching against known failure modes. No inference, no external calls.' It also discloses return types (critical issues, warnings, recommended baseline config), fully characterizing behavior beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, front-loaded with the main purpose, and every sentence adds essential information. No redundant or confusing phrasing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 optional parameters, no required, no enums, no nested objects, output schema exists), the description fully explains what it does, its modus operandi, return types, and edge cases like no inputs. It is complete for an AI agent to select and invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds minor value by summarizing parameter optionality and the 'unknown' verdict when none provided, but does not elaborate on individual parameters beyond what's in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Validate an SGLang configuration for NVIDIA DGX Spark (GB10/SM121A)' with specific verb and resource. It distinguishes from sibling tools (get_article, search_blog) by its diagnostic nature and mentions pure pattern-matching with no external calls.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear guidance on when to use the tool: 'All parameters are optional; supply only what you have' and 'With no inputs you get the recommended config'. It implies use for configuration validation but does not explicitly state when not to use it or alternatives, though siblings are unrelated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_article: A
Read-only · Idempotent
Retrieve the full content of a blog article by its slug.

Returns the article body (Markdown) plus metadata. If the slug does not match any article, returns an Article with `error='article_not_found'` and other fields at their defaults.
Parameters (JSON Schema)
- slug (required): Article slug as returned by search_blog (e.g. 'setup-mistral-sglang-setup'). Lower-case, hyphenated.

Output Schema (JSON Schema)
- url (optional): Public URL of the article
- body (optional): Full article body in Markdown
- date (optional): Publication date (ISO 8601)
- slug (required): Article slug
- tags (optional): Topic tags assigned to the article
- error (optional): Set to 'article_not_found' if no article matches the slug
- title (optional): Article title
- word_count (optional): Word count of the article body
- description (optional): Short article description
- quality_class (optional): Editorial content class (e.g. 'Ephemeral', 'Evergreen'). Empty if not classified.
- quality_score (optional): Build-time quality score from the editorial pipeline (unbounded weighted composite across 13 signals, higher is better; thresholds depend on style)
- quality_style (optional): Editorial style category (e.g. 'best_practice_learnings', 'werthaltige_code_beispiele'). Empty if not categorised.
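Because a miss is signalled in-band via the `error` field rather than by a transport failure, a caller should branch on it. A minimal sketch of that check (the sample dicts are invented; only the field names come from the schema above):

```python
def article_or_none(result: dict):
    """Return the article dict, or None when get_article signalled a
    miss via error='article_not_found' (other fields at defaults)."""
    if result.get("error") == "article_not_found":
        return None
    return result

# Invented sample results matching the documented shape:
hit = {"slug": "some-post", "body": "# Title\n...", "error": ""}
miss = {"slug": "no-such-slug", "body": "", "error": "article_not_found"}
```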
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly and idempotent. The description adds valuable behavioral context: the error handling for non-existent slugs, which is beyond what annotations provide. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff, front-loaded with purpose. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With an output schema present and clear annotations, the description covers the essential behavioral aspects (error handling) and references the sibling tool. It is complete for its complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the description adds little beyond the schema. The example format and reference to search_blog provide minor value, but the schema already documents the parameter thoroughly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves a blog article by slug, specifies returned content (Markdown body + metadata), and distinguishes from search_blog by referencing it in the parameter description.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when the slug is known and connects to search_blog for discovery. It does not explicitly state when not to use the tool, but the context is strong enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_tags: A
Read-only · Idempotent
List all topic tags used across the Sovereign AI Blog corpus, with article counts. Use this to browse the topic space before calling search_blog with a tag filter.
Parameters (JSON Schema)
- sort (optional, default 'count_desc'): Result ordering. 'count_desc' lists most-used tags first (default). 'alpha' sorts alphabetically.

Output Schema (JSON Schema)
- result (required)
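To make the two `sort` options concrete, here is a small sketch. The tag names and counts are invented, and the alphabetical tie-break under 'count_desc' is an assumption, not documented behaviour.

```python
# Invented sample data: tag -> article count.
tag_counts = {"setup": 9, "fixes": 4, "strategy": 2, "benchmarks": 4}

# 'count_desc' (the default): most-used tags first.
# Tie-break alphabetically (an assumption for determinism).
count_desc = sorted(tag_counts, key=lambda t: (-tag_counts[t], t))

# 'alpha': plain alphabetical order.
alpha = sorted(tag_counts)
```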
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, openWorldHint. Description adds that it returns article counts, but this is partially inferred. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no redundant information, front-loaded with the key action and result. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with an output schema and one optional parameter, the description adequately covers purpose and usage. Could mention tag format but not necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the schema already describes the enum values and default. Description does not add new meaning beyond what is in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb 'list' and resource 'topic tags' plus inclusion of article counts. It distinguishes from sibling tools like search_blog by specifying the corpus scope and the use of counts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly suggests using this tool before search_blog with a tag filter, providing clear context. Does not include when-not-to-use scenarios, but the alternative is implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_blog: A
Read-only · Idempotent
Search the Sovereign AI Blog for articles matching a natural language query, optionally filtered by tag and sorted by relevance or date.

Behaviour matrix:
  - query='', sort=*           -> list newest-first, optionally tag-filtered
  - query!='', sort=relevance  -> TF-IDF ranked, optionally tag-filtered
  - query!='', sort=date_desc  -> TF-IDF filtered (score > 0.001), then sorted by date

Pure read-only, deterministic for a given KB snapshot.
Parameters (JSON Schema)
- n (optional): Maximum number of results to return
- tag (optional): Optional tag filter (e.g. 'setup', 'fixes', 'strategy'). Only articles with this tag are considered. Use list_tags to discover available tags.
- sort (optional, default 'relevance'): Result ordering. 'relevance' uses TF-IDF score (default for non-empty query). 'date_desc' sorts newest first (default behaviour when query is empty). When query is empty, 'relevance' is treated as 'date_desc'.
- query (optional): Natural language search query (e.g. 'flashinfer OOM on GB10'). Multi-word queries are tokenized and TF-IDF ranked. Pass empty string to list articles without ranking by relevance.

Output Schema (JSON Schema)
- result (required)
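The behaviour matrix in the tool description reduces to a small dispatch. This sketch mirrors that documented logic only; the mode names are invented labels, and the actual TF-IDF ranking and date sort live server-side.

```python
# Per the matrix: under date_desc with a non-empty query, results are
# first filtered to TF-IDF score > 0.001, then sorted by date.
SCORE_FLOOR = 0.001

def search_mode(query: str, sort: str = "relevance") -> str:
    """Return which documented behaviour branch a call would take."""
    if not query:
        # Empty query: 'relevance' is treated as 'date_desc',
        # so both sorts list newest-first (optionally tag-filtered).
        return "newest_first"
    if sort == "relevance":
        return "tfidf_ranked"
    # query != '' and sort == 'date_desc'
    return "tfidf_filtered_then_date"
```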
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the annotations (readOnlyHint=true, idempotentHint=true), the description adds details: uses TF-IDF over title, description, tags, and first 500 chars of body; returns up to n results ranked by cosine similarity; deterministic for a given knowledge base snapshot. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. The key action and context are front-loaded. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a search tool: explains algorithm, result ranking, deterministic behavior, and leverages rich schema and annotations. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed parameter descriptions (natural language query example, n bounds). The overall description adds little beyond the schema—only a brief restatement. Baseline 3 is appropriate as schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Search the Sovereign AI Blog for articles matching a natural language query,' specifying the verb 'Search' and the resource 'Blog articles.' This distinguishes it from sibling tools like 'get_article' (likely retrieves a single article) and 'diagnose_sglang' (unrelated).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides context on when to use this tool (natural language queries) and describes the matching algorithm (TF-IDF over specific fields). It does not explicitly state when not to use it or suggest alternatives, but the purpose and sibling tools imply appropriate use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
