Sovereign AI Blog
Server Details
Search and retrieve articles from the Sovereign AI Blog. A practical engineering log of self-hosted AI on NVIDIA DGX Spark with articles covering SGLang, Mistral, Voxtral, OpenClaw. Tools: search_blog, get_article, diagnose_sglang.
Endpoint URL: https://mcp.sovgrid.org/self-hosted-ai?ref=smithery
Transport: streamable-http (also called "HTTP Streaming")
Tags/Categories: knowledge-base, search, self-hosted-ai, sovereign
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 4 of 4 tools scored.
Each tool has a unique purpose: diagnose_sglang handles configuration validation, while the other three cover blog article retrieval, tag listing, and search. No functional overlap exists.
All tools follow a consistent verb_noun pattern: diagnose_sglang, get_article, list_tags, search_blog. No mixing of styles.
4 tools is a compact but reasonable set for a blog server. The diagnostic tool adds a distinct function, keeping the count from feeling too thin.
The blog tools cover search, retrieval, and tag browsing, which suffices for reading. The inclusion of a diagnostic tool is slightly tangential but does not leave major gaps.
Available Tools
4 tools

diagnose_sglang (A, Read-only, Idempotent)
Validate an SGLang configuration for NVIDIA DGX Spark (GB10/SM121A).
Pure pattern-matching against known failure modes documented in the Sovereign AI Blog. No inference, no external calls. Returns critical issues, non-fatal warnings, and a recommended baseline config.
All parameters are optional; supply only what you have. With no inputs you get the recommended config and an 'unknown' verdict.
| Name | Required | Description | Default |
|---|---|---|---|
| hardware | No | Hardware description (e.g. 'GB10', 'DGX Spark', 'SM121A'). Empty = skip GB10-specific rules. | |
| image_tag | No | Docker image tag in use (e.g. 'lmsysorg/sglang:latest', 'lmsysorg/sglang:v0.4.0'). Empty = skip. | |
| mem_fraction | No | SGLang --mem-fraction-static value (e.g. 0.88). 0.0 = skip this check. | |
| error_message | No | Paste error log output here for pattern matching against known failure modes. | |
| attention_backend | No | SGLang --attention-backend value (e.g. 'flashinfer', 'triton'). Empty string = skip this check. | |
| cuda_graph_max_bs | No | SGLang --cuda-graph-max-bs value. 0 = skip this check. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| issues | Yes | Critical issues that will prevent SGLang from running correctly |
| verdict | Yes | Overall verdict. 'unknown' = no inputs provided. |
| warnings | Yes | Non-fatal warnings (suboptimal but non-blocking) |
| recommended_config | Yes | Verified-good baseline config for GB10/SM121A |
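The pattern-matching behaviour described above can be sketched as a pure function. This is a minimal illustration only: the rules and the baseline config below are invented placeholders, not the server's actual failure modes or its verified GB10/SM121A config.

```python
def diagnose(hardware="", image_tag="", mem_fraction=0.0,
             error_message="", attention_backend="", cuda_graph_max_bs=0):
    """Toy pattern-matching diagnostic. All rules here are illustrative."""
    issues, warnings = [], []
    # With no inputs at all, the verdict is 'unknown' and only the
    # recommended baseline config is meaningful.
    if not any([hardware, image_tag, mem_fraction, error_message,
                attention_backend, cuda_graph_max_bs]):
        verdict = "unknown"
    else:
        verdict = "ok"
    # Hypothetical rule: a very high static memory fraction is suboptimal.
    if mem_fraction and mem_fraction > 0.95:
        warnings.append("--mem-fraction-static above 0.95 leaves little headroom")
    # Hypothetical rule: a known error string maps to a known failure mode.
    if "out of memory" in error_message.lower():
        issues.append("OOM pattern detected; lower --mem-fraction-static")
        verdict = "fail"
    return {"verdict": verdict, "issues": issues, "warnings": warnings,
            "recommended_config": {"attention_backend": "triton"}}  # placeholder
```

Note how the function mirrors the output schema: `issues` blocks startup, `warnings` do not, and `verdict` falls back to 'unknown' when nothing was supplied.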
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds valuable context: 'Pure pattern-matching against known failure modes. No inference, no external calls.' It also discloses return types (critical issues, warnings, recommended baseline config), fully characterizing behavior beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with the main purpose, and every sentence adds essential information. No redundant or confusing phrasing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 optional parameters, no required, no enums, no nested objects, output schema exists), the description fully explains what it does, its modus operandi, return types, and edge cases like no inputs. It is complete for an AI agent to select and invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds minor value by summarizing parameter optionality and the 'unknown' verdict when none provided, but does not elaborate on individual parameters beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Validate an SGLang configuration for NVIDIA DGX Spark (GB10/SM121A)' with specific verb and resource. It distinguishes from sibling tools (get_article, search_blog) by its diagnostic nature and mentions pure pattern-matching with no external calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear guidance on when to use the tool: 'All parameters are optional; supply only what you have' and 'With no inputs you get the recommended config'. It implies use for configuration validation but does not explicitly state when not to use it or alternatives, though siblings are unrelated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_article (A, Read-only, Idempotent)
Retrieve the full content of a blog article by its slug.
Returns the article body (Markdown) plus metadata. If the slug does not match any article, returns an Article with `error='article_not_found'` and other fields at their defaults.

| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Article slug as returned by search_blog (e.g. 'setup-mistral-sglang-setup'). Lower-case, hyphenated. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| url | No | Public URL of the article |
| body | No | Full article body in Markdown |
| date | No | Publication date (ISO 8601) |
| slug | Yes | Article slug |
| tags | No | Topic tags assigned to the article |
| error | No | Set to 'article_not_found' if no article matches the slug |
| title | No | Article title |
| word_count | No | Word count of the article body |
| description | No | Short article description |
| quality_class | No | Editorial content class (e.g. 'Ephemeral', 'Evergreen'). Empty if not classified. |
| quality_score | No | Build-time quality score from the editorial pipeline (unbounded weighted composite across 13 signals, higher is better; thresholds depend on style) |
| quality_style | No | Editorial style category (e.g. 'best_practice_learnings', 'werthaltige_code_beispiele'). Empty if not categorised. |
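Since the server speaks streamable HTTP, an agent ultimately sends a JSON-RPC `tools/call` message. Below is a minimal sketch of the message shape per the MCP specification; session handling and the HTTP transport itself are omitted, and the slug is the example from the parameter table above.

```python
import json

def make_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build an MCP tools/call JSON-RPC message (shape per the MCP spec)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Fetch one article by its slug (example slug from the table above).
msg = make_tool_call("get_article", {"slug": "setup-mistral-sglang-setup"})
```

In practice an MCP client library builds this envelope for you; the sketch only shows what crosses the wire.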
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnly and idempotent. The description adds valuable behavioral context: the error handling for non-existent slugs, which is beyond what annotations provide. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff, front-loaded with purpose. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With an output schema present and clear annotations, the description covers the essential behavioral aspects (error handling) and references the sibling tool. It is complete for its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds little beyond the schema. The example format and reference to search_blog provide minor value, but the schema already documents the parameter thoroughly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves a blog article by slug, specifies returned content (Markdown body + metadata), and distinguishes from search_blog by referencing it in the parameter description.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when slug is known, and connects to search_blog for discovery. However, it does not explicitly state when not to use or provide exclusions, but the context is strong enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_tags (A, Read-only, Idempotent)
List all topic tags used across the Sovereign AI Blog corpus, with article counts. Use this to browse the topic space before calling search_blog with a tag filter.

| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | Result ordering. 'count_desc' lists most-used tags first (default). 'alpha' sorts alphabetically. | count_desc |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, openWorldHint. Description adds that it returns article counts, but this is partially inferred. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no redundant information, front-loaded with the key action and result. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with an output schema and one optional parameter, the description adequately covers purpose and usage. Could mention tag format but not necessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the schema already describes the enum values and default. Description does not add new meaning beyond what is in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the verb 'list' and resource 'topic tags' plus inclusion of article counts. It distinguishes from sibling tools like search_blog by specifying the corpus scope and the use of counts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly suggests using this tool before search_blog with a tag filter, providing clear context. Does not include when-not-to-use scenarios, but the alternative is implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_blog (A, Read-only, Idempotent)
Search the Sovereign AI Blog for articles matching a natural language query,
optionally filtered by tag and sorted by relevance or date.
Behaviour matrix:
- query='', sort=* -> list newest-first, optionally tag-filtered
- query!='', sort=relevance -> TF-IDF ranked, optionally tag-filtered
- query!='', sort=date_desc -> TF-IDF filtered (score > 0.001), then sorted by date
Pure read-only, deterministic for a given KB snapshot.

| Name | Required | Description | Default |
|---|---|---|---|
| n | No | Maximum number of results to return | |
| tag | No | Optional tag filter (e.g. 'setup', 'fixes', 'strategy'). Only articles with this tag are considered. Use list_tags to discover available tags. | |
| sort | No | Result ordering. 'relevance' uses TF-IDF score (default for non-empty query). 'date_desc' sorts newest first (default behaviour when query is empty). When query is empty, 'relevance' is treated as 'date_desc'. | relevance |
| query | No | Natural language search query (e.g. 'flashinfer OOM on GB10'). Multi-word queries are tokenized and TF-IDF ranked. Pass empty string to list articles without ranking by relevance. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
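The behaviour matrix above reduces to a small dispatch on (query, sort). The sketch below encodes exactly those three rules; the mode labels are illustrative names for this example, not the server's internals.

```python
def resolve_search_mode(query: str, sort: str = "relevance") -> str:
    """Map (query, sort) to the behaviour described in the matrix."""
    if not query:
        # Empty query: 'relevance' is treated as 'date_desc',
        # so both sorts list newest-first (optionally tag-filtered).
        return "list_newest_first"
    if sort == "relevance":
        # Non-empty query: TF-IDF ranked results.
        return "tfidf_ranked"
    # Non-empty query + date_desc: TF-IDF filter (score > 0.001),
    # then sort the survivors newest-first.
    return "tfidf_filtered_then_date"
```

This also makes the edge case explicit: with an empty query there is no ranking step at all, regardless of the sort argument.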
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the annotations (readOnlyHint=true, idempotentHint=true), the description adds details: uses TF-IDF over title, description, tags, and first 500 chars of body; returns up to n results ranked by cosine similarity; deterministic for a given knowledge base snapshot. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. The key action and context are front-loaded. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a search tool: explains algorithm, result ranking, deterministic behavior, and leverages rich schema and annotations. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed parameter descriptions (natural language query example, n bounds). The overall description adds little beyond the schema—only a brief restatement. Baseline 3 is appropriate as schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Search the Sovereign AI Blog for articles matching a natural language query,' specifying the verb 'Search' and the resource 'Blog articles.' This distinguishes it from sibling tools like 'get_article' (likely retrieves a single article) and 'diagnose_sglang' (unrelated).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context on when to use this tool (for natural language queries) and describes the matching algorithm (TF-IDF over specific fields). However, it does not explicitly state when not to use it or suggest alternatives, but the purpose and siblings imply appropriate use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
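If you generate the manifest programmatically, a small helper keeps the required shape in one place. This is a sketch: the `$schema` URL and field names are copied from the snippet above, while `webroot` and the email are placeholders you would replace.

```python
import json
import pathlib

def write_glama_manifest(webroot: str, email: str) -> pathlib.Path:
    """Write /.well-known/glama.json under the given webroot directory."""
    manifest = {
        "$schema": "https://glama.ai/mcp/schemas/connector.json",
        "maintainers": [{"email": email}],  # must match your Glama account email
    }
    path = pathlib.Path(webroot) / ".well-known" / "glama.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(manifest, indent=2))
    return path
```

After deploying, confirm the file is reachable at `https://<your-domain>/.well-known/glama.json` and served as JSON.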
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.