Glama

Server Details

Engineering log of self-hosted AI on NVIDIA DGX Spark (GB10/SM121A). 60+ articles indexed.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: cipherfoxie/sovereign-mcp
GitHub Stars: 1
Server Listing: Sovereign AI Blog

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.5/5 across 4 of 4 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool targets a distinct function: SGLang config diagnosis versus blog article browsing (search, list tags, get article). No overlap in purpose.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern: diagnose_sglang, get_article, list_tags, search_blog.

Tool Count: 4/5

With 4 tools, the count falls within the well-scoped range (3-15). However, the server mixes two separate domains (SGLang diagnostics and blog browsing), which dilutes the scope slightly.

Completeness: 3/5

The blog tools cover reading needs (search, list tags, get article), but the SGLang diagnostic tool is a single operation with no update or management tools. For a 'self-hosted-ai' server, broader AI operations are missing.

Available Tools

4 tools
diagnose_sglang (Grade: A)
Read-only · Idempotent

Validate an SGLang configuration for NVIDIA DGX Spark (GB10/SM121A).

Pure pattern-matching against known failure modes documented in the Sovereign AI Blog. No inference, no external calls. Returns critical issues, non-fatal warnings, and a recommended baseline config.

All parameters are optional; supply only what you have. With no inputs you get the recommended config and an 'unknown' verdict.

Parameters (JSON Schema)
- hardware (optional): Hardware description (e.g. 'GB10', 'DGX Spark', 'SM121A'). Empty = skip GB10-specific rules.
- image_tag (optional): Docker image tag in use (e.g. 'lmsysorg/sglang:latest', 'lmsysorg/sglang:v0.4.0'). Empty = skip.
- mem_fraction (optional): SGLang --mem-fraction-static value (e.g. 0.88). 0.0 = skip this check.
- error_message (optional): Paste error log output here for pattern matching against known failure modes.
- attention_backend (optional): SGLang --attention-backend value (e.g. 'flashinfer', 'triton'). Empty string = skip this check.
- cuda_graph_max_bs (optional): SGLang --cuda-graph-max-bs value. 0 = skip this check.

Output Schema (JSON Schema)
- issues (required): Critical issues that will prevent SGLang from running correctly
- verdict (required): Overall verdict. 'unknown' = no inputs provided.
- warnings (required): Non-fatal warnings (suboptimal but non-blocking)
- recommended_config (required): Verified-good baseline config for GB10/SM121A
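The all-optional contract above can be sketched in a few lines. This is an illustrative stand-in, not the server's actual rules: the thresholds and messages below are invented for the example, while the interface shape (sentinel values skip checks, no inputs yield an 'unknown' verdict, output carries issues/warnings/recommended_config) follows the documented behaviour.

```python
# Illustrative sketch of the diagnose_sglang contract. The specific rules and
# thresholds here are hypothetical; only the interface shape follows the docs:
# every parameter is optional, sentinel values ('' / 0.0 / 0) skip a check,
# and with no inputs the verdict is 'unknown' plus the baseline config.

RECOMMENDED_CONFIG = {          # stand-in for the verified GB10/SM121A baseline
    "mem_fraction_static": 0.88,
    "attention_backend": "triton",
}

def diagnose_sglang(hardware="", image_tag="", mem_fraction=0.0,
                    error_message="", attention_backend="", cuda_graph_max_bs=0):
    issues, warnings = [], []
    checked = False
    if mem_fraction:                                  # 0.0 = skip this check
        checked = True
        if mem_fraction > 0.95:                       # hypothetical threshold
            issues.append("--mem-fraction-static too high for GB10")
    if attention_backend:                             # '' = skip this check
        checked = True
        if attention_backend not in ("flashinfer", "triton"):
            warnings.append(f"unrecognised backend: {attention_backend}")
    if error_message:                                 # pattern-match known failures
        checked = True
        if "out of memory" in error_message.lower():
            issues.append("OOM pattern: lower --mem-fraction-static")
    if hardware or image_tag or cuda_graph_max_bs:
        checked = True                                # other rule groups elided
    verdict = "unknown" if not checked else ("fail" if issues else "pass")
    return {"issues": issues, "warnings": warnings,
            "verdict": verdict, "recommended_config": RECOMMENDED_CONFIG}
```

Called with no arguments, this returns the baseline config and verdict='unknown', matching the "supply only what you have" contract.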
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the annotations (readOnly, idempotent), the description adds that it is 'pure pattern-matching' with 'no inference, no external calls', and details the return types. This provides full transparency about the tool's mechanisms and limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is four sentences long, each with a distinct role: purpose, method, outputs, and parameter guidance. It is front-loaded and contains no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description appropriately summarizes return types without needing detail. All aspects (inputs, behavior, outputs) are covered sufficiently for an agent to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, each parameter is already documented. The description adds a general note that all parameters are optional and the 'no inputs' scenario, but does not enhance individual parameter meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool validates an SGLang configuration for a specific hardware platform (NVIDIA DGX Spark) using pattern matching. It specifies the outputs (critical issues, warnings, recommended config) and distinguishes itself from sibling tools that are blog/article related.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description notes that all parameters are optional and explains the no-input behaviour. It implies when to use the tool (when diagnosing SGLang config issues) but does not explicitly state when not to use it or name alternatives, though its siblings are unrelated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_article (Grade: A)
Read-only · Idempotent
Retrieve the full content of a blog article by its slug.

Returns the article body (Markdown) plus metadata. If the slug does not
match any article, returns an Article with `error='article_not_found'`
and other fields at their defaults.
Parameters (JSON Schema)
- slug (required): Article slug as returned by search_blog (e.g. 'setup-mistral-sglang-setup'). Lower-case, hyphenated.

Output Schema (JSON Schema)
- url (optional): Public URL of the article
- body (optional): Full article body in Markdown
- date (optional): Publication date (ISO 8601)
- slug (required): Article slug
- tags (optional): Topic tags assigned to the article
- error (optional): Set to 'article_not_found' if no article matches the slug
- title (optional): Article title
- word_count (optional): Word count of the article body
- description (optional): Short article description
- quality_class (optional): Editorial content class (e.g. 'Ephemeral', 'Evergreen'). Empty if not classified.
- quality_score (optional): Build-time quality score from the editorial pipeline (unbounded weighted composite across 13 signals, higher is better; thresholds depend on style)
- quality_style (optional): Editorial style category (e.g. 'best_practice_learnings', 'werthaltige_code_beispiele'). Empty if not categorised.
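The not-found contract (an Article with error='article_not_found' and other fields at their defaults) can be sketched as follows; the Article type is trimmed and the corpus is a hypothetical stand-in:

```python
# Sketch of get_article's error contract: a miss returns an Article whose only
# populated fields are the requested slug and error='article_not_found'.
# The Article type is trimmed and the corpus is a hypothetical stand-in.
from dataclasses import dataclass, field

@dataclass
class Article:
    slug: str
    title: str = ""
    body: str = ""                      # full Markdown body when found
    tags: list = field(default_factory=list)
    error: str = ""                     # '' on success

ARTICLES = {                            # keyed by lower-case, hyphenated slug
    "setup-mistral-sglang-setup": Article(
        slug="setup-mistral-sglang-setup", title="Mistral on SGLang"),
}

def get_article(slug: str) -> Article:
    return ARTICLES.get(slug) or Article(slug=slug, error="article_not_found")
```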
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and openWorldHint=false. The description adds valuable behavioral context: it returns a Markdown body plus metadata, and specifically documents the error case (an Article with error='article_not_found'). This goes beyond what annotations provide.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two sentences. The first sentence clearly states the purpose, and the second adds crucial error behavior. No filler or redundant information.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, full schema coverage, output schema available), the description provides sufficient context. It mentions the return content (Markdown and metadata) and error handling, covering all necessary aspects for an agent to use the tool correctly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage for the single parameter 'slug' with a clear description. The tool description does not add extra parameter semantics beyond what's in the schema. Baseline is 3, which is appropriate here.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Retrieve the full content of a blog article by its slug.' The verb is specific ('retrieve') and the resource is clear ('blog article'). It distinguishes from siblings like 'diagnose_sglang', 'list_tags', and 'search_blog' by focusing on full content retrieval by slug.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates when to use the tool: when you have a slug and want the full article content. Siblings have different purposes (diagnose, list tags, search), so usage is clear. However, it does not explicitly state when not to use it or describe alternative scenarios.

list_tags (Grade: A)
Read-only · Idempotent
List all topic tags used across the Sovereign AI Blog corpus, with article
counts. Use this to browse the topic space before calling search_blog with
a tag filter.
Parameters (JSON Schema)
- sort (optional, default 'count_desc'): Result ordering. 'count_desc' lists most-used tags first (default). 'alpha' sorts alphabetically.

Output Schema (JSON Schema)
- result (required)
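The sort contract is simple enough to sketch; the input here is a hypothetical list of per-article tag lists, not the server's real corpus:

```python
# Sketch of list_tags' ordering contract: 'count_desc' (default) returns the
# most-used tags first, 'alpha' sorts tag names alphabetically. Both return
# (tag, count) pairs, matching the "tags with article counts" description.
from collections import Counter

def list_tags(article_tags, sort="count_desc"):
    counts = Counter(tag for tags in article_tags for tag in tags)
    if sort == "alpha":
        return sorted(counts.items())        # (tag, count) pairs, A-Z
    return counts.most_common()              # (tag, count) pairs, most used first
```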
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds value beyond the annotations by noting that results include 'article counts,' which is behavioral detail not present in the `readOnlyHint` or `idempotentHint`. The description aligns with annotations (non-destructive, read-only).

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, consisting of two short sentences with no redundant or extraneous information. The core purpose is stated first, meeting the front-loading expectation.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the existence of an output schema (not shown but indicated) and the simplicity of the tool (single optional parameter), the description provides sufficient context: it explains the output (tags + counts) and the intended use case. No missing critical information.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already fully documents the single parameter 'sort' with enum values, default, and description. The tool description does not add any additional semantic value for this parameter, so it does not compensate beyond the schema's coverage.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List all topic tags'), the specific resource ('across the Sovereign AI Blog corpus'), and the output includes article counts. It also differentiates its purpose from sibling tools by mentioning its use before 'search_blog'.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises when to use this tool: 'before calling search_blog with a tag filter.' This provides clear context and a recommended workflow, though it does not mention when not to use it or alternative tools.

search_blog (Grade: A)
Read-only · Idempotent
Search the Sovereign AI Blog for articles matching a natural language query,
optionally filtered by tag and sorted by relevance or date.

Behaviour matrix:
  - query='', sort=*           -> list newest-first, optionally tag-filtered
  - query!='', sort=relevance  -> TF-IDF ranked, optionally tag-filtered
  - query!='', sort=date_desc  -> TF-IDF filtered (score > 0.001), then sorted by date

Pure read-only, deterministic for a given KB snapshot.
Parameters (JSON Schema)
- n (optional): Maximum number of results to return.
- tag (optional): Optional tag filter (e.g. 'setup', 'fixes', 'strategy'). Only articles with this tag are considered. Use list_tags to discover available tags.
- sort (optional, default 'relevance'): Result ordering. 'relevance' uses TF-IDF score (default for a non-empty query). 'date_desc' sorts newest first (default behaviour when query is empty). When query is empty, 'relevance' is treated as 'date_desc'.
- query (optional): Natural language search query (e.g. 'flashinfer OOM on GB10'). Multi-word queries are tokenized and TF-IDF ranked. Pass an empty string to list articles without relevance ranking.

Output Schema (JSON Schema)
- result (required)
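The behaviour matrix can be mirrored in a small dispatcher. The server's TF-IDF scorer is replaced here by a naive word-overlap stand-in, so only the query/sort branching follows the documented contract:

```python
# Sketch of search_blog's behaviour matrix. The real server ranks with TF-IDF;
# a naive word-overlap scorer stands in here, so only the branching logic
# (empty query -> newest first; date_desc -> filter by score, sort by date)
# mirrors the documented contract.
def search_blog(articles, query="", tag="", sort="relevance", n=10):
    def score(article):                       # stand-in for the TF-IDF score
        words = query.lower().split()
        return sum(w in article["body"].lower() for w in words)

    pool = [a for a in articles if not tag or tag in a["tags"]]
    if not query:                             # query='' -> newest first
        return sorted(pool, key=lambda a: a["date"], reverse=True)[:n]
    scored = [(score(a), a) for a in pool]
    if sort == "date_desc":                   # keep matches, then sort by date
        hits = [a for s, a in scored if s > 0.001]
        return sorted(hits, key=lambda a: a["date"], reverse=True)[:n]
    scored.sort(key=lambda pair: pair[0], reverse=True)   # relevance ranking
    return [a for s, a in scored if s > 0][:n]
```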
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds a detailed behavior matrix explaining how query and sort interact, and states it's deterministic for a given KB snapshot. This goes beyond annotations (readOnlyHint, idempotentHint) and provides complete behavioral clarity.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with no redundant sentences. The behavior matrix is a clear, structured way to present complex parameter interactions. Front-loaded with main purpose.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists, the description does not need to explain return values. It covers all key behaviors, filtering, sorting, and references to sibling tools. Complete for the tool's complexity.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds semantic value by explaining the interaction between query and sort parameters via the behavior matrix, which is not evident from the schema alone.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches the Sovereign AI Blog for articles matching a natural language query, with optional filtering by tag and sorting by relevance or date. It distinguishes from siblings like get_article and list_tags.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The behavior matrix provides clear guidance on when to use different sort options based on query presence. It also references list_tags for discovering available tags. However, it does not explicitly list alternatives or when not to use the tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

