Server Details
MCP server for the Stimulsoft Reports & Dashboards documentation, providing semantic search across all platforms.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.5/5 across 2 of 2 tools scored.
The two tools have perfectly distinct purposes with no overlap: sti_get_platforms lists available documentation platforms, while sti_search performs semantic searches across documentation. Their descriptions clearly differentiate their functions, with sti_get_platforms serving as a prerequisite helper for sti_search when platform identification is needed.
Both tools follow the exact same naming pattern: 'sti_' prefix followed by a descriptive verb_noun combination (get_platforms, search). This creates perfect consistency across the toolset, making their purposes immediately recognizable and predictable to agents.
Two tools is minimal but appropriate for this documentation server's focused purpose. The tools cover the essential workflows: discovering available platforms and searching documentation. While additional tools could potentially enhance functionality, the current set is well-scoped and sufficient for the core task of documentation lookup.
For a documentation search server, the toolset provides complete coverage: sti_get_platforms enables platform discovery and validation, while sti_search handles all documentation queries across the entire Stimulsoft ecosystem. There are no gaps in the workflow: agents can discover platforms and then filter their searches by those platforms, covering the full lifecycle of documentation lookup.
Available Tools
2 tools

sti_get_platforms
List every Stimulsoft product/platform that has indexed documentation available through this MCP server. Returns a JSON array of { id, name, description } objects covering the full Stimulsoft Reports & Dashboards product line (Reports.NET, Reports.WPF, Reports.AVALONIA, Reports.WEB for ASP.NET, Reports.BLAZOR, Reports.ANGULAR, Reports.REACT, Reports.JS, Reports.PHP, Reports.JAVA, Reports.PYTHON, Server API, etc.).
CALL THIS FIRST when the user's question is ambiguous about which Stimulsoft platform they are using, or when you need to pick a valid platform value to pass into sti_search. The returned platform id values are the exact strings accepted by the platform parameter of sti_search.
This tool is cheap (no OpenAI call, no vector search) — call it freely whenever you are unsure about platform naming.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: the return format (JSON array of {id, name, description} objects), performance characteristics ('cheap - no OpenAI call, no vector search'), and cost implications ('call it freely'). It doesn't mention error handling or rate limits, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose in the first sentence. The subsequent sentences each add important context (usage guidance, cost information). While slightly longer than minimal, every sentence earns its place by providing distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool with no annotations and no output schema, the description provides excellent context: clear purpose, usage guidance, behavioral transparency about cost and performance, and relationship to sibling tools. The only minor gap is lack of explicit error handling information, but overall it's highly complete for this tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline would be 4. The description adds value by explaining why there are no parameters (this is a simple enumeration tool) and provides context about the relationship between this tool's output and the sti_search tool's platform parameter requirements.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('List') and resource ('every Stimulsoft product/platform that has indexed documentation'), and explicitly distinguishes it from its sibling tool sti_search by explaining this tool provides platform IDs for use with that search tool. The detailed list of product examples further clarifies scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('when the user's question is ambiguous about which Stimulsoft platform they are using, or when you need to pick a valid platform value to pass into sti_search') and names the specific alternative tool (sti_search). It also includes cost considerations ('cheap - call it freely whenever you are unsure').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sti_search
Authoritative semantic search over the official Stimulsoft Reports & Dashboards developer documentation (FAQ, Programming Manual, API Reference, Guides). Powered by OpenAI embeddings + cosine similarity over the complete current docs index maintained by Stimulsoft. Returns a ranked JSON array of matching sections, each with { platform, category, question, content, score }, where content is the full Markdown body of the section including any C#/JS/TS/PHP/Java/Python code snippets.
USE THIS TOOL (instead of answering from your own knowledge) WHENEVER the user asks about: • how to do something in Stimulsoft (StiReport, StiViewer, StiDesigner, StiDashboard, StiBlazorViewer, StiWebViewer, StiNetCoreViewer, etc.); • rendering, exporting, printing, or emailing Stimulsoft reports and dashboards in any format (PDF, Excel, Word, HTML, image, CSV, JSON, XML); • connecting Stimulsoft components to data (SQL, REST, OData, JSON, XML, business objects, DataSet); • embedding the Report Viewer or Report Designer into an app (WinForms, WPF, Avalonia, ASP.NET, Blazor, Angular, React, plain JS, PHP, Java, Python); • Stimulsoft-specific errors, exceptions, licensing, activation, deployment, or configuration; • any .mrt / .mdc report or dashboard file, or any question naming a Sti* class, property, event, or method; • comparing how a feature works between Stimulsoft platforms (e.g. "WinForms vs Blazor viewer options").
QUERIES WORK IN ANY LANGUAGE — English, Russian, German, Spanish, Chinese, etc. Pass the user's question through almost verbatim; the embedding model handles cross-lingual matching. Do NOT translate queries yourself.
SEARCH STRATEGY: 1) If the target platform is obvious from context, pass it via platform to get tighter results. 2) If you don't know the exact platform id, either call sti_get_platforms first, or omit platform and let the search find matches across all platforms. 3) If the first search returns low scores (<0.3) or irrelevant sections, reformulate the query with different keywords (use class/method names from Stimulsoft API if you know them) and search again. 4) Prefer multiple focused searches over one broad search.
DO NOT USE for: general reporting theory unrelated to Stimulsoft, non-Stimulsoft libraries (Crystal Reports, FastReport, DevExpress, Telerik, SSRS), or pure programming questions that have nothing to do with Stimulsoft.
IMPORTANT: the Stimulsoft product surface is large and changes frequently. Your training data is almost certainly out of date. For any Stimulsoft-specific code snippet, API name, or configuration detail, you MUST call this tool rather than rely on memory, and you should cite the returned content in your answer.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return. Default 5, maximum 20. Use 3–5 for focused lookups, 10–20 when exploring or when the first search was too narrow. | |
| query | Yes | The user's question or topic, in any language. Keep it close to the original wording — do not translate or over-summarize. Good examples: 'how to export report to PDF in Blazor', 'как показать StiViewer в WPF на весь экран', 'StiWebViewer show/hide toolbar buttons', 'connect StiReport to REST API JSON data source', 'StiBlazorViewer FullScreenMode'. Bad examples (too vague): 'reports', 'help', 'error'. | |
| category | No | Optional. Restrict the search to a documentation section: `faq` (frequently asked questions with code snippets), `manual` (Programming Manual), `api` (API Reference), `guide` (How-to guides). Omit to search all categories. | |
| platform | No | Optional. Restrict the search to a single Stimulsoft platform. Accepted values (case-sensitive): NET (WinForms / Reports.NET), WPF, AVALONIA, WEB (ASP.NET / ASP.NET Core / MVC), BLAZOR (Blazor Server / WebAssembly), ANGULAR, REACT, JS (vanilla HTML/JS), PHP, JAVA (Java SE / Jakarta EE), PYTHON, SERVER_API, GENERAL. If you are not 100% sure which id to use, either call `sti_get_platforms` first, or omit this field. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and delivers substantial behavioral context: it explains the search technology (OpenAI embeddings + cosine similarity), cross-lingual query handling, search strategy with specific steps, result scoring thresholds (<0.3), and the critical instruction to always use this tool for Stimulsoft-specific information rather than relying on potentially outdated training data. The only minor gap is lack of explicit rate limit or performance characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, usage guidelines, search strategy, exclusions, important note) and every sentence serves a purpose. While somewhat lengthy, the information density is high and front-loaded with the core purpose. Minor deduction because some search strategy details could be more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (semantic search over extensive documentation) and lack of annotations/output schema, the description provides exceptional completeness: it explains the search technology, result format, when to use/not use, search strategies, cross-lingual handling, and the critical instruction to always prefer this tool over training data for Stimulsoft-specific information. No output schema exists, but the description fully documents the return format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds some value by providing search strategy guidance that references parameters (e.g., using 'platform' for tighter results, omitting it when unsure, reformulating 'query' with different keywords), but doesn't significantly enhance parameter understanding beyond what's already documented in the comprehensive schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool performs 'authoritative semantic search over the official Stimulsoft Reports & Dashboards developer documentation' and specifies it returns 'a ranked JSON array of matching sections' with detailed field information. It clearly distinguishes this search tool from its sibling 'sti_get_platforms' by explaining their relationship in the search strategy section.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides extensive, explicit guidance on when to use this tool with bulleted examples ('USE THIS TOOL... WHENEVER the user asks about...') and when not to use it ('DO NOT USE for...'). It also mentions the sibling tool 'sti_get_platforms' as an alternative/precursor for platform identification, creating clear decision boundaries for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a `/.well-known/glama.json` file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.