Stimulsoft Documentation MCP Server
Server Details
Official MCP server providing AI assistants with direct access to Stimulsoft Reports & Dashboards developer documentation. Enables semantic search across the FAQ, Programming Manual, Server Manual, User Manual, and Server/Cloud API references, covering all Stimulsoft platforms (.NET, WPF, Avalonia, WEB, Blazor, Angular, React, JS, PHP, Java, Python).
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.8/5 across both tools (2 of 2 scored).
The two tools have perfectly distinct purposes: sti_get_platforms lists available documentation platforms, while sti_search performs semantic search across documentation. There is zero overlap in functionality, and the descriptions clearly differentiate their roles (metadata retrieval vs. content search).
Both tools follow an identical sti_verb_noun naming pattern with consistent snake_case formatting. The prefix 'sti_' clearly identifies the Stimulsoft domain, and the verbs 'get' and 'search' accurately describe their actions, creating a predictable and coherent naming convention.
With only two tools, the server feels minimal for a documentation search domain. While the tools cover the essential operations (list platforms and search content), the surface is thin compared to typical documentation servers that might include additional tools like get_document, list_categories, or search_history. However, the two tools do enable core functionality.
For a documentation search server, the toolset covers the fundamental workflow: discover available platforms and search content. The main gap is the lack of a direct document retrieval tool (e.g., get_document_by_id), but the search tool returns full content, mitigating this. The descriptions also provide excellent guidance on when to use each tool, reducing workflow dead ends.
Available Tools
2 tools
sti_get_platforms
List every Stimulsoft product/platform that has indexed documentation available through this MCP server. Returns a JSON array of { id, name, description } objects covering the full Stimulsoft Reports & Dashboards product line (Reports.NET, Reports.WPF, Reports.AVALONIA, Reports.WEB for ASP.NET, Reports.BLAZOR, Reports.ANGULAR, Reports.REACT, Reports.JS, Reports.PHP, Reports.JAVA, Reports.PYTHON, Server API, etc.).
CALL THIS FIRST when the user's question is ambiguous about which Stimulsoft platform they are using, or when you need to pick a valid platform value to pass into sti_search. The returned platform id values are the exact strings accepted by the platform parameter of sti_search.
This tool is cheap (no OpenAI call, no vector search) — call it freely whenever you are unsure about platform naming.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | — | — | — |
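Since `sti_get_platforms` takes no parameters and promises a JSON array of `{ id, name, description }` objects, a client-side call reduces to a plain JSON-RPC 2.0 `tools/call` request over the Streamable HTTP transport. The sketch below builds such a request and parses a response of the documented shape; the specific platform entries shown are illustrative sample data, not actual server output.

```python
import json

# Build the JSON-RPC 2.0 request an MCP client would POST to the
# server's Streamable HTTP endpoint to invoke sti_get_platforms.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "sti_get_platforms", "arguments": {}},
}

# A response shaped like the description promises: a JSON array of
# { id, name, description } objects (these values are illustrative).
sample_result = json.dumps([
    {"id": "BLAZOR", "name": "Reports.BLAZOR",
     "description": "Blazor Server / WebAssembly"},
    {"id": "WPF", "name": "Reports.WPF",
     "description": "Windows Presentation Foundation"},
])

# The returned id values are the exact strings sti_search accepts
# in its platform parameter.
platform_ids = [p["id"] for p in json.loads(sample_result)]
print(platform_ids)
```

Because the call carries no arguments and (per the description) no OpenAI or vector-search cost, an agent can safely issue it whenever platform naming is in doubt.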
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the return format ('JSON array of { id, name, description } objects'), explains the relationship to sti_search ('returned platform `id` values are the exact strings accepted by the `platform` parameter of `sti_search`'), and provides performance/cost context ('cheap (no OpenAI call, no vector search)'). It doesn't mention error conditions or rate limits, but provides substantial behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with three focused paragraphs: (1) what the tool does and returns, (2) when to use it and its relationship to sti_search, (3) performance characteristics and usage encouragement. Every sentence adds value with zero redundancy or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool with no annotations and no output schema, the description provides comprehensive context: clear purpose, explicit usage guidelines, detailed behavioral information including return format and tool relationships, and performance characteristics. It fully compensates for the lack of structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage. The description appropriately doesn't waste space discussing non-existent parameters. It does mention the relationship between this tool's output and sti_search's parameters, which adds valuable semantic context about how the tools interact.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'List every Stimulsoft product/platform that has indexed documentation available through this MCP server' - a specific verb (List) + resource (Stimulsoft product/platform) combination. It distinguishes from the sibling tool sti_search by explaining this tool provides platform metadata while sti_search performs searches within those platforms.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'CALL THIS FIRST when the user's question is ambiguous about which Stimulsoft platform they are using, or when you need to pick a valid `platform` value to pass into `sti_search`.' It also mentions an alternative (sti_search) and includes cost considerations ('This tool is cheap... call it freely whenever you are unsure').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sti_search
Authoritative semantic search over the official Stimulsoft Reports & Dashboards developer documentation (FAQ, Programming Manual, API Reference, Guides). Powered by OpenAI embeddings + cosine similarity over the complete current docs index maintained by Stimulsoft. Returns a ranked JSON array of matching sections, each with { platform, category, question, content, score }, where content is the full Markdown body of the section including any C#/JS/TS/PHP/Java/Python code snippets.
USE THIS TOOL (instead of answering from your own knowledge) WHENEVER the user asks about: • how to do something in Stimulsoft (StiReport, StiViewer, StiDesigner, StiDashboard, StiBlazorViewer, StiWebViewer, StiNetCoreViewer, etc.); • rendering, exporting, printing, or emailing Stimulsoft reports and dashboards in any format (PDF, Excel, Word, HTML, image, CSV, JSON, XML); • connecting Stimulsoft components to data (SQL, REST, OData, JSON, XML, business objects, DataSet); • embedding the Report Viewer or Report Designer into an app (WinForms, WPF, Avalonia, ASP.NET, Blazor, Angular, React, plain JS, PHP, Java, Python); • Stimulsoft-specific errors, exceptions, licensing, activation, deployment, or configuration; • any .mrt / .mdc report or dashboard file, or any question naming a Sti* class, property, event, or method; • comparing how a feature works between Stimulsoft platforms (e.g. "WinForms vs Blazor viewer options").
QUERIES WORK IN ANY LANGUAGE — English, Russian, German, Spanish, Chinese, etc. Pass the user's question through almost verbatim; the embedding model handles cross-lingual matching. Do NOT translate queries yourself.
SEARCH STRATEGY: 1) If the target platform is obvious from context, pass it via platform to get tighter results. 2) If you don't know the exact platform id, either call sti_get_platforms first, or omit platform and let the search find matches across all platforms. 3) If the first search returns low scores (<0.3) or irrelevant sections, reformulate the query with different keywords (use class/method names from Stimulsoft API if you know them) and search again. 4) Prefer multiple focused searches over one broad search.
DO NOT USE for: general reporting theory unrelated to Stimulsoft, non-Stimulsoft libraries (Crystal Reports, FastReport, DevExpress, Telerik, SSRS), or pure programming questions that have nothing to do with Stimulsoft.
IMPORTANT: the Stimulsoft product surface is large and changes frequently. Your training data is almost certainly out of date. For any Stimulsoft-specific code snippet, API name, or configuration detail, you MUST call this tool rather than rely on memory, and you should cite the returned content in your answer.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return. Default 5, maximum 20. Use 3–5 for focused lookups, 10–20 when exploring or when the first search was too narrow. | 5 |
| query | Yes | The user's question or topic, in any language. Keep it close to the original wording — do not translate or over-summarize. Good examples: 'how to export report to PDF in Blazor', 'как показать StiViewer в WPF на весь экран', 'StiWebViewer show/hide toolbar buttons', 'connect StiReport to REST API JSON data source', 'StiBlazorViewer FullScreenMode'. Bad examples (too vague): 'reports', 'help', 'error'. | |
| category | No | Optional. Restrict the search to a documentation section: `faq` (frequently asked questions with code snippets), `manual` (Programming Manual), `api` (API Reference), `guide` (How-to guides). Omit to search all categories. | |
| platform | No | Optional. Restrict the search to a single Stimulsoft platform. Accepted values (case-sensitive): NET (WinForms / Reports.NET), WPF, AVALONIA, WEB (ASP.NET / ASP.NET Core / MVC), BLAZOR (Blazor Server / WebAssembly), ANGULAR, REACT, JS (vanilla HTML/JS), PHP, JAVA (Java SE / Jakarta EE), PYTHON, SERVER_API, GENERAL. If you are not 100% sure which id to use, either call `sti_get_platforms` first, or omit this field. | |
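The search strategy above (pass the query near-verbatim, constrain by platform/category when known, discard hits scoring below 0.3) can be sketched as a small request builder plus a score filter. The helper name, request id, and the sample result rows are assumptions for illustration; only the parameter names and the 0.3 threshold come from the tool definition.

```python
import json

def build_search_call(query, platform=None, category=None, limit=5):
    """Compose a JSON-RPC tools/call request for sti_search.

    Mirrors the parameter table: query is required; platform, category,
    and limit are optional (limit defaults to 5, capped at 20).
    """
    args = {"query": query, "limit": min(limit, 20)}
    if platform:
        args["platform"] = platform  # an exact id from sti_get_platforms
    if category:
        args["category"] = category  # one of: faq, manual, api, guide
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {"name": "sti_search", "arguments": args},
    }

# Pass the user's wording through nearly verbatim, as the tool advises.
call = build_search_call("how to export report to PDF in Blazor",
                         platform="BLAZOR", category="faq", limit=3)

# Apply the description's threshold: drop sections scoring below 0.3
# and reformulate if nothing survives. These rows are illustrative.
results = [{"question": "Export to PDF", "score": 0.82},
           {"question": "Toolbar options", "score": 0.21}]
relevant = [r for r in results if r["score"] >= 0.3]
print(len(relevant))
```

If `relevant` comes back empty, the description recommends reformulating with different keywords (ideally Sti* class or method names) and searching again, preferring several focused queries over one broad one.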
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and delivers rich behavioral context. It explains the search is 'powered by OpenAI embeddings + cosine similarity', handles 'queries in any language', advises on search strategy (e.g., reformulating queries on low scores, preferring multiple focused searches), and emphasizes critical constraints like 'Your training data is almost certainly out of date' and the mandate to 'MUST call this tool rather than rely on memory'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (overview, usage guidelines, search strategy, exclusions, important notes) and uses bullet points for readability. While lengthy, every sentence adds value—no redundant information. It could be slightly more concise but remains highly efficient given the complexity of the tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (semantic search over extensive documentation), no annotations, and no output schema, the description provides comprehensive context. It details the return format ('ranked JSON array of matching sections' with fields like platform, category, content, score), explains cross-lingual handling, offers practical search strategies, and sets clear boundaries for usage. This fully compensates for the lack of structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds meaningful context beyond the schema: it explains that queries should be passed 'almost verbatim' without translation, provides specific examples of good vs. bad queries, and offers strategic advice on when to use 'platform' parameter (e.g., 'If the target platform is obvious from context') and how to handle uncertainty (call sibling tool or omit). This elevates the score above baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool performs 'authoritative semantic search over the official Stimulsoft Reports & Dashboards developer documentation' and specifies it returns 'a ranked JSON array of matching sections' with detailed structure. It clearly distinguishes from its sibling 'sti_get_platforms' by being the search tool versus a platform-listing tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides extensive explicit guidance on when to use this tool with bulleted examples (e.g., 'how to do something in Stimulsoft', rendering/exporting, connecting to data, embedding components, errors/licensing, etc.) and when NOT to use it ('general reporting theory unrelated to Stimulsoft, non-Stimulsoft libraries, or pure programming questions'). It also references the sibling tool 'sti_get_platforms' for platform identification.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
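Before publishing the file, a quick local sanity check can catch a malformed `$schema` or a missing maintainer entry. The checks below follow only the example structure shown above; they are an assumption for illustration, not Glama's actual verification logic.

```python
import json

def check_glama_json(raw):
    """Minimal sanity check for a /.well-known/glama.json payload."""
    doc = json.loads(raw)
    # The $schema URL should point at Glama's connector schema.
    assert doc.get("$schema", "").startswith("https://glama.ai/"), "unexpected $schema"
    maintainers = doc.get("maintainers", [])
    assert maintainers, "at least one maintainer entry is required"
    for m in maintainers:
        # The email must match the one tied to your Glama account.
        assert "@" in m.get("email", ""), "maintainer email looks invalid"
    return True

raw = json.dumps({
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
})
print(check_glama_json(raw))
```

Serve the file at `https://<your-domain>/.well-known/glama.json` with a JSON content type so Glama's crawler can detect it.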
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!