TSUNG AI Research & Analysis
Server Details
Multi-LLM AI Research & Analysis — smart routing, consensus analysis, due diligence reports
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
11 tools

analyze (B)
AI analysis on any topic — market, technology, trend, company. Returns concise analysis with key findings, opportunities, and risks.
| Name | Required | Description | Default |
|---|---|---|---|
| topic | Yes | Topic to analyze (e.g., "AI chip market 2026", "Tesla competitive position") | |
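For concreteness, here is a minimal sketch of what a call to `analyze` could look like. MCP uses JSON-RPC 2.0 framing over its transports; the tool name and the required `topic` argument come from the table above, and the example topic is taken from the parameter description. The `id` value is arbitrary.

```python
import json

# Sketch of an MCP "tools/call" request for the analyze tool.
# Only "topic" is required per the schema above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze",
        "arguments": {"topic": "AI chip market 2026"},
    },
}

# Serialize for sending over the streamable HTTP transport.
print(json.dumps(request))
```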
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the output structure (key findings, opportunities, risks), which partially compensates for the missing output schema, but it omits data source information, freshness guarantees, and rate limiting details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero waste: the first defines scope (any topic with examples), the second defines output format. Information is front-loaded and appropriately sized for a single-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (1 parameter, 100% schema coverage) and lack of output schema, the description adequately compensates by describing the return value structure. It could be improved by noting limitations or data source constraints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with the 'topic' parameter already containing helpful examples. The description does not add parameter-specific guidance beyond what the schema provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs 'AI analysis' on topics including markets, technology, trends, and companies. However, it does not explicitly differentiate from siblings like 'market_intel' or 'dd_report' which may overlap in functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what the tool returns (concise analysis with key findings, opportunities, risks) but provides no guidance on when to use this versus alternatives like 'ask', 'market_intel', or 'compound_search'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ask (A)
Ask any research question. Routed to the best LLM provider based on question type (coding, Chinese, speed, general).
| Name | Required | Description | Default |
|---|---|---|---|
| provider | No | LLM provider | auto |
| question | Yes | Your research question | |
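A short sketch of how the optional `provider` parameter behaves: omitting it falls back to the documented default (`auto`), which routes by question type. The parameter names come from the table; the helper function and the `"deepseek"` provider id are illustrative assumptions, since the listing does not enumerate valid provider values.

```python
# Two candidate argument sets for the ask tool.
auto_routed = {"question": "How do Rust lifetimes work?"}
explicit = {"question": "How do Rust lifetimes work?", "provider": "deepseek"}

def with_defaults(args):
    # Apply the schema default client-side, for illustration only;
    # the server presumably does this itself when "provider" is omitted.
    return {"provider": "auto", **args}

print(with_defaults(auto_routed)["provider"])  # auto
print(with_defaults(explicit)["provider"])     # deepseek
```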
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It adds valuable behavioral context about automatic routing logic, but omits operational details like external API dependencies, failure modes, or return format that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes purpose, second explains the routing mechanism. Perfectly front-loaded and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for the simple 2-parameter schema, but given the lack of output schema and annotations, the description should ideally disclose what the tool returns (e.g., 'returns LLM-generated answer') to complete the agent's understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage, the description adds meaningful context: it explains the 'auto' provider logic (routing by question type) and clarifies that 'question' expects research queries (coding, Chinese, etc.), enhancing the bare schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the specific action ('Ask') and resource ('research question') clearly. The routing explanation ('based on question type...') provides helpful scope differentiation from siblings like 'analyze' or 'search', though it doesn't explicitly contrast with them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context ('research question') and hints at selection criteria ('coding, Chinese, speed, general'), but lacks explicit when-to-use/when-not-to-use guidance relative to siblings like 'analyze' or 'hunt'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compound_search (B)
Search TSUNG knowledge base (compound rules + insights). Returns matching rules.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search keyword | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds value by stating the tool 'Returns matching rules', indicating the output type. However, it lacks information on safety (read-only vs. destructive), authentication requirements, or result set limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. It is front-loaded with the action verb, immediately specifies the domain, and follows with the return value. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (1 string parameter, no nested objects) and complete schema coverage, the description is adequate. It compensates for the missing output schema by explicitly stating that 'matching rules' are returned, providing sufficient context for a simple search utility.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('Search keyword'), establishing a baseline of 3. The description adds minimal semantic context for the 'query' parameter beyond implying it searches the TSUNG knowledge base, but does not need to compensate for schema gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Search'), the specific resource ('TSUNG knowledge base'), and the content type ('compound rules + insights'). While it doesn't explicitly contrast with siblings like 'analyze' or 'market_intel', the specific domain naming effectively differentiates the tool's scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus siblings such as 'analyze', 'ask', or 'market_intel'. There are no prerequisites, exclusions, or conditional usage patterns specified.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dd_report (B)
Generate AI due diligence report on a company or topic. Tiers: basic (4 sections), standard (6 sections), premium (7 sections + multi-model consensus).
| Name | Required | Description | Default |
|---|---|---|---|
| tier | No | Report depth tier | basic |
| topic | Yes | Company or topic for due diligence | |
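The tier-to-depth mapping stated in the description can be sketched as a lookup an agent might consult before calling `dd_report`. The tier names, section counts, and consensus flag are taken from the description; the dict and helper themselves are illustrative, not part of the server's API.

```python
# Depth characteristics of each dd_report tier, per the tool description.
TIERS = {
    "basic": {"sections": 4, "consensus": False},
    "standard": {"sections": 6, "consensus": False},
    "premium": {"sections": 7, "consensus": True},
}

def build_args(topic, tier="basic"):
    # Validate the tier client-side before constructing the call arguments.
    if tier not in TIERS:
        raise ValueError(f"unknown tier: {tier}")
    return {"topic": topic, "tier": tier}

print(build_args("Tesla", tier="premium"))
```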
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It successfully discloses output structure differences between tiers (4/6/7 sections, multi-model consensus for premium), but lacks operational details like whether this is a synchronous or async operation, rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. It is appropriately front-loaded with the core purpose ('Generate AI due diligence report') followed immediately by the tier differentiation details. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter schema with no output schema, the description provides adequate context about the report structure through the tier explanations. However, it could improve by briefly indicating the return format (e.g., markdown, JSON) since no output schema exists to document this.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds significant value by explaining what each tier actually provides (section counts and multi-model consensus), semantic information not present in the schema, which only lists the enum values without explaining their implications.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Generate[s] AI due diligence report on a company or topic,' providing a specific verb, resource, and scope. However, it does not explicitly differentiate this tool from siblings like 'analyze' or 'market_intel' which could overlap in functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no explicit guidance on when to use this tool versus alternatives like 'analyze' or 'market_intel', nor are there prerequisites mentioned. The tier breakdown implies different use cases but doesn't state when to choose each (e.g., 'use premium for investment decisions').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
demand_radar (C)
Check demand radar — top hiring signals and freelance opportunities.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure, but it fails to specify whether data is real-time or cached, how results are paginated, what authentication is required, or what structure/format the 'hiring signals' are returned in. 'Check' implies read-only, but this is not explicitly confirmed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence using an em-dash to append clarifying details. While appropriately brief for a parameterless tool, it front-loads the action 'Check' before the resource, maintaining acceptable structure without waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description should explain what data structure or fields to expect (e.g., job titles, company names, confidence scores). It mentions 'hiring signals' and 'freelance opportunities' but leaves the actual return format and granularity undisclosed, creating gaps for agent invocation planning.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool accepts zero parameters, establishing a baseline of 4 per the scoring guidelines. No parameter documentation is required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves 'top hiring signals and freelance opportunities,' which clarifies the domain beyond the jargon name 'demand_radar.' However, it uses the weak verb 'Check' and fails to differentiate from siblings like 'market_intel' or 'hunt' that may overlap in functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like 'market_intel,' 'hunt,' or 'analyze.' The description lacks prerequisites, conditions for use, or explicit comparisons to sibling tools despite significant potential overlap in functionality.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ghost_status (B)
Check Ghost task status — pending, completed, unread results.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the possible status values returned (pending, completed, unread results), but omits safety characteristics (read-only vs. destructive), error conditions, or how the specific task is identified given zero input parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence front-loaded with the action. The em-dash concisely lists the status categories without verbosity. No filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool without output schema, the description explains the core function and return categories adequately. However, it fails to clarify how the target task is identified (session state? global context?) or explain the relationship to the 'ghost_write' workflow, leaving operational gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters (empty schema), which per guidelines establishes a baseline of 4. No additional parameter context is required or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Check') and resource ('Ghost task status') and lists the possible states returned (pending, completed, unread). It implicitly distinguishes from the generic 'status' sibling by specifying 'Ghost task', though it could explicitly clarify the relationship to 'ghost_write'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus the generic 'status' sibling, or when to invoke it relative to 'ghost_write'. No prerequisites or conditions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ghost_write (B)
Create a Ghost research task. AI will process it in background and return results.
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | Research question or task | |
| priority | No | Priority | P1 |
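Because the listing does not document the task lifecycle, an agent has to pair `ghost_write` with its `ghost_status` sibling. The sketch below shows one plausible polling loop under stated assumptions: the `call_tool` callable, the `pending` response field, and the zero-pending completion condition are all hypothetical; only the tool names, the `prompt`/`priority` parameters, and the pending/completed states come from the listed descriptions.

```python
import time

def run_ghost_task(call_tool, prompt, priority="P1", poll_s=5, timeout_s=300):
    # Submit the background research task, then poll ghost_status
    # until no tasks remain pending or the deadline passes.
    call_tool("ghost_write", {"prompt": prompt, "priority": priority})
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = call_tool("ghost_status", {})
        if status.get("pending", 0) == 0:  # assumed response field
            return status
        time.sleep(poll_s)
    raise TimeoutError("ghost task did not complete in time")
```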
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden. Discloses async processing ('background'), but omits critical behavioral details like whether it returns immediately with a task ID, how to poll for completion, or side effects/costs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero redundancy. Front-loaded with the core action, second sentence clarifies execution mode. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an async tool with no output schema and a 'ghost_status' sibling, the description fails to explain the task lifecycle or how results are retrieved. 'Return results' is ambiguous: it implies immediate return of research content rather than a task reference.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, documenting both 'prompt' and 'priority'. Description adds no semantic detail beyond schema (e.g., examples of valid prompts, impact of P0 vs P2), warranting baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Create) and resource (Ghost research task). The 'background' qualifier adds distinctive scope, though it doesn't explicitly differentiate from synchronous siblings like 'ask' or 'analyze'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies asynchronous usage via 'process it in background,' but provides no explicit when-to-use guidance, prerequisites, or reference to sibling 'ghost_status' for retrieving results.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hunt (A)
Trigger AI intelligence hunt — scans 13 sources (HuggingFace, GitHub, ArXiv, Reddit, etc.) for latest AI developments.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
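Since `hunt` takes no parameters, a call simply passes an empty `arguments` object. A minimal sketch, assuming standard MCP JSON-RPC 2.0 framing (the listing itself does not show the wire format, and the `id` value is arbitrary):

```python
import json

# Zero-argument "tools/call" request for the hunt tool.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {"name": "hunt", "arguments": {}},
}
print(json.dumps(request))
```

Note that 'Trigger' suggests the scan may run asynchronously, so checking the `status` sibling afterward is a reasonable follow-up.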
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but only discloses scope (13 sources). It fails to clarify the execution model implied by 'Trigger' (likely async given the 'status' sibling), return value format, side effects, or operational constraints like rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, efficiently structured sentence that front-loads the action ('Trigger AI intelligence hunt') and uses an em-dash to provide supporting details without waste. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description fails to explain what calling this tool returns (likely a job ID, given the 'status' sibling) or how to retrieve results. For a tool with 'Trigger' semantics and an available 'status' sibling, the failure to document the async workflow pattern is a significant gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4. The description correctly omits parameter discussion since none exist, requiring no additional semantic clarification.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Trigger') and resource ('AI intelligence hunt'), clearly distinguishing it from siblings by enumerating the 13 specific sources scanned (HuggingFace, GitHub, ArXiv, Reddit, etc.) and its focus on 'latest AI developments' versus market intelligence or general search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implicitly guides usage through specificity of sources (broad AI research scanning), it lacks explicit guidance on when to use this versus siblings like 'compound_search', 'market_intel', or 'analyze', and does not mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
line_push (B)
Send a LINE message to the owner via TSUNG1 bot.
| Name | Required | Description | Default |
|---|---|---|---|
| message | Yes | Message to send | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the recipient ('owner') and mechanism ('TSUNG1 bot'), but fails to disclose critical traits like rate limits, message format constraints (HTML/Markdown/plain text), idempotency, or error handling behavior for failed pushes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence of nine words. It is front-loaded with the verb, contains zero redundancy, and every word contributes necessary information (platform, recipient, and bot identifier).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (single required string parameter, no output schema, no nested objects), the description is reasonably complete. It identifies the specific bot and recipient, which is sufficient for invocation, though it could be improved by noting success/failure indicators or message constraints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for the single 'message' parameter. The description implies the message content but adds no semantic details beyond the schema (e.g., length limits, formatting options, encoding). Baseline 3 is appropriate when the schema fully documents the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Send), resource (LINE message), recipient (the owner), and mechanism (via TSUNG1 bot). It effectively distinguishes this notification tool from analytical siblings like 'analyze', 'hunt', and 'compound_search'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives, nor does it specify prerequisites (e.g., whether the bot must be pre-configured) or exclusion criteria. The agent must infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market_intel (A)
Multi-model market intelligence. Queries 3+ LLMs in parallel for consensus analysis on a market question.
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Market intelligence question | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the parallel execution model ('Queries 3+ LLMs in parallel'), but fails to mention side effects, cost implications, rate limits, or return format for the consensus analysis.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first establishes the domain ('Multi-model market intelligence') and the second details the mechanism ('Queries 3+ LLMs...'). Information is front-loaded and every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While adequate for a single-parameter tool, the description lacks output format details (critical given no output schema exists) and safety/cost disclosures relevant to multi-LLM querying. The 'consensus analysis' mention hints at output but remains vague.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with the 'question' parameter fully documented as 'Market intelligence question.' The description references 'a market question' but adds no additional semantic detail, syntax guidance, or examples beyond the schema baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs 'Multi-model market intelligence' using '3+ LLMs in parallel for consensus analysis,' providing specific mechanism and scope. However, it does not explicitly differentiate from siblings like 'analyze' or 'ask' despite the crowded namespace.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'consensus analysis' implies usage context (when multiple model perspectives are needed), but there is no explicit when-to-use guidance or comparison to sibling tools like 'analyze' or 'compound_search' that could serve similar purposes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
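As a rough illustration of the single-parameter call shape described above, an MCP `tools/call` request (a standard JSON-RPC 2.0 envelope) might look like the following. The tool name `market_intel` is a placeholder, since the listing does not settle the exact tool name; only the `question` argument is taken from the documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "market_intel",
    "arguments": {
      "question": "How durable is the data-center GPU market's growth through 2026?"
    }
  }
}
```

Because the description discloses no output schema, an agent should treat the result as unstructured consensus text rather than a typed payload.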
status
Get TSUNG AI system status — provider count, health, capabilities.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds value by disclosing what data points are returned (provider count, health, capabilities), but lacks operational details such as caching behavior, authentication requirements, or rate limiting that would be necessary for a higher score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. Information is front-loaded with the action ('Get') followed by the scope and return value details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters) and lack of output schema, the description partially compensates by listing the returned data categories. However, it should ideally describe the return structure or format more explicitly since no output schema is available to provide this context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, which per guidelines sets a baseline of 4. The description appropriately does not invent parameter semantics where none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and clearly identifies the resource ('TSUNG AI system status'). It distinguishes from sibling 'ghost_status' by specifying the TSUNG AI scope. However, it lacks explicit contrast with other status-related tools, falling short of a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'ghost_status' or other monitoring tools. There are no prerequisites, conditions, or explicit exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
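Since the tool takes no parameters, a call reduces to an empty `arguments` object. A sketch following the same MCP `tools/call` convention (the tool name `status` is taken directly from this listing):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "status",
    "arguments": {}
  }
}
```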
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.