법마디(Lawmadi) OS — Korean Legal AI
Server Details
Korean Legal AI — 60 agents, real-time statute verification via law.go.kr. Free 2/day.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3/5 across 7 of 7 tools scored. Lowest: 1.9/5.
Most tools have distinct purposes: ask_ask_post for basic legal questions, ask_expert_ask_expert_post for deep analysis, ask_stream_ask_stream_post for streaming responses, chat_leader_api_chat_leader_post for 1:1 chat, get_leaders_api_leaders_get for listing specialists, search_search_get for search, and suggest_questions_suggest_questions_post for follow-up suggestions. However, ask_ask_post and ask_expert_ask_expert_post could be confused as both handle legal questions, though the expert mode offers deeper analysis.
Naming is inconsistent with mixed patterns: some use verb_noun (e.g., suggest_questions), others use noun_verb (e.g., chat_leader_api_chat_leader_post), and there are redundant elements like 'ask_ask_post' and 'ask_expert_ask_expert_post'. This lack of a clear convention makes the tool set harder to navigate and predict.
With 7 tools, the count is well-scoped for a legal AI server. It covers key functionalities like asking questions (in basic, expert, and streaming modes), chatting with specialists, listing leaders, searching, and suggesting follow-ups, which is appropriate for the domain without being overwhelming or too sparse.
The tool set covers core legal AI operations: querying (ask, expert, stream), interacting with specialists (chat, get leaders), searching, and suggesting follow-ups. Minor gaps might include tools for managing user sessions or accessing specific legal documents, but the surface supports essential workflows without dead ends.
Available Tools
7 tools

ask_ask_post — Grade C · Read-only · Idempotent
Ask
Main legal question endpoint — routes to 1 of 60 specialist agents with real-time statute verification.
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Output Schema:
{}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type"
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | Response language: 'ko' (Korean) or 'en' (English). Auto-detected from query if omitted. | |
| query | Yes | Legal question in Korean or English (max 2000 chars). Example: '부당해고를 당했는데 어떻게 해야 하나요?' | |
| history | No | Conversation history (max 6 recent turns). Each item: {role: 'user'|'model', content: '...'} | |
| current_leader | No | Current leader context for handoff/deliberation. | |
| is_first_question | No | Whether this is the user's first question in the session. |
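Based on the parameter table above, a client might assemble and validate an /ask request body like this. This is a hedged sketch: the field names and constraints (query ≤ 2000 chars, history ≤ 6 turns, lang 'ko'/'en', roles 'user'/'model') come from the documented schema, but the helper function itself and the exact payload shape are assumptions, not part of the server's published API.

```python
# Sketch of building a request body for the ask_ask_post tool.
# Field names and limits are taken from the parameter table above;
# everything else (the helper, the dict shape) is illustrative.

def build_ask_payload(query, history=None, lang=None,
                      current_leader=None, is_first_question=None):
    """Validate inputs against the documented constraints, return a JSON-ready dict."""
    if not query or len(query) > 2000:
        raise ValueError("query is required and limited to 2000 characters")
    if lang is not None and lang not in ("ko", "en"):
        raise ValueError("lang must be 'ko' or 'en' (omit for auto-detection)")
    history = list(history or [])
    if len(history) > 6:
        # Only the 6 most recent turns are accepted.
        history = history[-6:]
    for turn in history:
        if turn.get("role") not in ("user", "model"):
            raise ValueError("each history item needs role 'user' or 'model'")
    payload = {"query": query, "history": history}
    # Optional fields are sent only when set, so server-side defaults apply otherwise.
    if lang is not None:
        payload["lang"] = lang
    if current_leader is not None:
        payload["current_leader"] = current_leader
    if is_first_question is not None:
        payload["is_first_question"] = is_first_question
    return payload

payload = build_ask_payload("부당해고를 당했는데 어떻게 해야 하나요?", lang="ko")
```

Sending only the fields that are explicitly set keeps the request minimal and lets the server's auto-detection (e.g. of language) apply when a field is omitted.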
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds valuable context about routing to 60 specialist agents and real-time statute verification, which goes beyond annotations. However, it doesn't mention rate limits, authentication requirements, or detailed behavioral traits like response time or error handling beyond the HTTP codes shown.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured and includes excessive technical details like HTTP response codes and output schemas that belong in structured fields, not in the description. The core purpose is buried among response format details. Sentences like 'Main legal question endpoint — routes to 1 of 60 specialist agents with real-time statute verification' earn their place, but the rest is wasteful repetition of schema information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, nested objects) and rich annotations, the description provides basic purpose but lacks usage guidelines and behavioral details like authentication or rate limits. No output schema is provided, but the description includes HTTP response details that partially compensate. However, for a legal question tool with multiple siblings, more contextual guidance is needed for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters well-documented in the schema itself (e.g., 'query' for legal questions, 'lang' for language). The description adds no additional parameter semantics beyond what's in the schema. According to scoring rules, with high schema coverage (>80%), the baseline is 3 even without param info in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states this is the 'Main legal question endpoint' that 'routes to 1 of 60 specialist agents with real-time statute verification,' providing a specific verb ('ask') and resource ('legal question'). However, it doesn't explicitly distinguish this from sibling tools like 'ask_expert_ask_expert_post' or 'ask_stream_ask_stream_post,' which appear to be related legal question tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions routing to specialist agents but doesn't specify when this tool is appropriate compared to sibling tools like 'ask_expert_ask_expert_post' or 'search_search_get.' There are no explicit when/when-not instructions or alternative recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ask_expert_ask_expert_post — Grade B · Read-only · Idempotent
Ask Expert
Expert mode — full 4-Stage Legal Pipeline with deep analysis and comprehensive statute verification.
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Output Schema:
{}
422: Validation Error Content-Type: application/json — same example response and HTTPValidationError output schema as shown for ask_ask_post above.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | Response language: 'ko' or 'en'. Auto-detected if omitted. | |
| query | Yes | Legal question for expert analysis (max 2000 chars). | |
| original_response | No | Original /ask response for deeper expert analysis. |
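A typical expert-mode flow, suggested by the original_response parameter above, is to escalate a basic /ask answer for deeper analysis. The sketch below assumes that chaining pattern; the helper and payload shape are invented for illustration, only the field names and the 2000-character query limit come from the table.

```python
# Hypothetical sketch of escalating a basic /ask answer to expert mode.
# Field names come from the ask_expert_ask_expert_post parameter table;
# the helper itself is illustrative, not part of the published API.

def build_ask_expert_payload(query, original_response=None, lang=None):
    if not query or len(query) > 2000:
        raise ValueError("query is required and limited to 2000 characters")
    payload = {"query": query}
    if lang is not None:
        payload["lang"] = lang
    if original_response is not None:
        # Passing the earlier /ask answer lets expert mode deepen it.
        payload["original_response"] = original_response
    return payload

expert = build_ask_expert_payload(
    "부당해고 구제신청 절차를 자세히 알려주세요",
    original_response="(earlier /ask answer text)",
)
```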
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide key behavioral hints: readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds context by mentioning '4-Stage Legal Pipeline' and 'comprehensive statute verification,' which gives insight into the analysis process. However, it does not disclose additional traits like rate limits, authentication needs, or specific error handling beyond the generic validation error example, resulting in moderate added value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured and not front-loaded; it starts with 'Ask Expert' but then includes extensive, irrelevant details like HTTP response codes, output schemas, and example errors that belong in structured fields rather than the description. This clutter distracts from the core purpose and makes the description longer than necessary, reducing clarity and efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (legal analysis with multiple parameters) and rich annotations, the description is incomplete. It lacks explanation of the output (no output schema is provided, and the description doesn't describe return values), does not detail the '4-Stage Legal Pipeline' or what 'deep analysis' entails, and fails to address potential side effects or performance considerations. This leaves gaps for an AI agent to understand full behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for all three parameters (e.g., 'query' for legal questions, 'lang' for language, 'original_response' for deeper analysis). The description does not add any parameter-specific information beyond what the schema provides, such as examples or usage tips. Since the schema fully documents parameters, the baseline score of 3 is appropriate, as no extra value is contributed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool performs 'Expert mode — full 4-Stage Legal Pipeline with deep analysis and comprehensive statute verification,' which clearly indicates it provides comprehensive legal analysis. It specifies 'Ask Expert' as the action and distinguishes itself from simpler siblings like 'ask_ask_post' by emphasizing 'full' and 'deep' analysis. However, it doesn't explicitly name the sibling tool it differs from, keeping it from a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for complex legal questions requiring deep analysis, suggesting it should be used over simpler alternatives like 'ask_ask_post' when thorough verification is needed. However, it lacks explicit guidance on when to choose this tool versus others (e.g., no mention of 'ask_stream_ask_stream_post' for streaming or 'search_search_get' for searches), and does not specify prerequisites or exclusions, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ask_stream_ask_stream_post — Grade B · Read-only · Idempotent
Ask Stream
SSE streaming legal question — real-time token-by-token response with statute verification.
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Output Schema:
{}
422: Validation Error Content-Type: application/json — same example response and HTTPValidationError output schema as shown for ask_ask_post above.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | Response language: 'ko' or 'en'. Auto-detected if omitted. | |
| mode | No | Stream mode: 'general' (default), 'leader_chat', or 'expert'. | general |
| query | Yes | Legal question in Korean or English (max 2000 chars). | |
| history | No | Conversation history (max 6 recent turns). | |
| leader_id | No | Specific leader ID (e.g. 'L01') for 1:1 chat mode. | |
| current_leader | No | Current leader context for handoff. | |
| is_first_question | No | Whether this is the user's first question in the session. |
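Since the tool streams over SSE, a client has to reassemble events from the text/event-stream wire format. The sketch below parses standard SSE framing ("data:" lines, a blank line terminating each event); this server's actual event payloads are not documented above, so treat it as a generic illustration rather than this API's exact protocol.

```python
# Minimal SSE parsing sketch for a token-by-token streaming response.
# Assumes standard text/event-stream framing; the concrete payload format
# of ask_stream_ask_stream_post is not documented above.

def parse_sse(lines):
    """Yield the data payload of each server-sent event."""
    data_parts = []
    for line in lines:
        if line.startswith("data:"):
            data_parts.append(line[5:].lstrip())
        elif line == "" and data_parts:
            # A blank line terminates the current event.
            yield "\n".join(data_parts)
            data_parts = []
    if data_parts:
        # Flush a trailing event that was not followed by a blank line.
        yield "\n".join(data_parts)

# Hypothetical token stream: two events carrying response fragments.
events = list(parse_sse(["data: 근로", "", "data: 기준법", ""]))
```

In a real client the lines would come from an HTTP response body read incrementally, with each yielded event appended to the growing answer.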
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds significant behavioral context beyond what annotations provide. While annotations declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, the description reveals this is an SSE (Server-Sent Events) streaming endpoint with 'real-time token-by-token response' and 'statute verification' - critical implementation details not captured in annotations. It also documents HTTP response codes (200, 422) and error formats, though some of this overlaps with output schema information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured and includes unnecessary technical details. The first sentence states the core purpose, but then it devotes most of the text to HTTP response documentation (200/422 codes, output schemas, example responses) that belongs in API documentation rather than an AI-facing tool description. This creates clutter without adding value for tool selection. The description should be front-loaded with usage context rather than implementation details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (7 parameters, streaming behavior, legal domain) and rich annotations, the description is partially complete. It covers the streaming nature and statute verification well, but misses critical context about when to use this versus sibling tools. With no output schema provided, the description includes some response format details, but these are mixed with HTTP-level documentation that doesn't help an AI understand the actual response content. The description adequately explains what the tool does but poorly explains when and why to use it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents all 7 parameters with clear descriptions. The tool description adds no additional parameter semantics beyond what's in the schema - it doesn't explain how parameters like 'mode', 'leader_id', or 'current_leader' affect the streaming behavior or statute verification. The baseline score of 3 is appropriate since the schema carries the full parameter documentation burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool's purpose clearly: 'Ask Stream SSE streaming legal question — real-time token-by-token response with statute verification.' This specifies the verb ('Ask'), resource ('legal question'), and key behavioral characteristics ('SSE streaming', 'real-time token-by-token', 'statute verification'). However, it doesn't explicitly differentiate from sibling tools like 'ask_ask_post' or 'ask_expert_ask_expert_post', which likely have overlapping purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'ask_ask_post', 'ask_expert_ask_expert_post', and 'chat_leader_api_chat_leader_post' that appear related to legal questioning, there's no indication of when this SSE streaming approach is preferred over non-streaming or expert-specific alternatives. The description focuses entirely on technical behavior without contextual usage advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
chat_leader_api_chat_leader_post — Grade C · Read-only · Idempotent
Chat Leader
1:1 chat with a specific legal specialist leader via SSE streaming.
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Output Schema:
{}
422: Validation Error Content-Type: application/json — same example response and HTTPValidationError output schema as shown for ask_ask_post above.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Question to ask the specific leader (max 2000 chars). | |
| history | No | Conversation history with this leader. | |
| leader_id | Yes | Leader identifier (e.g. 'L01', 'L32', 'CCO'). Use GET /api/leaders to see all available leaders. |
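A chat_leader request requires both query and leader_id. The sketch below assembles the body and sanity-checks the identifier against the shapes of the documented examples ('L01', 'L32', 'CCO'); that validation pattern is an assumption for illustration — the real server may accept other identifier formats, and the authoritative list comes from the get_leaders tool.

```python
# Sketch of a chat_leader_api_chat_leader_post request body.
# leader_id values come from the get_leaders tool; the regex below only
# reflects the documented examples ('L01', 'L32', 'CCO') and is an assumption.
import re

def build_chat_leader_payload(leader_id, query, history=None):
    if not re.fullmatch(r"L\d{2}|[A-Z]{3}", leader_id):
        raise ValueError(f"unrecognized leader_id: {leader_id!r}")
    if not query or len(query) > 2000:
        raise ValueError("query is required and limited to 2000 characters")
    return {"leader_id": leader_id, "query": query, "history": history or []}

body = build_chat_leader_payload("L01", "해고예고수당은 언제 받을 수 있나요?")
```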
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover key behavioral traits (readOnlyHint: true, destructiveHint: false, idempotentHint: true), so the description's burden is lower. It adds value by specifying 'via SSE streaming,' which clarifies the response mechanism, but doesn't detail rate limits, authentication needs, or error handling beyond the provided examples. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured, with redundant sections like output schemas and example responses that clutter the core purpose. It's not front-loaded effectively, as the key functionality is buried, and sentences like '**200**: Successful Response' don't earn their place in a tool description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (streaming chat with legal leaders) and lack of output schema, the description is moderately complete. It covers the basic purpose and response formats but misses details on streaming behavior, error scenarios beyond 422, or integration with sibling tools, leaving gaps for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents parameters like 'leader_id' and 'query.' The description doesn't add meaningful semantics beyond what's in the schema, such as explaining 'history' usage or 'query' constraints in more depth, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool enables '1:1 chat with a specific legal specialist leader via SSE streaming,' which is a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'ask_expert_ask_expert_post' or 'ask_stream_ask_stream_post,' which may offer similar chat functionalities, leaving some ambiguity about uniqueness.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions using 'GET /api/leaders' to see available leaders, but doesn't explain when to choose this tool over siblings like 'ask_expert_ask_expert_post' or 'ask_stream_ask_stream_post,' leaving usage context unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_leaders_api_leaders_get — Grade A · Read-only · Idempotent
Get Leaders
List all 60+ specialist legal agents (leaders) with their names, specialties, and profiles.
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Output Schema:
{}
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
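The output schema above is empty, so the exact response shape is undocumented. Assuming a plausible list of leader objects with 'id', 'name', and 'specialty' fields — an invented shape, not confirmed by the server — a client might filter the roster by specialty before starting a 1:1 chat:

```python
# Hypothetical filter over the get_leaders_api_leaders_get response.
# The field names ('id', 'name', 'specialty') and sample data are assumptions;
# the actual response shape is not documented in the empty output schema above.

def leaders_by_specialty(leaders, specialty):
    """Return leaders whose specialty matches (assumed field names)."""
    return [leader for leader in leaders if leader.get("specialty") == specialty]

sample = [  # invented sample data in the assumed shape
    {"id": "L01", "name": "담우", "specialty": "노동법"},
    {"id": "L02", "name": "정현", "specialty": "형법"},
]
labor = leaders_by_specialty(sample, "노동법")
```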
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds context about the number of leaders (60+) and the data fields returned, but does not disclose rate limits, authentication needs, or pagination behavior, offering only moderate value beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by response details. However, the inclusion of the output schema section (which is empty and noted as false in context signals) adds minor redundancy, slightly reducing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema), the description provides sufficient context: it explains the resource, scope, and returned data. With annotations covering behavioral traits, the description is adequately complete, though it could benefit from mentioning response format or error handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the schema fully documents the lack of inputs. The description appropriately does not discuss parameters, which is correct for a parameterless tool, earning a baseline score of 4 for not adding unnecessary information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get'/'List' and resource 'specialist legal agents (leaders)' with specific details about what information is returned (names, specialties, profiles) and the scope (all 60+). It distinguishes from siblings like search_search_get by focusing on comprehensive listing rather than query-based retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving a complete list of leaders, which contrasts with tools like search_search_get for filtered searches or chat_leader_api_chat_leader_post for interactive conversations. However, it does not explicitly state when-not-to-use or name alternatives, leaving some inference required.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_search_get — Grade D · Read-only · Idempotent
Search
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Output Schema:
{}
422: Validation Error Content-Type: application/json — same example response and HTTPValidationError output schema as shown for ask_ask_post above.
| Name | Required | Description | Default |
|---|---|---|---|
| q | Yes | Search query for Korean law topics (min 2 chars, max 200 chars). Example: '근로기준법' | |
| limit | No | Maximum number of results to return (1-100, default 10) |
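The constraints in the table (q: 2–200 chars; limit: 1–100, default 10) can be enforced client-side before the request is sent. The sketch below does that and builds the query string; the /search path is inferred from the tool name, not confirmed by the documentation above.

```python
# Sketch of validating search_search_get inputs against the documented
# constraints and assembling the query string. The /search path is an
# inference from the tool name.
from urllib.parse import urlencode

def build_search_query(q, limit=10):
    if not 2 <= len(q) <= 200:
        raise ValueError("q must be between 2 and 200 characters")
    if not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    return "/search?" + urlencode({"q": q, "limit": limit})

url = build_search_query("근로기준법")
```

urlencode percent-encodes the Korean query safely, so callers can pass statute names like '근로기준법' directly.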
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, which cover key behavioral traits. The description adds no additional behavioral context (e.g., rate limits, auth needs, or what 'Search' entails beyond the annotations). However, it does not contradict the annotations, so it meets the lower bar with annotations present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is under-specified with just 'Search', followed by irrelevant HTTP response details that clutter the text without adding useful information. It fails to be appropriately sized or front-loaded with meaningful content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's purpose (searching Korean law) and the presence of annotations, the description is incomplete. It lacks explanation of what 'Search' returns, how results are structured, or any context about the domain (Korean law), leaving significant gaps despite annotations covering safety aspects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters (q and limit) well-documented in the schema. The description adds no parameter semantics beyond what the schema provides, so it meets the baseline of 3 for high schema coverage without adding value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description is essentially a tautology that just repeats the tool name 'Search' without specifying what is being searched, what resource it operates on, or distinguishing it from sibling tools. It provides no meaningful verb+resource combination or scope information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus the sibling tools (like ask_ask_post, suggest_questions_suggest_questions_post, etc.). There are no explicit or implied usage contexts, alternatives, or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
suggest_questions_suggest_questions_post — Grade C · Read-only · Idempotent
Suggest Questions
Suggest 3 contextual follow-up questions based on the current query and leader specialty.
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Output Schema:
{}
422: Validation Error Content-Type: application/json — same example response and HTTPValidationError output schema as shown for ask_ask_post above.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Current user question (max 500 chars). | |
| leader | No | Current leader name (e.g. '담우'). | |
| specialty | No | Current leader's legal specialty (e.g. '노동법'). |
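Per the table above, only query is required (max 500 chars here, shorter than the 2000-char limit of the ask tools), with leader and specialty as optional context. A hedged payload-builder sketch, with the helper itself invented for illustration:

```python
# Sketch of a suggest_questions_suggest_questions_post request body.
# Field names and the 500-character query limit come from the table above;
# the helper is illustrative only.

def build_suggest_payload(query, leader=None, specialty=None):
    if not query or len(query) > 500:
        raise ValueError("query is required and limited to 500 characters")
    payload = {"query": query}
    if leader is not None:
        payload["leader"] = leader
    if specialty is not None:
        payload["specialty"] = specialty
    return payload

suggest = build_suggest_payload("부당해고 구제신청", leader="담우", specialty="노동법")
```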
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds that it suggests '3 contextual follow-up questions', which clarifies the output quantity and context, but doesn't disclose additional behavioral traits like rate limits, authentication needs, or error handling beyond the schema examples. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured, starting with a redundant 'Suggest Questions' header and including extensive, irrelevant HTTP response details (e.g., 200, 422 codes and output schemas) that don't aid tool selection. This clutter wastes space and obscures the core functionality, making it less front-loaded and concise than ideal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, annotations provided) and an empty ({}) output schema, the description is incomplete. It fails to explain the return format (e.g., structure of suggested questions) or usage context relative to siblings, leaving gaps in understanding how to interpret results or when to invoke it. The inclusion of HTTP noise doesn't compensate for these omissions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for 'query', 'leader', and 'specialty' parameters. The description adds that suggestions are based on 'current query and leader specialty', reinforcing the parameter roles, but doesn't provide extra semantic details beyond what the schema already documents. With high schema coverage, a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Suggest 3 contextual follow-up questions based on the current query and leader specialty', which provides a clear verb ('suggest') and resource ('follow-up questions'). However, it doesn't distinguish this tool from sibling tools like 'ask_ask_post' or 'chat_leader_api_chat_leader_post', which also seem related to question handling. The purpose is understandable but lacks sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools or specify scenarios where suggesting follow-up questions is preferred over direct asking or chatting. Without this context, an AI agent must guess based on tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
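Before publishing, it can help to sanity-check the file locally. The sketch below validates the two documented fields ($schema and maintainers) of a glama.json document; the checks are an assumption based on the structure shown above, not an official validator.

```python
import json

REQUIRED_SCHEMA = "https://glama.ai/mcp/schemas/connector.json"


def check_glama_json(text: str) -> list[str]:
    """Return a list of problems found in a glama.json document (empty = OK)."""
    problems = []
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    if doc.get("$schema") != REQUIRED_SCHEMA:
        problems.append("missing or wrong $schema")
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("maintainers must be a non-empty array")
    elif not all(isinstance(m, dict) and "@" in str(m.get("email", ""))
                 for m in maintainers):
        problems.append("each maintainer entry needs an email address")
    return problems
```

Run it against the file you intend to serve at /.well-known/glama.json; an empty result means the documented structure is present.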
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.