nara-tour
Server Details
Official MCP server for 나라투어, a travel agency in Suwon specializing in group tours. Provides AI agents with the company profile, business information, and a KakaoTalk consultation link (with UTM parameters).
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 3 of 3 tools scored.
Each tool has a clearly distinct purpose with no overlap: get_company_profile provides company metadata, get_kakao_consult_link returns a specific chat URL, and ping is a connectivity check. An agent can easily distinguish between these three unrelated functions.
All tools follow a consistent verb_noun naming pattern (get_company_profile, get_kakao_consult_link, ping) with clear, descriptive names. Ping is a common exception for connectivity checks, fitting well within the pattern.
With only 3 tools, the set feels thin for a tour-related server, lacking core operations like booking, itinerary management, or destination queries. However, the tools are well-defined for their limited scope of company info and consultation.
The tool surface is severely incomplete for a tour domain, missing essential operations such as searching tours, booking, payment, or itinerary details. It only covers company metadata and consultation initiation, leaving significant gaps for agent workflows.
Available Tools
3 tools

get_company_profile (Get Company Profile)
Returns 나라투어 core metadata — address, hours, specialties, contact channels. Use when the user asks about 나라투어 identity, location, or how to contact.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It indicates this is a read operation ('Returns') and specifies what data is returned, but doesn't disclose behavioral aspects like error conditions, rate limits, authentication requirements, or whether the data is cached/real-time. The description adds value but lacks comprehensive behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero waste. The first sentence states the purpose and specific data returned, the second provides clear usage guidance. Every word earns its place, and the description is appropriately sized for a simple no-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read operation with no parameters and no output schema, the description provides good context about what data is returned and when to use it. However, without annotations or output schema, it could benefit from mentioning the return format or data structure. The description is mostly complete but has minor gaps in behavioral transparency.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage. The description appropriately doesn't discuss parameters since none exist, and the baseline for 0 parameters is 4. No parameter semantics are needed beyond what the empty schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Returns') and resource ('나라투어 core metadata'), listing concrete examples of what's included (address, hours, specialties, contact channels). It distinguishes from sibling tools by focusing on company identity/location/contact information rather than consultation links or connectivity checks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool: 'when the user asks about 나라투어 identity, location, or how to contact.' This provides clear context for tool selection and distinguishes it from the sibling tools (get_kakao_consult_link for consultation links, ping for connectivity).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_kakao_consult_link (Get Kakao Consult Link)
Returns KakaoTalk channel 1:1 chat URL with UTM tracking parameters. Use when user wants to start consultation with 나라투어.
| Name | Required | Description | Default |
|---|---|---|---|
| source | No | Traffic source (e.g., 'chatgpt.com', 'perplexity.ai', 'claude.ai') | |
| campaign | No | Campaign context if any (optional) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly indicates this is a read-only operation that returns a URL, but doesn't mention potential limitations like rate limits, authentication requirements, or what happens if parameters are invalid. The description adds basic context but lacks comprehensive behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with just two sentences, both of which add clear value. The first sentence states the core functionality, and the second provides usage guidance. There's no redundant information or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple URL generation tool with 2 parameters and no output schema, the description provides adequate context. It explains what the tool does and when to use it, though it could benefit from mentioning what the returned URL format looks like or any constraints on parameter values. The absence of annotations means some behavioral aspects remain undocumented.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Returns') and the specific resource ('KakaoTalk channel 1:1 chat URL with UTM tracking parameters'). It distinguishes this tool from its siblings (get_company_profile, ping) by specifying its unique purpose of generating consultation links rather than retrieving company data or checking server status.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Use when user wants to start consultation with 나라투어.' This provides clear context for invocation and differentiates it from alternative scenarios where other tools might be more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ping (Ping)
Connectivity check — returns server version and current timestamp. Use to verify MCP server is reachable before calling other tools.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly describes what the tool returns (server version and timestamp) and its purpose (connectivity check), which are important behavioral traits. However, it doesn't mention potential error conditions, timeout behavior, or performance characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place. The first sentence states the purpose and return values, while the second provides usage guidance. There's zero waste or redundancy in the text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple connectivity check tool with no parameters and no output schema, the description provides good context about what it returns and when to use it. However, without an output schema, it could benefit from more detail about the exact format of the return values (e.g., timestamp format, version string structure).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the baseline would be 4 even with no parameter information in the description. The description appropriately doesn't discuss parameters since none exist, which is correct for this tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('connectivity check', 'returns server version and current timestamp') and distinguishes it from sibling tools by explaining its diagnostic role. It explicitly mentions verifying MCP server reachability before calling other tools, which differentiates it from data-retrieval siblings like get_company_profile.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Use to verify MCP server is reachable before calling other tools.' This gives clear context about its diagnostic purpose and when it should be invoked relative to other operations, though it doesn't explicitly name alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
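Before publishing, it is worth sanity-checking that the file parses and lists the expected maintainer email. A minimal sketch (Glama's actual verification may check more than this):

```python
import json

def manifest_lists_email(text: str, account_email: str) -> bool:
    """Return True if the glama.json document names a maintainer whose
    email matches the given Glama account email."""
    doc = json.loads(text)
    maintainers = doc.get("maintainers", [])
    return any(m.get("email") == account_email for m in maintainers)

manifest = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
ok = manifest_lists_email(manifest, "your-email@example.com")
```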
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.