vietnamese-calendar
Server Details
Vietnamese Lunar Calendar for date and calendar conversion, and cultural insights.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 5 of 5 tools scored. Lowest: 2.8/5.
Each tool targets a distinct aspect of the Vietnamese calendar: lunar-to-solar conversion, solar-to-lunar conversion, Can Chi calculation, monthly lunar days, and vegetarian days. There is no overlap or ambiguity.
All tool names follow a consistent verb_noun pattern with descriptive prefixes (convert_date_*, get_*). The naming is uniform and predictable, making it easy to understand each tool's purpose.
With 5 tools covering the core operations of a Vietnamese calendar utility, the count is well-scoped. Each tool serves a clear need without being excessive or insufficient.
The set covers essential conversions and queries (lunar/solar, Can Chi, monthly days, vegetarian days). A minor gap might be the lack of a tool for leap year detection or holiday calculations, but the core functionality is complete for most use cases.
Available Tools
5 tools

convert_date_lunar2solar (Grade: A)
Convert a lunar date (Vietnamese calendar) to solar date (Gregorian calendar)
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Lunar date in YYYY-MM-DD format (e.g. 2026-01-01 for Tet) | |
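As a concrete illustration, a client might invoke this tool with an MCP `tools/call` request like the one sketched below. The JSON-RPC envelope follows the MCP specification; the response shape is not documented on this listing, so only the request side is shown.

```python
import json

def build_lunar2solar_call(lunar_date: str, request_id: int = 1) -> str:
    """Build a JSON-RPC payload for a lunar-to-solar conversion call."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "convert_date_lunar2solar",
            # "date" is the tool's only parameter: a lunar date in YYYY-MM-DD
            "arguments": {"date": lunar_date},
        },
    })

# Example: the first day of the lunar year 2026 (Tet)
payload = build_lunar2solar_call("2026-01-01")
```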
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the basic behavior (conversion between calendar systems) but lacks details on error handling, rate limits, authentication needs, or output format. The description does not contradict any annotations, but it provides only minimal behavioral context beyond the core function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that directly states the tool's purpose without any unnecessary words. It is front-loaded and efficiently communicates the essential information, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no output schema, no annotations), the description is adequate but minimal. It covers the basic purpose and distinguishes from siblings, but lacks details on output format, error cases, or behavioral traits. For a conversion tool, more context on what the output looks like would be beneficial, but it meets the minimum viable threshold.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the parameter 'date' fully documented in the schema. The description does not add any additional meaning or context about the parameter beyond what the schema provides, such as examples of lunar date formats or validation rules. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb 'convert' and the resources involved: 'lunar date (Vietnamese calendar)' to 'solar date (Gregorian calendar)'. It explicitly distinguishes from the sibling tool 'convert_date_solar2lunar' by specifying the opposite direction of conversion, making the purpose unambiguous and well-differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool (for converting from lunar to solar dates) and the sibling tool name suggests an alternative for the reverse conversion. However, it does not explicitly state when NOT to use it or provide detailed contextual exclusions, such as handling invalid dates or edge cases, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
convert_date_solar2lunar (Grade: A)
Convert a solar date (Gregorian calendar) to lunar date (Vietnamese calendar)
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Solar (Gregorian) date in YYYY-MM-DD format (e.g. 2026-04-18) | |
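One practical use of the two conversion tools together is a round-trip sanity check: converting a lunar date to solar and back should return the original date. The sketch below only builds the two request payloads; sending them depends on the client transport, and the solar date 2026-02-17 (assumed here to be Tet 2026) is for illustration only.

```python
# Round-trip check an agent could run against this server:
# lunar -> solar via convert_date_lunar2solar, then solar -> lunar via
# convert_date_solar2lunar, should reproduce the original lunar date.
def conversion_request(tool: str, date: str) -> dict:
    """Build a tools/call request for either conversion direction."""
    assert tool in ("convert_date_lunar2solar", "convert_date_solar2lunar")
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": tool, "arguments": {"date": date}},
    }

forward = conversion_request("convert_date_lunar2solar", "2026-01-01")
# ...send forward, read the solar date from the result, then (assuming
# the server returned 2026-02-17) issue the reverse call:
backward = conversion_request("convert_date_solar2lunar", "2026-02-17")
```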
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the core conversion behavior but doesn't disclose additional traits like error handling, timezone considerations, validation rules, or what happens with invalid dates. The description is accurate but lacks behavioral depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that contains no wasted words. It immediately communicates the tool's purpose and differentiates it from its sibling, making it perfectly front-loaded and appropriately sized for this simple conversion tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter conversion tool with no output schema and no annotations, the description provides adequate context about what the tool does and when to use it. However, it doesn't describe the return format or potential edge cases, leaving some gaps in completeness for a tool that performs calendar conversions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'date' fully documented in the schema. The description doesn't add any parameter information beyond what's already in the schema, so it meets the baseline for high schema coverage without providing extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('convert'), the input resource ('solar date (Gregorian calendar)'), and the output resource ('lunar date (Vietnamese calendar)'). It precisely distinguishes this tool from its sibling 'convert_date_lunar2solar' by specifying the direction of conversion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly indicates when to use this tool by specifying it converts from solar to lunar, making it clear this is the alternative to 'convert_date_lunar2solar' which converts in the opposite direction. This provides perfect sibling differentiation without needing additional exclusion statements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_canchi (Grade: B)
Get Can Chi for a given solar date
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Solar (Gregorian) date in YYYY-MM-DD format (e.g. 2026-04-18) | |
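Since the description never defines the term, a brief aside: Can Chi is the Vietnamese sexagenary cycle, which pairs ten Heavenly Stems (Can) with twelve Earthly Branches (Chi). This tool presumably computes Can Chi for the given day; the sketch below shows only the standard year formula, as a hint of what kind of output an agent should expect.

```python
# The ten Heavenly Stems (Can) and twelve Earthly Branches (Chi) of the
# Vietnamese sexagenary cycle.
CAN = ["Giáp", "Ất", "Bính", "Đinh", "Mậu",
       "Kỷ", "Canh", "Tân", "Nhâm", "Quý"]
CHI = ["Tý", "Sửu", "Dần", "Mão", "Thìn", "Tỵ",
       "Ngọ", "Mùi", "Thân", "Dậu", "Tuất", "Hợi"]

def year_can_chi(lunar_year: int) -> str:
    """Can Chi name of a lunar year, e.g. 1984 -> 'Giáp Tý'."""
    return f"{CAN[(lunar_year - 4) % 10]} {CHI[(lunar_year - 4) % 12]}"
```

Day-level Can Chi (what this tool likely returns for a specific date) uses a different formula based on the Julian day number, which is omitted here.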
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description lacks details on behavior such as what is returned, side effects, or operational constraints. It merely states the action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence with no unnecessary words, effectively conveying the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description does not explain what 'Can Chi' refers to or what the output format is, leaving the agent underinformed. With no output schema or annotations, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description of the date parameter. The description adds no extra meaning beyond the schema, meeting the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get Can Chi for a given solar date,' specifying the verb and resource. It distinguishes from sibling tools that involve lunar conversions, making the tool's purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. The description only states what it does, without context on prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_lunar_days_for_month (Grade: C)
Get lunar days for a given solar month
| Name | Required | Description | Default |
|---|---|---|---|
| solar_year | Yes | Solar Year in YYYY format (e.g. 2026) | |
| solar_month | Yes | Solar Month, valid values are 1-12 | |
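Because the schema constrains solar_month to 1-12 and solar_year to a YYYY value, a client can validate inputs locally before calling the tool. The hypothetical helper below does that and builds the request payload; the response format is undocumented, so it is not modeled.

```python
import json

def month_query(solar_year: int, solar_month: int) -> str:
    """Validate inputs per the schema, then build the tools/call payload."""
    if not 1 <= solar_month <= 12:
        raise ValueError("solar_month must be between 1 and 12")
    if not 1000 <= solar_year <= 9999:
        raise ValueError("solar_year must be a four-digit year (YYYY)")
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "get_lunar_days_for_month",
            "arguments": {"solar_year": solar_year,
                          "solar_month": solar_month},
        },
    })
```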
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fails to disclose behavioral traits like whether it is read-only, what the output looks like, or any rate limits. For a tool with no annotation safety net, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence, but it lacks any structure or elaboration. While short, it omits critical context that could be added without significantly increasing its length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should clarify what 'lunar days' means—whether it returns a list of dates, day numbers, or something else. The description is incomplete for a tool with this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptive parameter text for solar_year and solar_month. The tool description adds no additional meaning beyond what the schema already provides, so the baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get lunar days for a given solar month' clearly states the verb ('Get') and the resource ('lunar days for a given solar month'). It distinguishes from sibling tools like conversion or vegetarian days, though 'get' is slightly vague regarding the return format.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as convert_date_solar2lunar or get_lunar_vegetarian_days. There is no mention of when-not or context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_lunar_vegetarian_days (Grade: A)
Get lunar vegetarian days for a given solar date and calendar type
| Name | Required | Description | Default |
|---|---|---|---|
| solar_day | No | Day, omit if you want to get veggie days for the whole month | |
| solar_year | Yes | Solar Year in YYYY format (e.g. 2026) | |
| solar_month | Yes | Solar Month, valid values are 1-12 | |
| lunar_vegetarian_calendar_type | No | Lunar vegetarian calendar type, one of: two_days, four_days, six_days, eight_days, ten_days | ten_days |
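The calendar types map to fixed sets of lunar days of the month, but the listing does not document which days each type includes. The sets below follow one common Buddhist convention (e.g. the ten-day thập trai schedule) and are assumptions for illustration only, not taken from this server's implementation; eight_days is omitted because conventions vary.

```python
# Assumed day sets per calendar type; NOT confirmed by the tool's docs.
VEGETARIAN_DAYS = {
    "two_days": [1, 15],
    "four_days": [1, 14, 15, 30],
    "six_days": [8, 14, 15, 23, 29, 30],
    "ten_days": [1, 8, 14, 15, 18, 23, 24, 28, 29, 30],
}

def is_vegetarian_day(lunar_day: int, calendar_type: str = "ten_days") -> bool:
    """Check a lunar day-of-month against the chosen schedule."""
    return lunar_day in VEGETARIAN_DAYS[calendar_type]
```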
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must bear the full burden of behavioral disclosure. It correctly indicates that this is a query tool (get) and not a destructive operation. However, it does not mention performance, rate limits, or what happens when no vegetarian days are found or for invalid dates. Additional context on return format or edge cases would improve transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at one sentence. It is front-loaded with the primary action and resource. It is not verbose, though it could benefit from a brief note on usage or the return value. No sentence is wasted.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 4 parameters (2 required) and no output schema. The description provides a basic understanding but lacks details on the output format, especially since there is no output schema. Given the complexity (lunar calendar calculations), users may need more context on what the output contains. The description is adequate but not complete for a smooth agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers 100% of parameters with descriptions, so the description adds no additional parameter meaning beyond the schema. The baseline score is 3. The description does not elaborate on parameters like the relationship between solar_day and the return type, which the schema already hints at ('omit if you want to get veggie days for the whole month').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving lunar vegetarian days. It specifies the inputs (solar date and calendar type), which are distinct from the sibling tools that handle date conversion. The verb 'Get' is appropriate, and the resource 'lunar vegetarian days' is clearly identified.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool versus alternatives. While the sibling tools are date converters, the description does not mention them or explain when one would choose this tool over them. The usage context (get vegetarian days) is implied, but no guidance on exclusions or prerequisites is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
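Before publishing, the file can be checked locally. The hypothetical validator below mirrors the structure shown above; it only verifies the `$schema` URL and that your account email appears among the maintainers.

```python
import json

def validate_glama_json(text: str, account_email: str) -> None:
    """Sanity-check a glama.json document before publishing it."""
    doc = json.loads(text)
    if doc.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        raise ValueError("unexpected $schema")
    emails = [m.get("email") for m in doc.get("maintainers", [])]
    if account_email not in emails:
        raise ValueError("maintainer email must match your Glama account")
```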
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.