hebcal
Server Details
Model Context Protocol extension for the Hebrew calendar
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP (see the connection sketch below)
- URL:
- Repository: hebcal/hebcal-mcp
- GitHub Stars: 1
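To make the transport concrete, here is a minimal connection sketch using the TypeScript MCP SDK. It is illustrative only: the endpoint URL is a placeholder (the listing above does not show the real one), and import paths may differ across SDK versions.

```typescript
// Minimal sketch, assuming @modelcontextprotocol/sdk and a placeholder endpoint URL.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder URL: the actual hebcal endpoint is not shown in this listing.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));
const client = new Client({ name: "example-client", version: "0.1.0" });

await client.connect(transport); // performs the MCP initialize handshake
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name)); // should include the 7 tools listed below
```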
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
7 tools
convert-gregorian-to-hebrew (grade C)
Converts a Gregorian (civil) date to a Hebrew date (Jewish calendar)
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Gregorian date (in yyyy-MM-dd format) to convert | |
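As a hedged illustration (the date value is made up, and the result shape is undocumented), a call via the SDK client from the connection sketch above might look like:

```typescript
// Hypothetical invocation, reusing the connected `client` from the Server Details sketch.
const result = await client.callTool({
  name: "convert-gregorian-to-hebrew",
  arguments: { date: "2025-10-02" }, // yyyy-MM-dd, per the schema
});
// No output schema is published, so the result must be inspected at runtime.
```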
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full behavioral disclosure burden. It fails to specify the output format (e.g., whether Hebrew dates include Hebrew characters, month names, or numeric formats), timezone handling, or behavior with invalid dates; it clarifies only the calendar-system synonyms.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no redundancy or filler. However, given the lack of annotations and output schema, the description may be overly terse—front-loaded with purpose but lacking behavioral details that would help an agent predict outcomes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter conversion tool, identifying the transformation direction. However, without an output schema or annotations, the description should ideally specify the Hebrew date output format (string format, localization) to be complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for its single parameter (date format 'yyyy-MM-dd'). The description offers no additional parameter guidance beyond what the schema already documents, meeting the baseline score for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb (Converts) and resources (Gregorian/Hebrew dates). Parenthetical clarifications '(civil)' and '(Jewish calendar)' add semantic value. However, it does not explicitly distinguish from the sibling 'convert-hebrew-to-gregorian' tool by naming it or explaining the directional difference.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus siblings like 'yahrzeit' or 'jewish-holidays-year', nor any mention of prerequisites or error conditions. Usage is implied by the tool name but not stated in the description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
convert-hebrew-to-gregorian (grade C)
Converts a Hebrew date to a Gregorian (civil) date
| Name | Required | Description | Default |
|---|---|---|---|
| day | Yes | Hebrew day of month | |
| year | Yes | Hebrew year | |
| month | Yes | Hebrew month name transliterated, like Elul or Tishrei | |
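An illustrative call follows; note that the schema does not declare types for day and year, so the numeric values here are an assumption:

```typescript
// Hypothetical invocation; numeric day/year are assumed, not documented.
const result = await client.callTool({
  name: "convert-hebrew-to-gregorian",
  arguments: {
    day: 15,          // Hebrew day of month
    month: "Tishrei", // transliterated month name, per the schema
    year: 5786,       // Hebrew year
  },
});
```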
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to indicate the output format (string? object? ISO date?), error behavior for invalid Hebrew dates (e.g., the 30th day of a month that has only 29 days), or whether the conversion accounts for Hebrew days beginning at sunset rather than midnight.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence with no redundant words or filler. However, it is minimalist to a fault—while not wasting space, it misses opportunities to add value about edge cases or output format that would justify a higher score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacking an output schema and annotations, the description should compensate by describing the return value structure or format. It provides no information about what the tool returns (e.g., ISO string, object with day/month/year fields), leaving a significant gap for an AI agent attempting to use the result.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (day, month, year all documented), establishing a baseline of 3. The description adds no semantic context beyond the schema: the note that month names are transliterated appears only in the parameter descriptions, and neither source documents parameter relationships or valid value ranges.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Converts') and identifies both source (Hebrew) and target (Gregorian/civil) resources clearly. While it doesn't explicitly name the sibling tool 'convert-gregorian-to-hebrew', the directionality in the description combined with the tool name makes the distinction clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus its sibling 'convert-gregorian-to-hebrew', nor does it mention prerequisites such as valid Hebrew date ranges or leap year handling. It assumes the agent already knows the conversion direction needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
daf-yomi (grade B)
Calculates the Daf Yomi (Babylonian Talmud) learning for a specified date
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Gregorian date in yyyy-MM-dd format | |
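An illustrative call with a made-up date; behavior for dates before the first Daf Yomi cycle began in 1923 is undocumented:

```typescript
// Hypothetical invocation; pre-1923 dates may fail or return nothing, per the critique above.
const result = await client.callTool({
  name: "daf-yomi",
  arguments: { date: "2025-10-02" }, // yyyy-MM-dd, per the schema
});
```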
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to describe the output format, whether the calculation includes the current cycle's tractate/page breakdown, or what happens for dates outside the Daf Yomi epoch (pre-1923). The term 'calculates' is vague regarding whether this performs a lookup or algorithmic computation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence that front-loads the verb and object. Every word earns its place without redundancy or unnecessary hedging.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter lookup tool, but lacks output description since no output schema exists. For a tool returning structured Jewish text references, mentioning the return format (tractate name + page) would elevate this to 4-5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (the 'date' parameter is fully documented). The description references 'specified date' which aligns with the parameter, earning the baseline 3. It does not add semantic nuance beyond the schema regarding date boundaries or timezone handling.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the specific resource (Daf Yomi/Babylonian Talmud) and action (calculates) with enough specificity to distinguish from sibling calendar/holiday tools. However, it could be elevated to 5 by explicitly stating what constitutes the 'learning' (i.e., the specific tractate and page number) to fully disambiguate the output scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no explicit guidance on when to use this tool versus siblings like 'torah-portion' or 'jewish-holidays-year'. While the domain is distinct, the description lacks 'use this when...' framing or prerequisites (e.g., understanding that Daf Yomi follows a fixed cycle of roughly 7.5 years).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jewish-holidays-year (grade A)
Calculates a list of all Jewish holidays during a Gregorian (civil) year
| Name | Required | Description | Default |
|---|---|---|---|
| year | Yes | Gregorian year | |
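For illustration, assuming year is numeric (the schema does not state its type):

```typescript
// Hypothetical invocation; a numeric year is assumed.
const result = await client.callTool({
  name: "jewish-holidays-year",
  arguments: { year: 2025 }, // Gregorian (civil) year
});
```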
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses the output structure ('a list') and input calendar system ('Gregorian'), but omits which holidays are included (major/minor/fast days), whether Hebrew dates are returned, or if the list is sorted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence of 12 words with no redundancy. Front-loaded with action verb 'Calculates', every word serves a purpose, and appropriate length for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema, the description provides minimum viable context. However, given the lack of output schema, it should ideally specify what constitutes a 'Jewish holiday' in this context or the return format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the 'year' parameter already described as 'Gregorian year'. The description adds the '(civil)' synonym, providing minimal additional semantic value beyond the structured schema, warranting the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Calculates'), resource ('Jewish holidays'), and scope ('during a Gregorian year'). It effectively distinguishes from siblings like 'shabbat-times' (weekly) and 'convert-gregorian-to-hebrew' (conversion vs. holiday listing).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies 'Gregorian (civil) year' which clarifies the input format, but provides no explicit guidance on when to use this versus calendar conversion tools or other date-related siblings. The scope is clear but lacks 'when-not' or alternative recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shabbat-times (grade A)
Generates Shabbat and holiday candle-lighting and Havdalah times for a given location and date range
| Name | Required | Description | Default |
|---|---|---|---|
| tzid | Yes | Olson timezone ID (e.g. "America/Chicago", "Europe/Moscow") | |
| endDate | Yes | End date in yyyy-MM-dd format | |
| latitude | Yes | Latitude as decimal, valid range -90 to +90 (e.g. 41.85003) | |
| longitude | Yes | Longitude as decimal, valid range -180 to +180 (e.g. -87.65005) | |
| startDate | Yes | Start date in yyyy-MM-dd format | |
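An illustrative call using the example coordinates from the parameter descriptions above; the date range is made up:

```typescript
// Hypothetical invocation, reusing the schema's example values for Chicago.
const result = await client.callTool({
  name: "shabbat-times",
  arguments: {
    tzid: "America/Chicago", // Olson timezone ID
    latitude: 41.85003,      // decimal degrees, -90 to +90
    longitude: -87.65005,    // decimal degrees, -180 to +180
    startDate: "2025-10-03", // yyyy-MM-dd (made-up range)
    endDate: "2025-10-10",
  },
});
```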
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable behavioral context by specifying 'candle-lighting' (pre-sunset) and 'Havdalah' (post-sunset/twilight) rather than generic sunset times, and clarifies coverage includes both Shabbat and holidays. However, it omits calculation methods (e.g., 18 minutes before sunset), output format details, and handling of edge cases like polar regions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the verb and covers all essential elements (what it generates, for what occasions, for what inputs). Zero redundancy or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the moderate complexity (5 required primitives, no nested objects) and absence of an output schema, the description adequately explains what the tool returns. It appropriately scopes the functionality (Shabbat + holidays, candle-lighting + Havdalah). Minor gap: could hint at output structure (list of events) given no output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description provides conceptual grouping by mentioning 'location' (encompassing lat/long/tzid) and 'date range' (encompassing start/end dates), which adds semantic clarity beyond the raw schema, but does not add syntax details, validation rules, or cross-parameter constraints beyond what the schema already documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Generates') and clearly identifies the resource (Shabbat and holiday candle-lighting and Havdalah times). It effectively distinguishes from siblings like 'convert-gregorian-to-hebrew' (date conversion) and 'jewish-holidays-year' (likely date listing vs. specific ritual times) by specifying candle-lighting and Havdalah, which are time-specific rituals.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through specificity—candle-lighting and Havdalah times suggest precise temporal calculations rather than just date listings. However, it lacks explicit guidance on when to use this versus 'jewish-holidays-year' or other siblings, and provides no prerequisites or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
torah-portion (grade B)
Calculates the weekly Torah portion (also called parashat haShavua) for a specified date
| Name | Required | Description | Default |
|---|---|---|---|
| il | Yes | True if in Israel, false for Diaspora | |
| date | Yes | Gregorian date in yyyy-MM-dd format | |
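An illustrative call; il: false requests the Diaspora reading schedule, which can diverge from Israel's after certain holidays:

```typescript
// Hypothetical invocation; `il` toggles Israel vs. Diaspora reading schedules.
const result = await client.callTool({
  name: "torah-portion",
  arguments: { il: false, date: "2025-10-02" },
});
```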
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Fails to disclose what the tool returns (string? object?), error behavior for invalid dates, or the critical behavioral difference between Israel and Diaspora reading schedules despite having the 'il' parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence with strong verb-first structure. No redundancy or waste. Front-loaded with the core action. Minor gap: a second sentence explaining the Israel/Diaspora distinction would be justified given the parameter's behavioral impact.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 2-parameter tool with complete schema documentation. However, it lacks an output description (no output schema exists) and fails to explain the semantic importance of the Israel/Diaspora flag, which affects the calculation results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter descriptions, establishing baseline of 3. Description mentions 'specified date' which aligns with the date parameter, but adds no semantic context about the 'il' parameter's significance for determining reading schedules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity with 'Calculates' + 'weekly Torah portion' + alternative name 'parashat haShavua'. Clearly distinguishes from sibling holiday/conversion tools by focusing on weekly Torah readings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implicit usage is clear (use when you need the weekly portion for a date), but it lacks explicit when-not-to-use guidance. There is no mention of its relationship to sibling tools such as daf-yomi (a daily reading, versus this tool's weekly portion), nor of date-range limitations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
yahrzeit (grade A)
Calculates the Yahrzeit, the anniversary of the day of death of a loved one, according to the Hebrew calendar for a specified date
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Gregorian date of death (in yyyy-MM-dd format) | |
| afterSunset | Yes | after sunset | |
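An illustrative call; the afterSunset flag presumably shifts the death to the next Hebrew day (Hebrew days begin at sunset), though the schema does not say so:

```typescript
// Hypothetical invocation; the effect of `afterSunset` is assumed, not documented.
const result = await client.callTool({
  name: "yahrzeit",
  arguments: { date: "2020-03-14", afterSunset: false },
});
```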
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden. It successfully adds the Hebrew calendar context not present in schema, but lacks disclosure of return value format, calculation edge cases (e.g., Adar II in leap years), or specific behavior of the afterSunset flag beyond the parameter name.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, front-loaded sentence with zero waste. Action ('Calculates') comes first, followed by definition and scope. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a 2-parameter calculation tool with 100% schema coverage. The missing output schema is partially mitigated by the clear verb 'Calculates', though explicit mention of return format (Hebrew date vs Gregorian) would achieve a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (date format yyyy-MM-dd documented, afterSunset described). Description reinforces that 'date' refers to death date via 'anniversary of the day of death', but adds no syntax, validation rules, or format details beyond the schema. Baseline 3 appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: verb 'Calculates', resource 'Yahrzeit' with parenthetical definition ('anniversary of the day of death'), and scope 'according to the Hebrew calendar'. Clearly distinguishes from generic conversion siblings like convert-gregorian-to-hebrew by targeting the specific Jewish mourning ritual.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context by defining Yahrzeit as a death anniversary, making the domain unambiguous. While it doesn't explicitly contrast with siblings, the specific cultural terminology and explanation imply distinct use case from general calendar conversions or holiday lookups. No explicit 'when not to use' guidance prevents a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.