Sivan22

Sefaria Jewish Library MCP Server

get_commentaries

Retrieve a list of commentary references for a specific Jewish text, enabling deeper study and analysis of traditional texts in the Sefaria Library.

Instructions

get a list of references of commentaries for a jewish text

Input Schema

Name       Required  Default  Description
reference  Yes       —        the reference of the jewish text, e.g. 'שולחן ערוך אורח חיים סימן א' or 'Genesis 1:1'

Implementation Reference

  • The primary handler function that implements the tool logic: it fetches related links from the Sefaria API for the given reference, filters those of type 'commentary', and returns a list of Hebrew commentary references.
    async def get_commentaries(parasha_ref) -> list[str]:
        """
        Retrieves and filters commentaries on the given verse.
        """
        data = get_request_json_data("api/related/", parasha_ref)
    
        commentaries = []
        if data and "links" in data:
            for linked_text in data["links"]:
                if linked_text.get('type') == 'commentary':
                    commentaries.append(linked_text.get('sourceHeRef'))
    
        return commentaries
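The filtering step above can be exercised offline with a mocked payload. Note the `{"links": [...]}` shape and the sample values below are assumptions inferred from the handler code, not the full Sefaria `/api/related` response:

```python
# Minimal offline sketch of the commentary-filtering logic above:
# keep only links of type 'commentary' and collect their Hebrew refs.
def filter_commentaries(data):
    commentaries = []
    if data and "links" in data:
        for linked_text in data["links"]:
            if linked_text.get("type") == "commentary":
                commentaries.append(linked_text.get("sourceHeRef"))
    return commentaries

# Hypothetical payload mimicking the API shape the handler expects.
sample = {
    "links": [
        {"type": "commentary", "sourceHeRef": "Rashi on Genesis 1:1"},
        {"type": "targum", "sourceHeRef": "Onkelos Genesis 1:1"},
    ]
}
print(filter_commentaries(sample))  # ['Rashi on Genesis 1:1']
```

Non-commentary links are dropped, and a missing or empty payload yields an empty list rather than an error.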
  • Registers the 'get_commentaries' tool with the MCP server in the list_tools handler, specifying its description and input schema.
    types.Tool(
        name="get_commentaries",
        description="get a list of references of commentaries for a jewish text",
        inputSchema={
            "type": "object",
            "properties": {
                "reference": {
                    "type": "string",
                    "description": "the reference of the jewish text, e.g. 'שולחן ערוך אורח חיים סימן א' or 'Genesis 1:1'",
                },
            },
            "required": ["reference"],
        },
    ),
  • Defines the JSON schema for the tool's input: requires a 'reference' string parameter.
    inputSchema={
        "type": "object",
        "properties": {
            "reference": {
                "type": "string",
                "description": "the reference of the jewish text, e.g. 'שולחן ערוך אורח חיים סימן א' or 'Genesis 1:1'",
            },
        },
        "required": ["reference"],
    },
  • Tool dispatch handler in the call_tool function that extracts the reference argument, calls the get_commentaries function, and formats the result as text content or handles errors.
    elif name == "get_commentaries":
        try:
            reference = arguments.get("reference")
            if not reference:
                raise ValueError("Missing reference parameter")
            
            logger.debug(f"handle_get_commentaries: {reference}")
            commentaries = await get_commentaries(reference)
            
            return [types.TextContent(
                type="text",
                text="\n".join(commentaries)
            )]
        except Exception as err:
            logger.error(f"retrieve commentaries error: {err}", exc_info=True)
            return [types.TextContent(
                type="text",
                text=f"Error: {str(err)}"
            )]
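A notable design choice in the dispatch handler above is that errors are caught and returned as text content rather than propagated, so the caller always receives a payload. This sketch simulates that behavior with plain dicts standing in for `types.TextContent` (the MCP types and a network-backed fetch are assumptions replaced here with test doubles):

```python
import asyncio

# Simplified stand-in for the call_tool dispatch branch above.
# TextContent is simulated as a dict; `fetch` replaces get_commentaries.
async def dispatch(name, arguments, fetch):
    if name == "get_commentaries":
        try:
            reference = arguments.get("reference")
            if not reference:
                raise ValueError("Missing reference parameter")
            commentaries = await fetch(reference)
            return [{"type": "text", "text": "\n".join(commentaries)}]
        except Exception as err:
            # Errors surface as text payloads, never as raised exceptions.
            return [{"type": "text", "text": f"Error: {err}"}]

async def fake_fetch(ref):
    # Hypothetical results; real data comes from the Sefaria API.
    return ["Rashi on Genesis 1:1", "Ibn Ezra on Genesis 1:1"]

print(asyncio.run(dispatch("get_commentaries", {}, fake_fetch)))
# [{'type': 'text', 'text': 'Error: Missing reference parameter'}]
```

Because failures come back as ordinary text, an agent must inspect the returned string for an `Error:` prefix rather than rely on protocol-level errors.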
  • Utility helper function used by get_commentaries to perform API requests to Sefaria and retrieve JSON data.
    def get_request_json_data(endpoint, ref=None, param=None):
        """
        Helper function to make GET requests to the Sefaria API and parse the JSON response.
        """
        url = f"{SEFARIA_API_BASE_URL}/{endpoint}"
    
        if ref:
            url += ref
    
        if param:
            url += f"?{param}"
    
        try:
            response = requests.get(url)
            response.raise_for_status()  # Raise an exception for bad status codes
            data = response.json()
            return data
        except requests.exceptions.RequestException as e:
            print(f"Error during API request: {e}")
            return None
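The URL composition performed by the helper can be isolated as a pure function. The base URL value below is an assumption for illustration; the real server defines `SEFARIA_API_BASE_URL` elsewhere:

```python
# Sketch of the URL-building step of get_request_json_data above.
SEFARIA_API_BASE_URL = "https://www.sefaria.org"  # assumed value

def build_url(endpoint, ref=None, param=None):
    url = f"{SEFARIA_API_BASE_URL}/{endpoint}"
    if ref:
        # The reference is appended verbatim, with no explicit
        # percent-encoding of spaces or Hebrew characters.
        url += ref
    if param:
        url += f"?{param}"
    return url

print(build_url("api/related/", "Genesis 1:1"))
# https://www.sefaria.org/api/related/Genesis 1:1
```

Since the reference is concatenated without explicit escaping, the helper effectively relies on `requests` to handle any encoding of spaces and Hebrew characters in the final URL.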
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. The description only states what the tool does ('get a list of references') without detailing behavioral traits such as whether this is a read-only operation, potential rate limits, authentication needs, or what format the list returns (e.g., structured data, pagination). For a tool with no annotations, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, consisting of a single sentence that directly states the tool's purpose. There is no wasted language or unnecessary elaboration, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete for effective use. It doesn't explain what the output looks like (e.g., list format, data structure), behavioral constraints, or how it differs from sibling tools. For a tool with no structured metadata, the description should provide more context to compensate, but it falls short.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description does not add any parameter-specific information beyond what the input schema provides. The schema has 100% description coverage, with the 'reference' parameter clearly documented with examples. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description doesn't compensate with additional semantic context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'get a list of references of commentaries for a jewish text'. It specifies the verb ('get'), resource ('commentaries'), and target ('jewish text'), making it easy to understand what the tool does. However, it doesn't explicitly distinguish this from sibling tools like 'get_text' or 'search_texts', which might also retrieve text-related information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get_text' or 'search_texts', nor does it specify prerequisites, exclusions, or contextual cues for selecting this tool over others. The user must infer usage from the purpose alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
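Taken together, the findings above point toward a richer registration. A hypothetical revision is sketched below; the wording is illustrative only, and the sibling tools `get_text` and `search_texts` are named as they appear in this review, not verified against the server:

```python
# Hypothetical expanded tool registration closing the gaps noted above:
# read-only disclosure, output format, empty-result behavior, and
# guidance relative to sibling tools.
improved_tool = {
    "name": "get_commentaries",
    "description": (
        "Read-only: fetch Hebrew commentary references (sourceHeRef) "
        "for a Jewish text from the Sefaria API, returned as a "
        "newline-separated list. Use get_text to retrieve a passage "
        "itself and search_texts for free-text queries. Returns an "
        "empty result if no commentaries are linked to the reference."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "reference": {
                "type": "string",
                "description": (
                    "the reference of the jewish text, "
                    "e.g. 'Genesis 1:1'"
                ),
            },
        },
        "required": ["reference"],
    },
}
print(improved_tool["inputSchema"]["required"])  # ['reference']
```

A description along these lines would address the Behavior, Completeness, and Usage Guidelines gaps without materially lengthening the schema.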
