Glama

Sefaria Jewish Library MCP Server

by OpenTorah-ai

get_commentaries

Retrieve commentary references for Jewish texts from the Sefaria library to analyze interpretations and scholarly perspectives.

Instructions

get a list of references of commentaries for a jewish text

Input Schema

Name: reference (string, required)
Description: the reference of the jewish text, e.g. 'שולחן ערוך אורח חיים סימן א' or 'Genesis 1:1'
Default: none
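Per the schema, a call needs a single required `reference` string, which may be a Hebrew or an English citation. A minimal sketch of the kind of argument check implied by the `required` constraint (the `validate_arguments` helper is hypothetical, not part of the server):

```python
def validate_arguments(arguments: dict) -> str:
    """Check a tool-call arguments dict against the get_commentaries input schema."""
    reference = arguments.get("reference")
    if not isinstance(reference, str) or not reference:
        raise ValueError("Missing 'reference' parameter")
    return reference

# Both Hebrew and English references are accepted as plain strings.
print(validate_arguments({"reference": "Genesis 1:1"}))
```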

Implementation Reference

  • Primary implementation of the get_commentaries tool. Queries the Sefaria API for related content to the given reference and returns a list of commentary references (sourceHeRef).
    async def get_commentaries(parasha_ref) -> list[str]:
        """
        Retrieves and filters commentaries on the given verse.
        """
        data = get_request_json_data("api/related/", parasha_ref)
    
        commentaries = []
        if data and "links" in data:
            for linked_text in data["links"]:
                if linked_text.get('type') == 'commentary':
                    commentaries.append(linked_text.get('sourceHeRef'))
    
        return commentaries
  • Registration of the 'get_commentaries' tool within the server's list_tools() method, defining its name, description, and input schema.
    types.Tool(
        name="get_commentaries",
        description="get a list of references of commentaries for a jewish text",
        inputSchema={
            "type": "object",
            "properties": {
                "reference": {
                    "type": "string",
                    "description": "the reference of the jewish text, e.g. 'שולחן ערוך אורח חיים סימן א' or 'Genesis 1:1'",
                },
            },
            "required": ["reference"],
        },
    ),
  • Server-side handler within call_tool() that processes requests for get_commentaries, invokes the tool implementation, and returns the result as MCP TextContent.
    elif name == "get_commentaries":
        try:
            reference = arguments.get("reference")
            if not reference:
                raise ValueError("Missing 'reference' parameter")
            logger.debug(f"handle_get_commentaries: {reference}")
            commentaries = await get_commentaries(reference)
            
            return [types.TextContent(
                type="text",
                text="\n".join(commentaries)
            )]
        except Exception as err:
            logger.error(f"retrieve commentaries error: {err}", exc_info=True)
            return [types.TextContent(
                type="text",
                text=f"Error: {str(err)}"
            )]
  • Utility helper function used by get_commentaries to perform HTTP GET requests to the Sefaria API and retrieve JSON data.
    def get_request_json_data(endpoint, ref=None, param=None):
        """
        Helper function to make GET requests to the Sefaria API and parse the JSON response.
        """
        url = f"{SEFARIA_API_BASE_URL}/{endpoint}"
    
        if ref:
            url += f"{ref}"
    
        if param:
            url += f"?{param}"
    
        try:
            response = requests.get(url)
            response.raise_for_status()  # Raise an exception for bad status codes
            data = response.json()
            return data
        except requests.exceptions.RequestException as e:
            print(f"Error during API request: {e}")
            return None
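Taken together, the pieces above reduce to a small filter over the `links` array of the Sefaria `api/related/` response: keep only links of type `commentary` and collect their `sourceHeRef` values. A minimal offline sketch of that filtering step, using a hand-made sample payload (the payload values are illustrative, not real API output):

```python
# Sample shaped like a Sefaria api/related/ response (illustrative values only).
sample_response = {
    "links": [
        {"type": "commentary", "sourceHeRef": 'רש"י על בראשית א:א'},
        {"type": "commentary", "sourceHeRef": 'רמב"ן על בראשית א:א'},
        {"type": "quotation", "sourceHeRef": "Zohar 1:15a"},  # non-commentary, filtered out
    ]
}

def extract_commentaries(data) -> list[str]:
    """Replicates the filtering step of get_commentaries on already-fetched JSON."""
    commentaries = []
    if data and "links" in data:
        for linked_text in data["links"]:
            if linked_text.get("type") == "commentary":
                commentaries.append(linked_text.get("sourceHeRef"))
    return commentaries

print(len(extract_commentaries(sample_response)))  # only the two commentary links survive
```

Note that a `None` return from the HTTP helper (e.g. after a request failure) falls through the `if data` guard, so the tool degrades to an empty list rather than raising.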
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions retrieving a list but doesn't specify if this is a read-only operation, how results are formatted, if there are rate limits, or any other behavioral traits. This leaves significant gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without any unnecessary words. It is appropriately sized and front-loaded, making it easy to understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the returned list contains (e.g., format, structure), any limitations, or how it differs from the sibling tool. For a tool with no structured support, more context is needed to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single parameter 'reference' with examples. The description adds no additional meaning beyond what the schema provides, such as clarifying the scope or format of commentaries, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('get a list of references') and resource ('commentaries for a jewish text'), making the purpose understandable. However, it doesn't explicitly differentiate from the sibling tool 'get_text', which might also retrieve text-related information, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_text', nor does it mention any prerequisites or exclusions. It only states what the tool does, leaving usage context implied at best.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
