
CourtListener

Server Details

MCP for CourtListener: US federal and state opinions, dockets, judges, plus eCFR regulations.

Status: Healthy
Transport: Streamable HTTP
Repository: Vaquill-AI/courtlistener-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.6/5 across all 19 tools scored. Lowest score: 3/5.

Server Coherence: A
Disambiguation: 4/5

Tools are grouped by functionality (citation, get, search) with distinct purposes. However, citation tools have some overlap (e.g., citation_lookup_citation and citation_enhanced_citation_lookup both look up citations) that could confuse an agent, though descriptions help differentiate.

Naming Consistency: 5/5

All tool names follow a consistent snake_case pattern with clear prefixes: citation_*, get_*, search_*, and status. This makes the API predictable and easy to navigate.

Tool Count: 4/5

With 19 tools, the server covers multiple sub-domains (citations, entities, searches) without being overwhelming. The count is slightly high but still reasonable for the scope of legal research.

Completeness: 4/5

The tool set provides good CRUD-lifecycle coverage for read operations: citation lookup/parsing, entity retrieval, and various searches. Notable omissions include searching courts (only get_court exists) and lack of update/create tools, but these are consistent with a read-only API.

Available Tools

19 tools
citation_batch_lookup_citations (A)

Look up multiple legal citations in a single request.

This is more efficient than making individual requests for each citation. Accepts up to 100 citations at once.

Args:
  • citations: List of citation strings to look up (max 100).
  • ctx: The FastMCP context for logging and accessing shared resources.

Returns: dict[str, Any]: A dictionary mapping each citation to its corresponding opinion(s).

Raises:
  • ValueError: If COURT_LISTENER_API_KEY is not found in environment variables.
  • httpx.HTTPStatusError: If the API request fails.

Parameters (JSON Schema):
  • citations (required): List of citations to look up (max 100)

Output Schema: no output parameters
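Since the batch tool caps input at 100 citations, an agent feeding it a larger list needs client-side chunking. The sketch below is an assumption about client usage, not part of the server; only the tool's "citations" parameter and the 100-item limit come from the listing above.

```python
# Hypothetical client-side helper for citation_batch_lookup_citations;
# the 100-citation cap is documented, the chunking logic is ours.

def chunk_citations(citations, batch_size=100):
    """Split a citation list into batches no larger than the documented max."""
    return [citations[i:i + batch_size] for i in range(0, len(citations), batch_size)]

citations = ["410 U.S. 113", "347 U.S. 483", "5 U.S. 137"]
batches = chunk_citations(citations)
assert all(len(batch) <= 100 for batch in batches)
```

Each batch would then be passed as the tool's `citations` argument in a separate call.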

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations were provided, so the description carries full burden. It discloses that the tool makes an API call, requires an API key, returns a dictionary mapping citations to opinions, and raises specific errors. It does not discuss read-only nature or rate limits, but the tool name implies lookup.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear opening sentence, efficiency note, and sections for args/returns/raises. It is concise but includes some verbosity in the docstring-style elements that could be streamlined.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a batch lookup tool and absence of annotations, the description covers the essential behavior: input, output, errors, and efficiency benefit. It lacks details on pagination or performance limits but is otherwise complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description only repeats information already in the schema (a list of citations, max 100) without adding meaning beyond it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Look up multiple legal citations in a single request.' It explicitly contrasts with individual requests, distinguishing it from sibling tools like citation_lookup_citation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains that it is more efficient than individual requests and accepts up to 100 citations, implying use for batch lookups. However, it does not explicitly state when not to use it or mention alternatives like citation_lookup_citation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

citation_enhanced_citation_lookup (A)

Enhanced citation lookup combining citeurl parsing with CourtListener data.

This tool first uses citeurl to parse and validate the citation format, then optionally queries the CourtListener API for additional case information.

Returns: dict: Comprehensive citation information from both sources, containing:
  • citation: The original citation string
  • citeurl_analysis: Parsing results from citeurl
  • courtlistener_data: Lookup results from CourtListener API
  • combined_info: Summary of available information from both sources

Args:
  • citation: The citation string to look up and analyze.
  • ctx: The FastMCP context for logging and accessing shared resources.
  • include_courtlistener: Whether to include CourtListener API lookup.

Returns: dict[str, dict | str | bool]: A dictionary containing the enhanced citation information.

Parameters (JSON Schema):
  • citation (required): The citation to look up and analyze
  • include_courtlistener (optional): Whether to also perform CourtListener API lookup

Output Schema: no output parameters
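The documented arguments and top-level result keys can be sketched as plain Python shapes. This is a shape illustration only, built from the fields listed above; the values are placeholders, not real tool output.

```python
# Hypothetical call arguments for citation_enhanced_citation_lookup;
# parameter names come from the listing, the parse-only use case is assumed.
args = {"citation": "410 U.S. 113", "include_courtlistener": False}

# Top-level keys the documented return structure promises:
expected_keys = {"citation", "citeurl_analysis", "courtlistener_data", "combined_info"}
sample_result = {key: None for key in expected_keys}  # placeholder values
assert set(sample_result) == expected_keys
```

Setting `include_courtlistener` to false would, per the description, limit the call to the citeurl parsing step.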

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description discloses the two-step process (parse then optionally query API) and return structure, but lacks details on potential errors, rate limits, or whether it's a read-only operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with purpose and well-structured with Args/Returns sections, but includes redundancy (returns mentioned twice) and could be slightly more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 2 params and an output schema, the description covers the combined workflow and return structure. Missing error or rate limit info, but adequate for the context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with good descriptions. The description adds context (e.g., default for include_courtlistener) but largely repeats parameter info without significant additional meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it combines citeurl parsing with CourtListener data, distinguishing it from sibling tools like citation_parse_citation_with_citeurl (parsing only) and citation_lookup_citation (likely basic lookup).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies it's for enhanced lookup combining both sources but does not explicitly state when to use vs alternatives, nor when to avoid it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

citation_extract_citations_from_text (A)

Extract all legal citations from a block of text using citeurl.

This tool finds and parses all legal citations within a given text, including both long-form and short-form citations (like 'id.' references).

Args:
  • text: The text containing legal citations to extract.
  • ctx: The FastMCP context for logging.

Returns: dict[str, list | int]: A dictionary containing:
  • total_citations: Number of citations found
  • citations: List of parsed citation information
  • text_length: Length of the input text
  • error (optional): Error message if extraction failed

Parameters (JSON Schema):
  • text (required): Text containing legal citations to extract

Output Schema: no output parameters

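The documented return shape (total_citations, citations, text_length) can be illustrated with a deliberately naive extractor. This is a much-simplified sketch for illustration only; citeurl's template matching is far more thorough and, unlike this regex, also resolves short-form references such as "id.".

```python
import re

# Toy long-form matcher covering a few reporters; nothing like citeurl's coverage.
LONG_FORM = re.compile(r"\b\d+\s+(?:U\.S\.|F\.\d?d|S\. Ct\.)\s+\d+")

def extract_citations(text):
    """Return a dict shaped like the tool's documented result."""
    found = LONG_FORM.findall(text)
    return {
        "total_citations": len(found),
        "citations": found,
        "text_length": len(text),
    }

result = extract_citations("See Roe v. Wade, 410 U.S. 113 (1973); cf. 123 F.3d 456.")
# result["total_citations"] == 2
```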

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses that the tool uses citeurl for parsing, handles both long and short forms, and describes the return structure including potential errors. It does not mention side effects, rate limits, or permissions, but as a read-only extraction tool, this is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is informative but verbose, including an 'Args' and 'Returns' section that is more typical of code documentation. While structured, it could be more concise without sacrificing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter input and the presence of an output schema (though not shown), the description covers the return format, error case, and extraction scope. No obvious gaps remain for an extraction tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds context about citeurl and citation types, but does not significantly enhance parameter meaning beyond the schema's description of 'text'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Extract all legal citations'), the resource ('from a block of text'), and the method ('using citeurl'). It also specifies coverage of long-form and short-form citations, distinguishing it from siblings that likely handle single citations or different operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lacks explicit guidance on when to use this tool versus alternatives. While the name and description imply bulk extraction from text, it does not compare to sibling tools like citation_batch_lookup_citations or citation_lookup_citation, nor does it state when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

citation_lookup_citation (A)

Look up a legal citation to find the opinion it references in CourtListener.

This tool accepts various citation formats including:

  • U.S. Reporter citations (e.g., "410 U.S. 113")

  • Federal Reporter citations (e.g., "123 F.3d 456")

  • WestLaw citations (e.g., "2023 WL 12345")

  • State reporter citations

Args:
  • citation: The citation string to look up.
  • ctx: The FastMCP context for logging and accessing shared resources.

Returns: dict[str, Any]: The opinion(s) that match the citation, or an error dict if the lookup fails.

Raises:
  • ValueError: If COURT_LISTENER_API_KEY is not found in environment variables.
  • httpx.HTTPStatusError: If the API request fails.

Parameters (JSON Schema):
  • citation (required): The citation to look up (e.g., '410 U.S. 113', '2023 WL 12345')

Output Schema: no output parameters
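The natural dispatch rule an agent might apply between this tool and its batch sibling can be sketched in a few lines. This routing logic is an assumption inferred from the two tool descriptions, not something the server prescribes; only the tool and parameter names come from the listing.

```python
# Hypothetical client-side routing between the single and batch lookup tools.
def pick_lookup_tool(citations):
    """One citation -> citation_lookup_citation; several -> the batch tool."""
    if len(citations) == 1:
        return "citation_lookup_citation", {"citation": citations[0]}
    # Batch tool accepts at most 100 citations per the listing.
    return "citation_batch_lookup_citations", {"citations": citations[:100]}

tool, args = pick_lookup_tool(["2023 WL 12345"])
# tool == "citation_lookup_citation"
```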

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses the lookup behavior, dependencies on environment variable, and possible errors (ValueError, httpx.HTTPStatusError). Missing details on idempotency or rate limits, but adequate for a read-only tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is concise and well-structured: a clear first sentence, bulleted examples, and formal Args/Returns/Raises sections. Every sentence adds value with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (single required parameter) and the presence of an output schema, the description fully covers purpose, parameters, error conditions, and return value. No gaps identified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema already describes parameter, but description adds significant value by listing specific supported citation formats (e.g., '410 U.S. 113', '2023 WL 12345'), enhancing understanding beyond the schema's generic example.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Look up a legal citation to find the opinion it references' and lists specific citation formats, making the purpose unmistakable and distinct from sibling tools like citation_batch_lookup_citations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context on when to use (single citation lookup) and examples of citation formats. Does not explicitly state when not to use or mention alternatives, but sibling tool names suggest other options.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

citation_parse_citation_with_citeurl (A)

Parse a legal citation using citeurl's advanced citation recognition.

This tool uses the citeurl library to parse legal citations and extract structured information including tokens, normalized format, and URL generation.

Returns detailed information about the citation including:

  • Recognized citation format and source

  • Extracted tokens (volume, reporter, page, etc.)

  • Generated URL if available

  • Normalized citation text

Args:
  • citation: The citation string to parse.
  • ctx: The FastMCP context for logging.
  • broad: Whether to use broad matching for flexible parsing.

Returns: dict[str, str | dict | None]: A dictionary containing the parsed citation data, including success status, original citation, and detailed parsing results.

Parameters (JSON Schema):
  • broad (optional): Use broad matching for more flexible parsing
  • citation (required): The citation to parse (e.g., '410 U.S. 113', '42 USC § 1988')

Output Schema: no output parameters
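The "extracted tokens (volume, reporter, page)" the tool documents can be illustrated with a trivial split for the simplest reporter form. This is illustrative only and assumes a well-formed "volume reporter page" string; citeurl's real template matching handles many more formats (statutes, WestLaw numbers, and so on).

```python
# Toy tokenizer for a plain reporter citation, mirroring the documented
# volume/reporter/page tokens; not how citeurl actually parses.
def split_reporter_citation(citation):
    volume, *reporter, page = citation.split()
    return {"volume": volume, "reporter": " ".join(reporter), "page": page}

tokens = split_reporter_citation("410 U.S. 113")
# tokens == {"volume": "410", "reporter": "U.S.", "page": "113"}
```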

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It explains the tool returns tokens, normalized format, URL generation, and details the return structure. It mentions the broad parameter for flexible parsing. However, it does not disclose potential side effects, auth requirements, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear opening statement, followed by bullet points of returned information. It includes an Args section that largely duplicates schema documentation, adding slight redundancy. Overall, it is concise and front-loaded, but a few sentences could be trimmed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations, the description adequately covers the tool's purpose, parameters, and return structure (including output schema details). It mentions broad matching context. It does not cover error cases or behavior on invalid input, but for a parse tool with output schema, it is sufficiently complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with both citation and broad having descriptions in the input schema. The description repeats the parameter descriptions almost verbatim, adding minimal extra meaning. Since schema_coverage is high, baseline 3 applies, and the description does not provide significant additional semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool parses a legal citation using citeurl's advanced citation recognition. It specifies extracting structured tokens, normalized format, and URL generation. This distinguishes it from sibling tools like citation_lookup_citation or citation_verify_citation_format, which have different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for parsing legal citations into structured data, but does not explicitly state when to use this tool over alternatives like citation_lookup_citation or citation_extract_citations_from_text. It provides clear context for its functionality but lacks exclusion criteria or alternative recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

citation_verify_citation_format (A)

Verify if a citation string is in a valid format using citeurl's advanced parsing.

This tool performs validation using citeurl's comprehensive citation templates to check if a citation appears to be in a recognized legal citation format. This is much more accurate than simple regex matching.

Returns information about the citation format and any detected issues.

Args:
  • citation: The citation string to verify.
  • ctx: The FastMCP context for logging.

Returns: dict[str, str | bool | list[str] | None]: A dictionary containing validation results with:
  • valid: Whether the citation is in a valid format
  • format: The recognized citation format type (if valid)
  • template: The citation template matched (if valid)
  • issues: List of any validation issues found
  • citation: The original citation string

Parameters (JSON Schema):
  • citation (required): The citation to verify

Output Schema: no output parameters
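The documented result fields (valid, format, template, issues, citation) can be mocked up with a toy checker. This is exactly the kind of "simple regex matching" the description says citeurl improves on, so treat it purely as a shape illustration, not as the tool's logic.

```python
import re

# Toy validity check producing the documented result fields; recognizes only
# plain U.S. Reporter citations, unlike citeurl's comprehensive templates.
def verify_citation_format(citation):
    ok = bool(re.fullmatch(r"\d+\s+U\.S\.\s+\d+", citation))
    return {
        "valid": ok,
        "format": "U.S. Reporter" if ok else None,
        "template": None,  # the real tool reports the matched citeurl template
        "issues": [] if ok else ["unrecognized citation format"],
        "citation": citation,
    }

result = verify_citation_format("410 U.S. 113")
# result["valid"] is True
```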

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It reveals behavioral details: uses citeurl's comprehensive citation templates, returns validation results including valid, format, template, issues, and original citation. This provides good transparency about internal logic and output.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is adequately structured with a short introductory paragraph and detailed returns section, but it is longer than necessary. The returns dict could be summarized more concisely, and the Args section is redundant with the schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 param, output schema exists), the description covers core functionality and return structure. However, it lacks details on error handling, performance, or permissions, which would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (one parameter with description 'The citation to verify'). The description's Args section repeats this verbatim, adding no new semantics about citation formatting expectations. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: verifying if a citation string is in a valid format using citeurl's advanced parsing. It distinguishes itself from siblings like citation_lookup_citation and citation_parse_citation_with_citeurl by focusing on format validation rather than lookup or parsing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when validation accuracy is needed ('much more accurate than simple regex') but does not explicitly state when to use this tool versus alternatives or when not to use it. No direct comparison to sibling tools is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_audio (A)

Get oral argument audio information by ID from CourtListener.

Parameters (JSON Schema):
  • audio_id (required): The audio recording ID to retrieve

Output Schema: no output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It correctly implies a read operation but does not disclose rate limits, authentication, or exact return behavior beyond what is in the output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no redundancy, perfectly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With an output schema present, the description adequately defines the tool's purpose for a simple retrieval. Could mention read-only nature, but overall complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single parameter, and the description adds no additional meaning beyond what the input schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get', the resource 'oral argument audio information', and the method 'by ID', distinguishing it from search_audio and other get tools like get_opinion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool vs alternatives (e.g., search_audio). The context is implied but not stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_cluster (B)

Get an opinion cluster by ID from CourtListener.

Parameters (JSON Schema):
  • cluster_id (required): The opinion cluster ID to retrieve

Output Schema: no output parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. The description only indicates a read operation ('Get') but omits details about idempotence, side effects, error handling, or required permissions. An output schema exists but behavioral transparency is lacking.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no redundant words. It is appropriately front-loaded and concise for the tool's simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given only one parameter and an output schema, the description is minimal. However, it lacks usage context and behavioral details, making it less complete than ideal for a tool with no annotations. A 3 reflects acceptable but not thorough coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single parameter 'cluster_id', which already explains its purpose. The tool description does not add any meaning beyond what the schema provides, so a baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get an opinion cluster by ID from CourtListener', specifying the action, resource, and origin. It effectively distinguishes from siblings like get_opinion or get_docket by naming the specific resource type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor any context about prerequisites or scenarios. An agent would have to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_court (B)

Get court information by ID from CourtListener.

Parameters (JSON Schema):
  • court_id (required): The court ID to retrieve (e.g., 'scotus', 'ca9')

Output Schema: no output parameters
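The schema's example IDs ('scotus', 'ca9') suggest slug-style court identifiers. The expansions below are my reading of those slugs, not taken from the listing, and the mapping is illustrative rather than exhaustive.

```python
# Illustrative court-ID slugs for get_court; expansions are assumptions.
COURT_ID_EXAMPLES = {
    "scotus": "Supreme Court of the United States",
    "ca9": "Court of Appeals for the Ninth Circuit",
}

# A hypothetical call would pass one such slug:
args = {"court_id": "scotus"}
assert args["court_id"] in COURT_ID_EXAMPLES
```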

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavioral traits. It only says 'get', implying a read operation, but fails to mention any limitations, error conditions, or that it returns only basic court metadata. The existence of an output schema mitigates this slightly, but the description itself adds little behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single, front-loaded sentence that efficiently conveys the tool's purpose with no extraneous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the output schema compensates for missing return value details, the description lacks context about what constitutes 'court information' (e.g., name, jurisdiction). Given the simple structure, it is minimally adequate but could be more informative.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with a description for 'court_id' including examples. The description simply reiterates 'by ID', adding no new meaning. A baseline of 3 is appropriate given full schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves court information by ID from CourtListener. The verb 'get' and resource 'court information' are specific, and the tool name 'get_court' distinguishes it from sibling tools like 'get_audio' or 'get_cluster'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like search_opinions or other get_* tools. The description does not mention any conditions for use or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_docket (B)

Get a specific court docket by ID from CourtListener.

Parameters (JSON Schema)
Name | Required | Description
docket_id | Yes | The docket ID to retrieve

Output Schema

Parameters (JSON Schema)
Name | Required | Description

No output parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; the description does not disclose any behavioral traits beyond 'Get', such as authentication needs, error handling, or output format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A short single sentence, no fluff, front-loaded with purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple retrieval tool with output schema present, but could be improved by mentioning source or typical usage context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with parameter 'docket_id' already described; the description adds no additional meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' and resource 'specific court docket by ID', clearly distinguishing from siblings like search_dockets (search) and get_cluster (different entity).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like search_dockets; no exclusions or context provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_opinion (A)

Get a specific court opinion by ID from CourtListener.

Parameters (JSON Schema)
Name | Required | Description
opinion_id | Yes | The opinion ID to retrieve

Output Schema

Parameters (JSON Schema)
Name | Required | Description

No output parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must bear the full burden. It only states that it retrieves an opinion by ID, lacking details on authentication, rate limits, or error behavior. The basic disclosure is insufficient for a read tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no unnecessary words. It is concise and front-loaded with the key action and resource. It could include slightly more context without sacrificing brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and an output schema present, the description is mostly adequate. It clearly identifies what is retrieved, but lacks mention of the response structure or potential constraints, which the output schema likely covers.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for the single parameter 'opinion_id', and the description adds no extra meaning. Per guidelines, baseline 3 is appropriate when schema already documents the parameter well.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get', the resource 'court opinion', and the source 'from CourtListener'. It distinguishes this tool from siblings like 'get_audio' or 'get_cluster' by specifying the exact resource type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving a specific opinion by ID but does not explicitly state when to use this tool versus alternatives like 'search_opinions'. No exclusions or context are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_person (A)

Get judge or legal professional information by ID from CourtListener.

Parameters (JSON Schema)
Name | Required | Description
person_id | Yes | The person (judge) ID to retrieve

Output Schema

Parameters (JSON Schema)
Name | Required | Description

No output parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It does not mention that this is a read-only operation, any authentication needs, or error behavior (e.g., what happens if ID not found). Minimal disclosure beyond the basic action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no unnecessary words. Front-loaded with the verb and resource. Efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool is simple, has an output schema (not shown here but noted), and one parameter, the description is sufficient. However, it could mention uniqueness of the ID or that it returns a full person object.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for person_id. The description does not add new meaning beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Get judge or legal professional information by ID from CourtListener,' which is a specific verb+resource. It distinguishes from sibling tool 'search_people' which likely searches, not retrieves by ID.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use the tool, when not to, or what alternatives exist. The context of retrieving by ID versus searching is implied but not stated. Could mention that search_people is for query-based lookups.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_audio (B)

Search oral argument audio recordings in CourtListener.

Parameters (JSON Schema)
Name | Required | Description
q | Yes | Search query for oral argument audio
court | No | Court ID filter (e.g., 'scotus', 'ca9')
judge | No | Filter by judge name
limit | No | Maximum results to return
order_by | No | Sort by 'score desc', 'dateArgued desc', or 'dateArgued asc' (default: score desc)
case_name | No | Filter by case name
argued_after | No | Filter arguments after this date (YYYY-MM-DD)
argued_before | No | Filter arguments before this date (YYYY-MM-DD)

Output Schema

Parameters (JSON Schema)
Name | Required | Description

No output parameters
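Because the description documents no error behavior, a client can guard arguments before issuing the call. A hypothetical validator sketch, with the allowed `order_by` values and the YYYY-MM-DD date format taken from the parameter table above:

```python
import re

VALID_ORDER = {"score desc", "dateArgued desc", "dateArgued asc"}
DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def validate_audio_args(args):
    """Reject search_audio arguments the schema would not accept."""
    if not args.get("q"):
        raise ValueError("'q' is required")
    if args.get("order_by", "score desc") not in VALID_ORDER:
        raise ValueError(f"unsupported order_by: {args['order_by']!r}")
    for key in ("argued_after", "argued_before"):
        if key in args and not DATE_RE.fullmatch(args[key]):
            raise ValueError(f"{key} must be YYYY-MM-DD")
    return args

checked = validate_audio_args({
    "q": "qualified immunity",
    "court": "scotus",
    "argued_after": "2020-01-01",
})
```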

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden, but it only states the core action. It does not disclose whether it is read-only, any rate limits, data freshness, or behaviors like pagination.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that conveys the core purpose efficiently. It is front-loaded with the verb and resource, but could be slightly more structured with additional context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is bare minimum. While the output schema exists to explain return values, the tool has many parameters and no usage hints. Additional context on search behavior or date formatting would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no information beyond what is in the input schema, which has 100% coverage. The schema already describes all parameters, so the description provides no additional semantic value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the action (search), the resource (oral argument audio recordings), and the context (CourtListener). It effectively distinguishes this tool from sibling search tools like search_opinions or search_dockets.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as when to prefer this over search_opinions for audio-related queries. No explicit context or exclusions are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_dockets (B)

Search federal cases (dockets) from PACER in CourtListener.

Parameters (JSON Schema)
Name | Required | Description
q | Yes | Search query for docket text
court | No | Court ID filter (e.g., 'scotus', 'ca9')
limit | No | Maximum results to return
order_by | No | Sort by 'score desc', 'dateFiled desc', or 'dateFiled asc' (default: score desc)
case_name | No | Filter by case name
party_name | No | Filter by party name
docket_number | No | Specific docket number to search for
date_filed_after | No | Filter dockets filed after this date (YYYY-MM-DD)
date_filed_before | No | Filter dockets filed before this date (YYYY-MM-DD)

Output Schema

Parameters (JSON Schema)
Name | Required | Description

No output parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must disclose behavioral traits like read-only nature, rate limits, or return format. It only states the search function without any such details, leaving significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that is front-loaded with the core purpose. However, it could be slightly more informative without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 9 parameters and sibling tools, the description is too brief. It lacks context on when to use it versus siblings and does not explain output despite having an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds no extra meaning beyond the schema's parameter descriptions, but it does not detract either.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Search federal cases (dockets) from PACER in CourtListener,' using a specific verb and resource. It distinguishes the tool from siblings like search_opinions or search_dockets_with_documents by specifying the source and target entity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as search_dockets_with_documents or search_opinions. The description only explains what it does without any contextual usage advice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_dockets_with_documents (A)

Search federal cases (dockets) with up to three nested documents.

If there are more than three matching documents, the more_docs field will be true.

Parameters (JSON Schema)
Name | Required | Description
q | Yes | Search query for federal cases
court | No | Court ID filter (e.g., 'scotus', 'ca9')
limit | No | Maximum results to return
order_by | No | Sort by 'score desc', 'dateFiled desc', or 'dateFiled asc' (default: score desc)
case_name | No | Filter by case name
party_name | No | Filter by party name
docket_number | No | Specific docket number to search for
date_filed_after | No | Filter dockets filed after this date (YYYY-MM-DD)
date_filed_before | No | Filter dockets filed before this date (YYYY-MM-DD)

Output Schema

Parameters (JSON Schema)
Name | Required | Description

No output parameters
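The three-document cap and the more_docs flag imply a follow-up step on the client side. A sketch of that logic, assuming each hit carries a 'documents' list and a 'more_docs' boolean as the tool description suggests (the exact response field names are not published in the output schema above, and the docket number is an illustrative placeholder):

```python
def collect_documents(hit):
    """Pull the nested documents from one docket hit and flag overflow."""
    docs = list(hit.get("documents", []))
    # When more_docs is true, the remaining filings can be fetched with
    # search_recap_documents filtered by the same docket_number.
    return docs, bool(hit.get("more_docs"))

hit = {
    "docket_number": "1:21-cv-00123",  # illustrative placeholder
    "documents": [{"document_number": "1"},
                  {"document_number": "2"},
                  {"document_number": "3"}],
    "more_docs": True,
}
docs, needs_followup = collect_documents(hit)
```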

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses the key behavioral trait that only up to three documents are returned and that a 'more_docs' field indicates additional ones, which compensates for the lack of annotations. However, it could mention other behaviors like pagination or the default order.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words, front-loaded with the main purpose, and immediately providing the critical detail about the three-document limit.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 9 parameters and an output schema, the description adequately covers the core functionality and a key behavioral nuance. It could be slightly more complete by clarifying that the search returns docket entries with documents, but it is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the input schema already documents all parameters. The description adds no additional meaning beyond the schema, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool as searching federal cases (dockets) with up to three nested documents, which distinguishes it from sibling tools like 'search_dockets' that likely return dockets without documents.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use the tool (when you need dockets with documents) but does not explicitly contrast it with alternatives like 'search_dockets' or state when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_opinions (B)

Search case law opinion clusters with nested Opinion documents in CourtListener.

Parameters (JSON Schema)
Name | Required | Description
q | Yes | Search query for full text of opinions
court | No | Court ID filter (e.g., 'scotus', 'ca9')
judge | No | Filter by judge name
limit | No | Maximum results to return
cited_gt | No | Minimum number of times opinion has been cited
cited_lt | No | Maximum number of times opinion has been cited
order_by | No | Sort by 'score desc', 'dateFiled desc', or 'dateFiled asc' (default: score desc)
case_name | No | Filter by case name
filed_after | No | Only show opinions filed after this date (YYYY-MM-DD)
filed_before | No | Only show opinions filed before this date (YYYY-MM-DD)

Output Schema

Parameters (JSON Schema)
Name | Required | Description

No output parameters
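The cited_gt/cited_lt pair forms a citation-count window, a parameter interaction the description leaves implicit. A hypothetical helper showing how a client might assemble such a query:

```python
def opinion_search_args(query, min_cites=None, max_cites=None, **filters):
    """Build search_opinions arguments with an optional citation window."""
    if (min_cites is not None and max_cites is not None
            and min_cites >= max_cites):
        raise ValueError("cited_gt must be below cited_lt")
    args = {"q": query, **filters}
    if min_cites is not None:
        args["cited_gt"] = min_cites
    if max_cites is not None:
        args["cited_lt"] = max_cites
    return args

# Heavily cited fair-use opinions from the Supreme Court, newest first.
args = opinion_search_args("fair use", min_cites=50, max_cites=500,
                           court="scotus", order_by="dateFiled desc")
```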

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without any annotations, the description only says 'search' which implies read-only, but fails to disclose rate limits, authentication needs, or what 'opinion clusters' means in practice. The behavioral detail is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short sentence, which is concise but lacks structure for a tool with 10 parameters. It under-specifies the tool's capabilities.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 10 parameters and no annotations, the description does not explain the concept of 'opinion clusters' or how 'nested Opinion documents' are returned. The output schema exists but is not described. The description is incomplete for a complex search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The description adds no extra meaning beyond what the schema provides, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'search' and the resource 'case law opinion clusters with nested Opinion documents' in CourtListener, distinguishing it from sibling tools like search_audio or search_dockets.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus other search or citation lookup siblings. For example, it doesn't mention that citation_lookup_citation is better for finding specific citations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_people (B)

Search judges and legal professionals in the CourtListener database.

Parameters (JSON Schema)
Name | Required | Description
q | Yes | Search query for judges and legal professionals
name | No | Filter by person's name
limit | No | Maximum results to return
school | No | Filter by school attended
order_by | No | Sort by 'score desc' or 'name asc' (default: score desc)
appointed_by | No | Filter by appointing authority
position_type | No | Filter by position type (e.g., 'jud' for judge)
selection_method | No | Filter by selection method
political_affiliation | No | Filter by political affiliation

Output Schema

Parameters (JSON Schema)
Name | Required | Description

No output parameters
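The reviews repeatedly flag the missing search-versus-get guidance. In practice the two compose: search_people narrows candidates, and the chosen ID feeds get_person. A sketch with illustrative placeholder data (the 'results', 'id', and 'name' field names are assumptions, not a documented schema):

```python
def pick_person_id(search_results, full_name):
    """Select one person ID from search_people output by exact name match."""
    for person in search_results.get("results", []):
        if person.get("name") == full_name:
            return person["id"]
    return None

results = {"results": [{"id": 101, "name": "Jane Doe"},    # placeholder data
                       {"id": 102, "name": "John Doe"}]}
person_id = pick_person_id(results, "Jane Doe")
# person_id is then passed as get_person's person_id argument.
```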

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the burden of disclosing behavior. It states 'search', implying a read-only operation, but lacks details on result ordering, pagination, or potential side effects, and the existing output schema is not mentioned. This is adequate for a search tool but could be more transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence with no redundant words. It is appropriately short, though it could benefit from a slightly more structured format to highlight key aspects.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 9 parameters and an output schema, the description is too minimal. It does not explain search behavior (e.g., full-text or exact matching) or reference the output schema, leaving a significant gap in understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 9 parameters have descriptions in the input schema, so the schema provides complete coverage. The tool description does not add any additional parameter context beyond what the schema already offers, meeting the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the verb 'search' and the resource 'judges and legal professionals', which clearly distinguishes it from sibling tools that search other entities like opinions or dockets. However, it could be more precise about the database scope (CourtListener) and what exactly is returned.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like search_opinions. It does not mention any prerequisites, limitations, or when it is appropriate to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_recap_documents (B)

Search federal filing documents from PACER in the RECAP archive.

Parameters (JSON Schema)
Name | Required | Description
q | Yes | Search query for RECAP filing documents
court | No | Court ID filter (e.g., 'scotus', 'ca9')
limit | No | Maximum results to return
order_by | No | Sort by 'score desc', 'dateFiled desc', or 'dateFiled asc' (default: score desc)
case_name | No | Filter by case name
party_name | No | Filter by party name
filed_after | No | Filter documents filed after this date (YYYY-MM-DD)
filed_before | No | Filter documents filed before this date (YYYY-MM-DD)
docket_number | No | Specific docket number to search for
document_number | No | Specific document number to search for
attachment_number | No | Specific attachment number to search for

Output Schema

Parameters (JSON Schema)
Name | Required | Description

No output parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits, but it only states the action. It does not mention rate limits, authentication, or other behavioral aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short sentence, which is concise but under-specified for a tool with 11 parameters. It lacks structure and additional context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (11 parameters) and that an output schema exists, the description is too minimal. It does not explain the archive context or the scope of the search.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the baseline is 3. The description adds no parameter meaning beyond what is already in the schema. No value added.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Search' and the resource 'federal filing documents from PACER in the RECAP archive'. It distinguishes itself from sibling tools like search_opinions and search_dockets.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives, nor are there any usage prerequisites or exclusions. The description lacks any contextual hints about suitability.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

status (A)

Check the status of the CourtListener MCP server.

Returns: A dictionary containing server status, system metrics, and service information.

Parameters (JSON Schema)
Name | Required | Description

No parameters

Output Schema

ParametersJSON Schema
NameRequiredDescription

No output parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses the return content (status, system metrics, service information), which adds value beyond the empty schema, but lacks information on side effects, authentication needs, or rate limits. With no annotations, the description carries a higher burden.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
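The "no annotations" point refers to MCP's optional behavioral hints on a tool. A hedged sketch of what the status tool could declare (the hint values are assumptions about its behavior, not observed from the server):

```python
# MCP tools may carry behavioral annotations; when they are absent, as the
# review notes, the prose description must carry this load instead.
tool = {
    "name": "status",
    "description": "Check the status of the CourtListener MCP server.",
    "annotations": {
        "readOnlyHint": True,    # no side effects: safe for agents to call
        "idempotentHint": True,  # repeated calls return equivalent data
        "openWorldHint": True,   # reaches out to an external service
    },
}
print(tool["annotations"]["readOnlyHint"])
```

With hints like these present, a description could stay short; without them, disclosing side effects and repeat-safety in prose is the only channel an agent has.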

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences without unnecessary words; the purpose and return are front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and an empty output schema, the description sufficiently covers what the tool does, though it could mention whether authentication is required or if the call is safe to repeat.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so schema coverage is 100%. The description does not need to explain parameter semantics, and the baseline for zero parameters is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's action ('Check the status') and resource ('CourtListener MCP server'), distinguishing it from sibling tools focused on search and retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies using this tool to check server health, but does not explicitly specify when to use it versus alternatives or exclude inappropriate use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
