
Vaquill

Server Details

Legal research: US primary law, Indian case law (31M+ judgments), and citation graph traversal.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Vaquill-AI/vaquill-mcp
GitHub Stars: 3

Tool Descriptions: A

Average 4.2/5 across 15 of 15 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but there is some overlap: quick_search is a compact version of search_legal_cases, and search_cases_by_citation overlaps with lookup_case. Descriptions help differentiate, but ambiguity exists.

Naming Consistency: 5/5

Tool names follow a consistent verb_noun pattern (get_, search_, list_, etc.). The verbs are clear and each set within a family is uniform, making the naming predictable.

Tool Count: 5/5

15 tools is well-scoped for a legal research server covering both US and Indian law. Each tool serves a specific function without being excessive or insufficient.

Completeness: 4/5

The tool set covers searching, browsing, and retrieving legal texts and cases for both US and Indian law. However, there is no dedicated US case search tool (only via ask_legal_question) and no full case PDF retrieval tool, leaving minor gaps.

Available Tools

15 tools
get_act_text: A

Get URLs for the full text, PDF, and HTML versions of a specific Indian act. Pass the act_id (e.g., 'IND_central_2187' for Indian Contract Act). Returns R2 CDN URLs — fetch the text/PDF content directly from those URLs. Cost: 1 credit.

Parameters (JSON Schema)
act_id (required)

Output Schema

No output parameters
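As a sketch of how an agent might prepare a call and consume the result, assuming tool arguments are plain dicts. Only `act_id` is documented; the response field names (`text_url`, `pdf_url`, `html_url`) and both helper names are hypothetical, since the output schema is not shown on this page:

```python
# Hypothetical helpers around get_act_text. Only act_id comes from the
# schema above; the example value 'IND_central_2187' comes from the
# tool description.

def build_get_act_text_args(act_id):
    if not act_id:
        raise ValueError("act_id is required, e.g. 'IND_central_2187'")
    return {"act_id": act_id}

def pick_act_url(result, fmt="text"):
    # The description says the tool returns R2 CDN URLs and that the
    # content should be fetched directly from them. The field names
    # here are assumptions, not the documented output schema.
    key = f"{fmt}_url"
    if key not in result:
        raise KeyError(f"no {fmt} URL in result")
    return result[key]
```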

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It discloses that the tool returns R2 CDN URLs and instructs the user to fetch content directly from those URLs, which reveals important behavior beyond the schema. It also mentions the cost (1 credit), providing transparency. No contradictions with annotations (none provided).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loads the purpose. Every sentence adds value: the first states what it does, the rest add detail on input format, output behavior, and cost. No extraneous text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having only one parameter, the description covers input format (example act_id), output (R2 CDN URLs, need to fetch content), and cost. Although an output schema exists (its fields are not shown here), the description still provides complete guidance for a simple tool, leaving no major gaps for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, but the description adds meaning by providing an example act_id format and a real-world example ('IND_central_2187' for Indian Contract Act). This compensates for the lack of schema descriptions, giving the agent a concrete understanding of the parameter value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get URLs for the full text, PDF, and HTML versions of a specific Indian act.' This specifies the verb (Get), resource (URLs for act versions), and scope (Indian act), making the tool's purpose unambiguous. It also distinguishes from sibling tools that deal with US statutes or general legal search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides an example act_id ('IND_central_2187') and mentions cost (1 credit), but does not explicitly state when to use this tool over alternatives like search_legislation or get_us_statute_section. No 'when not to use' guidance is given, which limits the agent's ability to select the correct tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_amendments: A

Get the complete amendment history for an Indian act. Returns all footnotes showing substitutions, insertions, omissions, and notes made by amending acts. Filter by section number or amendment type. Each footnote shows the amending act name and original text (if available). Use to trace how a statute evolved. Cost: 1 credit.

Parameters (JSON Schema)
page (optional)
act_id (required)
section (optional)
page_size (optional)
footnote_type (optional)

Output Schema

No output parameters
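A minimal sketch of assembling a filtered call, assuming dict arguments. The parameter names come from the schema above; the filter semantics (section number, amendment type) come from the description. The pagination defaults and the helper name are assumptions, and footnote_type values are not documented here, so none are validated:

```python
# Hypothetical argument builder for get_amendments.

def build_get_amendments_args(act_id, section=None, footnote_type=None,
                              page=1, page_size=20):
    # page/page_size defaults are illustrative; the schema shows none.
    if not act_id:
        raise ValueError("act_id is required, e.g. from list_legislation results")
    args = {"act_id": act_id, "page": page, "page_size": page_size}
    if section is not None:
        args["section"] = section
    if footnote_type is not None:
        args["footnote_type"] = footnote_type
    return args
```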

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description covers the return format (footnotes with amending act name and original text) and mentions cost (1 credit). It could say more about error behavior or permissions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Tight and front-loaded: purpose first, then details, then a usage hint, then cost, with no redundant words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

An output schema exists, but the description already details the return format. All key parameters and the use case are covered, with no sibling overlaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schemas have 0% description coverage, but the description's 'Filter by section number or amendment type' covers section and footnote_type. The pagination parameters are left implicit.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb 'get' with resource 'complete amendment history for an Indian act', distinguishes from sibling tools which focus on US statutes or cases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use to trace how a statute evolved' and mentions filtering options, but no explicit when-not-to-use or alternative tools despite siblings being different jurisdictions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_citation_network: A

Traverse the citation network around a case. Returns nodes (cases) and edges (citing relationships) with treatment types (followed, distinguished, overruled). Specify direction: 'outbound' (cases this cites), 'inbound' (cases citing this), or 'both'. Set depth (1-3 hops) and limit (1-100 nodes). Useful for understanding a case's legal influence. Cost: 2 credits.

Parameters (JSON Schema)
depth (optional)
limit (optional)
citation (required)
direction (optional, default: both)
country_code (optional)

Output Schema

No output parameters
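The constraints in the description (depth 1-3 hops, limit 1-100 nodes, three direction values) lend themselves to client-side validation before spending 2 credits. A sketch, assuming dict arguments; the helper name and the depth/limit defaults are our own:

```python
# Hypothetical argument builder for get_citation_network. Parameter
# names come from the schema above; the validation ranges and the
# direction enum come from the tool description.

def build_citation_network_args(citation, direction="both", depth=1,
                                limit=25, country_code=None):
    if direction not in ("outbound", "inbound", "both"):
        raise ValueError("direction must be 'outbound', 'inbound', or 'both'")
    if not 1 <= depth <= 3:
        raise ValueError("depth must be 1-3 hops")
    if not 1 <= limit <= 100:
        raise ValueError("limit must be 1-100 nodes")
    args = {"citation": citation, "direction": direction,
            "depth": depth, "limit": limit}
    if country_code is not None:
        args["country_code"] = country_code
    return args
```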

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses cost (2 credits) and constraints (depth 1-3, limit 1-100), but does not mention side effects, authentication needs, or error handling. Adequate for a read-only tool with output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with several short sentences. It front-loads the main purpose, then details parameters, and ends with utility note and cost. No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters and an output schema, the description covers core functionality well. It mentions return types (nodes and edges) but not exact fields (output schema handles that). Missing details on pagination or citation format, but overall sufficient for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains direction (enum), depth (1-3), limit (1-100), and implicitly citation (required). country_code is not explained, but overall the description adds significant meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: traversing the citation network around a case, returning nodes and edges with treatment types. It distinguishes from sibling tools like lookup_case (case details) and search_cases_by_citation (finding cases) by focusing on network traversal.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains how to use the tool: specify direction, depth, and limit. It mentions usefulness for understanding legal influence, but lacks explicit when-not-to-use or alternatives, though siblings imply differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pricing: A

Get current API credit pricing. Returns per-endpoint credit costs and credit-to-currency conversion rates (1 credit = $0.01 USD). No authentication required. Use to check costs before making API calls.

Parameters (JSON Schema)

No parameters

Output Schema

No output parameters
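The stated conversion rate (1 credit = $0.01 USD) makes per-call cost arithmetic trivial; for example, a 3-credit call costs $0.03. A sketch of that conversion, with constant and helper names of our own choosing:

```python
# Credit-to-currency conversion as stated in the get_pricing
# description: 1 credit = $0.01 USD.

CREDIT_USD = 0.01

def credits_to_usd(credits):
    return round(credits * CREDIT_USD, 2)
```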

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses that the tool requires no authentication and returns current credit costs and conversion rates. It does not specify whether pricing is cached or live, but for a simple read-only tool, the transparency is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, with no redundant information. The key action and details are front-loaded, making it efficient for an agent to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (no parameters, output schema present), the description covers all necessary aspects: what it does, what it returns, authentication requirements, and intended usage context. It is complete for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so the baseline is 4. The description adds relevant context about the return format and authentication status, which is appropriate for a parameterless tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves current API credit pricing with specific details on return values (per-endpoint costs and conversion rates). It distinguishes itself from all sibling tools which are legal research functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'No authentication required' and 'Use to check costs before making API calls', providing clear context on when to invoke the tool. However, it does not explicitly exclude other use cases or mention alternatives, but the purpose is very specific.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_us_statute_section: A

Get metadata for a specific US statute or regulation section by act_id (e.g. 'USC_T42_C21_S1983'). The act_id comes from search_us_statutes results or ask_legal_question sources. Returns citation, title hierarchy, breadcrumb, and links to HTML, PDF, and XML formats. Use before get_us_statute_section_text to preview a section. Cost: 1 credit.

Parameters (JSON Schema)
act_id (required)

Output Schema

No output parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description discloses it returns metadata (citation, hierarchy, breadcrumb, links) and mentions cost (1 credit). It implies a read operation with no destructive side effects, adding sufficient behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Short sentences with no redundancy. The purpose is front-loaded, and every sentence adds value (purpose, example, returns, usage suggestion, cost).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has one parameter and an output schema (mentioned in context), the description sufficiently covers the workflow (where to get act_id, what it returns, and how it fits with a sibling). Could add more on output structure but not necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage for the only parameter, but the description adds meaning by providing an example format and stating the parameter's source (search results). This compensates well.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it gets metadata for a specific US statute/regulation section, provides an example act_id format, and distinguishes from sibling get_us_statute_section_text by indicating the use case (preview before getting text).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells the agent that act_id comes from search_us_statutes or ask_legal_question, and advises to use this before get_us_statute_section_text. While it doesn't list alternatives for not using it, the guidance is clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_us_statute_section_text: A

Get the full text of a US statute or regulation section (USC or CFR) by act_id. Returns both styled HTML (with cross-references and paragraph numbering as published in the official code) and plain text. Use when you need the actual statutory language for quotation, drafting, or analysis. Cost: 3 credits.

Parameters (JSON Schema)
act_id (required)

Output Schema

No output parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It discloses cost (3 credits) and the return format (HTML and plain text) but omits other behavioral traits such as idempotency, permissions, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with the key information (verb, resource, output format, cost) and free of redundant phrases.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and minimal parameter count, the description sufficiently covers usage context and return details. It could elaborate on act_id format but is otherwise complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description explains that act_id is used to identify the statute/regulation section. This adds meaning beyond the bare schema, though a format example would improve clarity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves the full text of a US statute or regulation section (USC or CFR) by act_id, with specific details on output (styled HTML and plain text). It distinguishes from siblings like get_us_statute_section by specifying 'full text' and return format.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit usage context: 'Use when you need the actual statutory language for quotation, drafting, or analysis.' However, it does not explicitly mention when not to use or compare to alternatives like get_act_text or get_us_statute_section.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_legislation: A

Browse 23,000+ Indian acts, regulations, and legislation. Use to discover act_id values for get_act_text and get_amendments. Filter by category (central, state, regulatory, repealed, spent), state slug, department (sebi, rbi, etc.), year range, and status (in_force, repealed, spent). Sort by year_desc, year_asc, title_asc, title_desc, or popular. Cost: 1 credit.

Parameters (JSON Schema)
page (optional)
sort (optional, default: year_desc)
state (optional)
search (optional)
status (optional)
year_to (optional)
category (optional)
page_size (optional)
year_from (optional)
department (optional)

Output Schema

No output parameters
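The description enumerates the allowed sort and category values, which an agent could validate before spending a credit. A sketch, assuming dict arguments; the helper name is our own, and the year-range check is an assumption about sensible input rather than documented server behavior:

```python
# Hypothetical filter validation for list_legislation. Parameter names
# come from the schema above; the allowed sort and category values are
# taken verbatim from the tool description.

VALID_SORTS = {"year_desc", "year_asc", "title_asc", "title_desc", "popular"}
VALID_CATEGORIES = {"central", "state", "regulatory", "repealed", "spent"}

def build_list_legislation_args(sort="year_desc", category=None,
                                year_from=None, year_to=None, **filters):
    if sort not in VALID_SORTS:
        raise ValueError(f"sort must be one of {sorted(VALID_SORTS)}")
    if category is not None and category not in VALID_CATEGORIES:
        raise ValueError(f"category must be one of {sorted(VALID_CATEGORIES)}")
    if year_from is not None and year_to is not None and year_from > year_to:
        raise ValueError("year_from must not exceed year_to")
    args = {"sort": sort}
    args.update({k: v for k, v in filters.items() if v is not None})
    if category is not None:
        args["category"] = category
    if year_from is not None:
        args["year_from"] = year_from
    if year_to is not None:
        args["year_to"] = year_to
    return args
```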

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions a credit cost (1 credit) but does not discuss pagination behavior, rate limits, or whether results are cached. The read-only nature is implied but not confirmed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is relatively concise and front-loaded with the core purpose. It includes cost information but could be slightly more structured with bullet points for filtering options.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 10 parameters and lack of annotations, the description covers filtering and sorting adequately. The presence of an output schema reduces the need to explain return values. Missing details on pagination behavior and explicit parameter explanations prevent a score of 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description explains about 7 of 10 parameters (category, state, department, years, status, sort). It omits explanations for 'search', 'page', and 'page_size', leaving gaps for effective usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the tool lists over 23,000 Indian acts, regulations, and legislation, and explicitly states its purpose to discover act_id values for get_act_text and get_amendments. It distinguishes itself from sibling tools like search_legislation by emphasizing browsing and filtering capabilities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool (to discover act_id values) and lists filtering options. However, it does not explicitly contrast with sibling tools like search_legislation, leaving some ambiguity about when to use one over the other.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lookup_case: A

Get full details for a specific case by its citation. Returns comprehensive case metadata, all known citation aliases, and citation treatment statistics showing how many times the case was followed, distinguished, overruled, approved, or referred. Use after resolve_citation or search_cases_by_citation for deep case analysis. Cost: 1 credit.

Parameters (JSON Schema)
citation (required)
country_code (optional)

Output Schema

No output parameters

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears the full transparency burden. It details what is returned: comprehensive case metadata, citation aliases, and treatment statistics. It also discloses cost, a behavioral trait not in the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with purpose, then return details, then usage guidance. Every sentence adds value with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists, the description correctly omits return value details but still summarizes them. It covers purpose, behavioral costs, and usage context. However, the lack of parameter explanations slightly reduces completeness, as schema coverage is zero and description does not compensate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, yet the description does not explain the 'citation' or 'country_code' parameters beyond their names. It adds no meaning about format, required status, or purpose beyond the schema itself. The agent gains no extra semantic understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves full case details by citation, explicitly listing return types (metadata, aliases, treatment statistics). It distinguishes from sibling tools like 'resolve_citation' or 'search_cases_by_citation' by positioning itself for deep analysis after initial search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives explicit guidance: 'Use after resolve_citation or search_cases_by_citation for deep case analysis.' It also mentions cost (1 credit), which helps an agent decide when the call is not worth making (e.g., for simple lookups).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_citation: A

Resolve any Indian legal citation format to its canonical case record. Accepts SCC, AIR, SCR, MANU, SCALE, INSC formats (e.g., '(2019) 11 SCC 706' or 'AIR 1976 SC 1207'). Returns case details and all known citation aliases/formats. Returns found=false (not an error) when citation cannot be resolved. Cost: 1 credit.

Parameters (JSON Schema)
citation (required)
country_code (optional)

Output Schema

No output parameters
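To illustrate the citation formats named above, here are loose matchers for the two worked examples in the description ('(2019) 11 SCC 706' and 'AIR 1976 SC 1207'). This is a sketch only: the tool accepts more formats (SCR, MANU, SCALE, INSC), and these patterns are our own, not the server's actual parser:

```python
import re

# Illustrative patterns for two citation formats from the description.
SCC_PATTERN = re.compile(r"^\(\d{4}\)\s+\d+\s+SCC\s+\d+$")   # e.g. (2019) 11 SCC 706
AIR_PATTERN = re.compile(r"^AIR\s+\d{4}\s+[A-Z]+\s+\d+$")    # e.g. AIR 1976 SC 1207

def looks_like_known_citation(text):
    text = text.strip()
    return bool(SCC_PATTERN.match(text) or AIR_PATTERN.match(text))
```

Note that even a matching string may fail to resolve; the tool signals that with found=false rather than an error.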

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the accepted formats, that it returns details and aliases, and the not-found behavior. It also mentions cost. It does not address idempotency or rate limits, but those are less critical for a read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description has no wasted words. The main purpose is front-loaded, followed by format examples and the key behavioral detail about not-found results. The cost note is brief but useful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high complexity of legal citation resolution and the presence of many sibling tools, the description provides core functionality but leaves gaps: it does not explain the 'country_code' parameter, guide tool selection, or document output schema details (though an output schema exists externally). It is adequate but not fully comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It describes the 'citation' parameter with format examples, but does not mention the 'country_code' parameter at all, leaving its purpose unclear for a tool focused on Indian citations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Resolve' and the resource 'any Indian legal citation format to its canonical case record'. It distinguishes from siblings like lookup_case or search_cases_by_citation by focusing on canonical resolution with alias mapping.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly lists the accepted citation formats with examples, and clarifies that a not-found lookup returns 'found=false' rather than an error. However, it does not specify when to prefer this tool over siblings, or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_cases_by_citation (Grade A)

Search for legal cases by citation text or case name. Use when you know part of a case name (e.g., 'Maneka Gandhi') or a partial citation. Filter by court code (SC, DEL, BOM, MAD, etc.), year range, and validity status (GOOD_LAW, OVERRULED, DISTINGUISHED, etc.). Returns up to 50 matching cases with metadata. Cost: 1 credit.

Parameters (JSON Schema)

Name             Required
limit            No
query            Yes
year_end         No
court_code       No
year_start       No
country_code     No
validity_status  No

(No descriptions or defaults are declared for any parameter.)

Output Schema (JSON Schema)

No output parameters.
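Putting the description and the bare schema together, a first-attempt call is straightforward to assemble. This is a sketch of the arguments object an agent might send in an MCP tools/call request; the filter values are taken from the examples in the description, and the limit value is an illustrative choice, not a documented default:

```python
import json

# Hypothetical arguments for search_cases_by_citation, built from the
# description's examples: a partial case name, the Supreme Court code,
# a year window, and only cases that remain good law.
arguments = {
    "query": "Maneka Gandhi",
    "court_code": "SC",
    "year_start": 1975,
    "year_end": 1980,
    "validity_status": "GOOD_LAW",
    "limit": 10,  # the tool returns at most 50 matches per the description
}

# 'query' is the only required parameter per the schema above.
payload = {"name": "search_cases_by_citation", "arguments": arguments}
print(json.dumps(payload, indent=2))
```

Because none of the optional parameters are documented in the schema, an agent must infer all of these values from the description text alone.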

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of behavioral disclosure. It mentions the cost ('1 credit'), the cap of 50 returned cases, and the filter options, all of which are useful. It does not explicitly state that the tool is read-only, but the description is sufficient for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficient with three sentences. It front-loads the purpose, then covers key features and cost. No redundant or vague language.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the schema has 0% descriptions, the description provides a solid overview of the tool's capabilities, including filters and result limit. It does not explain the 'country_code' parameter or default value for 'limit', but overall it is adequate for an agent to understand how to use the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, so the description must compensate. It explains 'query', 'court_code' (with examples), year range, and 'validity_status'. However, it omits 'country_code' and 'limit' (though limit is implied by 'returns up to 50'). The added value is moderate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for legal cases by citation text or case name, which is specific and distinct from sibling tools like 'search_legal_cases' or 'lookup_case'. It provides examples ('Maneka Gandhi') and lists filter options, leaving no ambiguity about its function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use the tool ('when you know part of a case name or a partial citation') and provides examples of usage contexts. However, it does not explicitly mention when not to use it or suggest alternatives for other types of searches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_legislation (Grade A)

Search 23,000+ Indian acts, regulations, and legislation using semantic search. Find specific statutory provisions, definitions, penalties, and procedures. Filter by category (central, state, regulatory), state, department (SEBI, RBI, TRAI, etc.), and year range. Returns relevant act sections with text excerpts, section numbers, provision type, and PDF links. Cost: 1 credit. Use for questions like 'What is the penalty for insider trading under SEBI Act?' or 'Definition of goods under GST Act'.

Parameters (JSON Schema)

Name        Required
query       Yes
state       No
year_to     No
category    No
page_size   No
year_from   No
department  No

(No descriptions or defaults are declared for any parameter.)

Output Schema (JSON Schema)

No output parameters.
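One pitfall the undocumented schemas invite: the year filters here are named year_from/year_to, while search_cases_by_citation uses year_start/year_end. A small sketch of guarding against such key mismatches before calling; the schema key sets are transcribed from the parameter tables in this review, and everything else is illustrative:

```python
# Allowed argument names, transcribed from the two tools' input schemas.
SCHEMA_KEYS = {
    "search_legislation": {
        "query", "state", "year_to", "category",
        "page_size", "year_from", "department",
    },
    "search_cases_by_citation": {
        "limit", "query", "year_end", "court_code",
        "year_start", "country_code", "validity_status",
    },
}

def unknown_keys(tool: str, arguments: dict) -> set:
    """Return any argument names the tool's schema does not declare."""
    return set(arguments) - SCHEMA_KEYS[tool]

# Easy mistake: reusing the case-search year names on the legislation tool.
bad_call = {"query": "insider trading penalty", "year_start": 1992}
print(unknown_keys("search_legislation", bad_call))  # {'year_start'}
```

Per-parameter descriptions in the schema would make this kind of pre-flight check unnecessary, since agents could see the correct names and their meaning in one place.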

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description discloses the cost (1 credit), the return format (sections, excerpts, links), and the filter capabilities. It does not confirm read-only behavior but implies it, and it omits rate limits and authorization details; still, it is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single paragraph with the key information front-loaded. It could be slightly more structured (e.g., bullet points), but it contains no unnecessary sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

An output schema exists (not shown), and the description explains the returned fields. Details on pagination and total result counts are missing, but overall the coverage is sufficient for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage, so the description carries the full burden. It explains all the main filters (category, state, department, year range) and the query, though page_size is omitted. It adds value beyond the raw schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The clear verb 'search' pairs with a specific resource ('Indian acts, regulations, and legislation'). The description distinguishes the tool from siblings like 'search_legal_cases' and 'search_us_statutes' by specifying its domain and features.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides example queries that illustrate use cases, but does not explicitly state when not to use the tool or compare it with alternatives. Still, the examples give good practical guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_us_statutes (Grade A)

Semantic search across the United States Code (USC) and Code of Federal Regulations (CFR). Use for federal statutory and regulatory questions: SEC (Title 17), FDA (Title 21), civil rights (Title 42), tax (Title 26), etc. Filter by corpusType ('USC' | 'CFR') and titleNumber. Returns sections with citation, title hierarchy, HTML/PDF/XML links. The returned act_id (e.g. 'USC_T42_C21_S1983') feeds get_us_statute_section_text for full text. Cost: 2 credits.

Parameters (JSON Schema)

Name          Required
limit         No
query         Yes
corpus_type   No
title_number  No

(No descriptions or defaults are declared for any parameter.)

Output Schema (JSON Schema)

No output parameters.
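The act_id linkage advertised in the description suggests a two-step workflow: search, then fetch full text. A hedged sketch assuming a generic MCP client object with a call_tool(name, arguments) method; the client itself and the result-dict field names are assumptions, while the tool names and the act_id format come from the descriptions:

```python
def fetch_section_text(client, question: str) -> str:
    """Search the USC/CFR, then pull the full text of the top hit.

    `client.call_tool` is a stand-in for whatever MCP client API is in
    use; it is assumed to return each tool result as a dict. The
    'results' and 'text' keys are likewise assumed, not documented.
    """
    hits = client.call_tool(
        "search_us_statutes",
        {"query": question, "corpus_type": "USC", "limit": 1},
    )
    # Per the description, each result carries an act_id such as
    # 'USC_T42_C21_S1983' that feeds the sibling tool.
    act_id = hits["results"][0]["act_id"]
    section = client.call_tool("get_us_statute_section_text", {"act_id": act_id})
    return section["text"]
```

This chaining is exactly why the explicit act_id note in the description earns its place: it tells the agent the search output is an input to another tool, not a terminal answer.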

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but the description discloses return format (sections, citation, hierarchy, links), linkage to sibling tool for full text, and cost (2 credits). This provides good behavioral context beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, front-loaded with purpose, and every sentence adds information (filtering, returns, linkage, cost). No redundant text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 4-parameter tool with output schema, the description covers purpose, filters, return structure, and downstream usage. It is complete and well-suited for agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has four parameters with 0% description coverage. The description explains the 'corpus_type' and 'title_number' filters but does not mention 'limit' or elaborate on 'query'. It adds moderate value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Semantic search across the United States Code (USC) and Code of Federal Regulations (CFR)', with specific verb and resource. It also distinguishes from siblings like 'search_legal_cases' and 'search_legislation' by focusing on federal statutes and regulations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly states 'Use for federal statutory and regulatory questions' and gives examples. While it does not exclude other uses or name alternatives, the context is clear and valuable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
