Vaquill
Server Details
Legal research: US primary law, Indian case law (31M+ judgments), and citation graph traversal.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: Vaquill-AI/vaquill-mcp
- GitHub Stars: 3
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 15 of 15 tools scored.
Most tools have distinct purposes, but there is some overlap: quick_search is a compact version of search_legal_cases, and search_cases_by_citation overlaps with lookup_case. Descriptions help differentiate them, but some ambiguity remains.
Tool names follow a consistent verb_noun pattern (get_, search_, list_, etc.). The verbs are clear and each set within a family is uniform, making the naming predictable.
15 tools is well-scoped for a legal research server covering both US and Indian law. Each tool serves a specific function without being excessive or insufficient.
The tool set covers searching, browsing, and retrieving legal texts and cases for both US and Indian law. However, there is no dedicated US case search tool (only via ask_legal_question) and no full case PDF retrieval tool, leaving minor gaps.
Available Tools
15 tools
ask_legal_question
AI-generated legal answer grounded in primary sources. countryCode='US' (default) covers USC, CFR, 50-state law, and CourtListener case law. countryCode='IN' covers 31M+ Indian judgments + 23K+ acts. US-only sourcesFilter: 'all' | 'statutes_only' | 'cases_only'. Modes: 'standard' (5 credits) or 'deep' (multi-hop, 20 credits). Pass chatHistory for follow-ups. Returns answer with numbered citations.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | | standard |
| sources | No | | |
| question | Yes | | |
| max_sources | No | | |
| chat_history | No | | |
| country_code | No | | |
| sources_filter | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
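To make the argument shape concrete, here is a minimal sketch of a tools/call request, assuming the standard MCP JSON-RPC envelope and the snake_case parameter names from the schema table (the prose description uses camelCase such as countryCode); the question and option values are hypothetical:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_legal_question",
    "arguments": {
      "question": "Can a state official be sued for damages under Section 1983?",
      "country_code": "US",
      "sources_filter": "statutes_only",
      "mode": "standard",
      "max_sources": 5
    }
  }
}
```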
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses credit costs for modes, country-specific coverage, and the return format (numbered citations). However, it does not explicitly state read-only behavior or potential side effects, though the 'AI-generated answer' implies no mutations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise, using four sentences to convey purpose, country options, filtering, modes, and follow-up capability. It front-loads the main verb and resource, with no redundant or superfluous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (7 parameters, output schema exists) and sibling tools, the description is fairly complete. It explains key parameters and return value, but omits details like error handling, rate limits, or required permissions. The output schema likely covers return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates by explaining country_code, sources_filter, mode, and chat_history. However, max_sources and the sources boolean are not explained, leaving some parameters ambiguous. Overall, it adds significant meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates an AI legal answer grounded in primary sources, distinguishing it from sibling tools which retrieve specific documents. The verb 'ask' and resource 'legal question' are explicit, and the context of country codes and source filters reinforces its purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for general legal questions but does not explicitly state when to use this tool versus siblings like get_act_text or search_cases. No when-not or alternative guidance is provided, leaving the agent to infer boundaries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_act_text
Get URLs for the full text, PDF, and HTML versions of a specific Indian act. Pass the act_id (e.g., 'IND_central_2187' for Indian Contract Act). Returns R2 CDN URLs — fetch the text/PDF content directly from those URLs. Cost: 1 credit.
| Name | Required | Description | Default |
|---|---|---|---|
| act_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
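A minimal sketch of the corresponding tools/call request, assuming the standard MCP JSON-RPC envelope; the act_id is the example given in the tool description:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_act_text",
    "arguments": { "act_id": "IND_central_2187" }
  }
}
```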
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It discloses that the tool returns R2 CDN URLs and instructs the user to fetch content directly from those URLs, which reveals important behavior beyond the schema. It also mentions the cost (1 credit), providing transparency. No contradictions with annotations (none provided).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (two sentences) and front-loads the purpose. Every sentence adds value: the first states what it does, the second adds detail on input format, output behavior, and cost. No extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having only one parameter, the description covers the input format (an example act_id), the output (R2 CDN URLs whose content must be fetched), and the cost. An output schema exists (though it is not shown here), and the description still provides complete guidance for a simple tool, leaving no major gaps for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, but the description adds meaning by providing an example act_id format and a real-world example ('IND_central_2187' for Indian Contract Act). This compensates for the lack of schema descriptions, giving the agent a concrete understanding of the parameter value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get URLs for the full text, PDF, and HTML versions of a specific Indian act.' This specifies the verb (Get), resource (URLs for act versions), and scope (Indian act), making the tool's purpose unambiguous. It also distinguishes from sibling tools that deal with US statutes or general legal search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides an example act_id ('IND_central_2187') and mentions cost (1 credit), but does not explicitly state when to use this tool over alternatives like search_legislation or get_us_statute_section. No 'when not to use' guidance is given, which limits the agent's ability to select the correct tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_amendments
Get the complete amendment history for an Indian act. Returns all footnotes showing substitutions, insertions, omissions, and notes made by amending acts. Filter by section number or amendment type. Each footnote shows the amending act name and original text (if available). Use to trace how a statute evolved. Cost: 1 credit.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | | |
| act_id | Yes | | |
| section | No | | |
| page_size | No | | |
| footnote_type | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
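To show how the filters combine, a hedged example request (standard MCP envelope assumed; the section and footnote_type values are hypothetical, since the schema does not document their accepted formats):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_amendments",
    "arguments": {
      "act_id": "IND_central_2187",
      "section": "2",
      "footnote_type": "substitution",
      "page": 1,
      "page_size": 20
    }
  }
}
```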
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description covers the return format (footnotes with the amending act name and original text) and mentions the cost (1 credit). It could say more about error behavior or permissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with no redundant words: purpose first, then details, a usage hint, and cost.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
An output schema exists, but the description already details the return format. All key parameters and the use case are covered, and there are no sibling overlaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, but the description explains 'Filter by section number or amendment type', covering section and footnote_type. The pagination parameters are left implicit.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the specific verb 'get' with the resource 'complete amendment history for an Indian act', and distinguishes the tool from siblings that focus on US statutes or cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use to trace how a statute evolved' and mentions filtering options, but gives no explicit when-not-to-use guidance or alternative tools, despite siblings covering different jurisdictions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_citation_network
Traverse the citation network around a case. Returns nodes (cases) and edges (citing relationships) with treatment types (followed, distinguished, overruled). Specify direction: 'outbound' (cases this cites), 'inbound' (cases citing this), or 'both'. Set depth (1-3 hops) and limit (1-100 nodes). Useful for understanding a case's legal influence. Cost: 2 credits.
| Name | Required | Description | Default |
|---|---|---|---|
| depth | No | | |
| limit | No | | |
| citation | Yes | | |
| direction | No | | both |
| country_code | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
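An illustrative request, assuming the MCP tools/call envelope; the citation reuses the SCC format example from resolve_citation, and the depth and limit values sit inside the documented 1-3 and 1-100 ranges:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_citation_network",
    "arguments": {
      "citation": "(2019) 11 SCC 706",
      "direction": "inbound",
      "depth": 2,
      "limit": 25,
      "country_code": "IN"
    }
  }
}
```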
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses cost (2 credits) and constraints (depth 1-3, limit 1-100), but does not mention side effects, authentication needs, or error handling. Adequate for a read-only tool with output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with several short sentences. It front-loads the main purpose, then details parameters, and ends with utility note and cost. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters and an output schema, the description covers core functionality well. It mentions return types (nodes and edges) but not exact fields (output schema handles that). Missing details on pagination or citation format, but overall sufficient for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It explains direction (enum), depth (1-3), limit (1-100), and implicitly citation (required). country_code is not explained, but overall the description adds significant meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: traversing the citation network around a case, returning nodes and edges with treatment types. It distinguishes from sibling tools like lookup_case (case details) and search_cases_by_citation (finding cases) by focusing on network traversal.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains how to use the tool: specify direction, depth, and limit. It mentions usefulness for understanding legal influence, but lacks explicit when-not-to-use or alternatives, though siblings imply differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pricing
Get current API credit pricing. Returns per-endpoint credit costs and credit-to-currency conversion rates (1 credit = $0.01 USD). No authentication required. Use to check costs before making API calls.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses that the tool requires no authentication and returns current credit costs and conversion rates. It does not specify whether pricing is cached or live, but for a simple read-only tool, the transparency is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at two sentences, with no redundant information. The key action and details are front-loaded, making it efficient for an agent to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, output schema present), the description covers all necessary aspects: what it does, what it returns, authentication requirements, and intended usage context. It is complete for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the baseline is 4. The description adds relevant context about the return format and authentication status, which is appropriate for a parameterless tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves current API credit pricing with specific details on return values (per-endpoint costs and conversion rates). It distinguishes itself from all sibling tools which are legal research functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'No authentication required' and 'Use to check costs before making API calls', providing clear context on when to invoke the tool. However, it does not explicitly exclude other use cases or mention alternatives, but the purpose is very specific.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_us_statute_section
Get metadata for a specific US statute or regulation section by act_id (e.g. 'USC_T42_C21_S1983'). The act_id comes from search_us_statutes results or ask_legal_question sources. Returns citation, title hierarchy, breadcrumb, and links to HTML, PDF, and XML formats. Use before get_us_statute_section_text to preview a section. Cost: 1 credit.
| Name | Required | Description | Default |
|---|---|---|---|
| act_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
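A minimal sketch of the call, assuming the standard MCP envelope and using the act_id example from the description:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get_us_statute_section",
    "arguments": { "act_id": "USC_T42_C21_S1983" }
  }
}
```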
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses it returns metadata (citation, hierarchy, breadcrumb, links) and mentions cost (1 credit). It implies a read operation with no destructive side effects, adding sufficient behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with no redundancy. Purpose is front-loaded, and every sentence adds value (purpose, example, returns, usage suggestion, cost).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has one parameter and an output schema (mentioned in context), the description sufficiently covers the workflow (where to get act_id, what it returns, and how it fits with a sibling). Could add more on output structure but not necessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage for the only parameter, but the description adds meaning by providing an example format and stating the parameter's source (search results). This compensates well.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it gets metadata for a specific US statute/regulation section, provides an example act_id format, and distinguishes from sibling get_us_statute_section_text by indicating the use case (preview before getting text).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells the agent that act_id comes from search_us_statutes or ask_legal_question, and advises to use this before get_us_statute_section_text. While it doesn't list alternatives for not using it, the guidance is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_us_statute_section_text
Get the full text of a US statute or regulation section (USC or CFR) by act_id. Returns both styled HTML (with cross-references and paragraph numbering as published in the official code) and plain text. Use when you need the actual statutory language for quotation, drafting, or analysis. Cost: 3 credits.
| Name | Required | Description | Default |
|---|---|---|---|
| act_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so description carries the burden. It discloses cost (3 credits) and return format (HTML and plain text) but omits other behavioral traits like idempotency, permissions, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with key information (verb, resource, output format, cost). No redundant phrases.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and minimal parameter count, the description sufficiently covers usage context and return details. It could elaborate on act_id format but is otherwise complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description explains that act_id is used to identify the statute/regulation section. This adds meaning beyond the bare schema, though a format example would improve clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves the full text of a US statute or regulation section (USC or CFR) by act_id, with specific details on output (styled HTML and plain text). It distinguishes from siblings like get_us_statute_section by specifying 'full text' and return format.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage context: 'Use when you need the actual statutory language for quotation, drafting, or analysis.' However, it does not explicitly mention when not to use or compare to alternatives like get_act_text or get_us_statute_section.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_legislation
Browse 23,000+ Indian acts, regulations, and legislation. Use to discover act_id values for get_act_text and get_amendments. Filter by category (central, state, regulatory, repealed, spent), state slug, department (sebi, rbi, etc.), year range, and status (in_force, repealed, spent). Sort by year_desc, year_asc, title_asc, title_desc, or popular. Cost: 1 credit.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | | |
| sort | No | | year_desc |
| state | No | | |
| search | No | | |
| status | No | | |
| year_to | No | | |
| category | No | | |
| page_size | No | | |
| year_from | No | | |
| department | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
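A hedged example showing how several filters might be combined in one request (MCP envelope assumed; the category, department, status, and sort values come from the description, while the year range and paging values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "list_legislation",
    "arguments": {
      "category": "central",
      "department": "sebi",
      "status": "in_force",
      "year_from": 1990,
      "year_to": 2020,
      "sort": "year_desc",
      "page": 1,
      "page_size": 20
    }
  }
}
```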
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions a credit cost (1 credit) but does not discuss pagination behavior, rate limits, or whether results are cached. The read-only nature is implied but not confirmed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively concise and front-loaded with the core purpose. It includes cost information but could be slightly more structured with bullet points for filtering options.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 10 parameters and lack of annotations, the description covers filtering and sorting adequately. The presence of an output schema reduces the need to explain return values. Missing details on pagination behavior and explicit parameter explanations prevent a score of 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description explains about 7 of 10 parameters (category, state, department, years, status, sort). It omits explanations for 'search', 'page', and 'page_size', leaving gaps for effective usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the tool lists over 23,000 Indian acts, regulations, and legislation, and explicitly states its purpose to discover act_id values for get_act_text and get_amendments. It distinguishes itself from sibling tools like search_legislation by emphasizing browsing and filtering capabilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use the tool (to discover act_id values) and lists filtering options. However, it does not explicitly contrast with sibling tools like search_legislation, leaving some ambiguity about when to use one over the other.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lookup_case
Get full details for a specific case by its citation. Returns comprehensive case metadata, all known citation aliases, and citation treatment statistics showing how many times the case was followed, distinguished, overruled, approved, or referred. Use after resolve_citation or search_cases_by_citation for deep case analysis. Cost: 1 credit.
| Name | Required | Description | Default |
|---|---|---|---|
| citation | Yes | | |
| country_code | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
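An illustrative request, assuming the MCP tools/call envelope; the citation reuses the AIR format example from resolve_citation:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "lookup_case",
    "arguments": {
      "citation": "AIR 1976 SC 1207",
      "country_code": "IN"
    }
  }
}
```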
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears the full burden of transparency. It details what is returned: comprehensive case metadata, citation aliases, and treatment statistics. It also discloses cost, a behavioral trait not captured in the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with purpose, then return details, then usage guidance. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the description correctly omits return value details but still summarizes them. It covers purpose, behavioral costs, and usage context. However, the lack of parameter explanations slightly reduces completeness, as schema coverage is zero and description does not compensate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, yet the description does not explain the 'citation' or 'country_code' parameters beyond their names. It adds no meaning about format, required status, or purpose beyond the schema itself, so the agent gains no extra semantic understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves full case details by citation, explicitly listing return types (metadata, aliases, treatment statistics). It distinguishes from sibling tools like 'resolve_citation' or 'search_cases_by_citation' by positioning itself for deep analysis after initial search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives explicit guidance: 'Use after resolve_citation or search_cases_by_citation for deep case analysis.' It also mentions cost (1 credit), indicating when not to use (e.g., for simple lookups).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
quick_search
Fast compact Indian legal case search returning top 3-5 results with just the essentials: title, citation, court, year, summary excerpt, and PDF link. Same boolean query syntax as search_legal_cases but returns fewer, flatter results. Best when you need a quick overview rather than detailed results. Cost: 1 credit.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| top_k | No | | |
| year_to | No | | |
| year_from | No | | |
| court_name | No | | |
| court_type | No | | |
| country_code | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
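A sketch of a request using the boolean syntax shared with search_legal_cases (MCP envelope assumed; the court_type value is borrowed from the search_legal_cases description, and the query and other values are hypothetical):

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "quick_search",
    "arguments": {
      "query": "\"anticipatory bail\" AND \"economic offence\"",
      "top_k": 5,
      "court_type": "supreme_court",
      "year_from": 2010,
      "country_code": "IN"
    }
  }
}
```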
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses that it returns top 3-5 results with essentials, and mentions a credit cost (rate-limiting aspect). However, it does not describe whether it is read-only, data freshness, sorting, or pagination behavior. For a search tool, it provides basic behavioral context but could be more comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four sentences: the first states the purpose, the second ties it to a sibling tool, the third gives best-use context, and the fourth notes the cost. There is no fluff, key information is front-loaded, and every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
An output schema exists (not shown), reducing the need to explain return values, and the description does mention return fields. However, for a tool with 7 parameters and no parameter explanations, completeness is lacking: it covers purpose and usage well but leaves parameter behavior to the schema, which provides only names and types.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning the description must explain the parameters. It explains only the 'query' parameter, by referencing 'boolean query syntax'; none of the other six parameters (top_k, year_from, year_to, court_name, court_type, country_code) are described. The description mentions result fields but not input parameters, which is insufficient for a 7-parameter tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Indian legal case search' with specific verb 'search' and resource 'Indian legal cases'. It mentions returning top 3-5 results with essential fields (title, citation, court, year, summary excerpt, PDF link). It also distinguishes from sibling 'search_legal_cases' by noting same query syntax but fewer results.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description says 'Same boolean query syntax as search_legal_cases but returns fewer, flatter results' and 'Best when you need a quick overview rather than detailed results'. This provides clear when-to-use context. It implicitly suggests using search_legal_cases for detailed results but doesn't explicitly state when not to use. Cost is mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_citation
Resolve any Indian legal citation format to its canonical case record. Accepts SCC, AIR, SCR, MANU, SCALE, INSC formats (e.g., '(2019) 11 SCC 706' or 'AIR 1976 SC 1207'). Returns case details and all known citation aliases/formats. Returns found=false (not an error) when citation cannot be resolved. Cost: 1 credit.
| Name | Required | Description | Default |
|---|---|---|---|
| citation | Yes | | |
| country_code | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
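A minimal sketch of the call, assuming the MCP envelope and using one of the citation format examples from the description:

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "resolve_citation",
    "arguments": {
      "citation": "(2019) 11 SCC 706",
      "country_code": "IN"
    }
  }
}
```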
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the accepted formats, that it returns details and aliases, and the not-found behavior. It also mentions cost. It does not address idempotency or rate limits, but those are less critical for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no wasted words. The main purpose is front-loaded, followed by format examples and key behavioral detail about not-found. The cost note is brief but useful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity of legal citation resolution and the presence of many sibling tools, the description provides core functionality but leaves gaps: it does not explain the 'country_code' parameter, guide tool selection, or document output schema details (though an output schema exists externally). It is adequate but not fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It describes the 'citation' parameter with format examples, but does not mention the 'country_code' parameter at all, leaving its purpose unclear for a tool focused on Indian citations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Resolve' and the resource 'any Indian legal citation format to its canonical case record'. It distinguishes from siblings like lookup_case or search_cases_by_citation by focusing on canonical resolution with alias mapping.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly accepts multiple legal citation formats with examples, and clarifies that not-found returns 'found=false' rather than an error. However, it does not specify when to prefer this over sibling tools or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_cases_by_citation
Search for legal cases by citation text or case name. Use when you know part of a case name (e.g., 'Maneka Gandhi') or a partial citation. Filter by court code (SC, DEL, BOM, MAD, etc.), year range, and validity status (GOOD_LAW, OVERRULED, DISTINGUISHED, etc.). Returns up to 50 matching cases with metadata. Cost: 1 credit.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
| year_end | No | | |
| court_code | No | | |
| year_start | No | | |
| country_code | No | | |
| validity_status | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
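An illustrative request combining the name query and the filters named in the description (MCP envelope assumed; the year range and limit are hypothetical):

```json
{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/call",
  "params": {
    "name": "search_cases_by_citation",
    "arguments": {
      "query": "Maneka Gandhi",
      "court_code": "SC",
      "validity_status": "GOOD_LAW",
      "year_start": 1970,
      "year_end": 1985,
      "limit": 10
    }
  }
}
```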
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden for behavioral disclosure. It mentions the cost ('1 credit'), returns up to 50 cases, and filter options, which are useful. It does not explicitly state read-only nature, but the description is sufficient for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficient with three sentences. It front-loads the purpose, then covers key features and cost. No redundant or vague language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the schema has 0% descriptions, the description provides a solid overview of the tool's capabilities, including filters and result limit. It does not explain the 'country_code' parameter or default value for 'limit', but overall it is adequate for an agent to understand how to use the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, so the description must compensate. It explains 'query', 'court_code' (with examples), year range, and 'validity_status'. However, it omits 'country_code' and 'limit' (though limit is implied by 'returns up to 50'). The added value is moderate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for legal cases by citation text or case name, which is specific and distinct from sibling tools like 'search_legal_cases' or 'lookup_case'. It provides examples ('Maneka Gandhi') and lists filter options, leaving no ambiguity about its function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool ('when you know part of a case name or a partial citation') and provides examples of usage contexts. However, it does not explicitly mention when not to use it or suggest alternatives for other types of searches.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_legal_cases
Boolean keyword search of the Indian corpus. Supports AND, OR, NOT and quoted phrases. Filter by courtType (supreme_court, high_court), courtName, year range. Returns paginated results with text, citation, court, relevance score, snippet, PDF. Present only the top results, do NOT emphasize total count. Use pageSize 10 for conversational answers, 20 for exhaustive lists. Cost: 1-3 credits. For US case law, use ask_legal_question with countryCode='US'.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | | |
| query | Yes | | |
| year_to | No | | |
| page_size | No | | |
| year_from | No | | |
| court_name | No | | |
| court_type | No | | |
| country_code | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
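A sketch of a conversational-sized request, assuming the MCP envelope; the boolean query text is hypothetical, while court_type and the page_size of 10 follow the description's guidance:

```json
{
  "jsonrpc": "2.0",
  "id": 11,
  "method": "tools/call",
  "params": {
    "name": "search_legal_cases",
    "arguments": {
      "query": "\"right to privacy\" AND surveillance NOT telecom",
      "court_type": "supreme_court",
      "year_from": 2015,
      "year_to": 2024,
      "page": 1,
      "page_size": 10
    }
  }
}
```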
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite the absence of annotations, the description discloses cost (1-3 credits), advises against emphasizing the total count, and mentions pagination behavior. It implies a read-only operation via 'search'. However, it could mention required authentication or rate limits, though these are not critical for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (3 sentences plus 2 imperatives) and front-loaded with essential information. Every sentence adds value: search capability, filters, return fields, usage tips, cost, and sibling differentiation. No redundant or vague statements.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (8 parameters, output schema exists), the description covers most aspects: search syntax, filters, pagination, return fields, cost, and an alternative tool. Missing a clear explanation of the country_code parameter (unclear if it constrains to Indian or other jurisdictions) and prerequisites. Output schema existence reduces burden, but a small gap remains.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description adds meaning to most parameters: query supports Boolean, court_type has enumerated examples, year_from/to form a range, and page_size has recommended values. The country_code parameter is mentioned indirectly (US case law alternative) but not explained for this Indian search; this minor gap prevents a 5.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a Boolean keyword search of the Indian corpus, specifies supported operators and filters, and distinguishes itself from the US case law tool (ask_legal_question). The verb 'search' combined with the resource 'legal cases' and the explicit mention of 'Indian' corpus makes the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance: for US case law, use ask_legal_question with countryCode='US'. It also recommends pageSize values (10 for conversational, 20 for exhaustive). However, it does not explicitly state when not to use this tool (e.g., for non-Indian or other legal queries) beyond the US alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_legislation
Search 23,000+ Indian acts, regulations, and legislation using semantic search. Find specific statutory provisions, definitions, penalties, and procedures. Filter by category (central, state, regulatory), state, department (SEBI, RBI, TRAI, etc.), and year range. Returns relevant act sections with text excerpts, section numbers, provision type, and PDF links. Cost: 1 credit. Use for questions like 'What is the penalty for insider trading under SEBI Act?' or 'Definition of goods under GST Act'.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| state | No | | |
| year_to | No | | |
| category | No | | |
| page_size | No | | |
| year_from | No | | |
| department | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
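An illustrative request modeled on the description's SEBI example (MCP envelope assumed; the department slug casing is a guess, since the schema does not specify it):

```json
{
  "jsonrpc": "2.0",
  "id": 12,
  "method": "tools/call",
  "params": {
    "name": "search_legislation",
    "arguments": {
      "query": "penalty for insider trading",
      "category": "central",
      "department": "sebi",
      "page_size": 5
    }
  }
}
```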
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, but the description discloses cost (1 credit), return format (sections, excerpts, links), and filter capabilities. It does not confirm read-only behavior but implies it. Rate limits and authorization details are lacking, but the disclosure is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single paragraph with key information front-loaded. It could be slightly more structured (e.g., bullet points), but there are no unnecessary sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
An output schema exists (not shown), and the description explains the return fields. Details on pagination and total results are missing, but it is overall sufficient for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, so the description carries the full burden. It explains all main filters (category, state, department, year range) and the query, though page_size is omitted. It adds value beyond the raw schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
A clear verb ('search') with a specific resource ('Indian acts, regulations, and legislation'). It distinguishes itself from siblings like 'search_legal_cases' and 'search_us_statutes' by specifying domain and features.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides example queries that illustrate use cases, but does not explicitly state when not to use or compare with alternatives. Still, the examples give good practical guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_us_statutes
Semantic search across the United States Code (USC) and Code of Federal Regulations (CFR). Use for federal statutory and regulatory questions: SEC (Title 17), FDA (Title 21), civil rights (Title 42), tax (Title 26), etc. Filter by corpusType ('USC' | 'CFR') and titleNumber. Returns sections with citation, title hierarchy, HTML/PDF/XML links. The returned act_id (e.g. 'USC_T42_C21_S1983') feeds get_us_statute_section_text for full text. Cost: 2 credits.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
| corpus_type | No | | |
| title_number | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
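A hedged example request, assuming the MCP envelope; corpus_type and the Title 42 civil-rights subject area come from the description, while the query text and the integer type of title_number are assumptions:

```json
{
  "jsonrpc": "2.0",
  "id": 13,
  "method": "tools/call",
  "params": {
    "name": "search_us_statutes",
    "arguments": {
      "query": "deprivation of civil rights under color of law",
      "corpus_type": "USC",
      "title_number": 42,
      "limit": 5
    }
  }
}
```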
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but the description discloses return format (sections, citation, hierarchy, links), linkage to sibling tool for full text, and cost (2 credits). This provides good behavioral context beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, front-loaded with purpose, and every sentence adds information (filtering, returns, linkage, cost). No redundant text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 4-parameter tool with output schema, the description covers purpose, filters, return structure, and downstream usage. It is complete and well-suited for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 4 parameters with 0% description coverage. The description explains 'corpus_type' and 'title_number' filtering, but does not mention 'limit' or elaborate on 'query'. Adds moderate value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Semantic search across the United States Code (USC) and Code of Federal Regulations (CFR)', with specific verb and resource. It also distinguishes from siblings like 'search_legal_cases' and 'search_legislation' by focusing on federal statutes and regulations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use for federal statutory and regulatory questions' with examples. While it does not explicitly exclude other uses or name alternatives, the context is clear and valuable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming this connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!