UK Legal Research MCP Server

Search Committee Evidence

committees_search_evidence
Read-only · Idempotent

Search oral and written evidence submitted to UK parliamentary committees. Retrieve paginated results with committee ID and evidence-type filters, plus title-length controls.

Instructions

Search oral and written evidence submitted to a parliamentary committee.

Returns ONE PAGE of evidence (default 20). Free-text titles are capped per max_title_chars; witness lists are capped at 10 per item. For committees with many submissions, re-call with offset=offset+returned while has_more is true.
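For illustration, a minimal client-side pagination sketch in Python. The `call_tool` helper is hypothetical, a stand-in for however your MCP client invokes the tool; the field names follow the input and output schemas below.

```python
# Minimal pagination sketch (assumptions: `call_tool` is a hypothetical
# helper that invokes the MCP tool and returns its structured output;
# field names follow the schemas documented on this page).
def fetch_all_evidence(call_tool, committee_id):
    items, offset = [], 0
    while True:
        page = call_tool("committees_search_evidence", {"params": {
            "committee_id": committee_id,
            "evidence_type": "both",
            "offset": offset,
            "limit": 20,
        }})
        items.extend(page.get("evidence", []))
        if not page["has_more"] or page["returned"] == 0:
            return items
        offset += page["returned"]  # re-call from where this page ended
```

The `returned == 0` guard avoids a stuck loop in the edge case where the conservative has_more flag over-reports.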

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| params | Yes | CommitteeEvidenceInput with committee_id, evidence_type, offset/limit pagination, and max_title_chars. | |
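A sketch of the params payload, assuming the field names listed in the schema. Values are hypothetical examples, and the accepted evidence_type values 'oral' and 'written' are inferred from the description rather than confirmed here.

```python
# Illustrative params payload; field names come from the schema,
# values are hypothetical examples.
params = {
    "committee_id": 176,       # the committee to search (example value)
    "evidence_type": "both",   # 'both' per the description; 'oral'/'written' assumed
    "offset": 0,               # items to skip (pagination)
    "limit": 20,               # page size (default 20 per the description)
    "max_title_chars": 200,    # cap on free-text title length (example value)
}
```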

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| committee_id | Yes | Committee ID this page belongs to | |
| evidence_type | Yes | Evidence type filter applied to this query | |
| offset | Yes | Number of evidence items skipped before this page | |
| limit | Yes | Max evidence items requested for this page | |
| returned | Yes | Number of evidence items actually returned in this call | |
| has_more | Yes | True if there may be more evidence beyond this page. Re-call with offset=offset+returned to fetch the next page. Conservative: when evidence_type='both', true if either the oral or the written upstream page came back full. | |
| evidence | No | Evidence items in this page. Titles are capped per max_title_chars; witness lists are capped at 10 per item. | |
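As a client-side sanity check of the documented caps (titles limited to max_title_chars, witness lists to 10 per item), a minimal sketch; the item field names `title` and `witnesses` are assumptions, not confirmed by the schema excerpt above.

```python
# Checks an evidence item against the documented caps. The field names
# `title` and `witnesses` are hypothetical; adjust to the real schema.
def within_documented_caps(item, max_title_chars):
    return (len(item.get("title", "")) <= max_title_chars
            and len(item.get("witnesses", [])) <= 10)
```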
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true. The description adds valuable behavioral context beyond annotations: pagination behavior (returns one page, default 20 items), field truncation (titles capped per max_title_chars, witness lists capped at 10), and the has_more flag mechanism. This provides practical implementation details not captured in annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded: the core purpose comes in the first sentence, followed by essential behavioral details. Every sentence earns its place: the second explains pagination, the third covers field truncation, and the fourth gives the pagination continuation logic. There are no wasted words; the information is clear and actionable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the comprehensive annotations (read-only, non-destructive, idempotent, open-world), 100% schema coverage, and existence of an output schema, the description provides exactly what's needed: it explains the tool's purpose, pagination behavior, and field limitations without duplicating structured information. It's complete for a search tool with good supporting documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already thoroughly documents all 5 parameters. The description adds marginal value by mentioning max_title_chars truncation and the offset pagination logic, but doesn't provide significant additional semantic meaning beyond what's in the parameter descriptions. The baseline of 3 is appropriate when the schema does most of the work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search oral and written evidence') and resource ('submitted to a parliamentary committee'), distinguishing it from sibling tools like committees_get_committee or committees_search_committees, which focus on the committees themselves rather than their evidence. The verb+resource combination is precise and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about pagination ('Returns ONE PAGE of evidence') and when to re-call ('re-call with offset=offset+returned while has_more is true'), but doesn't explicitly mention when to use this tool versus alternatives like general search tools or when not to use it. It implies usage for committee evidence specifically but lacks explicit sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
