UK Legal Research MCP Server

Search UK Parliament Petitions

parliament_search_petitions
Read-only · Idempotent

Search UK Parliament petitions by keyword to find titles, status, signature counts, and response dates. Filter results by state and manage pagination.

Instructions

Search UK Parliament petitions by keyword.

Returns petition title, state, signature count, and dates for government response or parliamentary debate if applicable.

Input Schema

Name      Required   Description                                                  Default
params    Yes        PetitionSearchInput with query and optional state filter.
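
For concreteness, a tools/call request for this tool might be shaped like the sketch below. The argument values (the search keyword, the 'open' state filter, and the pagination bounds) are illustrative assumptions, not values confirmed by the schema.

// Hypothetical MCP tools/call request; all argument values are assumptions.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "parliament_search_petitions",
    arguments: {
      // The tool's single input is itself named "params" (PetitionSearchInput).
      params: {
        query: "climate change", // search keyword (hypothetical value)
        state: "open",           // assumed state filter value
        offset: 0,               // results to skip (pagination)
        limit: 20,               // page size (assumed bound)
      },
    },
  },
};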

Output Schema

Name        Required   Description                                                            Default
query       Yes        The term that was searched in petitions
state       Yes        Petition state filter applied to this query
offset      No         Skip applied to this page
limit       No         Page size requested
total       Yes        Number of petitions returned in this call
has_more    No         True if a full page was returned (more may exist)
petitions   No         Matching petitions (title, state, signature count, key dates, URL).
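
A result matching this output schema might look like the following sketch; all values are invented, and the per-petition field names (beyond the parenthetical above) are assumptions.

// Hypothetical result shaped to the output schema above; values are invented.
const result = {
  query: "climate change",
  state: "open",
  offset: 0,
  limit: 20,
  total: 20,       // number of petitions returned in this call
  has_more: true,  // a full page came back, so more may exist
  petitions: [
    {
      title: "Example petition title",                        // hypothetical
      state: "open",
      signature_count: 12345,                                 // field name assumed
      url: "https://petition.parliament.uk/petitions/000000", // illustrative URL
    },
  ],
};
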
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true, covering safety and idempotency. The description adds value by specifying the returned fields (title, state, signature count, dates) and implies pagination through the offset parameter's description. However, it does not disclose rate limits, authentication requirements, or error handling beyond what the annotations imply.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
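
For reference, these hints correspond to the annotations block of an MCP tool definition. A minimal sketch, using the name and description quoted on this page:

// MCP tool annotations as reported for this tool in the review above.
const tool = {
  name: "parliament_search_petitions",
  description:
    "Search UK Parliament petitions by keyword. Returns petition title, state, " +
    "signature count, and dates for government response or parliamentary debate " +
    "if applicable.",
  annotations: {
    readOnlyHint: true,      // never modifies external state
    destructiveHint: false,  // no destructive operations
    idempotentHint: true,    // repeated calls have the same effect
    openWorldHint: true,     // talks to an external, open-ended system
  },
};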

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured with two sentences: the first states the purpose, and the second details the return values. Every sentence adds value without redundancy, and it is front-loaded with the core functionality. There is no wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (search with pagination), its rich annotations, and the presence of an output schema, the description is reasonably complete. It covers the purpose and the returned fields, and the schemas handle parameter details. However, it lacks usage guidance and behavioral context such as rate limits, although the annotations do provide safety coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with detailed descriptions for all parameters (query, state, offset, limit). The description adds minimal semantic context: 'by keyword' aligns with the query parameter but conveys little beyond what the schema already documents. A baseline score of 3 is appropriate given the high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
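
As a sketch of what full schema coverage looks like here, PetitionSearchInput plausibly resembles the following; the description strings, types, and the default are assumptions inferred from the parameter names above, not the server's actual schema.

// Plausible JSON Schema for PetitionSearchInput (details are assumed).
const petitionSearchInputSchema = {
  type: "object",
  properties: {
    query:  { type: "string",  description: "Keyword to search petitions for" },
    state:  { type: "string",  description: "Optional petition state filter" },
    offset: { type: "integer", description: "Number of results to skip", default: 0 },
    limit:  { type: "integer", description: "Maximum number of results per page" },
  },
  required: ["query"],
};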

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search UK Parliament petitions by keyword.' It names a specific verb ('search') and resource ('UK Parliament petitions'), but it does not explicitly differentiate the tool from siblings such as 'parliament_search_hansard' or 'parliament_find_member'. The description is specific but lacks sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention sibling tools like 'parliament_search_hansard' for searching parliamentary debates or 'parliament_find_member' for finding members. There is no context about when this tool is appropriate or what alternatives exist for related queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
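
As an illustration only, a revised description adding the missing guidance might read like the sketch below; the sibling-tool boundaries are assumptions inferred from the tool names, not documented behavior.

// Hypothetical revision adding "when to use" guidance; wording is illustrative.
const revisedDescription =
  "Search UK Parliament petitions by keyword. Returns petition title, state, " +
  "signature count, and dates for government response or parliamentary debate. " +
  "Use this tool for public petitions; use parliament_search_hansard for debate " +
  "transcripts and parliament_find_member for member lookups.";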

