
Server Details

UK Police MCP — wraps the UK Police Data API (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-ukpolice
GitHub Stars: 0

Tool Descriptions (Grade B)

Average 3.8/5 across 8 of 8 tools scored. Lowest: 2.9/5.

Server Coherence (Grade A)
Disambiguation: 4/5

Most tools have distinct purposes: get_crimes, get_forces, and get_outcomes are clearly differentiated for UK police data, while ask_pipeworx, discover_tools, forget, recall, and remember form a separate memory/query utility set. However, ask_pipeworx and discover_tools could be confused, since both help find information; ask_pipeworx answers a question directly, while discover_tools only surfaces candidate tools.

Naming Consistency: 3/5

The naming is mixed: get_crimes, get_forces, and get_outcomes follow a consistent verb_noun pattern, but ask_pipeworx, discover_tools, forget, recall, and remember use different verb styles (ask, discover, forget, recall, remember) without a clear pattern. This inconsistency makes the set less predictable, though the names are still readable.

Tool Count: 4/5

With 8 tools, the count is reasonable for a server combining UK police data and utility functions. It's slightly over-scoped as the memory tools (forget, recall, remember) and query tools (ask_pipeworx, discover_tools) might be extraneous for a focused police data server, but overall it's manageable and not excessive.

Completeness: 3/5

For the UK police data domain, the tools cover core read operations (get_crimes, get_forces, get_outcomes) but leave obvious gaps, such as crime reporting, force details, and historical data. The inclusion of memory and query tools adds utility but doesn't fill domain-specific gaps, leaving the surface notably incomplete for full police data workflows.

Available Tools (8 tools)
ask_pipeworx (Grade A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters:
- question (required): Your question or request in natural language
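
For orientation, this is what invoking the tool looks like at the protocol level. The sketch below shows only the standard JSON-RPC `tools/call` request body defined by the MCP spec; the server's endpoint URL is not listed on this page, and a real client would first complete the MCP initialize handshake over the Streamable HTTP transport. The example question is taken from the tool's own description.

```python
import json

# Standard MCP tools/call request body (JSON-RPC 2.0). Transport details and
# the initialize handshake are omitted; this only illustrates the shape of
# the call an agent would make.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
print(json.dumps(payload, indent=2))
```
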
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it explains that Pipeworx picks the right tool and fills arguments automatically, handles natural language input, and returns results. It could improve by mentioning potential limitations like response time or data source availability.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core functionality stated first, followed by clarifying details and examples. Every sentence earns its place by enhancing understanding without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (natural language processing with automatic tool selection), no annotations, and no output schema, the description is largely complete but could benefit from mentioning output format or error handling. It adequately covers purpose, usage, and behavior for a single-parameter tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single 'question' parameter. The description adds marginal value by emphasizing natural language input and providing examples, but doesn't add syntax or format details beyond what the schema provides, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Ask a question', 'get an answer') and resources ('best available data source'), and distinguishes it from siblings by emphasizing natural language interaction without needing to browse tools or learn schemas. It provides concrete examples that illustrate the scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Ask a question in plain English') and when not to use alternatives ('No need to browse tools or learn schemas'), with clear examples that reinforce its role as a high-level query interface versus more specialized sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters:
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
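
The call mechanics are identical to the ask_pipeworx sketch above; only the tool name and arguments change. A minimal sketch of the request body, reusing one of the schema's own example queries with an arbitrary in-range limit:

```python
import json

# discover_tools takes a natural-language query plus an optional limit
# (default 20, max 50 per the schema). The query string is one of the
# schema's own examples; the limit of 10 is an arbitrary in-range choice.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "discover_tools",
        "arguments": {"query": "find trade data between countries", "limit": 10},
    },
}
print(json.dumps(payload, indent=2))
```
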
Behavior: 4/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it searches by natural language description and returns relevant tools with names and descriptions. However, it doesn't mention potential limitations like rate limits, authentication needs, or error conditions.

Conciseness: 5/5

The description is appropriately sized and front-loaded with the core functionality in the first sentence, followed by specific usage guidance. Every sentence earns its place by providing essential information without redundancy.

Completeness: 4/5

Given the tool's moderate complexity (search functionality with 2 parameters) and no output schema, the description provides good context about what the tool does and when to use it. However, it doesn't describe the format of returned results or potential limitations, leaving some gaps in completeness.

Parameters: 3/5

The input schema has 100% description coverage, so the baseline is 3. The description doesn't add any parameter-specific information beyond what's already documented in the schema (query and limit parameters with their descriptions).

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('search', 'returns') and resource ('Pipeworx tool catalog'), distinguishing it from sibling tools like get_crimes, get_forces, and get_outcomes by focusing on tool discovery rather than data retrieval.

Usage Guidelines: 5/5

The description provides explicit usage guidelines: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This gives clear context on when to use this tool versus alternatives, including a specific threshold (500+ tools) and scenario (finding tools for a task).

forget (Grade C)

Delete a stored memory by key.

Parameters:
- key (required): Memory key to delete
Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It states 'Delete', which implies a destructive mutation, but doesn't clarify whether the deletion is permanent or reversible, requires specific permissions, or has side effects. For a deletion tool with zero annotation coverage, this is a significant gap in transparency.

Conciseness: 5/5

The description is a single, efficient sentence with zero waste—it directly states the action and required input. It's appropriately sized and front-loaded, making it easy to parse quickly.

Completeness: 2/5

For a deletion tool with no annotations and no output schema, the description is incomplete. It doesn't address behavioral risks (e.g., permanence), error conditions, or what happens on success/failure. Given the complexity of a destructive operation, more context is needed to be fully helpful.

Parameters: 3/5

The input schema has 100% description coverage, with the 'key' parameter documented as 'Memory key to delete'. The description adds minimal value beyond this, merely restating 'by key' without providing additional context like key format or examples. Baseline 3 is appropriate when the schema does the heavy lifting.

Purpose: 4/5

The description clearly states the verb ('Delete') and resource ('a stored memory by key'), making the purpose unambiguous. However, it doesn't differentiate from sibling tools like 'recall' or 'remember', which likely interact with memories differently, so it doesn't reach the highest score.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like 'recall' (which might retrieve memories) or 'remember' (which might store them). It lacks explicit when/when-not instructions or prerequisites, leaving usage context implied at best.

get_crimes (Grade B)

Get street-level crimes near a latitude/longitude for a given month. Returns crime category, location, and outcome status.

Parameters:
- lat (required): Latitude of the location
- lng (required): Longitude of the location
- date (optional): Month to query in YYYY-MM format (e.g. "2024-01"). Defaults to latest available.
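
Since the server wraps the free, no-auth UK Police Data API, this tool presumably maps onto the public street-level crimes endpoint. A minimal sketch of the equivalent direct call, assuming that mapping (the coordinates are central Leicester, the example used in the API's own documentation):

```python
import requests

# Presumed upstream endpoint: street-level crimes from the public UK Police
# Data API (free, no authentication). date is optional and defaults to the
# latest available month, matching this tool's schema.
resp = requests.get(
    "https://data.police.uk/api/crimes-street/all-crime",
    params={"lat": 52.629729, "lng": -1.131592, "date": "2024-01"},
    timeout=30,
)
resp.raise_for_status()
for crime in resp.json()[:5]:
    outcome = crime.get("outcome_status") or {}  # may be null for open cases
    print(crime["category"], "|", crime["location"]["street"]["name"],
          "|", outcome.get("category", "no outcome recorded"))
```
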
Behavior: 2/5

With no annotations provided, the description carries the full burden but lacks behavioral details. It states what the tool returns but doesn't disclose rate limits, authentication needs, data freshness, or error handling. For a data query tool, this leaves significant gaps in understanding operational constraints.

Conciseness: 5/5

The description is a single, well-structured sentence that efficiently conveys the tool's purpose, inputs, and outputs without unnecessary words. It's front-loaded with the core action and resource, making it easy to parse quickly.

Completeness: 3/5

Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is adequate but incomplete. It covers the basic purpose and return data but lacks behavioral context and usage guidelines, which are important for a crime data query tool with siblings.

Parameters: 3/5

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal value beyond the schema by implying spatial and temporal filtering but doesn't provide additional syntax, format, or usage details for the parameters.

Purpose: 5/5

The description clearly states the specific action ('Get street-level crimes'), resource ('near a latitude/longitude'), scope ('for a given month'), and return data ('crime category, location, and outcome status'). It distinguishes from siblings like 'get_forces' and 'get_outcomes' by focusing on crime data rather than police forces or outcomes specifically.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus its siblings ('get_forces', 'get_outcomes'). It mentions the purpose but doesn't specify scenarios, prerequisites, or alternatives, leaving the agent to infer usage context without explicit direction.

get_forces (Grade A)

List all police forces in England, Wales, and Northern Ireland. Returns force ID and name.

Parameters: none
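
Assuming the same mapping onto the public API, the equivalent direct call is the forces list endpoint, which returns exactly the id/name pairs the description mentions:

```python
import requests

# Presumed upstream endpoint: the list of territorial police forces.
forces = requests.get("https://data.police.uk/api/forces", timeout=30).json()
for force in forces[:5]:
    # Each entry has an "id" slug and a display "name",
    # e.g. "leicestershire" -> "Leicestershire Police".
    print(force["id"], "->", force["name"])
```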

Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the return format ('force ID and name'), which is helpful, but doesn't mention other important traits like whether this is a read-only operation, potential rate limits, authentication requirements, or error conditions. The description adds some value but leaves significant gaps.

Conciseness: 5/5

The description is perfectly concise with two sentences that each earn their place. The first sentence states the purpose and scope, the second describes the return format. There's no wasted text and the information is front-loaded effectively.

Completeness: 3/5

For a simple list tool with no parameters, no output schema, and no annotations, the description provides basic purpose and return format. However, it lacks important context about behavioral traits (rate limits, auth needs) and doesn't explain the relationship to sibling tools. The geographic scope is clear, but other operational context is missing.

Parameters: 4/5

With 0 parameters and 100% schema description coverage, the baseline is 4. The description appropriately doesn't discuss parameters since none exist, and the schema already fully documents this. No additional parameter information is needed or provided.

Purpose: 5/5

The description clearly states the action ('List all police forces') and the resource ('police forces in England, Wales, and Northern Ireland'), with specific geographic scope. It distinguishes from siblings by focusing on forces rather than crimes or outcomes, providing a complete purpose statement.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus the sibling tools get_crimes and get_outcomes. It mentions the geographic scope but doesn't indicate whether this is the only way to access force data or if there are alternatives for different regions or data types.

get_outcomes (Grade C)

Get outcomes for crimes at a location for a given month. Returns outcome category and date for each crime.

Parameters:
- lat (required): Latitude of the location
- lng (required): Longitude of the location
- date (optional): Month to query in YYYY-MM format (e.g. "2024-01"). Defaults to latest available.
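
The presumed upstream counterpart here is the street-level outcomes endpoint, which takes the same lat/lng/date parameters and returns an outcome category and date per crime:

```python
import requests

# Presumed upstream endpoint: outcomes for crimes at a location.
resp = requests.get(
    "https://data.police.uk/api/outcomes-at-location",
    params={"lat": 52.629729, "lng": -1.131592, "date": "2024-01"},
    timeout=30,
)
resp.raise_for_status()
for item in resp.json()[:5]:
    # category is an object with "code" and "name"; date is "YYYY-MM".
    print(item["category"]["name"], "|", item["date"])
```
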
Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns outcome data but does not cover important aspects such as whether it's a read-only operation, potential rate limits, authentication needs, error handling, or data freshness. This leaves significant gaps in understanding the tool's behavior beyond basic functionality.

Conciseness: 4/5

The description is concise and front-loaded, consisting of two sentences that efficiently state the purpose and return value. There is no unnecessary information, and it effectively communicates the core functionality without waste, though it could be slightly more structured for clarity.

Completeness: 2/5

Given the absence of annotations and output schema, the description is incomplete. It covers the basic purpose and return types but fails to address behavioral traits, error cases, or detailed usage context. For a tool with three parameters and no structured output information, more comprehensive guidance is needed to fully understand its operation and limitations.

Parameters: 3/5

The input schema has 100% description coverage, clearly documenting all three parameters (lat, lng, date) with their types and meanings. The description adds minimal value by implying the parameters are used for location and month querying, but it does not provide additional semantic context beyond what the schema already covers, such as format details or usage nuances.

Purpose: 4/5

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('outcomes for crimes'), specifying the scope ('at a location for a given month') and what it returns ('outcome category and date for each crime'). However, it does not explicitly differentiate from sibling tools like 'get_crimes' or 'get_forces', which might provide related but different data.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like 'get_crimes' or 'get_forces'. It mentions the context ('at a location for a given month') but lacks explicit when-to-use or when-not-to-use instructions, prerequisites, or comparisons to sibling tools.

recall (Grade A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters:
- key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the dual functionality (retrieve by key or list all) and mentions persistence across sessions, which is valuable context. However, it doesn't disclose error handling, rate limits, or what happens when a non-existent key is provided.

Conciseness: 5/5

The description is perfectly concise, with two sentences that each serve distinct purposes: the first defines the dual functionality, the second provides usage context. Every word earns its place with zero redundancy.

Completeness: 4/5

For a single-parameter tool with 100% schema coverage but no annotations or output schema, the description provides good context about functionality and usage. It could be more complete by mentioning what format the memories are returned in or error conditions, but given the tool's simplicity, it's mostly adequate.

Parameters: 4/5

Schema description coverage is 100%, so the schema already documents the single parameter. The description adds meaningful context by explaining the conditional behavior (omit key to list all keys) and relating the parameter to 'memory key' terminology used throughout the description. This goes beyond the schema's basic documentation.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes from siblings like 'remember' (which stores) and 'forget' (which deletes) by focusing on retrieval operations.

Usage Guidelines: 5/5

The description explicitly states when to use this tool ('to retrieve context you saved earlier') and provides clear conditional usage guidance ('omit key to list all keys'). It distinguishes from alternatives by specifying this is for retrieving memories, not discovering tools or getting other data types.

remember (Grade A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters:
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
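
Together with recall and forget, this tool forms a store/retrieve/delete round trip. A minimal sketch of the three request bodies an MCP client would send, with transport and handshake omitted; the key name echoes the schema's own examples, and the stored value is purely illustrative:

```python
import json

def tool_call(call_id: int, name: str, arguments: dict) -> dict:
    """Build a standard MCP tools/call request body (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

store = tool_call(1, "remember", {"key": "target_ticker", "value": "AAPL"})
fetch = tool_call(2, "recall", {"key": "target_ticker"})
wipe = tool_call(3, "forget", {"key": "target_ticker"})  # permanence undocumented

for payload in (store, fetch, wipe):
    print(json.dumps(payload))
```
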
Behavior: 4/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool stores data in session memory, specifies persistence differences for authenticated users (persistent) vs. anonymous sessions (24 hours), and implies it's a write operation. It could improve by mentioning error handling or limitations, but it covers essential aspects well.

Conciseness: 5/5

The description is appropriately sized and front-loaded, with three sentences that efficiently convey purpose, usage, and behavioral details without redundancy. Every sentence adds value, such as clarifying persistence rules, making it concise and well-structured.

Completeness: 4/5

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is mostly complete. It covers purpose, usage, and key behavioral traits like persistence. It could be enhanced by explaining return values or error cases, but it provides sufficient context for effective use without being overly detailed.

Parameters: 3/5

The schema description coverage is 100%, so the schema already documents both parameters ('key' and 'value') with clear descriptions. The description adds minimal semantic value beyond the schema, such as example use cases, but doesn't provide additional syntax or format details. This meets the baseline for high schema coverage.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('store a key-value pair') and resource ('in your session memory'), distinguishing it from siblings like 'recall' (retrieval) and 'forget' (deletion). It explicitly mentions use cases like saving intermediate findings, user preferences, or context across tool calls, making the purpose unambiguous.

Usage Guidelines: 4/5

The description provides clear context on when to use this tool ('to save intermediate findings, user preferences, or context across tool calls') and implicitly distinguishes it from 'recall' (for retrieval) and 'forget' (for deletion). However, it lacks explicit guidance on when not to use it or alternatives for similar tasks, such as comparing with other storage mechanisms.
