California Justice Watch
Server Details
California criminal-justice accountability: DAs, defenders, judges, officers, CJP, POST.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 15 of 15 tools scored. Lowest: 2.7/5.
Every tool targets a distinct entity or operation: retrieval (get_*), listing (list_*), search (search_*), and ranking (top_*). Even search_cjp and search_cjp_documents are clearly differentiated as summary vs. semantic full-text search. No ambiguity.
All tool names follow a consistent verb_noun pattern using lowercase and underscores: get_*, list_*, search_*, top_*. The verbs are descriptive and the nouns match the entity type, providing clear predictability.
15 tools sits at the top of the 3-15 range for a focused server. Each tool serves a distinct purpose for querying and exploring public records on California justice-system actors, with no redundancy.
The tool surface covers all major entity types (DAs, defenders, judges, officers, POST, CJP) with retrieval, search, and ranking operations. The addition of list_databases and list_pages provides necessary metadata, making the server self-contained and navigable.
Available Tools
15 tools

get_da (Grade A)
Get a full DA record by name (exact or close partial match).
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | DA name | |
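As a concrete illustration, a minimal sketch of a call, assuming the server follows the standard MCP JSON-RPC `tools/call` shape; the argument value is a placeholder:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_da",
    "arguments": { "name": "<DA name, exact or close partial>" }
  }
}
```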
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the fuzzy matching behavior ('close partial match') but does not mention side effects, error states (e.g., no match), or what a 'full record' entails. This is moderate transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that conveys all essential information without unnecessary words. It is appropriately concise and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema, no annotations), the description covers the primary action and matching behavior. However, it omits details on response format, error handling, and behavior when multiple matches exist, leaving some gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema (100% coverage) describes the 'name' parameter simply as 'DA name'. The description adds value by specifying it can be an exact or close partial match, clarifying the matching behavior beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('get'), the resource ('full DA record'), and the method ('by name, exact or close partial match'). It effectively distinguishes from sibling tools like 'search_das' which implies multiple results.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when searching by name but does not explicitly differentiate the tool from siblings like 'search_das'. It offers no guidance on when not to use it and names no alternatives, leaving the agent to infer.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_defender (Grade A)
Get a full defender record by Bar number id or name.
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | | |
| name | No | | |
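Using the same `tools/call` envelope sketched under get_da, the `params` object might look like the following; both identifiers are optional, and the value is a placeholder:

```json
{
  "name": "get_defender",
  "arguments": { "id": "<Bar number>" }
}
```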
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose behavioral aspects such as what happens if no record is found, authentication requirements, data sensitivity, or rate limits. The term 'get' implies a read operation, but minimal detail is given.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short sentence that efficiently conveys the tool's purpose and key parameters. It is front-loaded with the action and resource, making it easy to parse.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with two optional parameters and no output schema, the description adequately covers the query logic but lacks details on error handling, return structure (e.g., what constitutes a 'full defender record'), and whether the record includes sensitive information. The differentiation from sibling search tools is implied but not explicit.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description adds basic meaning: 'Bar number id' maps to the 'id' parameter and 'name' to the 'name' parameter. However, it does not clarify whether both parameters can be used together, which one takes precedence, or the expected format of the Bar number.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'get' and the resource 'full defender record', with specific identifiers (Bar number id or name). This immediately differentiates it from sibling tools like 'search_defenders', which likely returns multiple records, and 'get_da'/'get_officer', which target different roles.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving a single defender record by identifier, but does not explicitly state when to use it versus alternatives like 'search_defenders' for multiple records, or how to choose between the id and name parameters. No prerequisite or exclusion criteria are mentioned.
get_officer (Grade A)
Get a full officer record by name (with optional agency filter for disambiguation).
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | | |
| agency | No | | |
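A hypothetical `params` object for this tool, with placeholder values; the agency filter is only needed when the name alone is ambiguous:

```json
{
  "name": "get_officer",
  "arguments": {
    "name": "<officer name>",
    "agency": "<agency name, optional>"
  }
}
```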
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries the full burden. It indicates the tool is read-only ('Get') and does not disclose side effects, but provides no details on authentication, rate limits, or what a 'full officer record' includes. Given the simplicity of retrieval, this is minimally adequate.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that delivers the core purpose and parameter hint without any wasted words. It is appropriately sized for the tool's simplicity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a straightforward retrieval tool with two parameters and no output schema, the description covers the essential function and parameter semantics. It could hint at the return format or fields, but given the tool's simplicity it is sufficient.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description adds essential meaning: 'name' is the primary identifier (required) and 'agency' is optional for disambiguation. This clarifies parameter roles beyond the schema, which only lists types.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a specific officer record by name, with an optional agency filter for disambiguation. The verb 'Get' and resource 'officer record' are specific, distinguishing it from sibling tools like 'search_officers' which likely return multiple results.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use the agency parameter (for disambiguation) but does not explicitly state when to use this tool versus alternatives like 'search_officers'. No guidance on prerequisites or exclusions is provided, leaving the agent to infer context.
list_databases (Grade A)
List all public databases exposed by this MCP server with their metadata (entry counts, last updated, descriptions).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
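Since the tool takes no parameters, a sketch of the `params` object (assuming the standard `tools/call` envelope) is trivial:

```json
{
  "name": "list_databases",
  "arguments": {}
}
```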
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It discloses that the tool lists public databases and returns metadata (entry counts, last updated, descriptions). While not exhaustive (no mention of auth or pagination), it covers the key output for a simple list tool.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently conveys the action, scope, and included metadata without superfluous words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately describes the return content (metadata). It lacks details on exact format or field types, but for a listing tool it provides sufficient context. No major gaps noted.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so schema coverage is 100% and no parameter documentation is needed. The description adds no parameter info, but the baseline is 4 for no-parameter tools.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists all public databases with their metadata, using a specific verb ('List') and resource. It differentiates the tool from siblings like get_da or search_capost, which target individual items or search functions.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for obtaining an overview of available databases, but it does not explicitly state when to use it versus alternatives like list_pages or the search_* tools. No when-not guidance or context is provided.
list_pages (Grade A)
Return the canonical list of pages on cajusticewatch.com — slug, URL, label, and purpose. Use this when the user asks about features/pages/tools of the site, OR when you need to recommend a page, OR before saying "I do not have access to X" — the page may actually exist.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
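Likewise parameterless; a minimal `params` sketch:

```json
{
  "name": "list_pages",
  "arguments": {}
}
```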
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It implies a read operation with no side effects. The return fields are specified, but there is no mention of authentication, rate limits, or result-size limits. For a simple list tool, this is adequate.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: the first states the action and output; the second provides usage guidance. Concise, front-loaded, and every sentence adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters, no annotations, and no output schema, the description covers the tool's purpose and usage. It does not mention edge cases or a completeness guarantee, but for a simple list tool it is sufficient.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so the baseline of 4 applies. The description adds no parameter info because none is needed.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool returns the canonical list of pages with specific fields (slug, URL, label, purpose). The verb 'Return' and resource 'canonical list of pages' are specific, and it distinguishes the tool from siblings that retrieve single entities or search.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states when to use the tool: when the user asks about features/pages/tools, when recommending a page, or before claiming lack of access. It provides a reason (the page may exist). No alternatives are named, but the context is clear.
search_capost (Grade B)
Search the California POST decertification database — law enforcement officers stripped of their POST certification. Source: post.ca.gov public records.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | | |
| agency | No | Police department or agency name | |
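An illustrative `params` object; all three arguments are optional, and the query and agency values are placeholders:

```json
{
  "name": "search_capost",
  "arguments": {
    "query": "<free-text query>",
    "agency": "<police department or agency name>",
    "limit": 5
  }
}
```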
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description only mentions the data source. It does not disclose rate limits, authentication, data freshness, pagination behavior, or what happens if no results are found.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the purpose, no fluff. Every word serves a function.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters, no output schema, and no annotations, the description is incomplete. It omits details on return format, pagination, and search behavior, leaving critical gaps for correct invocation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 33% (only agency has a description). The description does not explain query format, case sensitivity, or how limit/query interact with the search. It adds little beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches the California POST decertification database for decertified officers, with a specific verb 'search' and resource, distinguishing it from siblings like search_cjp by naming the source database.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching POST decertification records but lacks explicit guidance on when not to use or alternatives. No exclusionary context is provided.
search_cjp (Grade C)
Search the California Commission on Judicial Performance (CJP) public-discipline records — judges censured, admonished, or removed for misconduct. Source: cjp.ca.gov public decisions.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | Free-text query against any field | |
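A sketch of the `params` object, with a placeholder query and an illustrative limit:

```json
{
  "name": "search_cjp",
  "arguments": {
    "query": "<free-text query against any field>",
    "limit": 5
  }
}
```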
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose safety and behavioral traits. It only states that the source is public decisions, implying read-only, but lacks details on authentication, rate limits, or the potential to mutate data.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (two sentences) and front-loaded with the key purpose. However, it could be slightly more structured without adding bulk.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with no output schema, the description is incomplete. It should explain what fields are searched, how results are returned, and pagination behavior given the limit parameter.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers the 'query' parameter with a free-text description, but 'limit' has no description. The tool description adds no parameter-level detail beyond the schema's default values and constraints.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as searching CJP public-discipline records including judges censured, admonished, or removed for misconduct, and cites the source. This distinguishes it from general search tools, though sibling search_cjp_documents overlaps.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like search_cjp_documents or search_judges. No context is given for exclusions or prerequisites.
search_cjp_documents (Grade A)
Semantic search over the full text of CJP public-discipline decisions (250 PDFs ingested). Use this for topic questions ("racial bias", "drug-related misconduct", "ex parte communications") or when you need passages, not just summary records. Returns matching passages with citations. Distinct from search_cjp (which searches the summary-record JSON).
| Name | Required | Description | Default |
|---|---|---|---|
| year | No | Optional year filter (e.g. 2020) | |
| judge | No | Optional judge-name filter (substring match) | |
| limit | No | | |
| query | Yes | Topic or phrase to find in the decision text | |
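An illustrative `params` object reusing the example topic and year from the definitions above; the judge filter is optional and shown as a placeholder:

```json
{
  "name": "search_cjp_documents",
  "arguments": {
    "query": "racial bias",
    "year": 2020,
    "judge": "<judge-name substring>",
    "limit": 5
  }
}
```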
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool performs semantic search over 250 PDFs and returns passages with citations. It could mention the read-only nature, but that is not required. No contradictions.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four short sentences that front-load the main purpose, include usage guidance, and end with the distinction from search_cjp. No unnecessary words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (no output schema, no complex nested parameters), the description is complete: it states what the tool does, when to use it, and what it returns.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is high (75%), so the baseline is 3. The description does not add extra meaning beyond the schema, but it aligns well with the parameters.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it performs semantic search over the full text of CJP public-discipline decisions, using specific verbs and resources. It also explicitly distinguishes the tool from its sibling search_cjp.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit guidance on when to use the tool ('topic questions', 'need passages') and explicitly distinguishes it from the alternative search_cjp, which searches the summary-record JSON.
search_das (Grade A)
Search the public database of California District Attorneys with documented misconduct or controversy. Returns name, county, in-office date, misconduct type, description, source URL.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | Free-text query — matched against any field | |
| county | No | Filter to a specific California county | |
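A sketch of the `params` object with placeholder values and an illustrative limit:

```json
{
  "name": "search_das",
  "arguments": {
    "query": "<free-text query>",
    "county": "<California county>",
    "limit": 5
  }
}
```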
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses return fields but omits behavioral traits like pagination, default ordering, fuzzy vs exact matching, or whether the operation is read-only. The description is partially transparent but incomplete.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose, and contains no unnecessary words. Every sentence adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers basic purpose and return fields, but given the lack of output schema and annotations, it should address more behavioral specifics (e.g., pagination, result ordering) and usage guidance relative to siblings. It is adequate but not thorough.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not add any information about parameters beyond the input schema. Schema coverage is 67% (query and county have descriptions, limit does not), but the description does not clarify the missing limit parameter or provide additional context like accepted formats. The description adds no value over the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches a public database of California District Attorneys with misconduct or controversy, and specifies the return fields (name, county, etc.). This distinguishes it from sibling tools like get_da (single DA retrieval) or search_judges.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for finding DAs with misconduct but does not explicitly state when to use vs alternatives (e.g., get_da for specific details) or when not to use. No exclusions or alternative tool references are provided.
search_defenders (Grade B)
Search the public database of California public defenders with documented case outcomes, failures, or systemic-context entries. Includes Bar number, county, office, score.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | | |
| county | No | | |
| status | No | e.g. "Active", "Suspended" | |
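An illustrative `params` object; the status value reuses the example from the schema, and the rest are placeholders:

```json
{
  "name": "search_defenders",
  "arguments": {
    "query": "<free-text query>",
    "county": "<California county>",
    "status": "Active",
    "limit": 5
  }
}
```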
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the burden. It mentions including 'Bar number, county, office, score' and 'case outcomes, failures, or systemic-context entries,' giving some content context. However, it does not explicitly state that the operation is read-only or safe, nor does it discuss pagination, rate limits, or other behavioral traits.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is only two sentences, with no wasted words, and front-loads the purpose. It could be slightly improved by structuring the included fields more clearly, but it is efficient.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters, no output schema, and no annotations, the description is too brief. It lacks explanation of return format, pagination, or how to use the 'query' parameter. The tool's complexity requires more detail for an agent to invoke it correctly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is only 25% (only 'status' has a description). The description adds 'Includes Bar number, county, office, score', which likely describes output fields rather than input parameters. 'county' matches a parameter but is not explained as a filter; 'query' and 'limit' are not explained. The description fails to clarify parameter roles or how they influence results.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches the public database of California public defenders, specifying what the search includes (case outcomes, failures, systemic-context entries, Bar number, county, office, score). This distinguishes it from siblings like search_das, search_judges, and search_officers, and complements get_defender which retrieves a single defender.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for searching multiple defenders, but it does not explicitly state when to use it versus alternatives like get_defender or other search tools. There is no guidance on required conditions or exclusions.
search_judges (Grade B)
Search current California judges by name and/or court type (e.g., "superior", "appeal", "supreme"). Returns name + court type pairs.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | | |
| court_type | No | "superior", "appeal", "supreme", etc. | |
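A sketch of the `params` object; the court type reuses an example from the description, and the query is a placeholder:

```json
{
  "name": "search_judges",
  "arguments": {
    "query": "<judge name>",
    "court_type": "superior",
    "limit": 5
  }
}
```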
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. The description discloses the return format but does not mention safety (read-only), authentication needs, rate limits, or side effects. With no annotations, the description carries the full burden and falls short.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that are front-loaded with the tool's purpose. Every word adds value with no redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters and no output schema, the description explains the return format and the purpose of two parameters. However, the 'limit' parameter is undocumented, and no example or error behavior is provided. Adequate but not comprehensive.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds context for the 'query' and 'court_type' parameters by relating them to name search and court types. However, 'limit' is not explained, and schema coverage is only 33%. The description partially compensates but could be more specific.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for California judges by name and/or court type. It differentiates from sibling tools like 'search_das' and 'search_defenders' by explicitly mentioning judges, but does not explicitly distinguish from 'top_judges' or 'search_cjp'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'top_judges' or 'search_cjp'. There is no mention of prerequisites or when not to use it.
search_officers (Grade A)
Search the public database of California law enforcement officers with documented misconduct or decertification. Includes name, agency, basis.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | | |
| agency | No | Police department or agency name | |
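An illustrative `params` object with placeholder values:

```json
{
  "name": "search_officers",
  "arguments": {
    "query": "<officer name or keyword>",
    "agency": "<police department or agency name>",
    "limit": 5
  }
}
```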
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavioral traits. It only states the tool searches a public database and includes result fields, but does not mention authentication, rate limits, pagination, search behavior (fuzzy/exact), or ordering. This is insufficient for an agent to anticipate side effects or constraints.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two short sentences that front-load the action and key result fields. Every phrase adds value without redundant wording.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and multiple sibling tools, the description is somewhat complete but lacks details on return format, pagination, and explicit usage boundaries. It hints at output content (name, agency, basis) but does not fully equip an agent to use the tool correctly in all scenarios.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 33% (only agency has a description). The tool description adds value by implying that 'query' searches officer name, 'agency' filters by department, and 'limit' controls result count. However, it does not explain the 'basis' field (not a parameter) and leaves some parameters underspecified relative to the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches a specific public database of California law enforcement officers with misconduct or decertification, and lists result fields (name, agency, basis). This distinguishes it from sibling tools like get_officer or search_capost.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for finding officers by misconduct status, but does not explicitly state when to use this tool versus alternatives like get_officer or other search tools. The context is clear enough for an agent to infer, but lacks explicit guidance.
top_das (Grade A)
Return the top-N California DAs by editorial severity score (descending). Use for "worst DA" / "most-disciplined DA" / ranking questions. Optional county filter. Only manually-scored records are returned.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| county | No | Optional county filter, e.g. "San Mateo" | |
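A sketch of the `params` object, reusing the county example from the schema; the limit is illustrative:

```json
{
  "name": "top_das",
  "arguments": {
    "limit": 5,
    "county": "San Mateo"
  }
}
```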
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description provides valuable behavioral context: it returns only manually-scored records, orders descending by severity, and supports an optional county filter. This goes beyond the schema to clarify constraints and operation.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four short sentences, front-loaded with the main action, and contains no extraneous information. Every word contributes to understanding, making it efficient and easy to parse.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no output schema), the description covers purpose, usage hint, and a key behavioral constraint (manually-scored only). It does not describe the output format, but for a ranking tool, this is a minor omission. Overall, it sufficiently completes the picture.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50% (county has a description, limit does not). The description adds 'Optional county filter' and implies limit via 'top-N', but does not elaborate on limit beyond what the schema provides (default, range). It partially compensates but does not add significant extra meaning.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns the top-N California DAs by editorial severity score in descending order, specifying the resource (California DAs), action (return), and ranking criterion. It distinguishes the tool from siblings like top_judges and top_officers by explicitly mentioning 'DAs' and 'ranking questions'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says to use the tool for "worst DA" / "most-disciplined DA" / ranking questions and mentions the optional county filter. However, it does not state when not to use the tool or mention alternatives like search_das or get_da, leaving some ambiguity for the agent.
top_judges (Grade A)
Return the top-N California judges by CJP discipline severity (removal > censure > admonishment, descending). Use for "worst judge" / "most-disciplined judge" / ranking questions.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | 10 |
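A minimal `params` sketch; per the schema details cited in the review below, limit is an integer defaulting to 10 with a range of 1 to 25:

```json
{
  "name": "top_judges",
  "arguments": { "limit": 10 }
}
```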
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It discloses the ranking metric, order, and that it returns top-N. No hidden side effects or destructive behavior, but it does not discuss rate limits or pagination. The simplicity of the tool makes this sufficient.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: the first front-loads the action ('Return the top-N California judges') and uses a parenthetical to clarify ordering; the second gives usage examples. No wasted words; every part serves a purpose.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but the description defines what is returned (a list of judges ranked by discipline severity). It does not detail output fields, but for a simple ranking tool, this is adequate. Sibling context (e.g., top_das) suggests consistent output patterns.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% (the 'limit' parameter has no description). However, the parameter is simple (integer, default 10, min 1, max 25) and its purpose is evident from context. The description adds no explanation beyond what the schema provides.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Return'), the resource ('top-N California judges'), and the specific ordering criterion ('CJP discipline severity: removal > censure > admonishment, descending'). It includes example use cases ('worst judge', ranking), distinguishing it from siblings like top_das or search_judges.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly ties usage to ranking and 'worst judge' queries, providing clear context. It lacks explicit when-not-to-use guidance or alternatives, but sibling differentiation is implied via distinct resource types (judges vs. DAs vs. officers).
top_officers (Grade A)
Return the top-N California law enforcement officers by editorial severity score (descending). Use for "worst officer" / "most-disciplined officer" / ranking questions. Optional agency filter.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| agency | No | Optional agency filter, e.g. "Riverside County Sheriff" | |
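An illustrative `params` object, reusing the agency example from the schema; the limit is illustrative:

```json
{
  "name": "top_officers",
  "arguments": {
    "limit": 5,
    "agency": "Riverside County Sheriff"
  }
}
```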
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description lacks disclosure of behavioral traits like side effects, rate limits, or safety. It implies a read operation but does not confirm, leaving the agent to infer. Adequate but not explicit.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences that front-load the core purpose and usage, with no superfluous information. Every sentence contributes meaning.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is complete for a simple listing tool, but lacks explanation of return values (no output schema). However, the tool is straightforward and the missing output format is a minor gap.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50%; the description adds meaning by stating 'Optional agency filter' for the agency parameter, while limit is documented in the schema. This adds value beyond the schema, justifying a score above the baseline.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns the top-N California law enforcement officers by editorial severity score in descending order, and provides example queries like 'worst officer' and 'most-disciplined officer', effectively differentiating it from sibling tools like 'top_das' and 'top_judges'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states to use the tool for ranking questions such as 'worst officer' or 'most-disciplined officer' and mentions the optional agency filter, providing clear context. It does not include exclusionary guidance but is adequate for the tool's simplicity.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.