Pega MCP Server
Server Quality Checklist
- This repository includes a README.md file.
- Add a LICENSE file by following GitHub's guide. MCP servers without a LICENSE cannot be installed.
- Latest release: v0.1.0
- No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value. Tip: use the "Try in Browser" feature on the server page to seed initial usage.
- Add a glama.json file to provide metadata about your server.
- This server provides 6 tools.
- No known security issues or vulnerabilities reported.
- Are you the author? Add related servers to improve discoverability.
Tool Scores
Tool 1
- Behavior 3/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It adds valuable return structure documentation (success and failure JSON formats) compensating for the missing output schema. However, it omits critical safety information (read-only vs destructive), pagination behavior, and case structure details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 3/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Contains filler phrase 'Use this tool to' that wastes space. Front-loading is reasonable but the mixing of input specification with return format documentation creates slight structural awkwardness. No redundant sentences beyond the optional input mention.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Reasonably complete for a simple listing tool with one optional parameter. Return format description compensates for lack of output schema. However, it is missing safety characteristics (read-only status) and sibling differentiation, given the presence of 'get_case'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (limit parameter fully documented). Description merely repeats 'Optional input: limit' without adding semantics about default behavior when omitted, or valid use cases for the 1-500 range.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5: Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool 'list cases' with specific verb and resource. However, it fails to distinguish from sibling tool 'get_case' (singular), which likely retrieves a specific case by ID versus listing multiple cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus the singular 'get_case' or other sibling tools. No mention of prerequisites, filtering capabilities, or search patterns.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Tool 2
- Behavior 3/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It documents the response format (success/error structure) which adds value, but fails to disclose mutation side effects, optimistic locking failure behavior (despite eTag parameter), or idempotency characteristics of the submit operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear ordering: purpose statement, input categorization, return format. The optional input list is somewhat lengthy but efficiently presented. No redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers the 9-parameter mutation operation with return format documentation, but gaps remain for a complex tool: no sibling relationship context, no side effect disclosure, and no output schema reference to compensate for the complex nested object parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. The description categorizes inputs as required/optional but adds no semantic depth beyond the schema (e.g., doesn't explain that eTag is for concurrency control or how content structure relates to specific actions).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5: Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool executes an action on an existing case, distinguishing it from sibling 'get' operations (get_case, get_cases, get_case_actions). However, it misses the opportunity to clarify its relationship with 'get_case_actions' (which lists available actions, whereas this tool executes them).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Lists required and optional inputs but provides no guidance on when to use this tool versus alternatives (e.g., when to execute actions vs. when to simply get case data). No mention of prerequisites like knowing available actions beforehand.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Tool 3
- Behavior 3/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are absent, so the description carries the full burden. It discloses the return format structure ({ ok: true/false, data/error }), which helps, but omits the safety profile (read-only vs destructive), rate limits, and auth requirements, even though mutation-sounding siblings exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently structured with purpose first, then inputs, then outputs. Each sentence delivers distinct information (functionality, input cardinality, return contracts). Slightly verbose in documenting JSON return structure inline.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately compensates for missing output schema by documenting both success and failure response formats. With 5 parameters and good schema coverage, the description provides sufficient context to invoke the tool, though workflow relationships could be clearer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description categorizes parameters as 'Required' and 'Optional' which mirrors the schema but adds no additional semantic context (e.g., example values for originChannel, relationship between actionId and viewType).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5: Does the description clearly state what the tool does and how it differs from similar tools?
States specific verbs ('discover', 'fetch') and resource ('actions for a case'), clearly identifying it queries case actions. Lacks explicit differentiation from sibling 'submit_case_action' (discovery vs execution).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage by delineating required (caseId) vs optional inputs, but provides no explicit guidance on when to use it versus 'get_case', or on workflow sequencing before 'submit_case_action'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Tool 4
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the burden of disclosing behavior. It effectively documents the exact JSON response structure for both success and failure cases, compensating for the missing output schema. However, it doesn't mention idempotency, caching, or rate limiting characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5: Is the description appropriately sized, front-loaded, and free of redundancy?
The description is structured logically with purpose first, then inputs, then outputs. It is appropriately concise with no redundant sentences, though the input listing is somewhat mechanical and could be integrated more fluidly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description adequately completes the documentation by manually specifying the return payload structure and error format. For a simple three-parameter retrieval tool, this provides sufficient context for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, documenting all three parameters including originChannel's channel hint semantics. The description merely lists the parameter names and their required/optional status without adding syntax details, usage examples, or semantic context beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'metadata for a specific case view' using specific verbs and resources. While it doesn't explicitly name sibling tools to differentiate from 'get_case' or 'get_case_actions', the target resource (view metadata) is distinct enough to avoid confusion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists required and optional inputs but provides no guidance on when to use this tool versus siblings like 'pega.get_case' or 'pega.get_case_actions'. There are no prerequisites, conditions, or exclusion criteria mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Tool 5
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, description carries full burden and successfully discloses success/failure response structures and the dependency constraint between pageName and viewType. Lacks auth or rate limit details but covers the essential behavioral contract.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Dense, mechanical structure with minimal waste. Each sentence delivers distinct value: purpose, parameter requirements, constraints, and return formats. 'Use this tool to' is slight filler but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 4/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a read operation with no output schema: documents return structure, error format, and input constraints. Sibling differentiation would strengthen it, but technically sufficient for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds value by documenting the cross-parameter constraint (pageName requires viewType) and explicitly labeling optionality, which aids agent reasoning beyond raw schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5: Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('fetch') and resource ('case'), with scope ('one case') that implicitly distinguishes from sibling 'get_cases'. However, lacks explicit differentiation from 'get_case_views' or 'get_case_actions'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Documents cross-parameter constraint ('pageName requires viewType') and required vs optional inputs, which are critical for correct invocation. However, misses explicit guidance on when to use this singular fetch vs 'get_cases' (list) or vs view-specific siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Tool 6
- Behavior 4/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses return format structure for both success and failure cases ({ ok: true/false, ... }), compensating for lack of output schema. Explains the dual-mode upload flow behavior, though it could explicitly state that this modifies the case.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
Efficient structure: purpose statement, required input highlight, two mode explanations, then return formats. Every sentence conveys essential information; the JSON return examples are compact and informative rather than verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 5/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a 6-parameter tool with dual invocation patterns. Covers return values (absent output schema), required fields, optional variations, and the channel hint parameter. Addresses the complexity of base64 vs direct attachment payloads adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3). Description adds significant value by grouping parameters into 'Mode 1' vs 'Mode 2' patterns, explaining the semantic relationship that attachments array is mutually exclusive with fileName/fileContentBase64 parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 5/5: Does the description clearly state what the tool does and how it differs from similar tools?
Excellent clarity: specific verb 'attach' with resources 'files or URLs' and target 'existing case'. Clearly distinguishes from sibling 'get_*' tools (read operations) and 'submit_case_action' by focusing specifically on document attachment.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear guidance on two mutually exclusive input modes (attachments array vs fileName/fileContentBase64), which is crucial for correct invocation. Lacks explicit contrast with sibling tools, though the attachment-specific purpose makes this relatively clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
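Several of the descriptions reviewed above quote a common response envelope ({ ok: true, data } on success, { ok: false, error } on failure), and the attachment tool (Tool 6) describes two mutually exclusive input modes (an attachments array versus fileName/fileContentBase64). The following is a minimal TypeScript sketch of what those contracts appear to describe; the exact field types, the attachment item shape, and the example values are assumptions, since this page only summarizes the tool descriptions.

// Response envelope referenced in several tool descriptions above.
// Field types beyond ok/data/error are assumptions; the page only quotes the shape.
type ToolResult<T> =
  | { ok: true; data: T }
  | { ok: false; error: string };

// The attachment tool's two input modes, as described in its review.
// Parameter names (caseId, attachments, fileName, fileContentBase64) come from
// the review text; everything else here is assumed for illustration.
interface AttachByArrayInput {
  caseId: string;
  attachments: Array<Record<string, unknown>>; // item shape is not documented on this page
}

interface AttachSingleFileInput {
  caseId: string;
  fileName: string;
  fileContentBase64: string;
}

// Illustrative inputs (hypothetical case ID and file content):
const modeOne: AttachByArrayInput = {
  caseId: "CASE-1234",
  attachments: [{}], // fill in per the tool's input schema
};

const modeTwo: AttachSingleFileInput = {
  caseId: "CASE-1234",
  fileName: "report.pdf",
  fileContentBase64: "JVBERi0xLjcK...", // truncated placeholder content
};

// Narrowing a result in the documented envelope:
function describeResult(result: ToolResult<unknown>): string {
  return result.ok ? "success" : `error: ${result.error}`;
}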
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
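As a worked illustration of the weighting described above, here is a small TypeScript sketch that restates the published formula; it is not Glama's actual implementation, and the interface and function names are ours.

// Per-tool dimensions, each scored 1-5 (weights from the description above).
interface ToolScores {
  purpose: number;         // 25%
  usageGuidelines: number; // 20%
  behavior: number;        // 20%
  parameters: number;      // 15%
  conciseness: number;     // 10%
  completeness: number;    // 10%
}

// Tool Definition Quality Score for a single tool.
function tdqs(t: ToolScores): number {
  return (
    0.25 * t.purpose +
    0.20 * t.usageGuidelines +
    0.20 * t.behavior +
    0.15 * t.parameters +
    0.10 * t.conciseness +
    0.10 * t.completeness
  );
}

// Overall score: 70% tool definition quality + 30% server coherence,
// where definition quality is 60% mean TDQS + 40% minimum TDQS.
function overallScore(tools: ToolScores[], coherence: number): number {
  const scores = tools.map(tdqs);
  const mean = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  const min = Math.min(...scores);
  const definitionQuality = 0.6 * mean + 0.4 * min;
  return 0.7 * definitionQuality + 0.3 * coherence;
}

// Tier thresholds as published above.
function tier(score: number): string {
  if (score >= 3.5) return "A";
  if (score >= 3.0) return "B";
  if (score >= 2.0) return "C";
  if (score >= 1.0) return "D";
  return "F";
}

Because 40% of the definition quality component comes from the minimum TDQS, improving the single worst-described tool is usually the fastest way to raise the tier.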
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/amirmcs/Pega-MCP'
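The same endpoint can be queried from code; below is a minimal TypeScript sketch using fetch. The response payload shape is not documented on this page, so it is treated as untyped JSON here.

// Fetch this server's entry from the MCP directory API (endpoint shown above).
async function fetchServerInfo(): Promise<unknown> {
  const response = await fetch(
    "https://glama.ai/api/mcp/v1/servers/amirmcs/Pega-MCP",
  );
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}

fetchServerInfo().then((info) => console.log(info));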
If you have feedback or need assistance with the MCP directory API, please join our Discord server.