
CIB Seven MCP Server

by krixerx

Server Quality Checklist

Profile completion: 58%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 5/5

    Each tool targets a distinct aspect of process execution with clear boundaries: instances (list/get), execution trace (activity_history), data (variables), background work (job_details), blueprint (definition_xml), and errors (incidents). No ambiguous overlap exists between these concepts.

    Naming Consistency: 4/5

    Follows consistent snake_case verb_noun pattern (get_*, list_*), but applies the 'process_' prefix inconsistently (e.g., get_process_instance vs get_activity_history). Despite this, the naming remains readable and predictable.

    Tool Count: 5/5

    Seven tools provide a focused, well-scoped surface for process observability without bloat. Each tool earns its place by covering a specific inspection need, fitting appropriately within the typical 3-15 tool range for this domain.

    Completeness: 3/5

    Covers core read-only inspection workflows well, but has notable gaps for a general CIB Seven integration: missing list_process_definitions (required to browse definitions without an instance), missing task-specific operations, and lacks any write/modify capabilities (create instances, update variables, resolve incidents).

  • Average 4.1/5 across 7 of 7 tools scored. Lowest: 3.5/5.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • Add a LICENSE file by following GitHub's guide.

    MCP servers without a LICENSE cannot be installed.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 7 tools.
  • No known security issues or vulnerabilities reported.

    Report a security issue

  • Are you the author?

  • Add related servers to improve discoverability.

Tool Scores

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so the description carries the full disclosure burden. It adds valuable security context about redaction ('[REDACTED]' based on patterns), but it neither confirms that the operation is read-only, states auth requirements, nor describes the return structure.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Well-structured with clear paragraph separation and bullet points for common patterns. No filler text; redaction warning and usage patterns earn their place. Appropriately sized for the domain complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Complete for a single-parameter read tool. Compensates for missing output schema by explaining what variables represent and their business relevance. Redaction disclosure addresses a key operational concern.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage ('The process instance ID'), meeting baseline expectations. Description mentions 'process instance' but doesn't add parameter-specific semantics like ID format or constraints beyond the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb+resource ('Get all variables for a process instance') and distinguishes scope by explaining what variables contain (form inputs, API responses) versus instance metadata or activities. Lacks explicit differentiation from sibling tools like get_process_instance.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides implied usage guidance through 'Common variable patterns' section (error flags, retry counters, input data), hinting at when to use the tool. However, lacks explicit when-to-use/when-not-to-use guidance or named alternatives.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations and no output schema, the description carries full disclosure burden. It excellently documents key output fields (retries, exceptionMessage, dueDate, suspended) and their semantics (e.g., '0 means the engine gave up'). Explains job states and error conditions without contradicting the read-only nature of the operation.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two well-structured blocks: purpose definition followed by key field documentation. Every sentence earns its place. Front-loaded with the primary action, followed by domain context, then specific field semantics. No redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Compensates effectively for missing output schema by documenting four critical output fields with their meanings and relationships. Explains domain concepts (jobs vs incidents) necessary for interpretation. Given 3 simple parameters and no annotations, the description provides sufficient context for successful invocation and result interpretation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% (all 3 params documented). Description implies 'processInstanceId' usage via 'for a process instance', but doesn't add syntax details, format constraints, or pagination guidance beyond the schema. Baseline 3 appropriate given schema completeness.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear specific verb ('Get') and resource ('job execution details'). Defines what jobs are ('units of work... service tasks, timers, message events') distinguishing them from process instances or activities. However, it doesn't explicitly differentiate from sibling 'list_incidents' despite mentioning the incident relationship.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explains job lifecycle concepts (retries=0 creates incident) which implies usage context, but lacks explicit guidance on when to use this vs 'get_activity_history' or 'list_incidents'. No explicit prerequisites or exclusions stated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It excellently documents output semantics: explaining that missing endTime indicates running activities, that durationInMillis is null for active tasks, and detailing the canceled flag. It could improve by explicitly stating this is read-only/safe or mentioning any rate limits.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Information is perfectly front-loaded: purpose in sentence one, value proposition in sentence two, critical interpretation guidance in sentence three. The 'Key fields' section efficiently documents output structure without verbosity. Every sentence and bullet earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Despite lacking an output schema, the description compensates comprehensively by enumerating and explaining key response fields (activityType, activityName, timestamps, duration, canceled). It contextualizes these within BPMN concepts (startEvent, serviceTask, etc.). Only minor gap is explicit confirmation of read-only safety.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has 100% coverage with clear description 'The process instance ID to trace'. The tool description references 'process instance' in the context but doesn't add parameter-specific guidance beyond the schema's documentation. Baseline 3 is appropriate given the schema does the heavy lifting for this single required parameter.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb ('Get') and resource ('execution trace for a process instance'), clearly distinguishing this from siblings like get_process_instance (current state) or get_process_variables (data). The scope 'every activity that ran, in order' precisely defines what this tool retrieves.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description explains what the tool returns (execution history, task order, duration), which implies when to use it (when you need audit trails). However, it lacks explicit guidance on when to prefer this over get_process_instance for current state or how it relates to the process lifecycle.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It effectively discloses behavioral traits by explaining the domain logic: failedJob means 'retries=0' and failedExternalTask means 'worker reported a failure.' It also clarifies these are 'open' incidents (state-based filtering). Could explicitly state this is read-only.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Well-structured with clear sections: purpose declaration, bulleted type definitions with explanations, and usage guidance. Every sentence earns its place—no filler. The progression from general definition to specific types to usage patterns is logical and efficient.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 4 parameters with 100% schema coverage and no output schema, the description provides adequate domain context. It explains the process engine incident model sufficiently and covers pagination parameters implicitly through usage guidance. As a list operation, it appropriately focuses on filtering logic rather than return values.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage, but the description adds crucial semantic context beyond the schema. It explains what 'failedJob' actually means (service task/timer threw exception, retries exhausted) and defines failedExternalTask, helping the agent understand the business logic behind the incidentType parameter.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool 'List[s] open incidents in the process engine' with specific verb and resource. It defines what incidents are ('things that went wrong during execution') and identifies key types. However, it does not explicitly distinguish from sibling tools like get_activity_history or list_process_instances.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit filtering guidance: 'Use without filters to see all open incidents. Filter by processInstanceId to see incidents for a specific process.' This clarifies how to scope queries. Lacks explicit alternatives (e.g., when to use get_job_details instead), but the context provided is clear.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden. It successfully adds critical behavioral context by specifying the tool uses the 'history API' and returns 'historic process instances,' implying read-only access to archived data. It also discloses the default pagination limit (25).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Text is efficiently structured with a clear opening sentence, a 'when to use' sentence, a return value statement, and a bulleted list for filter options. Every sentence earns its place; no redundancy or filler.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 7 parameters with 100% schema coverage and no output schema, the description adequately covers tool purpose, behavioral API type (history), return structure (array), and provides concrete examples for key filters. Missing only minor details like explicit mention that all parameters are optional or the relationship between the three boolean state flags.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    While the schema has 100% description coverage (baseline 3), the description adds helpful examples for processDefinitionKey and businessKey. However, it documents a 'state' enum parameter (ACTIVE, COMPLETED, SUSPENDED, EXTERNALLY_TERMINATED) that does not exist in the schema; instead, the schema implements these as three separate boolean flags (active, suspended, completed). This mismatch creates confusion about how to structure filter parameters.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with 'Search for process instances using filters' providing a specific verb and resource. It further distinguishes scope by noting it uses the 'history API to find both running and completed instances,' which differentiates it from siblings like get_process_instance (singular retrieval) and get_activity_history (activity level vs instance level).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states 'Use this when you need to find process instances by business key, definition key, or state,' providing clear context for when to select this tool. While it doesn't explicitly name the alternative (e.g., 'use get_process_instance when you have an ID'), the filtering focus implies this is for discovery rather than direct retrieval.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
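    The schema mismatch flagged under Parameters for this tool can be made concrete. The sketch below is hypothetical: the field names come from the review text above, not from the server's actual schema file.

    ```python
    # Hypothetical list_process_instances filter arguments, illustrating the
    # mismatch described in the review: the prose documents a single "state"
    # enum, but the schema defines three separate boolean flags.

    # Shape implied by the tool description (NOT accepted by the schema):
    described_args = {"processDefinitionKey": "invoice", "state": "COMPLETED"}

    # Shape the schema actually defines, per the review:
    schema_args = {"processDefinitionKey": "invoice", "completed": True}

    # An agent that follows the description alone sends the wrong shape.
    assert "state" not in schema_args
    ```

    Aligning the description with the boolean flags (or vice versa) would remove the ambiguity that cost this dimension points.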

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations, description carries full burden. It explains semantic meaning of key response states (suspended=manually paused, ended=completed/cancelled, businessKey=domain identifier) and cross-tool workflow (definitionId links to get_process_definition_xml). Could explicitly state read-only nature.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three logical segments (purpose/return, usage guidance, response documentation) with efficient bullet points for field definitions. No redundant text, though slightly longer than minimal necessary due to embedded response documentation.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Compensates for missing output schema by comprehensively documenting key response fields and their domain semantics (e.g., business key examples). Covers relationships to sibling tools and CIB Seven domain context.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with clear parameter description ('The UUID of the process instance'). Description mentions lookup 'by its ID' but adds no additional semantic details (format constraints, validation rules) beyond schema baseline.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description uses specific verb 'Look up' with resource 'CIB Seven process instance' and explicitly distinguishes from sibling 'list_process_instances' by stating it requires a specific ID vs business/definition keys.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Contains explicit when-to-use clause ('when you have a specific process instance ID') and explicit alternative routing ('If you only have a business key... use list_process_instances instead').

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Successfully discloses that 'Diagram layout elements are stripped for readability' (transformation behavior) and details what the XML includes (activities, gateways, sequence flows). Minor gap: could mention if this is expensive or cached, but 'Fetch' clearly indicates read-only nature.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Excellent structure: first sentence defines purpose, second links to prerequisite tool, third sets expectations about XML content, followed by targeted bullet points explaining analytical use cases. No wasted words; every sentence earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Despite no output schema, description comprehensively compensates by detailing XML contents (activities, gateways, conditions) and explaining analytical value (happy path, error boundaries, timers). For a single-parameter read operation, coverage is complete.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, providing full parameter documentation. Description reinforces the parameter provenance by referencing get_process_instance, matching the schema description. At 100% coverage, baseline 3 is appropriate as schema carries the semantic weight.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verb 'Fetch' with clear resource 'BPMN XML model for a process definition' and distinguishes from siblings like get_process_instance (which retrieves runtime instance data) by emphasizing this retrieves the static 'blueprint' model.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states prerequisite 'Use the definitionId from get_process_instance to fetch the model,' providing clear workflow guidance linking to sibling tool. Also explains when to use the result (to understand happy path, gateway conditions, error events).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

The card badge for the cib7-mcp MCP server shows capabilities, safety, and installation instructions. Copy the embed snippet to your README.md.

Score Badge

The score badge for the cib7-mcp MCP server shows its quality score. Copy the embed snippet to your README.md.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
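The weighting described above can be sketched in a few lines of Python. This is only an illustration of the stated formula; the names and structure are ours, not Glama's actual implementation.

```python
# Sketch of the scoring formula described above. Weights per the text:
# Purpose 25%, Usage Guidelines 20%, Behavioral Transparency 20%,
# Parameter Semantics 15%, Conciseness 10%, Contextual Completeness 10%.
TDQ_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(dims: dict) -> float:
    """Weighted 1-5 score for one tool across the six dimensions."""
    return sum(TDQ_WEIGHTS[k] * dims[k] for k in TDQ_WEIGHTS)

def overall_score(per_tool: list, coherence: float) -> float:
    """Combine tool definition quality (70%) with server coherence (30%).

    Definition quality = 60% mean TDQS + 40% minimum TDQS, so a single
    poorly described tool pulls the whole server down.
    """
    scores = [tool_tdqs(t) for t in per_tool]
    definition_quality = 0.6 * (sum(scores) / len(scores)) + 0.4 * min(scores)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to a letter tier: A >= 3.5, B >= 3.0, etc."""
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

For example, a server with one tool scoring 4 on every dimension and another scoring 3 on every dimension has a mean TDQS of 3.5 and a minimum of 3.0, giving a definition quality of 3.3; with a coherence of 4.0, the overall score is about 3.51, which lands in tier A. This shows why the 40% minimum-TDQS term matters: the weaker tool drags the blend well below the mean.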


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/krixerx/cib7-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.