## Server Configuration

Describes the environment variables used to configure the server (all are optional).
| Name | Required | Description | Default |
|---|---|---|---|
| PORT | No | HTTP server port (when TRANSPORT=http) | 3000 |
| TRANSPORT | No | Transport mode: stdio or http | stdio |
| XCOMET_DEBUG | No | Enable verbose debug logging (v0.3.1+) | false |
| XCOMET_MODEL | No | xCOMET model to use (e.g., Unbabel/XCOMET-XL, Unbabel/XCOMET-XXL, or Unbabel/wmt22-comet-da) | Unbabel/XCOMET-XL |
| XCOMET_PRELOAD | No | Pre-load model at startup (v0.3.1+). Enabling this makes all requests fast (~500ms), including the first one. | false |
| XCOMET_PYTHON_PATH | No | Python executable path. If not set, the server automatically detects a Python environment with unbabel-comet installed. | (auto-detected) |
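The defaults in the table above can be sketched as a small resolver. Note that `load_config` is a hypothetical helper written for illustration, not part of the server's actual code:

```python
import os

def load_config(env=os.environ):
    """Hypothetical sketch of how the server might resolve its
    configuration; variable names and defaults come from the table above."""
    return {
        "port": int(env.get("PORT", "3000")),
        "transport": env.get("TRANSPORT", "stdio"),            # "stdio" or "http"
        "debug": env.get("XCOMET_DEBUG", "false") == "true",
        "model": env.get("XCOMET_MODEL", "Unbabel/XCOMET-XL"),
        "preload": env.get("XCOMET_PRELOAD", "false") == "true",
        "python_path": env.get("XCOMET_PYTHON_PATH"),          # None -> auto-detect
    }
```

With an empty environment this yields the documented defaults; setting any variable overrides the corresponding entry.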
## Capabilities

Features and capabilities supported by this server.
| Capability | Details |
|---|---|
| tools | `{ "listChanged": true }` |
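The `tools` capability with `listChanged: true` indicates that the server can notify clients when its tool list changes. A minimal sketch of how this declaration appears inside an MCP `initialize` response (only the `capabilities` field is shown; the surrounding structure is illustrative):

```python
# Illustrative shape of the server's declared capabilities in an
# MCP initialize response.
initialize_result = {
    "capabilities": {
        "tools": {
            # The server may emit a tools/list_changed notification
            # when the set of available tools changes.
            "listChanged": True,
        }
    }
}
```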
## Tools

Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| xcomet_evaluate | Evaluate the quality of a translation using an xCOMET model. Analyzes a source text and its translation and returns (in JSON format) a `score` (quality score, 0-1), a list of detected `errors` (each with `text`, `start`, `end`, and a `severity` of `minor`, `major`, or `critical`), and a human-readable `summary`. |
| xcomet_detect_errors | Detect and categorize errors in a translation. Focuses on error detection, returning `total_errors`, `errors_by_severity` (counts of `minor`, `major`, and `critical` errors), and a list of `errors` (each with `text`, `start`, `end`, `severity`, and an optional `suggestion`). |
| xcomet_batch_evaluate | Evaluate multiple source-translation pairs in a batch, producing aggregate statistics alongside individual results. Returns `average_score`, `total_pairs`, per-pair `results` (each with `index`, `score`, `error_count`, and `has_critical_errors`), and a `summary`. |
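Assuming the return shapes documented above, a client might post-process an `xcomet_evaluate` result as follows. The sample payload is fabricated for illustration, not real model output:

```python
from collections import Counter

# Hypothetical xcomet_evaluate result, following the documented JSON
# shape (values are illustrative only).
result = {
    "score": 0.82,
    "errors": [
        {"text": "bank", "start": 10, "end": 14, "severity": "major"},
        {"text": "ein", "start": 22, "end": 25, "severity": "minor"},
    ],
    "summary": "Good translation with one major terminology error.",
}

# Derive the per-severity counts that xcomet_detect_errors reports.
by_severity = Counter(e["severity"] for e in result["errors"])
counts = {sev: by_severity.get(sev, 0) for sev in ("minor", "major", "critical")}

# Flag the pair the way xcomet_batch_evaluate summarizes it.
has_critical = counts["critical"] > 0
```

The same pattern extends to batch results: iterate over `results` and aggregate `error_count` and `has_critical_errors` per pair.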
## Prompts

Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| No prompts | |
## Resources

Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| No resources | |