Glama

lokalise_list_translations

Inspect translated text across languages to audit quality, find untranslated content, or check language progress. Filter by review status, QA issues, or language ID.

Instructions

Low-level inspection of actual translated text across languages. Required: projectId. Optional: limit (100), cursor, filterLangId, filterIsReviewed, filterQaIssues. Use for quality audits, finding untranslated content, or checking specific language progress. Returns: Translation entries with content, status, QA flags. Note: Different from keys - this shows actual text.

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| projectId | Yes | Project ID to list translations for | — |
| limit | No | Number of translations to return (1–5000) | 100 |
| cursor | No | Cursor for pagination (from previous response) | — |
| filterLangId | No | Filter by language ID (numeric, not ISO code) | — |
| filterIsReviewed | No | Filter by review status (0 = not reviewed, 1 = reviewed) | — |
| filterUnverified | No | Filter by verification status (0 = verified, 1 = unverified) | — |
| filterUntranslated | No | Filter by translation status (1 = show only untranslated) | — |
| filterQaIssues | No | Filter by QA issues (comma-separated, e.g. spelling_and_grammar, inconsistent_placeholders) | — |
| filterActiveTaskId | No | Filter by active task ID | — |
| disableReferences | No | Disable reference information in the response | — |
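For illustration, the parameters above could be assembled into a tool-call arguments object like the following. This is a sketch, not the tool's documented usage: the project ID and language ID values are hypothetical, and only `projectId` is actually required.

```python
# Hypothetical arguments for a quality-audit call to lokalise_list_translations.
# The filters narrow results to unreviewed entries with specific QA issues
# in a single target language.
arguments = {
    "projectId": "1234567890abcdef.12345678",  # hypothetical project ID
    "limit": 100,                              # schema default
    "filterLangId": 640,                       # numeric language ID, not an ISO code
    "filterIsReviewed": 0,                     # 0 = not reviewed
    "filterQaIssues": "spelling_and_grammar,inconsistent_placeholders",
}

print(sorted(arguments))
```

Combining `filterIsReviewed` with `filterQaIssues` this way matches the "quality audits" scenario the description names, though the schema does not state how multiple filters interact.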
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the tool is for 'low-level inspection' and describes the return format ('Translation entries with content, status, QA flags'), which adds useful behavioral context. However, it doesn't cover important aspects like pagination behavior (implied by 'cursor'), rate limits, authentication needs, or error handling, leaving gaps for a tool with 10 parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
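The pagination behavior the review flags as under-documented presumably follows the common cursor pattern: pass the cursor from each response into the next request until none is returned. A minimal sketch, with a stubbed fetch function standing in for the real MCP client, and an assumed response shape (a `translations` list plus an optional `nextCursor`):

```python
# Sketch of cursor pagination against a list tool like lokalise_list_translations.
# fetch_page is a stand-in for a real tool call; the response shape is an assumption.
def fetch_page(project_id, cursor=None, page_size=2):
    data = ["t1", "t2", "t3", "t4", "t5"]  # fake translation entries
    start = int(cursor) if cursor else 0
    page = data[start:start + page_size]
    next_cursor = str(start + page_size) if start + page_size < len(data) else None
    return {"translations": page, "nextCursor": next_cursor}

def list_all_translations(project_id):
    # Accumulate pages until the server stops returning a cursor.
    results, cursor = [], None
    while True:
        resp = fetch_page(project_id, cursor)
        results.extend(resp["translations"])
        cursor = resp.get("nextCursor")
        if cursor is None:
            return results

all_entries = list_all_translations("demo-project")
print(len(all_entries))  # → 5
```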

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose. Each sentence adds value: the first defines the tool, the second lists key parameters, the third provides usage scenarios, and the fourth clarifies returns and differentiation. While efficient, it could be slightly more concise by integrating parameter details more seamlessly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (10 parameters, no annotations, no output schema), the description is moderately complete. It covers purpose, usage, and basic returns, but lacks details on output structure, pagination mechanics, error cases, or performance characteristics. For a tool with many filtering options and no output schema, more contextual information would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 10 parameters thoroughly. The description lists some optional parameters ('limit (100), cursor, filterLangId, filterIsReviewed, filterQaIssues') but adds little semantic context beyond the schema descriptions; it establishes a baseline without deepening parameter interpretation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Low-level inspection of actual translated text across languages') and resource ('translations'), distinguishing it from sibling tools like 'lokalise_list_keys' by explicitly noting 'Different from keys - this shows actual text.' This provides precise differentiation and avoids confusion with similar listing tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Use for quality audits, finding untranslated content, or checking specific language progress') and distinguishes it from alternatives by noting it's 'Different from keys.' This provides clear context for selection among sibling tools, especially given the many list_* tools available.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

