
PDF.co MCP Server

Official
by pdfdotco

get_job_check

Check the status and results of a PDF processing job to monitor progress, verify completion, or identify failures in PDF.co operations.

Instructions

Check the status and results of a job
Status can be:
- working: background job is currently in progress (also returned when the job does not exist).
- success: background job finished successfully.
- failed: background job failed (see message for details).
- aborted: background job was aborted.
- unknown: unknown background job ID. Returned only when force is set to true in the request.
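The status values above lend themselves to a simple polling loop: check the job, inspect the status, and stop on any terminal value. The sketch below is illustrative only; `check_fn` stands in for a real get_job_check call and is an assumption, as is the simulated status sequence.

```python
# Illustrative polling loop over the documented status values.
# check_fn is a hypothetical stand-in for a real get_job_check call:
# it takes a job_id and returns one of the status strings listed above.
import time

TERMINAL_STATUSES = {"success", "failed", "aborted", "unknown"}

def poll_until_done(check_fn, job_id, interval=1.0, max_attempts=60):
    """Poll check_fn until a terminal status is returned or attempts run out."""
    for _ in range(max_attempts):
        status = check_fn(job_id)
        if status in TERMINAL_STATUSES:
            return status
        # "working" also covers jobs that do not exist yet, so keep polling.
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} still working after {max_attempts} checks")

# Example with a simulated job that finishes on the third check:
statuses = iter(["working", "working", "success"])
result = poll_until_done(lambda _jid: next(statuses), "job-1", interval=0)
```

Note that "working" is ambiguous between a running job and a nonexistent one, which is why a maximum attempt count (rather than polling forever) is a sensible safeguard.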

Input Schema

Name     Required  Description                                                                Default
job_id   Yes       The ID of the job to get the status of                                     —
api_key  No        PDF.co API key. If not provided, uses the X_API_KEY environment variable.  ""
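As the schema shows, only job_id is required. A minimal argument payload might look like the following sketch; both ID values are hypothetical placeholders, not real PDF.co credentials.

```python
# Hypothetical argument payloads for get_job_check.
# The ID and key values are placeholders, not real PDF.co values.
minimal_args = {"job_id": "example-job-id"}

# api_key is optional; when omitted, the server falls back to the
# X_API_KEY environment variable.
explicit_args = {"job_id": "example-job-id", "api_key": "example-api-key"}
```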

Implementation Reference

  • The main handler for the 'get_job_check' MCP tool. It is registered via the @mcp.tool() decorator, declares its input schema through Pydantic Field annotations, carries the user-facing docstring, and delegates execution to the internal _get_job_status helper.
    @mcp.tool()
    async def get_job_check(
        job_id: str = Field(description="The ID of the job to get the status of"),
        api_key: str = Field(
            description="PDF.co API key. If not provided, will use X_API_KEY environment variable. (Optional)",
            default="",
        ),
    ) -> BaseResponse:
        """
        Check the status and results of a job
        Status can be:
        - working: background job is currently in progress (also returned when the job does not exist).
        - success: background job finished successfully.
        - failed: background job failed (see message for details).
        - aborted: background job was aborted.
        - unknown: unknown background job ID. Returned only when force is set to true in the request.
        """
        return await _get_job_status(job_id, api_key)
  • Pydantic BaseModel defining the output schema for the get_job_check tool response.
    from typing import Any

    from pydantic import BaseModel

    class BaseResponse(BaseModel):
        status: str
        content: Any
        credits_used: int | None = None
        credits_remaining: int | None = None
        tips: str | None = None
  • Internal helper function containing the core logic: makes HTTP POST to PDF.co /v1/job/check API and constructs the BaseResponse.
    async def _get_job_status(job_id: str, api_key: str = "") -> BaseResponse:
        """
        Internal helper function to check job status without MCP tool decoration
        """
        try:
            async with PDFCoClient(api_key=api_key) as client:
                response = await client.post(
                    "/v1/job/check",
                    json={
                        "jobId": job_id,
                    },
                )
                json_data = response.json()
                return BaseResponse(
                    status=json_data["status"],
                    content=json_data,
                    credits_used=json_data.get("credits"),
                    credits_remaining=json_data.get("remainingCredits"),
                    tips="You can download the result if status is success",
                )
        except Exception as e:
            return BaseResponse(
                status="error",
                content=str(e),
            )
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses behavioral traits such as status outcomes and their conditions (e.g., the 'unknown' status appearing only with 'force' set to true), but lacks details on permissions, rate limits, error handling, and response format. For a job-checking tool with no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the main purpose, followed by a bulleted list of statuses. Each sentence earns its place by clarifying tool behavior. It could be slightly more concise by integrating the status list into a single sentence, but overall it's efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description provides basic context on status outcomes and conditions. However, it lacks details on return values (e.g., what 'results' include), error scenarios, or integration with sibling tools. For a job-checking tool, this is adequate but has clear gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('job_id' and 'api_key'). The description adds context by mentioning 'force' in relation to 'unknown' status, which hints at parameter behavior but doesn't directly explain parameters beyond the schema. Baseline 3 is appropriate as the schema does most of the work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Check the status and results of a job' with a specific verb ('Check') and resource ('job'). It distinguishes from siblings like 'wait_job_completion' by focusing on status checking rather than waiting, though it doesn't explicitly name alternatives. The purpose is specific but could be more differentiated from similar tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by detailing status values (e.g., 'working', 'success'), suggesting it's for monitoring job progress. However, it doesn't explicitly state when to use this tool versus alternatives like 'wait_job_completion' or provide context on prerequisites (e.g., after job creation). Guidelines are implied but not explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
