
CData Sync MCP Server

Official

read_history

Access job execution history to analyze performance, troubleshoot failures, and audit data movements for monitoring job health and SLA compliance.

Instructions

Access job execution history to analyze performance, troubleshoot failures, and audit data movements. If not authenticated with CData Sync, you will be prompted for credentials. Each history record shows when a job ran, its status, duration, and records affected. Use 'list' to browse history with filters/sorting. Note: Count action not supported by API - use 'list' and count results client-side. Essential for monitoring job health and SLA compliance.
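The instructions above note that the API has no count action, so counting must happen client-side. As a hypothetical sketch (the argument names mirror the input schema below; `records` stands in for whatever the tool actually returns, and the sample values are invented for illustration):

```python
# Hypothetical read_history arguments; names mirror the tool's input schema.
arguments = {
    "action": "list",                  # the only supported action
    "orderby": "RunStartDate desc",    # most recent runs first
    "select": "JobName,RunStartDate,Status,Runtime,RecordsAffected",
    "top": 100,                        # cap results; also the basis for counting
}

# `records` stands in for the tool's response (illustrative sample data).
records = [
    {"JobName": "nightly_load", "Status": "Completed", "RecordsAffected": 1200},
    {"JobName": "nightly_load", "Status": "Failed", "RecordsAffected": 0},
    {"JobName": "hourly_sync", "Status": "Completed", "RecordsAffected": 45},
]

# No count action exists, so count the returned page client-side.
failed = sum(1 for r in records if r["Status"] == "Failed")
print(failed)  # 1 failed run in this sample
```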

Input Schema

All parameters are optional.

action (default: list): List history records. For counts, use 'list' with 'top' parameter and count results client-side.
filter: ⚠️ KNOWN ISSUE: OData filters currently fail due to API limitation. Use 'top' parameter and filter results client-side instead. Example: Use top=100 then filter by JobName manually.
orderby: Sort order (e.g., 'RunStartDate desc' for most recent first).
select: Properties to include (e.g., 'JobName,RunStartDate,Status,Runtime,RecordsAffected').
top: Maximum records to return (useful for recent history and counting).
skip: Records to skip for pagination.
workspaceId: Workspace ID to use for this operation. Overrides the default workspace. Use 'default' for the default workspace or a UUID for specific workspaces.
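The 'filter' entry above recommends fetching with 'top' and filtering client-side. A minimal sketch of that workaround, assuming the tool returns records as dicts with `JobName` and `RunStartDate` fields (the sample data is invented for illustration):

```python
from datetime import datetime

# Sample records standing in for a read_history response fetched with top=100.
records = [
    {"JobName": "orders_sync", "RunStartDate": "2024-05-02T01:00:00", "Status": "Completed"},
    {"JobName": "billing_sync", "RunStartDate": "2024-05-01T01:00:00", "Status": "Failed"},
    {"JobName": "orders_sync", "RunStartDate": "2024-05-03T01:00:00", "Status": "Failed"},
]

# Client-side replacement for the failing OData filter "JobName eq 'orders_sync'".
orders = [r for r in records if r["JobName"] == "orders_sync"]

# Client-side equivalent of orderby="RunStartDate desc".
orders.sort(key=lambda r: datetime.fromisoformat(r["RunStartDate"]), reverse=True)

print([r["RunStartDate"] for r in orders])  # most recent run first
```

The same pattern extends to pagination: page through with 'top' and 'skip', then apply the list comprehension to each page.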
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: authentication requirements (prompt for credentials if not authenticated), API limitations (count action not supported, workaround provided), and the nature of returned data (history records with specific fields like status, duration). It doesn't cover rate limits or error handling, but provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose followed by authentication notes, record details, usage instructions, and API limitations. Every sentence adds value, though it could be slightly more streamlined by combining some related points about filtering and counting.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, no annotations, no output schema), the description provides good contextual completeness. It covers purpose, authentication, data format, usage patterns, and API limitations. The main gap is the lack of output format details (though it mentions record fields), but it compensates with strong operational guidance for a read-only history tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description adds minimal parameter-specific information beyond the schema, mentioning only the 'list' action and client-side filtering for counts. It provides context about parameter usage but doesn't add significant semantic value beyond what's in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Access job execution history to analyze performance, troubleshoot failures, and audit data movements.' It specifies the verb ('access') and resource ('job execution history'), distinguishes it from siblings like 'read_jobs' or 'execute_job' by focusing on historical execution data, and lists specific use cases (performance analysis, troubleshooting, auditing).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives: it states 'Use 'list' to browse history with filters/sorting' and notes 'Count action not supported by API - use 'list' and count results client-side.' It also mentions authentication requirements ('If not authenticated with CData Sync, you will be prompted for credentials'), helping the agent understand prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

