Glama
aliyun (by aliyun)

CreateDataQualityEvaluationTask

Create data quality monitoring tasks to validate table data against configurable rules, thresholds, and triggers within DataWorks environments.

Instructions

Create a data quality monitor. This tool has an MCP Resource; request CreateDataQualityEvaluationTask (MCP Resource) for more examples of using this tool.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| Target | Yes | The data quality monitoring target | |
| Name | Yes | Name of the quality monitoring task | |
| ProjectId | No | ID of the DataWorks workspace | |
| DataSourceId | No | Data source ID | |
| Description | No | Description of the quality monitoring task | |
| RuntimeConf | No | Extended configuration as a JSON-format string; takes effect only for EMR-type data quality monitors. `queue`: the YARN queue used when running EMR data quality checks (defaults to the project's configured queue). `sqlEngine`: the SQL engine used for EMR data checks, either `HIVE_SQL` or `SPARK_SQL` | |
| Trigger | No | Trigger configuration for the data quality check task | |
| DataQualityRules | No | List of data quality rules associated with the monitor. If `DataQualityRule.Id` is set, the rule with that ID is attached to the new monitor; otherwise the other fields are used to create a new rule, which is then attached | |
| Hooks | No | Callback settings | |
| Notifications | No | Notification subscription configuration | |
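Putting the schema together, a minimal request body might look like the sketch below. All concrete values (IDs, table GUID, trigger type, rule config) and the nested field names inside `Target` and `Trigger` are illustrative assumptions, not taken from the DataWorks API reference:

```python
import json

# Hypothetical payload for CreateDataQualityEvaluationTask, assembled
# from the input schema above. Values and nested field names are
# illustrative assumptions only.
payload = {
    "ProjectId": 12345,                      # DataWorks workspace ID
    "Name": "ods_orders_daily_quality",      # monitor name (required)
    "Description": "Daily checks on ods_orders",
    "Target": {                              # monitoring target (required)
        "TableGuid": "odps.my_project.ods_orders",
    },
    "Trigger": {                             # when the checks run
        "Type": "ByScheduledTaskInstance",
    },
    "RuntimeConf": json.dumps({              # only used for EMR sources
        "queue": "default",
        "sqlEngine": "SPARK_SQL",
    }),
    "DataQualityRules": [
        {"Id": 67890},                       # attach an existing rule by ID
    ],
}

print(json.dumps(payload, indent=2))
```

Note that `RuntimeConf` is itself a JSON-encoded string nested inside the JSON body, per the schema, and that supplying `DataQualityRules[].Id` attaches an existing rule rather than creating a new one.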
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure but fails completely. It doesn't indicate whether this is a read or write operation (though 'Create' implies mutation), what permissions are required, whether it's idempotent, what happens on failure, or any rate limits. The description provides zero behavioral context beyond the basic action implied by the name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
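The MCP specification defines structured tool annotations for exactly this kind of behavioral disclosure. A sketch of how a creation tool like this one could declare itself (the field names follow the spec's `ToolAnnotations`; the values are assumptions about how a non-idempotent, write-capable tool would plausibly be flagged):

```python
# Behavioral annotations using the MCP spec's ToolAnnotations fields.
# The chosen values are assumptions for a creation tool like this one.
annotations = {
    "readOnlyHint": False,    # creates a resource, so it mutates state
    "destructiveHint": False, # additive: it creates, it does not delete
    "idempotentHint": False,  # repeated calls create duplicate monitors
    "openWorldHint": True,    # interacts with an external cloud service
}
```

Even when a server cannot change its descriptions, shipping these hints gives agents a machine-readable substitute for the disclosure the review finds missing.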

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely brief (two sentences) but under-specified rather than appropriately concise. The first sentence is a tautology that adds no value, and the second sentence is meta-instruction about MCP Resources rather than tool functionality. While it's not verbose, it fails to use its limited space effectively to convey meaningful information about the tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (10 parameters with nested objects, no annotations, no output schema), the description is severely inadequate. It doesn't explain what a 'data quality evaluation task' entails, what happens after creation, how it relates to other data quality tools, or what the expected outcomes are. For a creation tool with significant parameter complexity and no structured behavioral hints, the description provides almost no contextual information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 10 parameters thoroughly. The description adds no parameter information whatsoever; it doesn't mention any parameters, their purposes, or relationships. However, with complete schema coverage, the baseline score is 3 even without parameter details in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description '创建数据质量监控' (Create data quality monitoring) is a tautology that restates the tool name 'CreateDataQualityEvaluationTask' in Chinese. It lacks specificity about what resource is being created (a task for evaluating data quality) and doesn't distinguish it from sibling tools like 'CreateDataQualityEvaluationTaskInstance' or 'CreateDataQualityRule'. The description provides no meaningful verb+resource combination beyond the name itself.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'CreateDataQualityRule' or 'CreateDataQualityEvaluationTaskInstance', nor does it provide any context about prerequisites, appropriate scenarios, or exclusions. The second sentence is purely meta-instruction about MCP Resources, not usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
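Folding the review's points together, a rewritten description might read like the candidate below. The wording is a sketch of the verb+resource clarity, sibling-tool disambiguation, and behavioral disclosure the review asks for, not Alibaba Cloud's official text:

```python
# A candidate rewrite of the tool description. Purely illustrative;
# it is not the official DataWorks wording.
description = (
    "Create a data quality monitoring task in a DataWorks workspace. "
    "The task binds one or more data quality rules to a target table "
    "and runs them on the configured trigger. This is a write operation "
    "and is not idempotent. Use CreateDataQualityRule to define rules "
    "first, and CreateDataQualityEvaluationTaskInstance to run an "
    "existing task on demand."
)

print(description)
```

A description shaped like this addresses Purpose (specific verb and resource), Behavior (write, non-idempotent), and Usage Guidelines (explicit pointers to sibling tools) in four sentences.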
