Glama

validate_dependencies

Analyze task dependencies to identify issues such as circular references or broken links without altering the task file. Ensure task integrity within project directories using specified paths and tags.

Instructions

Check tasks for dependency issues (like circular references or links to non-existent tasks) without making changes.

Input Schema

Name         Required  Description
file         No        Absolute path to the tasks file
projectRoot  Yes       The directory of the project. Must be an absolute path.
tag          No        Tag context to operate on
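A hypothetical invocation might look like the following sketch. The file path and tag values here are illustrative assumptions, not values from the source; only `projectRoot` is required by the schema, and it must be absolute.

```javascript
// Hypothetical arguments for a validate_dependencies call.
// Only projectRoot is required; file and tag are optional.
const args = {
	projectRoot: "/home/user/my-project", // required, absolute path
	tag: "master" // optional tag context (illustrative value)
};

// Minimal client-side sanity check mirroring the schema's intent:
// projectRoot must be a string and an absolute path.
function checkArgs(a) {
	if (typeof a.projectRoot !== "string" || !a.projectRoot.startsWith("/")) {
		throw new Error("projectRoot must be an absolute path");
	}
	return true;
}

console.log(checkArgs(args)); // true
```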

Implementation Reference

  • The execute handler for the 'validate_dependencies' MCP tool. Resolves the tasks file path, calls validateDependenciesDirect, handles logging, and returns a formatted API result.
    async (args, { log, session }) => {
    	try {
    		const resolvedTag = resolveTag({
    			projectRoot: args.projectRoot,
    			tag: args.tag
    		});
    		log.info(
    			`Validating dependencies with args: ${JSON.stringify(args)}`
    		);
    
    		// Use args.projectRoot directly (guaranteed by withToolContext)
    		let tasksJsonPath;
    		try {
    			tasksJsonPath = findTasksPath(
    				{ projectRoot: args.projectRoot, file: args.file },
    				log
    			);
    		} catch (error) {
    			log.error(`Error finding tasks.json: ${error.message}`);
    			return createErrorResponse(
    				`Failed to find tasks.json: ${error.message}`
    			);
    		}
    
    		const result = await validateDependenciesDirect(
    			{
    				tasksJsonPath: tasksJsonPath,
    				projectRoot: args.projectRoot,
    				tag: resolvedTag
    			},
    			log
    		);
    
    		if (result.success) {
    			log.info(
    				`Successfully validated dependencies: ${result.data.message}`
    			);
    		} else {
    			log.error(
    				`Failed to validate dependencies: ${result.error.message}`
    			);
    		}
    
    		return handleApiResult({
    			result,
    			log,
    			errorPrefix: 'Error validating dependencies',
    			projectRoot: args.projectRoot,
    			tag: resolvedTag
    		});
    	} catch (error) {
    		log.error(`Error in validateDependencies tool: ${error.message}`);
    		return createErrorResponse(error.message);
    	}
    }
  • Zod schema defining input parameters for the validate_dependencies tool: optional file, required projectRoot, optional tag.
    parameters: z.object({
    	file: z.string().optional().describe('Absolute path to the tasks file'),
    	projectRoot: z
    		.string()
    		.describe('The directory of the project. Must be an absolute path.'),
    	tag: z.string().optional().describe('Tag context to operate on')
    }),
  • Registration function that adds the 'validate_dependencies' tool to the MCP server with name, description, schema, and execute handler.
    export function registerValidateDependenciesTool(server) {
    	server.addTool({
    		name: 'validate_dependencies',
    		description:
    			'Check tasks for dependency issues (like circular references or links to non-existent tasks) without making changes.',
    		parameters: z.object({
    			file: z.string().optional().describe('Absolute path to the tasks file'),
    			projectRoot: z
    				.string()
    				.describe('The directory of the project. Must be an absolute path.'),
    			tag: z.string().optional().describe('Tag context to operate on')
    		}),
    		execute: withToolContext(
    			'validate-dependencies',
    			async (args, { log, session }) => {
    				try {
    					const resolvedTag = resolveTag({
    						projectRoot: args.projectRoot,
    						tag: args.tag
    					});
    					log.info(
    						`Validating dependencies with args: ${JSON.stringify(args)}`
    					);
    
    					// Use args.projectRoot directly (guaranteed by withToolContext)
    					let tasksJsonPath;
    					try {
    						tasksJsonPath = findTasksPath(
    							{ projectRoot: args.projectRoot, file: args.file },
    							log
    						);
    					} catch (error) {
    						log.error(`Error finding tasks.json: ${error.message}`);
    						return createErrorResponse(
    							`Failed to find tasks.json: ${error.message}`
    						);
    					}
    
    					const result = await validateDependenciesDirect(
    						{
    							tasksJsonPath: tasksJsonPath,
    							projectRoot: args.projectRoot,
    							tag: resolvedTag
    						},
    						log
    					);
    
    					if (result.success) {
    						log.info(
    							`Successfully validated dependencies: ${result.data.message}`
    						);
    					} else {
    						log.error(
    							`Failed to validate dependencies: ${result.error.message}`
    						);
    					}
    
    					return handleApiResult({
    						result,
    						log,
    						errorPrefix: 'Error validating dependencies',
    						projectRoot: args.projectRoot,
    						tag: resolvedTag
    					});
    				} catch (error) {
    					log.error(`Error in validateDependencies tool: ${error.message}`);
    					return createErrorResponse(error.message);
    				}
    			}
    		)
    	});
    }
  • Entry in the central toolRegistry mapping 'validate_dependencies' to its registration function.
    validate_dependencies: registerValidateDependenciesTool,
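The registry pattern above can be sketched as follows. The `toolRegistry` and registration-function names follow the snippets in this section, but the server object here is a stand-in, not the real MCP server API.

```javascript
// Sketch of a central tool registry wiring tool names to registration
// functions, as implied by the toolRegistry entry above.
const toolRegistry = {
	validate_dependencies: (server) =>
		server.addTool({ name: "validate_dependencies" })
	// ...other tools map to their own registration functions
};

// Register every tool in the registry against a server instance.
function registerAllTools(server) {
	for (const register of Object.values(toolRegistry)) {
		register(server);
	}
	return server;
}

// Stand-in server that just collects registered tool names.
const fakeServer = {
	tools: [],
	addTool(def) {
		this.tools.push(def.name);
	}
};
registerAllTools(fakeServer);
// fakeServer.tools now contains "validate_dependencies"
```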
  • Direct function wrapper that calls the core validateDependenciesCommand, handles silent mode and file existence checks, returns structured success/error.
    export async function validateDependenciesDirect(args, log) {
    	// Destructure the explicit tasksJsonPath
    	const { tasksJsonPath, projectRoot, tag } = args;
    
    	if (!tasksJsonPath) {
    		log.error('validateDependenciesDirect called without tasksJsonPath');
    		return {
    			success: false,
    			error: {
    				code: 'MISSING_ARGUMENT',
    				message: 'tasksJsonPath is required'
    			}
    		};
    	}
    
    	try {
    		log.info(`Validating dependencies in tasks: ${tasksJsonPath}`);
    
    		// Use the provided tasksJsonPath
    		const tasksPath = tasksJsonPath;
    
    		// Verify the file exists
    		if (!fs.existsSync(tasksPath)) {
    			return {
    				success: false,
    				error: {
    					code: 'FILE_NOT_FOUND',
    					message: `Tasks file not found at ${tasksPath}`
    				}
    			};
    		}
    
    		// Enable silent mode to prevent console logs from interfering with JSON response
    		enableSilentMode();
    
    		const options = { projectRoot, tag };
    		// Call the original command function using the provided tasksPath
    		await validateDependenciesCommand(tasksPath, options);
    
    		// Restore normal logging
    		disableSilentMode();
    
    		return {
    			success: true,
    			data: {
    				message: 'Dependencies validated successfully',
    				tasksPath
    			}
    		};
    	} catch (error) {
    		// Make sure to restore normal logging even if there's an error
    		disableSilentMode();
    
    		log.error(`Error validating dependencies: ${error.message}`);
    		return {
    			success: false,
    			error: {
    				code: 'VALIDATION_ERROR',
    				message: error.message
    			}
    		};
    	}
    }
  • Core command function that reads tasks.json, validates dependencies using validateTaskDependencies, logs issues or success, displays summary.
    async function validateDependenciesCommand(tasksPath, options = {}) {
    	// The direct wrapper passes { projectRoot, tag } directly, so
    	// destructure those rather than a nested `context` object.
    	const { projectRoot, tag } = options;
    	log('info', 'Checking for invalid dependencies in task files...');
    
    	// Read tasks data
    	const data = readJSON(tasksPath, projectRoot, tag);
    	if (!data || !data.tasks) {
    		log('error', 'No valid tasks found in tasks.json');
    		process.exit(1);
    	}
    
    	// Count of tasks and subtasks for reporting
    	const taskCount = data.tasks.length;
    	let subtaskCount = 0;
    	data.tasks.forEach((task) => {
    		if (task.subtasks && Array.isArray(task.subtasks)) {
    			subtaskCount += task.subtasks.length;
    		}
    	});
    
    	log(
    		'info',
    		`Analyzing dependencies for ${taskCount} tasks and ${subtaskCount} subtasks...`
    	);
    
    	try {
    		// Directly call the validation function
    		const validationResult = validateTaskDependencies(data.tasks);
    
    		if (!validationResult.valid) {
    			log(
    				'error',
    				`Dependency validation failed. Found ${validationResult.issues.length} issue(s):`
    			);
    			validationResult.issues.forEach((issue) => {
    				let errorMsg = `  [${issue.type.toUpperCase()}] Task ${issue.taskId}: ${issue.message}`;
    				if (issue.dependencyId) {
    					errorMsg += ` (Dependency: ${issue.dependencyId})`;
    				}
    				log('error', errorMsg); // Log each issue as an error
    			});
    
    			// Optionally exit if validation fails, depending on desired behavior
    			// process.exit(1); // Uncomment if validation failure should stop the process
    
    			// Display summary box even on failure, showing issues found
    			if (!isSilentMode()) {
    				console.log(
    					boxen(
    						chalk.red(`Dependency Validation FAILED\n\n`) +
    							`${chalk.cyan('Tasks checked:')} ${taskCount}\n` +
    							`${chalk.cyan('Subtasks checked:')} ${subtaskCount}\n` +
    							`${chalk.red('Issues found:')} ${validationResult.issues.length}`, // Display count from result
    						{
    							padding: 1,
    							borderColor: 'red',
    							borderStyle: 'round',
    							margin: { top: 1, bottom: 1 }
    						}
    					)
    				);
    			}
    		} else {
    			log(
    				'success',
    				'No invalid dependencies found - all dependencies are valid'
    			);
    
    			// Show validation summary - only if not in silent mode
    			if (!isSilentMode()) {
    				console.log(
    					boxen(
    						chalk.green(`All Dependencies Are Valid\n\n`) +
    							`${chalk.cyan('Tasks checked:')} ${taskCount}\n` +
    							`${chalk.cyan('Subtasks checked:')} ${subtaskCount}\n` +
    							`${chalk.cyan('Total dependencies verified:')} ${countAllDependencies(data.tasks)}`,
    						{
    							padding: 1,
    							borderColor: 'green',
    							borderStyle: 'round',
    							margin: { top: 1, bottom: 1 }
    						}
    					)
    				);
    			}
    		}
    	} catch (error) {
    		log('error', 'Error validating dependencies:', error);
    		process.exit(1);
    	}
    }
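The `validateTaskDependencies` helper the command relies on is not shown above. As a minimal sketch of the kind of checks implied (missing references and circular dependencies), assuming tasks are objects with `id` and `dependencies` fields:

```javascript
// Sketch only: flags dependencies on non-existent tasks and detects
// circular references with a depth-first search. The real
// validateTaskDependencies implementation may differ.
function validateTaskDependencies(tasks) {
	const ids = new Set(tasks.map((t) => t.id));
	const deps = new Map(tasks.map((t) => [t.id, t.dependencies || []]));
	const issues = [];

	// Dependencies pointing at tasks that do not exist.
	for (const t of tasks) {
		for (const d of t.dependencies || []) {
			if (!ids.has(d)) {
				issues.push({
					type: 'missing',
					taskId: t.id,
					message: 'depends on non-existent task',
					dependencyId: d
				});
			}
		}
	}

	// Cycle detection: a back edge to an in-progress node means a cycle.
	const color = new Map(); // undefined = unvisited, 1 = in progress, 2 = done
	function visit(id) {
		if (color.get(id) === 2) return false;
		if (color.get(id) === 1) return true; // back edge found
		color.set(id, 1);
		for (const d of deps.get(id) || []) {
			if (ids.has(d) && visit(d)) {
				issues.push({
					type: 'circular',
					taskId: id,
					message: 'is part of a dependency cycle',
					dependencyId: d
				});
				color.set(id, 2);
				return false; // report the cycle once
			}
		}
		color.set(id, 2);
		return false;
	}
	for (const t of tasks) visit(t.id);

	return { valid: issues.length === 0, issues };
}
```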
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the non-destructive behavior ('without making changes'), which is valuable. However, it lacks details on what specific dependency issues are checked (e.g., circular references, links to non-existent tasks are only implied), error handling, or output format, leaving gaps in behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and includes a critical behavioral constraint ('without making changes'). Every word earns its place with no redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description adequately covers the purpose and non-destructive nature but lacks details on what specific issues are validated (only implied), error scenarios, or return values. For a validation tool with three parameters, it's minimally viable but leaves contextual gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters. The description does not add any parameter-specific information beyond what the schema provides (e.g., it doesn't explain how 'tag' interacts with dependency checking). Baseline 3 is appropriate when the schema handles parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check tasks for dependency issues') and resource ('tasks'), distinguishing it from siblings like 'fix_dependencies' (which makes changes) and 'analyze_project_complexity' (which focuses on complexity rather than validation). The phrase 'without making changes' further differentiates it from mutation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Check tasks for dependency issues') and when not to use it ('without making changes'), clearly distinguishing it from alternatives like 'fix_dependencies' (which would resolve issues) and 'analyze_project_complexity' (which assesses complexity rather than validation).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
