
research

Streamline AI-powered research by integrating project context, task IDs, and file paths. Save results to tasks or files, and customize detail levels within the Task Master MCP server.

Instructions

Perform AI-powered research queries with project context

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| customContext | No | Additional custom context text to include in the research | |
| detailLevel | No | Detail level for the research response | medium |
| filePaths | No | Comma-separated list of file paths for context (e.g., "src/api.js,docs/readme.md") | |
| includeProjectTree | No | Include project file tree structure in context | false |
| projectRoot | Yes | The directory of the project. Must be an absolute path. | |
| query | Yes | Research query/prompt (required) | |
| saveTo | No | Automatically save research results to specified task/subtask ID (e.g., "15" or "15.2") | |
| saveToFile | No | Save research results to .taskmaster/docs/research/ directory | false |
| tag | No | Tag context to operate on | |
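As an illustration, a call to this tool might pass an arguments object like the following (the query, IDs, paths, and project root are hypothetical):

```json
{
  "query": "What are current best practices for rate limiting an Express API?",
  "taskIds": "15,16.2",
  "filePaths": "src/api.js,docs/readme.md",
  "detailLevel": "high",
  "saveTo": "15.2",
  "projectRoot": "/home/user/my-project"
}
```

Only `query` and `projectRoot` are required; the remaining fields refine context and control where results are saved.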

Implementation Reference

  • Core handler function implementing the research tool logic: validates arguments, gathers project context (tasks, files, project tree), calls the performResearch AI function, handles auto-save to tasks, subtasks, or files, and returns detailed results with token usage and telemetry.
    export async function researchDirect(args, log, context = {}) {
    	// Destructure expected args
    	const {
    		query,
    		taskIds,
    		filePaths,
    		customContext,
    		includeProjectTree = false,
    		detailLevel = 'medium',
    		saveTo,
    		saveToFile = false,
    		projectRoot,
    		tag
    	} = args;
    	const { session } = context; // Destructure session from context
    
    	// Enable silent mode to prevent console logs from interfering with JSON response
    	enableSilentMode();
    
    	// Create logger wrapper using the utility
    	const mcpLog = createLogWrapper(log);
    
    	try {
    		// Check required parameters
    		if (!query || typeof query !== 'string' || query.trim().length === 0) {
    			log.error('Missing or invalid required parameter: query');
    			disableSilentMode();
    			return {
    				success: false,
    				error: {
    					code: 'MISSING_PARAMETER',
    					message:
    						'The query parameter is required and must be a non-empty string'
    				}
    			};
    		}
    
    		// Parse comma-separated task IDs if provided
    		const parsedTaskIds = taskIds
    			? taskIds
    					.split(',')
    					.map((id) => id.trim())
    					.filter((id) => id.length > 0)
    			: [];
    
    		// Parse comma-separated file paths if provided
    		const parsedFilePaths = filePaths
    			? filePaths
    					.split(',')
    					.map((path) => path.trim())
    					.filter((path) => path.length > 0)
    			: [];
    
    		// Validate detail level
    		const validDetailLevels = ['low', 'medium', 'high'];
    		if (!validDetailLevels.includes(detailLevel)) {
    			log.error(`Invalid detail level: ${detailLevel}`);
    			disableSilentMode();
    			return {
    				success: false,
    				error: {
    					code: 'INVALID_PARAMETER',
    					message: `Detail level must be one of: ${validDetailLevels.join(', ')}`
    				}
    			};
    		}
    
    		log.info(
    			`Performing research query: "${query.substring(0, 100)}${query.length > 100 ? '...' : ''}", ` +
    				`taskIds: [${parsedTaskIds.join(', ')}], ` +
    				`filePaths: [${parsedFilePaths.join(', ')}], ` +
    				`detailLevel: ${detailLevel}, ` +
    				`includeProjectTree: ${includeProjectTree}, ` +
    				`projectRoot: ${projectRoot}`
    		);
    
    		// Prepare options for the research function
    		const researchOptions = {
    			taskIds: parsedTaskIds,
    			filePaths: parsedFilePaths,
    			customContext: customContext || '',
    			includeProjectTree,
    			detailLevel,
    			projectRoot,
    			tag,
    			saveToFile
    		};
    
    		// Prepare context for the research function
    		const researchContext = {
    			session,
    			mcpLog,
    			commandName: 'research',
    			outputType: 'mcp'
    		};
    
    		// Call the performResearch function
    		const result = await performResearch(
    			query.trim(),
    			researchOptions,
    			researchContext,
    			'json', // outputFormat - use 'json' to suppress CLI UI
    			false // allowFollowUp - disable for MCP calls
    		);
    
    		// Auto-save to task/subtask if requested
    		if (saveTo) {
    			try {
    				const isSubtask = saveTo.includes('.');
    
    				// Format research content for saving
    				const researchContent = `## Research Query: ${query.trim()}
    
    **Detail Level:** ${result.detailLevel}
    **Context Size:** ${result.contextSize} characters
    **Timestamp:** ${new Date().toLocaleDateString()} ${new Date().toLocaleTimeString()}
    
    ### Results
    
    ${result.result}`;
    
    				if (isSubtask) {
    					// Save to subtask
    					const { updateSubtaskById } = await import(
    						'../../../../scripts/modules/task-manager/update-subtask-by-id.js'
    					);
    
    					const tasksPath = path.join(
    						projectRoot,
    						'.taskmaster',
    						'tasks',
    						'tasks.json'
    					);
    					await updateSubtaskById(
    						tasksPath,
    						saveTo,
    						researchContent,
    						false, // useResearch = false for simple append
    						{
    							session,
    							mcpLog,
    							commandName: 'research-save',
    							outputType: 'mcp',
    							projectRoot,
    							tag
    						},
    						'json'
    					);
    
    					log.info(`Research saved to subtask ${saveTo}`);
    				} else {
    					// Save to task
    					const updateTaskById = (
    						await import(
    							'../../../../scripts/modules/task-manager/update-task-by-id.js'
    						)
    					).default;
    
    					const taskIdNum = parseInt(saveTo, 10);
    					const tasksPath = path.join(
    						projectRoot,
    						'.taskmaster',
    						'tasks',
    						'tasks.json'
    					);
    					await updateTaskById(
    						tasksPath,
    						taskIdNum,
    						researchContent,
    						false, // useResearch = false for simple append
    						{
    							session,
    							mcpLog,
    							commandName: 'research-save',
    							outputType: 'mcp',
    							projectRoot,
    							tag
    						},
    						'json',
    						true // appendMode = true
    					);
    
    					log.info(`Research saved to task ${saveTo}`);
    				}
    			} catch (saveError) {
    				log.warn(`Error saving research to task/subtask: ${saveError.message}`);
    			}
    		}
    
    		// Restore normal logging
    		disableSilentMode();
    
    		return {
    			success: true,
    			data: {
    				query: result.query,
    				result: result.result,
    				contextSize: result.contextSize,
    				contextTokens: result.contextTokens,
    				tokenBreakdown: result.tokenBreakdown,
    				systemPromptTokens: result.systemPromptTokens,
    				userPromptTokens: result.userPromptTokens,
    				totalInputTokens: result.totalInputTokens,
    				detailLevel: result.detailLevel,
    				telemetryData: result.telemetryData,
    				tagInfo: result.tagInfo,
    				savedFilePath: result.savedFilePath
    			}
    		};
    	} catch (error) {
    		// Make sure to restore normal logging even if there's an error
    		disableSilentMode();
    
    		log.error(`Error in researchDirect: ${error.message}`);
    		return {
    			success: false,
    			error: {
    				code: error.code || 'RESEARCH_ERROR',
    				message: error.message
    			}
    		};
    	}
    }
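The comma-separated `taskIds` and `filePaths` arguments are split, trimmed, and filtered before use. A minimal standalone sketch of that parsing logic (the helper name `parseCsvList` is illustrative, not from the source):

```javascript
// Illustrative helper mirroring the CSV-style parsing inside researchDirect:
// split on commas, trim whitespace, and drop empty entries.
function parseCsvList(value) {
	return value
		? value
				.split(',')
				.map((item) => item.trim())
				.filter((item) => item.length > 0)
		: [];
}

console.log(parseCsvList(' 15, 16.2 ,,17 ')); // [ '15', '16.2', '17' ]
console.log(parseCsvList(undefined)); // []
```

Because empty entries are filtered out, a trailing comma or doubled comma in user input is tolerated rather than producing an empty ID.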
  • Registers the 'research' MCP tool with the server: defines name, description, full Zod input schema, and execute handler that normalizes project root, resolves tag, delegates to researchDirect core function, and handles API results/errors.
    export function registerResearchTool(server) {
    	server.addTool({
    		name: 'research',
    		description: 'Perform AI-powered research queries with project context',
    
    		parameters: z.object({
    			query: z.string().describe('Research query/prompt (required)'),
    			taskIds: z
    				.string()
    				.optional()
    				.describe(
    					'Comma-separated list of task/subtask IDs for context (e.g., "15,16.2,17")'
    				),
    			filePaths: z
    				.string()
    				.optional()
    				.describe(
    					'Comma-separated list of file paths for context (e.g., "src/api.js,docs/readme.md")'
    				),
    			customContext: z
    				.string()
    				.optional()
    				.describe('Additional custom context text to include in the research'),
    			includeProjectTree: z
    				.boolean()
    				.optional()
    				.describe(
    					'Include project file tree structure in context (default: false)'
    				),
    			detailLevel: z
    				.enum(['low', 'medium', 'high'])
    				.optional()
    				.describe('Detail level for the research response (default: medium)'),
    			saveTo: z
    				.string()
    				.optional()
    				.describe(
    					'Automatically save research results to specified task/subtask ID (e.g., "15" or "15.2")'
    				),
    			saveToFile: z
    				.boolean()
    				.optional()
    				.describe(
    					'Save research results to .taskmaster/docs/research/ directory (default: false)'
    				),
    			projectRoot: z
    				.string()
    				.describe('The directory of the project. Must be an absolute path.'),
    			tag: z.string().optional().describe('Tag context to operate on')
    		}),
    		execute: withNormalizedProjectRoot(async (args, { log, session }) => {
    			try {
    				const resolvedTag = resolveTag({
    					projectRoot: args.projectRoot,
    					tag: args.tag
    				});
    				log.info(
    					`Starting research with query: "${args.query.substring(0, 100)}${args.query.length > 100 ? '...' : ''}"`
    				);
    
    				// Call the direct function
    				const result = await researchDirect(
    					{
    						query: args.query,
    						taskIds: args.taskIds,
    						filePaths: args.filePaths,
    						customContext: args.customContext,
    						includeProjectTree: args.includeProjectTree || false,
    						detailLevel: args.detailLevel || 'medium',
    						saveTo: args.saveTo,
    						saveToFile: args.saveToFile || false,
    						projectRoot: args.projectRoot,
    						tag: resolvedTag
    					},
    					log,
    					{ session }
    				);
    
    				return handleApiResult({
    					result,
    					log: log,
    					errorPrefix: 'Error performing research',
    					projectRoot: args.projectRoot
    				});
    			} catch (error) {
    				log.error(`Error in research tool: ${error.message}`);
    				return createErrorResponse(error.message);
    			}
    		})
    	});
    }
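Once registered, an MCP client reaches this handler through a standard `tools/call` request. A hypothetical JSON-RPC payload (the query and path are illustrative) might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "research",
    "arguments": {
      "query": "How should we structure the auth module?",
      "projectRoot": "/home/user/my-project",
      "detailLevel": "medium"
    }
  }
}
```

The `arguments` object is validated against the Zod schema before the execute handler normalizes the project root and delegates to researchDirect.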
  • Zod schema defining all input parameters for the research tool, including query, context selectors (tasks/files/custom/tree), output options (detail/save), project/tag info.
    parameters: z.object({
    	query: z.string().describe('Research query/prompt (required)'),
    	taskIds: z
    		.string()
    		.optional()
    		.describe(
    			'Comma-separated list of task/subtask IDs for context (e.g., "15,16.2,17")'
    		),
    	filePaths: z
    		.string()
    		.optional()
    		.describe(
    			'Comma-separated list of file paths for context (e.g., "src/api.js,docs/readme.md")'
    		),
    	customContext: z
    		.string()
    		.optional()
    		.describe('Additional custom context text to include in the research'),
    	includeProjectTree: z
    		.boolean()
    		.optional()
    		.describe(
    			'Include project file tree structure in context (default: false)'
    		),
    	detailLevel: z
    		.enum(['low', 'medium', 'high'])
    		.optional()
    		.describe('Detail level for the research response (default: medium)'),
    	saveTo: z
    		.string()
    		.optional()
    		.describe(
    			'Automatically save research results to specified task/subtask ID (e.g., "15" or "15.2")'
    		),
    	saveToFile: z
    		.boolean()
    		.optional()
    		.describe(
    			'Save research results to .taskmaster/docs/research/ directory (default: false)'
    		),
    	projectRoot: z
    		.string()
    		.describe('The directory of the project. Must be an absolute path.'),
    	tag: z.string().optional().describe('Tag context to operate on')
    }),
  • Central tool registry maps the 'research' tool name to its registration function registerResearchTool, enabling dynamic tool registration by MCP server init code.
    research: registerResearchTool,
  • Central re-export module that imports and re-exports researchDirect from its direct-functions implementation, making it available to tool execute handlers.
    import { researchDirect } from './direct-functions/research.js';
    import { scopeDownDirect } from './direct-functions/scope-down.js';
    import { scopeUpDirect } from './direct-functions/scope-up.js';
    import { setTaskStatusDirect } from './direct-functions/set-task-status.js';
    import { updateSubtaskByIdDirect } from './direct-functions/update-subtask-by-id.js';
    import { updateTaskByIdDirect } from './direct-functions/update-task-by-id.js';
    import { updateTasksDirect } from './direct-functions/update-tasks.js';
    import { useTagDirect } from './direct-functions/use-tag.js';
    import { validateDependenciesDirect } from './direct-functions/validate-dependencies.js';
    
    // Re-export utility functions
    export { findTasksPath } from './utils/path-utils.js';
    
    // Use Map for potential future enhancements like introspection or dynamic dispatch
    export const directFunctions = new Map([
    	['getCacheStatsDirect', getCacheStatsDirect],
    	['parsePRDDirect', parsePRDDirect],
    	['updateTasksDirect', updateTasksDirect],
    	['updateTaskByIdDirect', updateTaskByIdDirect],
    	['updateSubtaskByIdDirect', updateSubtaskByIdDirect],
    	['setTaskStatusDirect', setTaskStatusDirect],
    	['nextTaskDirect', nextTaskDirect],
    	['expandTaskDirect', expandTaskDirect],
    	['addTaskDirect', addTaskDirect],
    	['addSubtaskDirect', addSubtaskDirect],
    	['removeSubtaskDirect', removeSubtaskDirect],
    	['analyzeTaskComplexityDirect', analyzeTaskComplexityDirect],
    	['clearSubtasksDirect', clearSubtasksDirect],
    	['expandAllTasksDirect', expandAllTasksDirect],
    	['removeDependencyDirect', removeDependencyDirect],
    	['validateDependenciesDirect', validateDependenciesDirect],
    	['fixDependenciesDirect', fixDependenciesDirect],
    	['complexityReportDirect', complexityReportDirect],
    	['addDependencyDirect', addDependencyDirect],
    	['removeTaskDirect', removeTaskDirect],
    	['initializeProjectDirect', initializeProjectDirect],
    	['modelsDirect', modelsDirect],
    	['moveTaskDirect', moveTaskDirect],
    	['moveTaskCrossTagDirect', moveTaskCrossTagDirect],
    	['researchDirect', researchDirect],
    	['addTagDirect', addTagDirect],
    	['deleteTagDirect', deleteTagDirect],
    	['listTagsDirect', listTagsDirect],
    	['useTagDirect', useTagDirect],
    	['renameTagDirect', renameTagDirect],
    	['copyTagDirect', copyTagDirect],
    	['scopeUpDirect', scopeUpDirect],
    	['scopeDownDirect', scopeDownDirect]
    ]);
    
    // Re-export all direct function implementations
    export {
    	getCacheStatsDirect,
    	parsePRDDirect,
    	updateTasksDirect,
    	updateTaskByIdDirect,
    	updateSubtaskByIdDirect,
    	setTaskStatusDirect,
    	nextTaskDirect,
    	expandTaskDirect,
    	addTaskDirect,
    	addSubtaskDirect,
    	removeSubtaskDirect,
    	analyzeTaskComplexityDirect,
    	clearSubtasksDirect,
    	expandAllTasksDirect,
    	removeDependencyDirect,
    	validateDependenciesDirect,
    	fixDependenciesDirect,
    	complexityReportDirect,
    	addDependencyDirect,
    	removeTaskDirect,
    	initializeProjectDirect,
    	modelsDirect,
    	moveTaskDirect,
    	moveTaskCrossTagDirect,
    	researchDirect,
    	addTagDirect,
    	deleteTagDirect,
    	listTagsDirect,
    	useTagDirect,
    	renameTagDirect,
    	copyTagDirect,
    	scopeUpDirect,
    	scopeDownDirect
    };
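The registry Map above allows the MCP server init code to dispatch by function name. A minimal, self-contained sketch of that lookup pattern (the stub implementation and `dispatch` helper are hypothetical):

```javascript
// Illustrative registry: maps direct-function names to implementations,
// enabling dynamic dispatch by string key.
const directFunctions = new Map([
	[
		'researchDirect',
		async (args) => ({ success: true, data: { query: args.query } })
	]
]);

async function dispatch(name, args) {
	const fn = directFunctions.get(name);
	if (!fn) {
		return {
			success: false,
			error: { code: 'UNKNOWN_FUNCTION', message: `No direct function: ${name}` }
		};
	}
	return fn(args);
}

dispatch('researchDirect', { query: 'test' }).then((result) =>
	console.log(result.success) // true
);
```

Using a Map rather than a plain object keeps the registry iterable and introspectable, which matches the comment in the source about future enhancements like dynamic dispatch.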
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'AI-powered research' and 'project context' but doesn't describe what the tool actually does behaviorally: how it performs research, what sources it uses, whether it makes network calls, what the output format looks like, or any limitations/constraints. For a complex 10-parameter tool with no annotations, this is insufficient transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that states the core functionality without unnecessary words. It's appropriately sized for a tool description and front-loads the essential information. Every word earns its place in conveying the tool's purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (10 parameters, no annotations, no output schema), the description is incomplete. It doesn't explain what 'research' means in this context, what kind of results to expect, how the AI-powered aspect works, or how project context integrates. For a tool that presumably returns research findings, the lack of output schema means the description should at least hint at return values, which it doesn't.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 10 parameters thoroughly. The description adds no parameter-specific information beyond what is in the schema; it doesn't explain how parameters like 'filePaths', 'taskIds', or 'saveTo' relate to the research process. With complete schema coverage, the baseline is 3 even without additional parameter semantics in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Perform AI-powered research queries with project context', which specifies the action (perform research), method (AI-powered), and scope (with project context). It distinguishes from siblings like 'analyze_project_complexity' or 'generate' by focusing on research queries rather than analysis or generation tasks. However, it doesn't explicitly differentiate from all possible research-like siblings, keeping it at 4 rather than 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when this research tool is appropriate compared to other tools like 'analyze_project_complexity' for analysis or 'generate' for content creation. There's no indication of prerequisites, limitations, or typical use cases beyond the vague 'with project context'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
