models

Manage AI model configuration for task generation and research operations in Task Master: set the primary, research, and fallback models, or list available models with cost details for each.

Instructions

Get information about available AI models or set model configurations. Run without arguments to get the current model configuration and API key status for the selected model providers.
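As a sketch of typical invocations, the argument payloads below illustrate the tool's main modes. These are illustrative only; the model IDs and paths are hypothetical, and field names follow the input schema documented below.

```javascript
// Hypothetical argument payloads for the `models` tool (illustrative, not from the source).

// 1. Get the current configuration and API key status (no optional args):
const getConfigArgs = { projectRoot: '/abs/path/to/project' };

// 2. Set the primary model (model ID is hypothetical):
const setMainArgs = {
  projectRoot: '/abs/path/to/project',
  setMain: 'example-model-id'
};

// 3. Set a custom OpenRouter model for research; the provider flag
//    marks the ID as custom rather than a known built-in model:
const setCustomArgs = {
  projectRoot: '/abs/path/to/project',
  setResearch: 'some-org/some-model',
  openrouter: true
};

// 4. List all available models with cost details:
const listArgs = { projectRoot: '/abs/path/to/project', listAvailableModels: true };
```

Note that `projectRoot` is the only required field in every mode; when no optional argument is supplied, the tool defaults to returning the current configuration.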

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| azure | No | Indicates the set model ID is a custom Azure OpenAI model. | |
| baseURL | No | Custom base URL for providers that support it (e.g., https://api.example.com/v1). | |
| bedrock | No | Indicates the set model ID is a custom AWS Bedrock model. | |
| listAvailableModels | No | List all available models not currently in use. Input/output cost values are in dollars (3 is $3.00). | |
| ollama | No | Indicates the set model ID is a custom Ollama model. | |
| openai-compatible | No | Indicates the set model ID is a custom OpenAI-compatible model. Requires the baseURL parameter. | |
| openrouter | No | Indicates the set model ID is a custom OpenRouter model. | |
| projectRoot | Yes | The directory of the project. Must be an absolute path. | |
| setFallback | No | Set the model to use if the primary fails. Model provider API key is required in the MCP config ENV. | |
| setMain | No | Set the primary model for task generation/updates. Model provider API key is required in the MCP config ENV. | |
| setResearch | No | Set the model for research-backed operations. Model provider API key is required in the MCP config ENV. | |
| vertex | No | Indicates the set model ID is a custom Google Vertex AI model. | |
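The custom provider flags (openrouter, ollama, bedrock, azure, vertex, openai-compatible) are mutually exclusive: the handler rejects calls that set more than one. A minimal standalone sketch of that check (the function and constant names here are illustrative, not the project's actual exports):

```javascript
// Illustrative re-implementation of the mutual-exclusion rule (not the project's code).
const CUSTOM_PROVIDER_FLAGS = ['openrouter', 'ollama', 'bedrock', 'azure', 'vertex'];

function validateProviderFlags(args) {
  // Collect every custom provider flag the caller set to a truthy value.
  const set = CUSTOM_PROVIDER_FLAGS.filter((flag) => args[flag]);
  if (set.length > 1) {
    return {
      success: false,
      error: {
        code: 'INVALID_ARGS',
        message: `Cannot use multiple custom provider flags. Got: ${set.join(', ')}`
      }
    };
  }
  return { success: true };
}
```

For example, `validateProviderFlags({ openrouter: true, ollama: true })` fails, while either flag alone (or none) passes.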

Implementation Reference

  • Primary handler function `modelsDirect` implementing core logic: handles model listing, setting for roles (main/research/fallback), configuration retrieval, and validation.
    export async function modelsDirect(args, log, context = {}) {
    	const { session } = context;
    	const { projectRoot } = args; // Extract projectRoot from args
    
    	// Create a logger wrapper that the core functions can use
    	const mcpLog = createLogWrapper(log);
    
    	log.info(`Executing models_direct with args: ${JSON.stringify(args)}`);
    	log.info(`Using project root: ${projectRoot}`);
    
    	// Validate flags: only one custom provider flag can be used simultaneously
    	const customProviderFlags = CUSTOM_PROVIDERS_ARRAY.filter(
    		(provider) => args[provider]
    	);
    
    	if (customProviderFlags.length > 1) {
    		log.error(
    			'Error: Cannot use multiple custom provider flags simultaneously.'
    		);
    		return {
    			success: false,
    			error: {
    				code: 'INVALID_ARGS',
    				message:
    					'Cannot use multiple custom provider flags simultaneously. Choose only one: openrouter, ollama, bedrock, azure, vertex, or openai-compatible.'
    			}
    		};
    	}
    
    	try {
    		enableSilentMode();
    
    		try {
    			// Check for the listAvailableModels flag
    			if (args.listAvailableModels === true) {
    				return await getAvailableModelsList({
    					session,
    					mcpLog,
    					projectRoot
    				});
    			}
    
    			// Handle setting any model role using unified function
    			const modelContext = { session, mcpLog, projectRoot };
    			const modelSetResult = await handleModelSetting(args, modelContext);
    			if (modelSetResult) {
    				return modelSetResult;
    			}
    
    			// Default action: get current configuration
    			return await getModelConfiguration({
    				session,
    				mcpLog,
    				projectRoot
    			});
    		} finally {
    			disableSilentMode();
    		}
    	} catch (error) {
    		log.error(`Error in models_direct: ${error.message}`);
    		return {
    			success: false,
    			error: {
    				code: 'DIRECT_FUNCTION_ERROR',
    				message: error.message,
    				details: error.stack
    			}
    		};
    	}
    }
  • Zod input schema for the 'models' tool parameters, defining options for setting main/research/fallback models, listing available models, and provider flags.
    parameters: z.object({
    	setMain: z
    		.string()
    		.optional()
    		.describe(
    			'Set the primary model for task generation/updates. Model provider API key is required in the MCP config ENV.'
    		),
    	setResearch: z
    		.string()
    		.optional()
    		.describe(
    			'Set the model for research-backed operations. Model provider API key is required in the MCP config ENV.'
    		),
    	setFallback: z
    		.string()
    		.optional()
    		.describe(
    			'Set the model to use if the primary fails. Model provider API key is required in the MCP config ENV.'
    		),
    	listAvailableModels: z
    		.boolean()
    		.optional()
    		.describe(
    			'List all available models not currently in use. Input/output cost values are in dollars (3 is $3.00).'
    		),
    	projectRoot: z
    		.string()
    		.describe('The directory of the project. Must be an absolute path.'),
    	openrouter: z
    		.boolean()
    		.optional()
    		.describe('Indicates the set model ID is a custom OpenRouter model.'),
    	ollama: z
    		.boolean()
    		.optional()
    		.describe('Indicates the set model ID is a custom Ollama model.'),
    	bedrock: z
    		.boolean()
    		.optional()
    		.describe('Indicates the set model ID is a custom AWS Bedrock model.'),
    	azure: z
    		.boolean()
    		.optional()
    		.describe('Indicates the set model ID is a custom Azure OpenAI model.'),
    	vertex: z
    		.boolean()
    		.optional()
    		.describe(
    			'Indicates the set model ID is a custom Google Vertex AI model.'
    		),
    	'openai-compatible': z
    		.boolean()
    		.optional()
    		.describe(
    			'Indicates the set model ID is a custom OpenAI-compatible model. Requires baseURL parameter.'
    		),
    	baseURL: z
    		.string()
    		.optional()
    		.describe(
    			'Custom base URL for providers that support it (e.g., https://api.example.com/v1).'
    		)
    }),
  • `registerModelsTool` function that adds the 'models' MCP tool to the server with its name, description, schema, and a thin `execute` wrapper that calls `modelsDirect`.
    export function registerModelsTool(server) {
    	server.addTool({
    		name: 'models',
    		description:
    			'Get information about available AI models or set model configurations. Run without arguments to get the current model configuration and API key status for the selected model providers.',
    		parameters: z.object({
    			setMain: z
    				.string()
    				.optional()
    				.describe(
    					'Set the primary model for task generation/updates. Model provider API key is required in the MCP config ENV.'
    				),
    			setResearch: z
    				.string()
    				.optional()
    				.describe(
    					'Set the model for research-backed operations. Model provider API key is required in the MCP config ENV.'
    				),
    			setFallback: z
    				.string()
    				.optional()
    				.describe(
    					'Set the model to use if the primary fails. Model provider API key is required in the MCP config ENV.'
    				),
    			listAvailableModels: z
    				.boolean()
    				.optional()
    				.describe(
    				'List all available models not currently in use. Input/output cost values are in dollars (3 is $3.00).'
    				),
    			projectRoot: z
    				.string()
    				.describe('The directory of the project. Must be an absolute path.'),
    			openrouter: z
    				.boolean()
    				.optional()
    				.describe('Indicates the set model ID is a custom OpenRouter model.'),
    			ollama: z
    				.boolean()
    				.optional()
    				.describe('Indicates the set model ID is a custom Ollama model.'),
    			bedrock: z
    				.boolean()
    				.optional()
    				.describe('Indicates the set model ID is a custom AWS Bedrock model.'),
    			azure: z
    				.boolean()
    				.optional()
    				.describe('Indicates the set model ID is a custom Azure OpenAI model.'),
    			vertex: z
    				.boolean()
    				.optional()
    				.describe(
    					'Indicates the set model ID is a custom Google Vertex AI model.'
    				),
    			'openai-compatible': z
    				.boolean()
    				.optional()
    				.describe(
    					'Indicates the set model ID is a custom OpenAI-compatible model. Requires baseURL parameter.'
    				),
    			baseURL: z
    				.string()
    				.optional()
    				.describe(
    					'Custom base URL for providers that support it (e.g., https://api.example.com/v1).'
    				)
    		}),
    		execute: withToolContext('models', async (args, context) => {
    			try {
    				context.log.info(
    					`Starting models tool with args: ${JSON.stringify(args)}`
    				);
    
    				// Use args.projectRoot directly (normalized by withToolContext)
    				const result = await modelsDirect(
    					{ ...args, projectRoot: args.projectRoot },
    					context.log,
    					{ session: context.session }
    				);
    
    				return handleApiResult({
    					result,
    					log: context.log,
    					errorPrefix: 'Error managing models',
    					projectRoot: args.projectRoot
    				});
    			} catch (error) {
    				context.log.error(`Error in models tool: ${error.message}`);
    				return createErrorResponse(error.message);
    			}
    		})
    	});
    }
  • Central tool registry object mapping tool name 'models' to `registerModelsTool` for dynamic server registration.
    export const toolRegistry = {
    	initialize_project: registerInitializeProjectTool,
    	models: registerModelsTool,
    	rules: registerRulesTool,
    	parse_prd: registerParsePRDTool,
    	'response-language': registerResponseLanguageTool,
    	analyze_project_complexity: registerAnalyzeProjectComplexityTool,
    	expand_task: registerExpandTaskTool,
    	expand_all: registerExpandAllTool,
    	scope_up_task: registerScopeUpTool,
    	scope_down_task: registerScopeDownTool,
    	get_tasks: registerGetTasksTool,
    	get_task: registerGetTaskTool,
    	next_task: registerNextTaskTool,
    	complexity_report: registerComplexityReportTool,
    	set_task_status: registerSetTaskStatusTool,
    	add_task: registerAddTaskTool,
    	add_subtask: registerAddSubtaskTool,
    	update: registerUpdateTool,
    	update_task: registerUpdateTaskTool,
    	update_subtask: registerUpdateSubtaskTool,
    	remove_task: registerRemoveTaskTool,
    	remove_subtask: registerRemoveSubtaskTool,
    	clear_subtasks: registerClearSubtasksTool,
    	move_task: registerMoveTaskTool,
    	add_dependency: registerAddDependencyTool,
    	remove_dependency: registerRemoveDependencyTool,
    	validate_dependencies: registerValidateDependenciesTool,
    	fix_dependencies: registerFixDependenciesTool,
    	list_tags: registerListTagsTool,
    	add_tag: registerAddTagTool,
    	delete_tag: registerDeleteTagTool,
    	use_tag: registerUseTagTool,
    	rename_tag: registerRenameTagTool,
    	copy_tag: registerCopyTagTool,
    	research: registerResearchTool,
    	autopilot_start: registerAutopilotStartTool,
    	autopilot_resume: registerAutopilotResumeTool,
    	autopilot_next: registerAutopilotNextTool,
    	autopilot_status: registerAutopilotStatusTool,
    	autopilot_complete: registerAutopilotCompleteTool,
    	autopilot_commit: registerAutopilotCommitTool,
    	autopilot_finalize: registerAutopilotFinalizeTool,
    	autopilot_abort: registerAutopilotAbortTool,
    	generate: registerGenerateTool
    };
  • Helper `getModelConfiguration` used by handler to fetch and format current model configs, API key statuses, and model details for main/research/fallback roles.
    async function getModelConfiguration(options = {}) {
    	const { mcpLog, projectRoot, session } = options;
    
    	const report = (level, ...args) => {
    		if (mcpLog && typeof mcpLog[level] === 'function') {
    			mcpLog[level](...args);
    		}
    	};
    
    	if (!projectRoot) {
    		throw new Error('Project root is required but not found.');
    	}
    
    	// Use centralized config path finding instead of hardcoded path
    	const configPath = findConfigPath(null, { projectRoot });
    	const configExists = isConfigFilePresent(projectRoot);
    
    	log(
    		'debug',
    		`Checking for config file using findConfigPath, found: ${configPath}`
    	);
    	log(
    		'debug',
    		`Checking config file using isConfigFilePresent(), exists: ${configExists}`
    	);
    
    	if (!configExists) {
    		throw new Error(CONFIG_MISSING_ERROR);
    	}
    
    	try {
    		// Get current settings - these should use the config from the found path automatically
    		const mainProvider = getMainProvider(projectRoot);
    		const mainModelId = getMainModelId(projectRoot);
    		const mainBaseURL = getBaseUrlForRole('main', projectRoot);
    		const researchProvider = getResearchProvider(projectRoot);
    		const researchModelId = getResearchModelId(projectRoot);
    		const researchBaseURL = getBaseUrlForRole('research', projectRoot);
    		const fallbackProvider = getFallbackProvider(projectRoot);
    		const fallbackModelId = getFallbackModelId(projectRoot);
    		const fallbackBaseURL = getBaseUrlForRole('fallback', projectRoot);
    
    		// Check API keys
    		const mainCliKeyOk = isApiKeySet(mainProvider, session, projectRoot);
    		const mainMcpKeyOk = getMcpApiKeyStatus(mainProvider, projectRoot);
    		const researchCliKeyOk = isApiKeySet(
    			researchProvider,
    			session,
    			projectRoot
    		);
    		const researchMcpKeyOk = getMcpApiKeyStatus(researchProvider, projectRoot);
    		const fallbackCliKeyOk = fallbackProvider
    			? isApiKeySet(fallbackProvider, session, projectRoot)
    			: true;
    		const fallbackMcpKeyOk = fallbackProvider
    			? getMcpApiKeyStatus(fallbackProvider, projectRoot)
    			: true;
    
    		// Get available models to find detailed info
    		const availableModels = getAvailableModels(projectRoot);
    
    		// Find model details
    		const mainModelData = availableModels.find((m) => m.id === mainModelId);
    		const researchModelData = availableModels.find(
    			(m) => m.id === researchModelId
    		);
    		const fallbackModelData = fallbackModelId
    			? availableModels.find((m) => m.id === fallbackModelId)
    			: null;
    
    		// Return structured configuration data
    		return {
    			success: true,
    			data: {
    				activeModels: {
    					main: {
    						provider: mainProvider,
    						modelId: mainModelId,
    						baseURL: mainBaseURL,
    						sweScore: mainModelData?.swe_score || null,
    						cost: mainModelData?.cost_per_1m_tokens || null,
    						keyStatus: {
    							cli: mainCliKeyOk,
    							mcp: mainMcpKeyOk
    						}
    					},
    					research: {
    						provider: researchProvider,
    						modelId: researchModelId,
    						baseURL: researchBaseURL,
    						sweScore: researchModelData?.swe_score || null,
    						cost: researchModelData?.cost_per_1m_tokens || null,
    						keyStatus: {
    							cli: researchCliKeyOk,
    							mcp: researchMcpKeyOk
    						}
    					},
    					fallback: fallbackProvider
    						? {
    								provider: fallbackProvider,
    								modelId: fallbackModelId,
    								baseURL: fallbackBaseURL,
    								sweScore: fallbackModelData?.swe_score || null,
    								cost: fallbackModelData?.cost_per_1m_tokens || null,
    								keyStatus: {
    									cli: fallbackCliKeyOk,
    									mcp: fallbackMcpKeyOk
    								}
    							}
    						: null
    				},
    				message: 'Successfully retrieved current model configuration'
    			}
    		};
    	} catch (error) {
    		report('error', `Error getting model configuration: ${error.message}`);
    		return {
    			success: false,
    			error: {
    				code: 'CONFIG_ERROR',
    				message: error.message
    			}
    		};
    	}
    }
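The handler above relies on `createLogWrapper` to adapt the MCP logger into the shape the core functions expect: an object exposing info/warn/error/debug methods, as consumed by the `report` helper in `getModelConfiguration`. Its implementation is not shown in this reference; the following is a plausible minimal sketch, not the project's actual code:

```javascript
// Hypothetical sketch of createLogWrapper (the real implementation is not shown above).
// Each level forwards to the underlying logger, tolerating missing methods so that
// calls like mcpLog.debug(...) are safe no-ops when the logger lacks that level.
function createLogWrapper(log) {
  const wrap = (level) => (...args) => {
    if (log && typeof log[level] === 'function') {
      log[level](...args);
    }
  };
  return {
    info: wrap('info'),
    warn: wrap('warn'),
    error: wrap('error'),
    debug: wrap('debug')
  };
}
```

Under this sketch, wrapping a logger that only implements `info` still yields a wrapper whose `debug` and `error` calls silently succeed.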
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions API key requirements for setting models, which is useful, but omits key behavioral details: whether changes are persistent, whether rate limits apply, how errors are handled, and what happens when a configuration is set (e.g., immediate effect vs. requiring a restart). For a tool with 10 parameters and no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences that are front-loaded: the first states the dual purpose, and the second provides specific usage guidance. There is no wasted text, making it efficient, though it could be slightly more structured to separate the 'get' and 'set' functionalities more clearly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (10 parameters, no output schema, no annotations), the description is incomplete. It covers basic purpose and a usage scenario but lacks details on return values, error conditions, or how the tool integrates with the broader system (e.g., sibling tools). This makes it adequate but with clear gaps for a tool of this scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema, as it does not explain parameter interactions or provide additional context like how 'listAvailableModels' relates to 'setMain' or 'setFallback.' Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with 'Get information about available AI models or set model configurations,' which conveys a general purpose but lacks specificity about what 'set model configurations' entails. It distinguishes itself from siblings by focusing on models rather than tasks or dependencies, but the dual purpose (get info vs. set configs) makes it somewhat vague compared to more focused sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context: 'Run without arguments to get the current model configuration and API key status for the selected model providers.' This gives explicit guidance on when to use it in a no-argument scenario. However, it does not specify when to use it versus alternatives (e.g., which sibling tools might interact with models) or when not to use it, keeping it from a score of 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
