by hoangdn3

search_models

Find and filter AI models by provider, pricing, context length, and capabilities like vision, function calling, or JSON mode to match your specific requirements.

Instructions

Search and filter OpenRouter.ai models based on various criteria

Input Schema

| Name               | Required | Description                                                            | Default |
| ------------------ | -------- | ---------------------------------------------------------------------- | ------- |
| query              | No       | Optional search query to filter by name, description, or provider      |         |
| provider           | No       | Filter by specific provider (e.g., "anthropic", "openai", "cohere")    |         |
| minContextLength   | No       | Minimum context length in tokens                                       |         |
| maxContextLength   | No       | Maximum context length in tokens                                       |         |
| maxPromptPrice     | No       | Maximum price per 1K tokens for prompts                                |         |
| maxCompletionPrice | No       | Maximum price per 1K tokens for completions                            |         |
| capabilities       | No       | Filter by model capabilities                                           |         |
| limit              | No       | Maximum number of results to return                                    | 10      |
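
As an example, the following tool arguments (values illustrative, not from the source) would request at most 5 Anthropic models with at least 100,000 tokens of context and vision support:

```json
{
  "provider": "anthropic",
  "minContextLength": 100000,
  "capabilities": { "vision": true },
  "limit": 5
}
```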

Implementation Reference

  • Main handler function for the 'search_models' tool. It refreshes the model cache if needed, calls modelCache.searchModels with the input parameters, and returns the results as JSON or an error message.
    export async function handleSearchModels(
      request: { params: { arguments: SearchModelsToolRequest } },
      apiClient: OpenRouterAPIClient,
      modelCache: ModelCache
    ) {
      const args = request.params.arguments;
      try {
        // Refresh the cache if needed
        if (!modelCache.isCacheValid()) {
          const models = await apiClient.getModels();
          modelCache.setModels(models);
        }

        // Search models based on criteria
        const results = modelCache.searchModels({
          query: args.query,
          provider: args.provider,
          minContextLength: args.minContextLength,
          maxContextLength: args.maxContextLength,
          maxPromptPrice: args.maxPromptPrice,
          maxCompletionPrice: args.maxCompletionPrice,
          capabilities: args.capabilities,
          limit: args.limit || 10,
        });

        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify(results, null, 2),
            },
          ],
        };
      } catch (error) {
        if (error instanceof Error) {
          return {
            content: [
              {
                type: 'text',
                text: `Error searching models: ${error.message}`,
              },
            ],
            isError: true,
          };
        }
        throw error;
      }
    }
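
The handler relies on `modelCache.isCacheValid()` and `modelCache.setModels()`, which are not shown here. A minimal sketch of that cache-freshness logic, assuming a simple TTL-based design (the class name, `ttlMs` field, and 5-minute default are illustrative assumptions, not taken from the source):

```typescript
// Illustrative TTL cache sketch; the real ModelCache may differ.
class ModelCacheSketch {
  private models: any[] = [];
  private fetchedAt = 0; // epoch ms of last refresh; 0 = never populated

  // ttlMs is an assumed knob; 5 minutes is an arbitrary example value.
  constructor(private ttlMs: number = 5 * 60 * 1000) {}

  // Valid only if the cache has been populated within the TTL window.
  isCacheValid(): boolean {
    return this.fetchedAt > 0 && Date.now() - this.fetchedAt < this.ttlMs;
  }

  setModels(models: any[]): void {
    this.models = models;
    this.fetchedAt = Date.now();
  }

  getAllModels(): any[] {
    return this.models;
  }
}
```

With a design like this, the handler's `!modelCache.isCacheValid()` check naturally triggers a refetch both on first use and after the TTL expires.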
  • TypeScript interface defining the input schema for the search_models tool parameters.
    export interface SearchModelsToolRequest {
      query?: string;
      provider?: string;
      minContextLength?: number | string;
      maxContextLength?: number | string;
      maxPromptPrice?: number | string;
      maxCompletionPrice?: number | string;
      capabilities?: {
        functions?: boolean;
        tools?: boolean;
        vision?: boolean;
        json_mode?: boolean;
      };
      limit?: number | string;
    }
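
The numeric fields are typed `number | string` because tool arguments may arrive as strings, and `searchModels` repeats the same `parseInt`/`parseFloat` coercion for each one. A hypothetical helper (not in the source) could centralize that normalization:

```typescript
// Hypothetical helper: normalize a number-or-string parameter to a number,
// or undefined when the value is absent or unparseable.
function toNumber(value: number | string | undefined): number | undefined {
  if (value === undefined) return undefined;
  const n = typeof value === 'string' ? Number(value) : value;
  return Number.isNaN(n) ? undefined : n;
}
```

Note one behavioral difference: `Number('10k')` is `NaN`, whereas the original's `parseInt('10k', 10)` would yield `10`, so adopting a helper like this would tighten (slightly change) how partially numeric strings are handled.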
  • MCP tool registration for 'search_models', including name, description, and detailed JSON input schema.
    {
      name: 'search_models',
      description: 'Search and filter OpenRouter.ai models based on various criteria',
      inputSchema: {
        type: 'object',
        properties: {
          query: {
            type: 'string',
            description: 'Optional search query to filter by name, description, or provider',
          },
          provider: {
            type: 'string',
            description: 'Filter by specific provider (e.g., "anthropic", "openai", "cohere")',
          },
          minContextLength: {
            type: 'number',
            description: 'Minimum context length in tokens',
          },
          maxContextLength: {
            type: 'number',
            description: 'Maximum context length in tokens',
          },
          maxPromptPrice: {
            type: 'number',
            description: 'Maximum price per 1K tokens for prompts',
          },
          maxCompletionPrice: {
            type: 'number',
            description: 'Maximum price per 1K tokens for completions',
          },
          capabilities: {
            type: 'object',
            description: 'Filter by model capabilities',
            properties: {
              functions: {
                type: 'boolean',
                description: 'Requires function calling capability',
              },
              tools: {
                type: 'boolean',
                description: 'Requires tools capability',
              },
              vision: {
                type: 'boolean',
                description: 'Requires vision capability',
              },
              json_mode: {
                type: 'boolean',
                description: 'Requires JSON mode capability',
              },
            },
          },
          limit: {
            type: 'number',
            description: 'Maximum number of results to return (default: 10)',
            minimum: 1,
            maximum: 50,
          },
        },
      },
    },
  • Dispatch handler in the CallToolRequestSchema switch statement that routes 'search_models' calls to handleSearchModels.
    case 'search_models':
      return handleSearchModels(
        { params: { arguments: request.params.arguments as SearchModelsToolRequest } },
        this.apiClient,
        this.modelCache
      );
  • Core helper method in ModelCache class that implements model filtering logic based on query, provider, context length, price, capabilities, and limit.
    public searchModels(params: {
      query?: string;
      provider?: string;
      minContextLength?: number | string;
      maxContextLength?: number | string;
      maxPromptPrice?: number | string;
      maxCompletionPrice?: number | string;
      capabilities?: {
        functions?: boolean;
        tools?: boolean;
        vision?: boolean;
        json_mode?: boolean;
      };
      limit?: number | string;
    }): any[] {
      let results = this.getAllModels();

      // Apply text search
      if (params.query) {
        const query = params.query.toLowerCase();
        results = results.filter(
          (model) =>
            model.id.toLowerCase().includes(query) ||
            (model.description && model.description.toLowerCase().includes(query)) ||
            (model.provider && model.provider.toLowerCase().includes(query))
        );
      }

      // Filter by provider
      if (params.provider) {
        results = results.filter(
          (model) =>
            model.provider &&
            model.provider.toLowerCase() === params.provider!.toLowerCase()
        );
      }

      // Filter by context length
      if (params.minContextLength !== undefined) {
        const minContextLength =
          typeof params.minContextLength === 'string'
            ? parseInt(params.minContextLength, 10)
            : params.minContextLength;
        if (!isNaN(minContextLength)) {
          results = results.filter(
            (model) => model.context_length >= minContextLength
          );
        }
      }
      if (params.maxContextLength !== undefined) {
        const maxContextLength =
          typeof params.maxContextLength === 'string'
            ? parseInt(params.maxContextLength, 10)
            : params.maxContextLength;
        if (!isNaN(maxContextLength)) {
          results = results.filter(
            (model) => model.context_length <= maxContextLength
          );
        }
      }

      // Filter by price
      if (params.maxPromptPrice !== undefined) {
        const maxPromptPrice =
          typeof params.maxPromptPrice === 'string'
            ? parseFloat(params.maxPromptPrice)
            : params.maxPromptPrice;
        if (!isNaN(maxPromptPrice)) {
          results = results.filter(
            (model) =>
              !model.pricing?.prompt || model.pricing.prompt <= maxPromptPrice
          );
        }
      }
      if (params.maxCompletionPrice !== undefined) {
        const maxCompletionPrice =
          typeof params.maxCompletionPrice === 'string'
            ? parseFloat(params.maxCompletionPrice)
            : params.maxCompletionPrice;
        if (!isNaN(maxCompletionPrice)) {
          results = results.filter(
            (model) =>
              !model.pricing?.completion ||
              model.pricing.completion <= maxCompletionPrice
          );
        }
      }

      // Filter by capabilities
      if (params.capabilities) {
        if (params.capabilities.functions) {
          results = results.filter((model) => model.capabilities?.function_calling);
        }
        if (params.capabilities.tools) {
          results = results.filter((model) => model.capabilities?.tools);
        }
        if (params.capabilities.vision) {
          results = results.filter((model) => model.capabilities?.vision);
        }
        if (params.capabilities.json_mode) {
          results = results.filter((model) => model.capabilities?.json_mode);
        }
      }

      // Apply limit
      if (params.limit !== undefined) {
        const limit =
          typeof params.limit === 'string'
            ? parseInt(params.limit, 10)
            : params.limit;
        if (!isNaN(limit) && limit > 0) {
          results = results.slice(0, limit);
        }
      }

      return results;
    }
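
To illustrate the filter pipeline (text search, provider, context length, price, capabilities, then limit), here is a self-contained run of the same steps over made-up model records; the ids, prices, and thresholds are invented for the example:

```typescript
// Made-up sample records mirroring the fields searchModels inspects.
const sampleModels = [
  { id: 'acme/small', provider: 'acme', context_length: 8000,
    pricing: { prompt: 0.001, completion: 0.002 },
    capabilities: { vision: false, json_mode: true } },
  { id: 'acme/vision-large', provider: 'acme', context_length: 128000,
    pricing: { prompt: 0.01, completion: 0.03 },
    capabilities: { vision: true, json_mode: true } },
  { id: 'other/medium', provider: 'other', context_length: 32000,
    pricing: { prompt: 0.002, completion: 0.004 },
    capabilities: { vision: false, json_mode: false } },
];

// Same filter order as searchModels: provider, minContextLength,
// capabilities.vision, then limit.
const results = sampleModels
  .filter((m) => m.provider === 'acme')       // provider filter
  .filter((m) => m.context_length >= 16000)   // minContextLength
  .filter((m) => m.capabilities?.vision)      // capabilities.vision
  .slice(0, 10);                              // limit

console.log(results.map((m) => m.id)); // only 'acme/vision-large' survives
```

Because each step narrows the previous result set, filter order does not change which models survive, only how much work later steps do.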


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/hoangdn3/mcp-ocr-fallback'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.