Glama › gerred

MCP Server Replicate

search_available_models

Find AI models for image generation by searching with descriptive queries and optional style filters through the Replicate API.

Instructions

Search for available models matching the query.

    Args:
        query: Search query describing the desired model
        style: Optional style to filter by

    Returns:
        List of matching models with scores
    

Input Schema

Name   Required  Description                                 Default
query  Yes       Search query describing the desired model   —
style  No        Optional style to filter by                 —
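For illustration only, a tool-call arguments object satisfying this schema might look like the following (the query and style values are hypothetical):

```python
# Hypothetical arguments for a search_available_models call.
# Only "query" is required; "style" may be omitted entirely.
arguments = {
    "query": "photorealistic text-to-image model",
    "style": "realistic",
}

assert "query" in arguments  # the one required field
```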

Implementation Reference

  • Handler function for the search_available_models tool. It searches Replicate models through the client, scores each result on popularity, featured status, version stability, and tags matching the requested style or image generation, sorts by score, and returns a ModelList.
    import os

    # ReplicateClient, Model, and ModelList are defined elsewhere in this package.
    async def search_available_models(
        query: str,
        style: str | None = None,
    ) -> ModelList:
        """Search for available models matching the query.
    
        Args:
            query: Search query describing the desired model
            style: Optional style to filter by
    
        Returns:
            List of matching models with scores
        """
        search_query = query
        if style:
            search_query = f"{style} style {search_query}"
    
        async with ReplicateClient(api_token=os.getenv("REPLICATE_API_TOKEN")) as client:
            result = await client.search_models(search_query)
            models = [Model(**model) for model in result["models"]]
    
            # Score models but don't auto-select
            scored_models = []
            for model in models:
                score = 0
                run_count = getattr(model, "run_count", 0) or 0
                score += min(50, (run_count / 1000) * 50)
                if getattr(model, "featured", False):
                    score += 20
                if model.latest_version:
                    score += 10
                tags = getattr(model, "tags", [])
                if style and any(style.lower() in tag.lower() for tag in tags):
                    score += 15
                if "image" in tags or "text-to-image" in tags:
                    score += 15
                scored_models.append((model, score))
    
            # Sort by score but return all for user selection
            scored_models.sort(key=lambda x: x[1], reverse=True)
            return ModelList(
                models=[m[0] for m in scored_models],
                next_cursor=result.get("next_cursor"),
                total_count=result.get("total_count"),
            )
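The scoring heuristic in the handler can be isolated as a small pure function for clarity. This is a sketch: the keyword parameters below stand in for the Model attributes the handler reads and are not part of the actual API.

```python
def score_model(run_count=0, featured=False, has_latest_version=False,
                tags=(), style=None):
    """Mirror the handler's heuristic: popularity contributes up to 50
    points (capped at 1,000 runs), featured status 20, a stable latest
    version 10, a style-matching tag 15, and image-generation tags 15."""
    score = min(50, (run_count / 1000) * 50)
    if featured:
        score += 20
    if has_latest_version:
        score += 10
    if style and any(style.lower() in tag.lower() for tag in tags):
        score += 15
    if "image" in tags or "text-to-image" in tags:
        score += 15
    return score

# A heavily used, featured text-to-image model with a stable latest
# version but no style match scores 50 + 20 + 10 + 15 = 95.
```

Note that the final tag check is an exact membership test, so a tag like "text-to-image" does not also satisfy the "image" branch.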
  • The @mcp.tool() decorator registers the search_available_models function as an MCP tool with the function name.
    @mcp.tool()
    async def search_available_models(
  • Pydantic model (excerpt) defining the output schema for search_available_models, used as the return type.
    github_url: Optional[str] = Field(None, description="URL to model's GitHub repository")
    paper_url: Optional[str] = Field(None, description="URL to model's research paper")
    license_url: Optional[str] = Field(None, description="URL to model's license")
    run_count: Optional[int] = Field(None, description="Number of times this model has been run")
    cover_image_url: Optional[str] = Field(None, description="URL to model's cover image")
    latest_version: Optional[ModelVersion] = Field(None, description="Latest version of the model")
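The actual Model and ModelList are Pydantic models, and the excerpt above shows only a few Model fields. As a rough, stdlib-only stand-in for the overall output shape (field names inferred from the handler code; the types are assumptions):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModelSketch:
    # Names mirror the attributes the handler reads; types are guesses.
    name: str
    run_count: Optional[int] = None
    featured: bool = False
    tags: list = field(default_factory=list)
    latest_version: Optional[str] = None  # the real type is ModelVersion

@dataclass
class ModelListSketch:
    models: list
    next_cursor: Optional[str] = None
    total_count: Optional[int] = None

result = ModelListSketch(models=[ModelSketch(name="stability-ai/sdxl", run_count=1200)])
```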
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It notes that the tool returns a 'List of matching models with scores', adding some behavioral context about the output format. However, it lacks details on pagination, rate limits, authentication requirements, and error conditions, which are critical for a search operation over potentially large datasets.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by structured Args and Returns sections. Every sentence earns its place by clarifying inputs and outputs without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, 0% schema coverage, and no output schema, the description provides basic purpose and parameter hints but is incomplete. It doesn't cover behavioral aspects like response format details, error handling, or usage context relative to siblings, which are needed for a search tool in a model management system.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the schema itself provides no parameter descriptions. The tool description adds basic semantics for 'query' (a search query describing the desired model) and 'style' (an optional style filter), partially compensating. However, it doesn't explain what values 'style' accepts (e.g., artistic, realistic) or any query syntax, leaving gaps for both parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'search' and the resource 'available models', specifying that it finds models matching a query. It is distinct from siblings like 'list_models' (which likely lists all models without filtering) and 'get_model_details' (which retrieves information about a specific model). However, it doesn't explicitly contrast with 'search_models' (a similarly named sibling), leaving some ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives like 'list_models' or the similarly named 'search_models'. It implies query-based filtering but doesn't specify scenarios, prerequisites, or exclusions, leaving the agent to infer context from tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/gerred/mcp-server-replicate'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.