MCP Server Replicate

create_prediction

Generate predictions using AI models on Replicate by specifying model inputs and confirming execution for image creation or inference tasks.

Instructions

Create a new prediction using a specific model version on Replicate.

    Args:
        input: Model input parameters including version or model details
        confirmed: Whether the user has explicitly confirmed the generation

    Returns:
        Prediction details if confirmed, or a confirmation request if not
    

Input Schema

Name       Required  Description       Default
input      Yes       (none in schema)  (none)
confirmed  No        (none in schema)  false
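
The schema leaves 'input' unstructured, but judging from the handler in the Implementation Reference below, a call can address a model in one of two ways. A hedged sketch of both payload shapes; all field values, including the model name, are illustrative:

    # Mode 1: pin an exact model version (placeholder, not a real version id)
    input = {
        "version": "<64-character model version id>",
        "prompt": "a watercolor fox",
        "quality": "balanced",  # echoed back in the confirmation message
    }

    # Mode 2: name the model and let the server resolve its latest version
    input = {
        "model_owner": "stability-ai",  # illustrative model
        "model_name": "sdxl",
        "prompt": "a watercolor fox",
    }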

Implementation Reference

  • The primary MCP tool handler for 'create_prediction'. Handles user confirmation, model/version resolution, and delegates to ReplicateClient for the actual prediction creation. (A sketch of the resulting two-call confirmation flow follows this list.)
    @mcp.tool()
    async def create_prediction(input: dict[str, Any], confirmed: bool = False) -> dict[str, Any]:
        """Create a new prediction using a specific model version on Replicate.
    
        Args:
            input: Model input parameters including version or model details
            confirmed: Whether the user has explicitly confirmed the generation
    
        Returns:
            Prediction details if confirmed, or a confirmation request if not
        """
        # If not confirmed, return info about what will be generated
        if not confirmed:
            # Extract model info for display
            model_info = ""
            if "version" in input:
                model_info = f"version: {input['version']}"
            elif "model_owner" in input and "model_name" in input:
                model_info = f"model: {input['model_owner']}/{input['model_name']}"
    
            return {
                "requires_confirmation": True,
                "message": (
                    "⚠️ This will use Replicate credits to generate an image with these parameters:\n\n"
                    f"Model: {model_info}\n"
                    f"Prompt: {input.get('prompt', 'Not specified')}\n"
                    f"Quality: {input.get('quality', 'balanced')}\n\n"
                    "Please confirm if you want to proceed with the generation."
                ),
            }
    
        async with ReplicateClient(api_token=os.getenv("REPLICATE_API_TOKEN")) as client:
            # If version is provided directly, use it
            if "version" in input:
                version = input.pop("version")
            # Otherwise, try to find the model and get its latest version
            elif "model_owner" in input and "model_name" in input:
                model_id = f"{input.pop('model_owner')}/{input.pop('model_name')}"
                search_result = await client.search_models(model_id)
                if not search_result["models"]:
                    raise ValueError(f"Model not found: {model_id}")
                model = search_result["models"][0]
                if not model.get("latest_version"):
                    raise ValueError(f"No versions found for model: {model_id}")
                version = model["latest_version"]["id"]
            else:
                raise ValueError("Must provide either 'version' or both 'model_owner' and 'model_name'")
    
            # Pull the webhook out of the input payload up front; popping it
            # inline in the call below would depend on Python's left-to-right
            # argument evaluation order.
            webhook = input.pop("webhook", None)

            # Create prediction with remaining parameters as input
            result = await client.create_prediction(version=version, input=input, webhook=webhook)
    
            # Return result with prompt about waiting
            return {
                **result,
                "_next_prompt": "after_generation",  # Signal to show the waiting prompt
            }
  • The ReplicateClient helper method called by the tool handler to perform the actual API call that creates the prediction on Replicate. (A sketch of the equivalent raw request follows this list.)
    async def create_prediction(
        self,
        version: str,
        input: Dict[str, Any],
        webhook: Optional[str] = None,
    ) -> Dict[str, Any]:
        """Create a new prediction using a model version.
    
        Args:
            version: Model version ID
            input: Model input parameters
            webhook: Optional webhook URL for prediction updates
    
        Returns:
            Dict containing prediction details
    
        Raises:
            Exception: If the prediction creation fails
        """
        if not self.client:
            raise RuntimeError("Client not initialized. Check error property for details.")
    
        try:
            await self._ensure_http_client()
            
            # Prepare request body
            body = {
                "version": version,
                "input": input,
            }
            if webhook:
                body["webhook"] = webhook
    
            # Create prediction using rate-limited request
            response = await self._make_request(
                "POST",
                "/predictions",
                json=body
            )
            data = response.json()
    
            # Format response
            result = {
                "id": data["id"],
                "status": data["status"],
                "input": data["input"],
                "output": data.get("output"),
                "error": data.get("error"),
                "logs": data.get("logs"),
                "created_at": data.get("created_at"),
                "started_at": data.get("started_at"),
                "completed_at": data.get("completed_at"),
                "urls": data.get("urls", {}),
            }
    
            # Add metrics if available
            if "metrics" in data:
                result["metrics"] = data["metrics"]
    
            return result
    
        except Exception as err:
            logger.error(f"Failed to create prediction: {str(err)}")
            raise Exception(f"Failed to create prediction: {str(err)}") from err
  • The @mcp.tool() decorator registers the create_prediction function as an MCP tool named 'create_prediction' (the tool name defaults to the function name). A minimal registration sketch follows this list.
    @mcp.tool()
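
Read together with the confirmation branch above, the handler enforces a two-call flow: the first call returns a confirmation request, and only a repeat call with confirmed=True spends Replicate credits. A minimal sketch that drives the handler coroutine directly; the model and prompt are illustrative values, not part of the reference:

    import asyncio

    async def demo() -> None:
        payload = {
            "model_owner": "stability-ai",  # illustrative model
            "model_name": "sdxl",
            "prompt": "a watercolor fox",
        }

        # First call: not confirmed, so the handler returns a confirmation request.
        preview = await create_prediction(input=dict(payload))
        assert preview["requires_confirmation"]
        print(preview["message"])

        # Second call: explicitly confirmed; pass a fresh copy, since the handler
        # pops keys such as model_owner/model_name out of its input dict.
        result = await create_prediction(input=dict(payload), confirmed=True)
        print(result["id"], result["status"])

    asyncio.run(demo())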
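
The client excerpt delegates the HTTP work to a rate-limited _make_request helper that is not shown here. For orientation, a rough equivalent of that call against Replicate's public REST endpoint, sketched with httpx; the real client's auth handling, rate limiting, and retries may differ:

    import os
    import httpx

    async def create_prediction_raw(version: str, input: dict) -> dict:
        """Bare-bones POST /predictions; no rate limiting, retries, or formatting."""
        headers = {"Authorization": f"Token {os.environ['REPLICATE_API_TOKEN']}"}
        async with httpx.AsyncClient(base_url="https://api.replicate.com/v1") as http:
            resp = await http.post(
                "/predictions",
                json={"version": version, "input": input},
                headers=headers,
            )
            resp.raise_for_status()
            return resp.json()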
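
As for the registration itself: a minimal sketch of how @mcp.tool() wires a coroutine into a server, assuming the FastMCP class from the official MCP Python SDK (the server name and tool are illustrative):

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("mcp-server-replicate")

    @mcp.tool()  # tool name defaults to the function name, here "ping"
    async def ping(text: str) -> str:
        """Echo the given text back."""
        return text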
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the confirmation mechanism and return behavior, which adds some context. However, it lacks critical details like authentication requirements, rate limits, cost implications, or whether this is a read/write operation, leaving significant gaps in behavioral understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose. The Args and Returns sections add some structural bulk, but every sentence earns its place (for example, the clarification of the confirmation logic), making the description efficient overall.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (2 parameters with nested objects, no output schema, and no annotations), the description is incomplete. It explains the confirmation flow but omits details on error handling, response formats, or integration with siblings like 'get_prediction'. For a tool that creates predictions, this leaves too many unknowns.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It describes 'input' as 'Model input parameters including version or model details' and ties 'confirmed' to user confirmation, adding meaning beyond the bare schema. However, it doesn't detail the structure of 'input' or provide examples, so it only partially addresses the coverage gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Create a new prediction') and resource ('using a specific model version on Replicate'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'generate_image' or 'subscribe_to_generation', which might also involve prediction-like operations, so it misses the highest clarity level.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing a model version, or compare it to siblings like 'generate_image' or 'search_available_models'. Without this context, users might struggle to select the correct tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
