zero_shot_image_classification

Classify images using custom labels without prior training. Provide an image URL and specify candidate categories to identify the image's content with AI-powered visual recognition.

Instructions

Classify an image with zero-shot labels using DeepInfra OpenAI-compatible API (CLIP).

Input Schema

Name             | Required | Description | Default
image_url        | Yes      |             |
candidate_labels | Yes      |             |

Output Schema

Name   | Required | Description | Default
result | Yes      |             |
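
Neither schema table carries descriptions or defaults. Judging from the function signature in the Implementation Reference below, the generated schemas are presumably equivalent to the following sketch (an inference from the Python types, not the server's published JSON Schema):

    # Inferred from the handler signature; a sketch, not the published schema.
    input_schema = {
        "type": "object",
        "properties": {
            "image_url": {"type": "string"},
            "candidate_labels": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["image_url", "candidate_labels"],
    }

    # The handler returns a plain string, surfaced as the single 'result' field.
    output_schema = {
        "type": "object",
        "properties": {"result": {"type": "string"}},
        "required": ["result"],
    }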

Implementation Reference

  • The handler function that executes the zero_shot_image_classification tool. It constructs a vision-enabled chat completion request that classifies the image against the given candidate labels using the configured model (an example client-side call follows this list).
    async def zero_shot_image_classification(image_url: str, candidate_labels: list[str]) -> str:
        """Classify an image with zero-shot labels using DeepInfra OpenAI-compatible API (CLIP)."""
        model = DEFAULT_MODELS["zero_shot_image_classification"]
        try:
            # Use chat/completions with vision capability to get classification
            response = await client.chat.completions.create(
                model=model,
                messages=[
                    {
                        "role": "user",
                        "content": [
                            {
                                "type": "text",
                                "text": f"Classify this image into one of these categories: {', '.join(candidate_labels)}. Return a JSON with 'label' and 'score' fields."
                            },
                            {
                                "type": "image_url",
                                "image_url": {"url": image_url}
                            }
                        ]
                    }
                ],
                max_tokens=200,
            )
            if response.choices:
                return response.choices[0].message.content
            else:
                return "Unable to classify image"
        except Exception as e:
            return f"Error classifying image: {type(e).__name__}: {str(e)}"
  • Conditional registration of the zero_shot_image_classification tool using the FastMCP @app.tool() decorator.
    if "all" in ENABLED_TOOLS or "zero_shot_image_classification" in ENABLED_TOOLS:
        @app.tool()
  • The function signature that defines the input schema (image_url: str, candidate_labels: list[str]) and the str output, along with the docstring used as the tool description in MCP.
    async def zero_shot_image_classification(image_url: str, candidate_labels: list[str]) -> str:
        """Classify an image with zero-shot labels using DeepInfra OpenAI-compatible API (CLIP)."""
  • Helper configuration defining the default model for the zero_shot_image_classification tool (an override example follows this list).
    "zero_shot_image_classification": os.getenv("MODEL_ZERO_SHOT_IMAGE_CLASSIFICATION", "openai/gpt-4o-mini"),
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the API (DeepInfra OpenAI-compatible) and model (CLIP), but lacks details on behavioral traits such as rate limits, authentication needs, error handling, or what the output looks like (though an output schema exists). For a tool with no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
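
For contrast, a description that surfaces the behavior already visible in the implementation might read as follows. This is a hypothetical rewrite; the auth and rate-limit claims are inferences about DeepInfra's hosted API, not documented facts about this server:

    """Classify an image into one of the caller-supplied candidate labels via
    DeepInfra's OpenAI-compatible chat API. Read-only: the image URL is sent to
    DeepInfra, with no local side effects. Requires a DeepInfra API key on the
    server and is subject to DeepInfra rate limits. Returns the raw model text,
    ideally JSON with 'label' and 'score' fields, or a string starting with
    'Error classifying image:' on failure."""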

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Classify an image with zero-shot labels') and adds necessary context ('using DeepInfra OpenAI-compatible API (CLIP)'). There is no wasted text, and it's appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has an output schema (which covers return values) but no annotations, and the description, while minimal, covers the basic purpose. However, for a classification tool with two parameters and no annotation coverage, it says nothing about usage context, parameter semantics, or behavior, so an agent may not have enough to succeed on a first attempt.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the schema provides no parameter descriptions. The description doesn't add any meaning to the parameters 'image_url' and 'candidate_labels' beyond what their names imply. It mentions 'zero-shot labels' which relates to 'candidate_labels', but doesn't explain format, constraints, or examples. Baseline is 3 due to low coverage, but the description doesn't fully compensate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
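
One way to close the gap flagged above, assuming the server keeps the FastMCP decorator shown in the Implementation Reference: FastMCP lifts pydantic Field metadata from Annotated parameters into the input schema, so per-parameter descriptions and constraints can be added without lengthening the prose description. A sketch, not the server's current code:

    from typing import Annotated

    from pydantic import Field

    @app.tool()
    async def zero_shot_image_classification(
        image_url: Annotated[str, Field(
            description="Publicly reachable HTTP(S) URL of the image to classify."
        )],
        candidate_labels: Annotated[list[str], Field(
            min_length=1,
            description="Candidate category names; the model picks the best match."
        )],
    ) -> str:
        """Classify an image with zero-shot labels using DeepInfra OpenAI-compatible API (CLIP)."""
        ...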

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Classify an image') and the method ('zero-shot labels using DeepInfra OpenAI-compatible API (CLIP)'). It distinguishes this tool from siblings like 'image_classification' by specifying the zero-shot approach, though it doesn't explicitly contrast with other image-related tools such as 'object_detection' or 'generate_image'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for zero-shot classification with CLIP, suggesting it's for when you have candidate labels but no pre-trained model. However, it doesn't explicitly state when to use this versus alternatives like 'image_classification' (which might be supervised) or other siblings, nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
