get_ad_image

Retrieve and display Meta ad images for visual analysis by providing an ad ID. This tool downloads ad creatives to enable direct examination of visual content within AI models.

Instructions

Get, download, and visualize a Meta ad image in one step. Useful to see the image in the LLM.

Args:
    ad_id: Meta Ads ad ID
    access_token: Meta API access token (optional - will use cached token if not provided)

Returns:
    The ad image ready for direct visual analysis

Input Schema

Name           Required   Description   Default
ad_id          Yes        -             -
access_token   No         -             -
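
As a sketch, a client-side MCP tools/call payload matching this schema might look like the following. The ad ID value and the exact payload shape are illustrative assumptions, not taken from the server's documentation:

```python
import json

# Hypothetical MCP "tools/call" payload for get_ad_image.
# The ad_id value below is made up for illustration.
request = {
    "method": "tools/call",
    "params": {
        "name": "get_ad_image",
        "arguments": {
            "ad_id": "120210000000000000",   # required: Meta Ads ad ID
            # "access_token" omitted on purpose: the server falls back
            # to its cached token when the argument is not provided.
        },
    },
}
print(json.dumps(request, indent=2))
```

Omitting `access_token` exercises the cached-token path described in the tool's Args section.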

Implementation Reference

  • The core handler for the 'get_ad_image' MCP tool. It retrieves ad details, extracts creative image hashes (or falls back to direct URLs from the creative), downloads the image via Meta Ads API endpoints, converts it with PIL into the MCP Image format, and handles multiple fallback paths for robustness (hash-based lookup, direct URLs from creatives, etc.). Decorated with @mcp_server.tool() for automatic MCP registration.
    # Assumes the module's imports are in scope: json, io, typing's Optional and
    # Union, PIL.Image as PILImage, the MCP Image type, and the server helpers
    # (mcp_server, meta_api_tool, make_api_request, download_image, get_ad_creatives).
    @mcp_server.tool()
    @meta_api_tool
    async def get_ad_image(ad_id: str, access_token: Optional[str] = None) -> Union[Image, str]:
        """
        Get, download, and visualize a Meta ad image in one step. Useful to see the image in the LLM.
        
        Args:
            ad_id: Meta Ads ad ID
            access_token: Meta API access token (optional - will use cached token if not provided)
        
        Returns:
            The ad image ready for direct visual analysis
        """
        if not ad_id:
            return "Error: No ad ID provided"
            
        print(f"Attempting to get and analyze creative image for ad {ad_id}")
        
        # First, get creative and account IDs
        ad_endpoint = f"{ad_id}"
        ad_params = {
            "fields": "creative{id},account_id"
        }
        
        ad_data = await make_api_request(ad_endpoint, access_token, ad_params)
        
        if "error" in ad_data:
            return f"Error: Could not get ad data - {json.dumps(ad_data)}"
        
        # Extract account_id
        account_id = ad_data.get("account_id", "")
        if not account_id:
            return "Error: No account ID found"
        
        # Extract creative ID
        if "creative" not in ad_data:
            return "Error: No creative found for this ad"
            
        creative_data = ad_data.get("creative", {})
        creative_id = creative_data.get("id")
        if not creative_id:
            return "Error: No creative ID found"
        
        # Get creative details to find image hash
        creative_endpoint = f"{creative_id}"
        creative_params = {
            "fields": "id,name,image_hash,asset_feed_spec"
        }
        
        creative_details = await make_api_request(creative_endpoint, access_token, creative_params)
        
        # Identify image hashes to use from creative
        image_hashes = []
        
        # Check for direct image_hash on creative
        if "image_hash" in creative_details:
            image_hashes.append(creative_details["image_hash"])
        
        # Check asset_feed_spec for image hashes - common in Advantage+ ads
        if "asset_feed_spec" in creative_details and "images" in creative_details["asset_feed_spec"]:
            for image in creative_details["asset_feed_spec"]["images"]:
                if "hash" in image:
                    image_hashes.append(image["hash"])
        
        if not image_hashes:
            # If no hashes found, try to extract from the first creative we found in the API
            # and also check for direct URLs as fallback
            creative_json = await get_ad_creatives(access_token=access_token, ad_id=ad_id)
            creative_data = json.loads(creative_json)
            
            # Try to extract hash from data array
            if "data" in creative_data and creative_data["data"]:
                for creative in creative_data["data"]:
                    # Check object_story_spec for image hash
                    if "object_story_spec" in creative and "link_data" in creative["object_story_spec"]:
                        link_data = creative["object_story_spec"]["link_data"]
                        if "image_hash" in link_data:
                            image_hashes.append(link_data["image_hash"])
                    # Check direct image_hash on creative
                    elif "image_hash" in creative:
                        image_hashes.append(creative["image_hash"])
                    # Check asset_feed_spec for image hashes
                    elif "asset_feed_spec" in creative and "images" in creative["asset_feed_spec"]:
                        images = creative["asset_feed_spec"]["images"]
                        if images and len(images) > 0 and "hash" in images[0]:
                            image_hashes.append(images[0]["hash"])
            
            # If still no image hashes found, try direct URL fallback approach
            if not image_hashes:
                print("No image hashes found, trying direct URL fallback...")
                
                image_url = None
                if "data" in creative_data and creative_data["data"]:
                    creative = creative_data["data"][0]
                    
                    # Prioritize higher quality image URLs in this order:
                    # 1. image_urls_for_viewing (usually highest quality)
                    # 2. image_url (direct field)
                    # 3. object_story_spec.link_data.picture (usually full size)
                    # 4. thumbnail_url (last resort - often profile thumbnail)
                    
                    if "image_urls_for_viewing" in creative and creative["image_urls_for_viewing"]:
                        image_url = creative["image_urls_for_viewing"][0]
                        print(f"Using image_urls_for_viewing: {image_url}")
                    elif "image_url" in creative and creative["image_url"]:
                        image_url = creative["image_url"]
                        print(f"Using image_url: {image_url}")
                    elif "object_story_spec" in creative and "link_data" in creative["object_story_spec"]:
                        link_data = creative["object_story_spec"]["link_data"]
                        if "picture" in link_data and link_data["picture"]:
                            image_url = link_data["picture"]
                            print(f"Using object_story_spec.link_data.picture: {image_url}")
                    elif "thumbnail_url" in creative and creative["thumbnail_url"]:
                        image_url = creative["thumbnail_url"]
                        print(f"Using thumbnail_url (fallback): {image_url}")
                
                if not image_url:
                    return "Error: No image URLs found in creative"
                
                # Download the image directly
                print(f"Downloading image from direct URL: {image_url}")
                image_bytes = await download_image(image_url)
                
                if not image_bytes:
                    return "Error: Failed to download image from direct URL"
                
                try:
                    # Convert bytes to PIL Image
                    img = PILImage.open(io.BytesIO(image_bytes))
                    
                    # Convert to RGB if needed
                    if img.mode != "RGB":
                        img = img.convert("RGB")
                        
                    # Create a byte stream of the image data
                    byte_arr = io.BytesIO()
                    img.save(byte_arr, format="JPEG")
                    img_bytes = byte_arr.getvalue()
                    
                    # Return as an Image object that LLM can directly analyze
                    return Image(data=img_bytes, format="jpeg")
                    
                except Exception as e:
                    return f"Error processing image from direct URL: {str(e)}"
        
        print(f"Found image hashes: {image_hashes}")
        
        # Now fetch image data using adimages endpoint with specific format
        image_endpoint = f"act_{account_id}/adimages"
        
        # Format the hashes parameter exactly as in our successful curl test
        hashes_str = f'["{image_hashes[0]}"]'  # Format first hash only, as JSON string array
        
        image_params = {
            "fields": "hash,url,width,height,name,status",
            "hashes": hashes_str
        }
        
        print(f"Requesting image data with params: {image_params}")
        image_data = await make_api_request(image_endpoint, access_token, image_params)
        
        if "error" in image_data:
            return f"Error: Failed to get image data - {json.dumps(image_data)}"
        
        if "data" not in image_data or not image_data["data"]:
            return "Error: No image data returned from API"
        
        # Get the first image URL
        first_image = image_data["data"][0]
        image_url = first_image.get("url")
        
        if not image_url:
            return "Error: No valid image URL found"
        
        print(f"Downloading image from URL: {image_url}")
        
        # Download the image
        image_bytes = await download_image(image_url)
        
        if not image_bytes:
            return "Error: Failed to download image"
        
        try:
            # Convert bytes to PIL Image
            img = PILImage.open(io.BytesIO(image_bytes))
            
            # Convert to RGB if needed
            if img.mode != "RGB":
                img = img.convert("RGB")
                
            # Create a byte stream of the image data
            byte_arr = io.BytesIO()
            img.save(byte_arr, format="JPEG")
            img_bytes = byte_arr.getvalue()
            
            # Return as an Image object that LLM can directly analyze
            return Image(data=img_bytes, format="jpeg")
            
        except Exception as e:
            return f"Error processing image: {str(e)}"
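
The `hashes` parameter in the handler above is built with an f-string. A slightly more defensive sketch of the same formatting uses `json.dumps`, which escapes the value and produces the identical JSON-string-array shape the adimages endpoint request expects (stdlib only; the hash value is a made-up example):

```python
import json

# Build the `hashes` query parameter for act_<ACCOUNT_ID>/adimages.
# json.dumps handles escaping, and for a single-element list it yields
# the same '["<hash>"]' shape as the f-string in the handler above.
image_hashes = ["3b1f2c9d0e"]  # made-up example hash
hashes_str = json.dumps([image_hashes[0]])
print(hashes_str)  # → ["3b1f2c9d0e"]
```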
  • Package-level export of get_ad_image: it is listed in __all__ and re-exported via `from .core import ...`, making it available at the top level of the package.
        'get_ad_image',
        'update_ad',
        'get_insights',
        # 'get_login_link' is conditionally exported via core.__all__
        'login_cli',
        'main',
        'search_interests',
        'get_interest_suggestions',
        'estimate_audience_size',
        'search_behaviors',
        'search_demographics',
        'search_geo_locations'
    ]
    
    # Import key functions to make them available at package level
    from .core import (
        get_ad_accounts,
        get_account_info,
        get_campaigns,
        get_campaign_details,
        create_campaign,
        get_adsets,
        get_adset_details,
        update_adset,
        get_ads,
        get_ad_details,
        get_ad_creatives,
        get_ad_image,
  • Explicit import from .core that makes get_ad_image available at the package level.
    get_ad_image,
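
The re-export pattern above can be sketched in a self-contained way; the module names here are stand-ins for the real package layout, not its actual names:

```python
import types

def _get_ad_image(ad_id):
    """Stub standing in for the real core.get_ad_image."""
    return f"image for {ad_id}"

# Minimal sketch of the __all__ + re-export pattern: a package __init__
# imports a symbol from a submodule and lists it in __all__, so that
# `from pkg import get_ad_image` resolves at the top level.
core = types.ModuleType("pkg.core")
core.get_ad_image = _get_ad_image

pkg = types.ModuleType("pkg")
pkg.get_ad_image = core.get_ad_image   # stands in for: from .core import get_ad_image
pkg.__all__ = ["get_ad_image"]         # explicit public API

print(pkg.get_ad_image("123"))  # → image for 123
```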
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behaviors: it performs multiple actions (get, download, visualize), mentions token caching for the optional parameter, and indicates the output is for visual analysis. However, it lacks details on rate limits, error handling, or authentication requirements beyond the token.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by structured sections for Args and Returns. Every sentence adds value without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description does well by explaining the tool's multi-step behavior and parameter semantics. However, it could be more complete by detailing the return format (e.g., image data type) or error cases, especially for a tool with visual output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must compensate. It adds meaningful semantics for both parameters: 'ad_id' is explained as 'Meta Ads ad ID', and 'access_token' is clarified as optional with caching behavior. This goes beyond the schema's basic titles, though it could specify format or source for the ad_id.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('get, download, and visualize') and resource ('Meta ad image'), and distinguishes it from siblings like 'get_ad_details' or 'get_ad_creatives' by emphasizing the visual output for LLM analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Useful to see the image in the LLM'), but does not explicitly state when not to use it or name alternatives among the many sibling tools (e.g., 'get_ad_details' for non-visual data).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
