Glama

meta_ads_pixels_stats

Retrieve aggregated pixel event counts over a rolling time window to detect unusual drops in key events like PageView, Purchase, or Lead, signaling potential pixel break issues. Read-only data for monitoring pixel health.

Instructions

Returns aggregated pixel-event counts over a rolling time window as an array of {date, event_name, count} rows. Read-only. Use this to spot unusual drops in PageView / Purchase / Lead volume that indicate a pixel break. For per-event metadata (parameter names, sample payloads), use meta_ads_pixels_events instead.
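The documented {date, event_name, count} row shape lends itself to a simple day-over-day drop check. A minimal sketch in Python — the rows, helper names, and threshold below are invented for illustration, not real API output:

```python
# Hypothetical rows in the documented shape: {date, event_name, count}.
rows = [
    {"date": "2024-06-01", "event_name": "PageView", "count": 1200},
    {"date": "2024-06-01", "event_name": "Purchase", "count": 45},
    {"date": "2024-06-02", "event_name": "PageView", "count": 30},
    {"date": "2024-06-02", "event_name": "Purchase", "count": 1},
]

def daily_totals(rows, event_name):
    """Collect per-day counts for one event, ordered by date."""
    days = {}
    for row in rows:
        if row["event_name"] == event_name:
            days[row["date"]] = days.get(row["date"], 0) + row["count"]
    return [days[d] for d in sorted(days)]

def has_sharp_drop(counts, threshold=0.5):
    """Flag any day-over-day decline steeper than `threshold` (default 50%)."""
    return any(
        prev > 0 and (prev - cur) / prev > threshold
        for prev, cur in zip(counts, counts[1:])
    )

print(has_sharp_drop(daily_totals(rows, "PageView")))  # → True (1200 → 30)
```

A 97% overnight collapse in PageView volume like the one above is the classic signature of a broken pixel install rather than organic traffic variation.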

Input Schema

Name       | Required | Description                                                                                                                                                                                          | Default
account_id | No       | Meta Ads account ID in the format 'act_XXXXXXXXXX' (e.g. 'act_1234567890'). Optional: falls back to META_ADS_ACCOUNT_ID from the configured credentials. The leading 'act_' prefix is required.        | (none)
pixel_id   | Yes      | Pixel ID to query.                                                                                                                                                                                   | (none)
period     | No       | Aggregation window. Longer windows cost more Graph API quota but are necessary to spot slow degradations.                                                                                            | last_30d
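The account_id fallback described in the table can be sketched as a small resolver. This is a hypothetical illustration of the documented behavior, not the server's actual implementation; the function name is an assumption:

```python
import os

def resolve_account_id(account_id=None):
    """Resolve the Meta Ads account ID: an explicit argument wins, otherwise
    fall back to the META_ADS_ACCOUNT_ID environment variable. The documented
    'act_' prefix is enforced in both cases."""
    resolved = account_id or os.environ.get("META_ADS_ACCOUNT_ID")
    if not resolved:
        raise ValueError("No account_id given and META_ADS_ACCOUNT_ID is unset")
    if not resolved.startswith("act_"):
        raise ValueError(f"account_id must start with 'act_': {resolved!r}")
    return resolved

# Simulate configured credentials for the demo.
os.environ["META_ADS_ACCOUNT_ID"] = "act_1234567890"
print(resolve_account_id())  # → act_1234567890 (env-var fallback)
```

Validating the 'act_' prefix before calling the Graph API turns a confusing remote error into an immediate, local one.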
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Declares its read-only nature, mentions the rolling time window, and discloses that longer periods cost more Graph API quota: important behavioral traits, given that no structured annotations are provided.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with no filler: return format first, usage guidance second, sibling alternative third. Efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the absence of an output schema, the description fully specifies the return format. It covers the use case, differentiation from the sibling tool, and parameter nuances. Complete for a read tool with well-described parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds value by explaining the account_id fallback to an environment variable and the period cost trade-off, though pixel_id gets no context beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns aggregated pixel-event counts with a specific structure ('array of {date, event_name, count} rows') and distinguishes itself from sibling meta_ads_pixels_events which handles per-event metadata.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('spot unusual drops...indicating a pixel break') and when not to ('For per-event metadata...use meta_ads_pixels_events instead'), providing excellent contextual guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/logly/mureo'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.