
Server Details

Decode video ads, load brand intelligence, generate ad scripts.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL
Repository
Heista-co/heista-mcp
GitHub Stars
0

Tool Descriptions (Grade: A)

Average 4.5/5 across 11 of 11 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose: powersource builders differ by input source, intelligence tools serve different discovery modes, and generate_adscript is the sole generation tool. No overlapping responsibilities.

Naming Consistency: 4/5

Most tools follow verb_noun pattern (check_balance, create_powersource_*, decode_ad, get_*). However, 'adformula_intelligence' and 'decoder_intelligence' reverse the pattern, and 'decoder_intelligence' vs 'decode_ad' introduces slight inconsistency.

Tool Count: 5/5

11 tools cover the entire ad creative intelligence pipeline without bloat. Each tool is justified and non-redundant, fitting the server's specialized scope well.

Completeness: 5/5

The tool surface fully covers brand profiling, ad structure discovery, decoding, script generation, and status polling. No obvious missing operations for the stated domain of creative intelligence.

Available Tools

11 tools
adformula_intelligence (Ad Formula Intelligence, Grade: A)
Read-only, Idempotent

Get proven ad formula blueprints — structural patterns clustered from 3-10+ winning ads that independently converged on the same beat architecture while Meta kept rewarding them with sustained spend. Each formula carries: source ad count, average active days (runtime proof), confidence score, 6-layer beat blueprint, per-beat visual direction. Formulas are the category-replication source. Use for generate_adscript with source_type="formula". Free. Filter by vertical first, then narrow by creative_format or marketing_angle to match the brand. When picking among results: prioritise (1) avg_active_days as primary proof, (2) marketing_angle alignment with PowerSource buyer tension, (3) source_ad_count for cluster robustness, (4) confidence_score as tiebreaker. Note: formulas are abstracted from source ads — they carry the structure but not exact transcripts. For sentence-level fidelity, use a single decode instead.

Parameters (JSON Schema)
limit (optional): Max formulas to return (1-10, default 5).
vertical (optional): Industry vertical to filter formulas. Examples: BEAUTY_SKINCARE, HEALTH_SUPPLEMENTS, FITNESS, FOOD_BEVERAGE, FASHION_APPAREL, SAAS_SOFTWARE, FINANCE_FINTECH, INFO_PRODUCTS, TECH_GADGETS. Omit for all verticals.
hook_type (optional): Filter by opening hook subtype. Examples: CURIOSITY_SPIKE, IDENTITY_HOOK, CONTRADICTION_HOOK, DIRECT_QUESTION_HOOK, PAST_SELF_OPEN, DATA_POINT_START, PROVOCATION. Omit for all hook types.
algo_intent (optional): Structural engine to filter by. Examples: PROBLEM_AGITATE_SOLVE, MECHANISM_REVEAL, TRANSFORMATION_ARC, SOCIAL_PROOF_STACK, COMPARISON_CONTRAST, URGENCY_SCARCITY. Omit for all intents.
creative_format (optional): Creative format to filter by. Examples: TALKING_HEAD_BROLL, VOICEOVER_BROLL, UGC_TESTIMONIAL, PRODUCT_DEMO, SLIDESHOW_OVERLAY, INFLUENCER. Omit for all formats.
marketing_angle (optional): Marketing angle to filter by. Examples: PROBLEM_SOLUTION, SOCIAL_PROOF_RESULTS, HOW_TO_TUTORIAL, INGREDIENT_SCIENCE, ASPIRATIONAL_IDENTITY, VALUE_STACK. Omit for all angles.
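The result-selection order the description recommends can be sketched as a single sort key. This is a client-side sketch only: the field names (avg_active_days, marketing_angle, source_ad_count, confidence_score) follow the description, but the exact response keys are an assumption.

```python
# Sketch: rank adformula_intelligence results by the recommended order:
# (1) avg_active_days as primary proof, (2) marketing_angle alignment,
# (3) source_ad_count for cluster robustness, (4) confidence_score tiebreak.
# Response field names are assumed, not taken from a published schema.

def rank_formulas(formulas: list[dict], preferred_angle: str) -> list[dict]:
    """Sort candidate formulas best-first per the description's priority order."""
    return sorted(
        formulas,
        key=lambda f: (
            f["avg_active_days"],                     # (1) runtime proof
            f["marketing_angle"] == preferred_angle,  # (2) angle alignment
            f["source_ad_count"],                     # (3) cluster robustness
            f["confidence_score"],                    # (4) tiebreaker
        ),
        reverse=True,
    )

candidates = [
    {"avg_active_days": 45, "marketing_angle": "PROBLEM_SOLUTION",
     "source_ad_count": 6, "confidence_score": 0.81},
    {"avg_active_days": 90, "marketing_angle": "VALUE_STACK",
     "source_ad_count": 3, "confidence_score": 0.74},
]
best = rank_formulas(candidates, "PROBLEM_SOLUTION")[0]
```

Note that runtime proof outranks angle alignment here, so a long-running formula beats a better-aligned but shorter-lived one.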
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true; description adds that formulas are abstracted, include source ad count, average active days, confidence score, etc., and are not exact transcripts. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

All sentences contribute important information. The description is front-loaded with purpose, then details, then usage guidance, and there is no redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description thoroughly covers return fields (source ad count, avg active days, etc.) and integration with generate_adscript. With 6 parameters all documented in schema and additional usage advice, the tool is fully specified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, so parameters are well-documented. The description adds value by suggesting filtering order and the prioritization strategy for selecting results, which goes beyond schema details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns 'proven ad formula blueprints' and differentiates from siblings like decode_ad and generate_adscript by specifying that formulas are abstracted structural patterns, not exact transcripts. It also notes the tool is free and provides filtering advice.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance: 'Use for generate_adscript with source_type="formula".' 'Filter by vertical first, then narrow...' and a prioritized order for selecting results. Also tells when not to use it: 'For sentence-level fidelity, use a single decode instead.'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_balance (Check Balance, Grade: A)
Read-only, Idempotent

Check Heista API credit balance, this month's usage broken down by operation, and pricing for every paid tool. Returns balance in cents, lifetime spend, month-to-date counts per tool, and a top-up link. Call when the user asks about credits, balance, usage, top-ups, or pricing.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, non-destructive behavior. The description adds specific return details (balance in cents, lifetime spend, monthly counts, top-up link), which enriches the behavioral model beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two well-structured sentences. The first sentence details the tool's function and outputs, the second provides clear usage context. Every word adds value with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters, thorough annotations, and no output schema, the description adequately covers the tool's purpose and return information. It could optionally mention error cases or authentication needs, but the current level is sufficient for an agent to correctly invoke the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so no parameter description is needed. According to the guidelines, a baseline of 4 is appropriate since the schema coverage is 100% and no parameters exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool checks credit balance, usage breakdown by operation, and pricing for paid tools. It clearly identifies the verb 'Check' and the resources (balance, usage, pricing). No sibling tool overlaps with this purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description directly states 'Call when the user asks about credits, balance, usage, top-ups, or pricing.' This provides clear usage triggers. Although no explicit when-not-to-use is given, sibling tools are sufficiently distinct to avoid confusion.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_powersource_docs (Create PowerSource from Documents, Grade: A)

Build a complete creative intelligence profile from internal brand documents — creative briefs, brand guidelines, product specs, customer research, competitive analysis. Returns the same shape as create_powersource_url: brand identity, buyer profile, tensions, angles, voice, proof. Use when the truth lives in PDFs and DOCX, not on the website. Pass file_ids from the Files API or document_urls (PDF, DOCX, TXT, MD). Optionally pass context_url for additional live brand context. Costs 100 credits.

Parameters (JSON Schema)
file_ids (optional): Array of file IDs from a previous upload. Up to 10 files.
context_url (optional): Optional website URL to layer live brand context on top of the documents (colors, fonts, current messaging).
document_urls (optional): Array of public URLs pointing to documents (PDF, DOCX, TXT, MD). Up to 10 URLs.
idempotency_key (optional): Optional unique key to make this call safely retryable. If the same key + org repeats, the original result is returned without re-charging.
documents_inline (optional): Inline documents as base64. Use when the user has uploaded a file into chat and no public URL exists. IMPORTANT: The synthesis pipeline reads TEXT ONLY — it ignores images, diagrams, and visual layout. For any PDF or DOCX the user drops into chat: (1) read the file using your file-reading tools, (2) extract the text content preserving section headers and structure, (3) save as a clean .md or .txt file, (4) base64-encode the text file and submit here. Do NOT base64-encode the original PDF — extract text first. This keeps payloads small (a 50-page brief extracts to ~50KB of text vs 5MB of PDF) and produces better results because the pipeline gets clean structured text instead of OCR-extracted noise from embedded images. Max 5MB per file, 10 files total across all input types.
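Steps (3) and (4) of the documents_inline guidance can be sketched with the standard library: given text already extracted from a PDF or DOCX, enforce the 5MB per-file limit and base64-encode it. The payload field names (filename, content_base64) are assumptions for illustration, not the documented schema.

```python
# Sketch, assuming text has already been extracted from the source
# document (steps 1-2 of the guidance). Field names are hypothetical.
import base64

MAX_BYTES = 5 * 1024 * 1024  # 5MB per-file limit from the description

def encode_extracted_text(text: str, filename: str) -> dict:
    """Package extracted text as a base64 inline document entry."""
    data = text.encode("utf-8")
    if len(data) > MAX_BYTES:
        raise ValueError("extracted text exceeds the 5MB per-file limit")
    return {
        "filename": filename,  # e.g. "brand-brief.md" (clean .md/.txt, not the PDF)
        "content_base64": base64.b64encode(data).decode("ascii"),
    }

doc = encode_extracted_text("# Brand Brief\n\nVoice: bold, direct.", "brief.md")
```

Encoding the extracted text rather than the original PDF is what keeps the payload at roughly text size instead of the full binary.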
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses credit cost (100 credits), output shape (same as create_powersource_url), and idempotency key usage. It does not contradict annotations (readOnlyHint: false, etc.). It could mention that it is non-destructive but this is already implied by annotations. Good balance.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with purpose and output, followed by usage context and parameter details. Every sentence adds value, and there is no redundancy. It fits essential information in a concise yet comprehensive paragraph.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, no output schema, no enums), the description is comprehensive. It explains output shape by referencing a sibling tool, cost, and parameter behavior. Minor lack: it doesn't specify behavior when both file_ids and document_urls are provided, but overall it covers key aspects well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers all parameters with descriptions, so the baseline is 3. The description adds significant value for the documents_inline parameter, including detailed instructions on extracting text and base64 encoding. For the other parameters, it adds minimal detail beyond the schema. Overall, the documents_inline guidance lifts the score above the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: building a creative intelligence profile from internal brand documents. It specifies document types (briefs, guidelines, etc.) and output fields (brand identity, buyer profile, tensions, etc.). It distinguishes itself from the sibling tool create_powersource_url by emphasizing the use of documents versus website URLs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear guidance on when to use this tool: 'when the truth lives in PDFs and DOCX, not on the website.' It also explains how to pass documents via file_ids or document_urls, and mentions optional context_url. However, it could be more explicit about when not to use it (e.g., if data is already in a structured format).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_powersource_full (Create PowerSource Full: URL + Documents, Grade: A)

Build the highest-fidelity creative intelligence profile by combining a website URL with internal brand documents. Use when you want public messaging triangulated against internal strategy — the result has stronger conviction on voice, positioning, and proof than either source alone. Returns the same shape as create_powersource_url. Requires both a URL and at least one document (file_id or document_url). Costs 200 credits. For URL-only or docs-only, use the single-mode variants.

Parameters (JSON Schema)
url (required): Website URL to analyze. Supports any public website.
file_ids (optional): Array of file IDs from a previous upload. Up to 10 files.
document_urls (optional): Array of public URLs pointing to documents (PDF, DOCX, TXT, MD). Up to 10 URLs.
idempotency_key (optional): Optional unique key to make this call safely retryable.
documents_inline (optional): Inline documents as base64. The pipeline reads TEXT ONLY — for any PDF or DOCX, extract the text content first using your file-reading tools, save as .md or .txt, then base64-encode and submit here. Supported formats: PDF, DOCX, TXT, MD. Max 5MB per file.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate mutation (readOnlyHint=false) and non-destructive (destructiveHint=false). Description adds cost (200 credits), requirement of both URL and document, and states result shape matches another tool. Does not detail any side effects or authorization needs, but context is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four well-structured sentences: purpose, use case, return shape, and alternatives. Front-loaded with key information, no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers usage, requirements, cost, and alternatives. Lacks output schema, but references shape of sibling tool. Adequate for a complex tool, though more detail on return fields would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but description adds critical nuance: 'Requires both a URL and at least one document' and provides detailed guidance on inline documents (text-only pipeline, handling PDF/DOCX). This significantly aids correct parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it 'builds the highest-fidelity creative intelligence profile' by combining URL and documents, and distinguishes from siblings by mentioning single-mode variants for URL-only or docs-only.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use when you want public messaging triangulated against internal strategy' and advises 'For URL-only or docs-only, use the single-mode variants.' Also notes cost and requirements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_powersource_url (Create PowerSource from URL, Grade: A)

Build a creative intelligence profile of any brand from their website. Optimized for direct-response ad creative, brand voice replication, and audience targeting. Output is structured for injection into ad scripts, not for executive brand strategy.

What's sourced from where (every section in the response carries a _scope tag):

  • BRAND-LEVEL (cached at domain level, identical across products on the same site): brand_voice, brand_story, brand_style (colors), brand_assets (fonts, logo).

  • PRODUCT-LEVEL (fresh per URL — varies by which page you scanned): identity, offer, selling_points, ctas.

  • SYNTHESIZED (derived from both layers via strategy agents): buyer_profile, tensions, angles, emotional_arcs, narrative.

  • SITE-LEVEL pulse signals (homepage hero scan, apply brand-wide not per-PDP): promotions.has_seasonal, has_new_drop, has_announcement.

Feeds directly into generate_adscript and other Heista generation tools — pass the returned brief_id or job_id as powersource_id. Costs 100 credits. Re-scanning the exact same URL within your org returns the cached result for free. A different page on the same domain still costs 100 credits, but the brand layer (voice, story, style, assets) is reused from cache so synthesis is faster.

Parameters (JSON Schema)
url (required): Website URL to analyze. Supports any public website (e.g., gymshark.com, notion.so). Bare domains auto-resolve to https.
webhook_url (optional): HTTPS URL to receive a POST notification when the scan completes or fails. Eliminates need for polling.
force_refresh (optional): Force re-extraction of brand data even if cached. Use when a brand has rebranded or updated their website.
idempotency_key (optional): Optional unique key to make this call safely retryable. If the same key + org repeats, the original result is returned without re-charging.
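A minimal sketch of assembling create_powersource_url arguments, showing the two parameters that interact with caching and billing: force_refresh (only when the brand has changed, since re-scans are otherwise served from cache) and idempotency_key (same key + org is not re-charged). Using uuid4 for the key is one choice, not a documented requirement.

```python
# Sketch: build create_powersource_url arguments. The argument names
# match the documented schema; the uuid4 key strategy is an assumption.
import uuid

def powersource_url_args(url: str, rebranded: bool = False) -> dict:
    """Assemble call arguments; force_refresh only when cache is stale."""
    return {
        "url": url,                     # bare domains auto-resolve to https
        "force_refresh": rebranded,     # force only after a rebrand/site update
        "idempotency_key": str(uuid.uuid4()),  # makes retries safely re-chargeable-once
    }

args = powersource_url_args("gymshark.com")
```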
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Provides detailed behavioral traits beyond annotations: credit cost, caching rules, scope tags for data freshness (brand vs product level), and derived sections. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with bullet points and sections, front-loaded with purpose. Slightly lengthy but justified by complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive coverage: explains caching, credit costs, output structure via scope tags, and relationships with sibling tools. The absence of an output schema is offset by the detailed scope-tag breakdown of the response.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema already covers all 4 parameters with descriptions (100% coverage). The description adds context about caching interactions (force_refresh) but does not significantly enhance parameter meaning beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it builds a creative intelligence profile from a URL, specifies it's optimized for ad creative and not for executive strategy, and references sibling tools like generate_adscript that use the output.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly describes usage context (ad scripts vs. executive strategy), caching behavior, credit costs, and how it feeds into other tools via brief_id or job_id.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

decode_ad (Decode Video Ad, Grade: A)

Reverse-engineer any video ad into its structural formula. Returns a beat-by-beat breakdown, hook classification, behavioral psychology, creative format, and performance signals (active days, runtime). Use the result as a structural template for new scripts via generate_adscript. Submit a URL — returns a job_id to poll with get_decode. Supports Facebook Ad Library, TikTok, Instagram Reels, YouTube Shorts, and direct .mp4 URLs. Costs 15 credits for videos ≤60s, 20 credits for 61-120s.

Parameters (JSON Schema)
url (required): Video URL to decode. Supports: Facebook Ad Library, TikTok, Instagram Reels, YouTube Shorts, or direct .mp4 URL.
idempotency_key (optional): Optional unique key to make this call safely retryable. If the same key + org repeats, the original result is returned without re-charging.
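The async workflow (submit a URL, then poll the returned job_id with get_decode) can be sketched as follows. Here call_tool is a hypothetical stand-in for your MCP client's tool-invocation method, and the status values ("completed", "failed") are assumptions, since the result schema is not published.

```python
# Sketch of the decode_ad -> get_decode polling loop. call_tool and the
# status strings are assumptions standing in for your MCP client/runtime.
import time

def decode_and_wait(call_tool, video_url: str,
                    poll_seconds: float = 5, timeout: float = 300) -> dict:
    """Submit a decode job and poll get_decode until it finishes or times out."""
    job = call_tool("decode_ad", {"url": video_url})  # returns a job_id to poll
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = call_tool("get_decode", {"job_id": job["job_id"]})
        if result.get("status") in ("completed", "failed"):
            return result
        time.sleep(poll_seconds)
    raise TimeoutError("decode job did not finish in time")
```

The webhook_url option on other tools hints at the same pattern; for decode_ad, polling get_decode is the documented path.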
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds behavioral detail beyond annotations: it reveals it's an async operation returning a job_id to poll, costs credits (15-20), and supports specific platforms. Annotations indicate not read-only and not destructive, which aligns with the write-like behavior of costing and polling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with clear structure: first sentence states core purpose, second details outputs and usage, third covers support and costs. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description fully explains the async workflow (returns job_id), credit costs, supported platforms, and integration with generate_adscript. This is complete for a decode tool of medium complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the description adds value by explaining the URL supports multiple platforms and the idempotency_key's retry-safe behavior. This enhances understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool reverse-engineers video ads into structural formulas, with specific outputs like beat-by-beat breakdown and hook classification. It distinguishes itself from siblings like decoder_intelligence and generate_adscript by focusing on decoding and linking to script generation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use context (reverse-engineer ads for structure) and links to generate_adscript. It lists supported URL formats and credit costs, though it doesn't explicitly state when not to use or alternatives for other ad types.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

decoder_intelligence (Decoder Intelligence, Grade: A)
Read-only, Idempotent

Browse individual decoded ads from Heista's corpus of real winning Meta/TikTok creative. Each result returns the full structural breakdown — beat timeline, classification, psychology, and runtime performance — plus an id you can pass into generate_adscript with source_type="decode" to write a fresh script on that exact structure. Use when you want a specific ad as a template, not an averaged formula. Free. Filter by vertical, creative_format, marketing_angle, hook_type, or brand name (partial match).

ParametersJSON Schema
NameRequiredDescriptionDefault
brandNoFilter by brand name (case-insensitive partial match). Examples: "Gymshark", "AG1", "Huel". Omit for all brands.
limitNoMax decoded ads to return (1-10, default 5).
verticalNoIndustry vertical to filter decoded ads. Examples: BEAUTY_SKINCARE, HEALTH_SUPPLEMENTS, FITNESS, FOOD_BEVERAGE, FASHION_APPAREL, SAAS_SOFTWARE, FINANCE_FINTECH, INFO_PRODUCTS, TECH_GADGETS. Omit for all verticals.
hook_typeNoFilter by opening hook type. Examples: CURIOSITY_SPIKE, IDENTITY_HOOK, CONTRADICTION_HOOK, PROVOCATION, STORY_START, DIRECT_QUESTION_HOOK. Omit for all hook types.
algo_intentNoStructural engine to filter by. Examples: PROBLEM_AGITATE_SOLVE, MECHANISM_REVEAL, TRANSFORMATION_ARC, SOCIAL_PROOF_STACK, COMPARISON_CONTRAST, URGENCY_SCARCITY. Omit for all intents.
creative_formatNoCreative format to filter by. Examples: TALKING_HEAD_BROLL, VOICEOVER_BROLL, UGC_TESTIMONIAL, PRODUCT_DEMO, SLIDESHOW_OVERLAY, INFLUENCER. Omit for all formats.
marketing_angleNoMarketing angle to filter by. Examples: PROBLEM_SOLUTION, SOCIAL_PROOF_RESULTS, HOW_TO_TUTORIAL, INGREDIENT_SCIENCE, ASPIRATIONAL_IDENTITY, VALUE_STACK. Omit for all angles.
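Every filter here uses "omit for all" semantics, so a client should drop unset fields rather than send nulls. A minimal sketch (the helper name is hypothetical):

```python
# Sketch: assemble decoder_intelligence arguments, omitting any unset
# filter so the "omit for all X" behavior applies instead of sending null.
def decoder_filters(**kwargs) -> dict:
    """Keep only filters the caller actually set."""
    return {k: v for k, v in kwargs.items() if v is not None}

args = decoder_filters(vertical="FITNESS", brand="Gymshark",
                       hook_type=None, limit=5)
```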
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description aligns with annotations (readOnlyHint, destructiveHint, idempotentHint). It adds context about the corpus source and the downstream use of the returned 'id', which goes beyond annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences with no redundancy. The first sentence states purpose and return value, the second explains downstream usage, and the third gives guidance and lists filters. Very efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the tool's purpose, return structure, filter options, and a key downstream use case. It lacks explicit mention of filter combination logic (AND/OR) and default limit, but given the schema covers defaults, it's mostly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 7 parameters have descriptions in the input schema (100% coverage), so the description adds moderate value by providing examples and usage context. However, the schema already explains each parameter well, so the description's additional meaning is limited.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool browses individual decoded ads and specifies the resource (decoded ads) and action (browse). It distinguishes from siblings by mentioning an 'id' for use in generate_adscript, implying a specific use case not covered by averaged formulas.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use guidance: 'Use when you want a specific ad as a template, not an averaged formula.' This implies an alternative tool for averaged formulas, though it doesn't name it directly. The guidance is clear and helpful.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_adscript: Generate Ad Script (A)

Generate direct-response video ad scripts from a proven structure plus a brand's PowerSource. Output is direct-response video ad copy for paid social (Meta, TikTok, Reels) in the brand's voice, with a hook, beat-by-beat body, and CTA close. Pass source_id (from adformula_intelligence, decoder_intelligence, or decode_ad) plus source_type and a powersource_id (job_id or brief_id from create_powersource_*). script_mode: "blueprint" preserves the source structure exactly; "remix" keeps the psychological architecture but writes original copy. Generate 1-5 variants per call (tensions and selling points auto-rotated across variants). Metered pricing — typically 2-5 credits per script depending on length (~2 credits for a 15s script, ~5 credits for a 60s script). Pre-flight reserves a 17-credit ceiling and refunds the difference once actual usage is measured.

Parameters (JSON Schema)
count (optional): Number of scripts to generate (1-5, default 1). Each script uses a different tension and selling point combination for variety.
tension (optional): Lock to a specific behavioral tension from the PowerSource (e.g., "Frustration → Relief"). Omit to let the system select the best match.
audience (optional): Audience segment from the PowerSource. "buyer_profile" (default) uses the composite buyer. "audience_0", "audience_1", etc. target specific segments.
duration (optional): Target duration in seconds (remix mode only, 10-120). Blueprint mode locks to the source duration.
source_id (required): The ID of the structural source to write from. For source_type="decode": either a job_id from your own decode_ad call OR an id from decoder_intelligence (corpus ad). For source_type="formula": a formula id from adformula_intelligence.
voice_mode (optional): Voice register for the script. "creator" (default) = authentic creator voice for UGC; the PowerSource locks facts/tensions/selling points but NOT voice register. "brand" = full PowerSource brand voice for brand-owned content (website, OOH, brand films). Most ad scripts should use "creator".
script_mode (optional): Script mode. "blueprint" (default) follows the source formula exactly — same beat structure, same timing. "remix" uses the psychological architecture but writes original copy.
source_type (required): Type of structural source. "decode" = a single decoded ad (your own or from the corpus). "formula" = a clustered blueprint built from multiple winning ads.
powersource_id (required): Identifier for the brand PowerSource that supplies voice, selling points, tensions, and audience. Accepts either a job_id from create_powersource_* or a brief_id from get_powersource — both work.
selling_points (optional): Lock to specific selling points from the PowerSource (max 5). Omit to let the system select the best match for each beat.
idempotency_key (optional): Optional unique key to make this call safely retryable. If the same key + org repeats, the original result is returned without re-charging.
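The parameter constraints above (count 1-5, duration 10-120 and honoured only in remix mode, two valid source types) can be sketched as a small payload builder. This is a hypothetical illustration: the helper name, the example IDs ("dec_123", "ps_456"), and the idea of client-side validation are assumptions, not part of any documented Heista client.

```python
def build_generate_adscript_args(source_id, source_type, powersource_id,
                                 script_mode="blueprint", count=1, duration=None):
    """Validate the documented constraints and return a tool-call payload."""
    if source_type not in ("decode", "formula"):
        raise ValueError('source_type must be "decode" or "formula"')
    if script_mode not in ("blueprint", "remix"):
        raise ValueError('script_mode must be "blueprint" or "remix"')
    if not 1 <= count <= 5:
        raise ValueError("count must be 1-5")
    args = {
        "source_id": source_id,
        "source_type": source_type,
        "powersource_id": powersource_id,
        "script_mode": script_mode,
        "count": count,
    }
    if duration is not None:
        # duration only applies in remix mode; blueprint locks to the source duration
        if script_mode != "remix":
            raise ValueError('duration is only valid with script_mode="remix"')
        if not 10 <= duration <= 120:
            raise ValueError("duration must be 10-120 seconds")
        args["duration"] = duration
    return args

# Three remix variants at 30 seconds, written from a decoded ad.
args = build_generate_adscript_args(
    source_id="dec_123", source_type="decode",
    powersource_id="ps_456", script_mode="remix", count=3, duration=30,
)
```

Passing an idempotency_key alongside these arguments would make the call safely retryable, per the parameter description.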
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description goes beyond the annotations by detailing behavioral traits such as auto-rotation of tensions and selling points across variants, metered pricing with a pre-flight credit ceiling, and the effect of the idempotency_key. Since annotations are minimal (readOnlyHint=false, etc.), the description fully covers the tool's non-destructive but consumptive behavior and pricing model.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph of seven sentences, tightly packed with essential information. Each sentence serves a purpose: core function, output format, parameter sourcing, mode distinctions, variant generation, and pricing. No wasted words, and the most critical information is front-loaded.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (11 parameters, no output schema), the description is comprehensive. It explains how to obtain prerequisite IDs from sibling tools, the difference between blueprint and remix modes, auto-rotation of variants, and the credit pricing model. The only minor gap is a precise description of the output structure beyond 'hook, beat-by-beat body, and CTA close,' but this is adequate for an AI agent to understand the return value.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds value by explaining the provenance of source_id (from specific tools) and powersource_id (from create_powersource_*), clarifying that duration only works in remix mode, and noting auto-rotation of tensions/selling points for the count parameter. These enrich the schema descriptions without repeating them.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: generating direct-response video ad scripts from a proven structure and a brand's PowerSource. It specifies the output format (hook, beat-by-beat body, CTA) and distinguishes itself from siblings by referencing where to obtain the required source_id and powersource_id (adformula_intelligence, decoder_intelligence, etc.), making its role in the workflow unambiguous.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit instructions on when to use the tool, including how to pass source_id, source_type, and powersource_id from specific sibling tools. It also explains the two script modes (blueprint vs remix) and when each is appropriate. However, it does not include explicit 'when not to use' or alternatives among siblings, which would further enhance guidance.

get_decode: Get Decode Result (A)
Read-only · Idempotent

Retrieve a completed decode (full structural breakdown) or check status of a running job. Pass the job_id from decode_ad. If status is processing, wait 15 seconds and call again.

Parameters (JSON Schema)
job_id (required): Job ID returned by decode_ad. Call this tool to poll status or retrieve completed results.
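The polling contract described above (pass the job_id, wait about 15 seconds while status is "processing") can be sketched as a small loop. The call_tool helper and the {"status": ...} response shape are assumptions about the MCP client, not a documented API.

```python
import time

def wait_for_decode(call_tool, job_id, poll_seconds=15, max_polls=40):
    """Poll get_decode until the job leaves the 'processing' state."""
    for _ in range(max_polls):
        result = call_tool("get_decode", {"job_id": job_id})
        if result.get("status") != "processing":
            return result  # completed (or failed) decode payload
        time.sleep(poll_seconds)  # description suggests ~15s between polls
    raise TimeoutError(f"decode job {job_id} still processing after {max_polls} polls")
```

Because the tool is read-only and idempotent, repeated polling is safe by construction.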
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly and idempotent. Description adds polling behavior and status-checking logic, which is valuable beyond annotations. No contradictions.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with purpose, then usage details. No wasted words.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a polling tool: explains retrieval, status checks, and retry logic. No output schema is provided, but none is needed. Annotations cover safety.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with description for job_id referencing decode_ad. Description reinforces where to get the parameter, adding value over schema alone.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a completed decode or checks status, with specific verb 'Retrieve' and resource 'decode result'. It distinguishes from sibling decode_ad by referencing it as the source of job_id.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear usage context: pass job_id from decode_ad, and if status is processing, wait 15 seconds and retry. Lacks explicit 'when not to use', but guidance is strong.

get_hook_intelligence: Get Hook Intelligence (A)
Read-only · Idempotent

Get proven hook patterns from Heista's corpus of decoded winning ads. Returns hook examples, templates, the psychology behind why each one stops the scroll, and runtime performance data. Use to write scroll-stopping openers grounded in what works. Free. Filter by vertical (e.g. BEAUTY_SKINCARE) and hook_type (e.g. CURIOSITY_SPIKE).

Parameters (JSON Schema)
vertical (optional): Industry vertical to filter corpus patterns. Examples: BEAUTY_SKINCARE, HEALTH_SUPPLEMENTS, FITNESS, FOOD_BEVERAGE, FASHION_APPAREL, SAAS_SOFTWARE, FINANCE_FINTECH, INFO_PRODUCTS, TECH_GADGETS. Omit for all verticals.
hook_type (optional): Specific hook type to retrieve patterns for. Examples: CURIOSITY_SPIKE, OPEN_LOOP_STATEMENT, HIDDEN_TRUTH_REVEAL, IDENTITY_HOOK, CONTRADICTION_HOOK, PROVOCATION, STORY_START, DIRECT_QUESTION_HOOK, CHALLENGE_INTRO, CONTRAST_SETUP. Omit to get the top performing types for the vertical.
marketing_angle (optional): Marketing angle to filter by. Examples: PROBLEM_SOLUTION, SOCIAL_PROOF_RESULTS, HOW_TO_TUTORIAL, OFFER_URGENCY, ASPIRATIONAL_IDENTITY, VALUE_STACK. Omit for all angles.
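The filter semantics above (omit a parameter to mean "all verticals" / "top performing types") can be sketched as a small argument builder that only includes the filters actually supplied. The helper name is hypothetical; the enum values are copied from the schema examples.

```python
def hook_intelligence_args(vertical=None, hook_type=None, marketing_angle=None):
    """Return only the filters that were supplied; omitted filters mean 'all'."""
    candidates = {
        "vertical": vertical,
        "hook_type": hook_type,
        "marketing_angle": marketing_angle,
    }
    # Drop unset filters so they are genuinely omitted from the tool call.
    return {key: value for key, value in candidates.items() if value is not None}

# Skincare curiosity hooks; marketing_angle left unset, so all angles apply.
args = hook_intelligence_args(vertical="BEAUTY_SKINCARE", hook_type="CURIOSITY_SPIKE")
```

Calling with no arguments yields an empty payload, which the description implies returns corpus-wide top performers.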
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds value beyond annotations by disclosing that the tool is free and returns specific data types (examples, templates, psychology, performance data). Annotations already indicate readOnlyHint, idempotentHint, and non-destructive behavior, so no contradiction.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences plus a filtering note. Every sentence is essential: first states purpose, second gives usage context and filtering options. No fluff, well front-loaded.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has three optional parameters, no required ones, and no output schema, the description does a good job of listing what the tool returns. It could be slightly more specific about the structure of the response, but it's adequate for the complexity.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for each parameter. The description adds guidance on omitting parameters to get top-performing types or all verticals, which goes beyond the schema. It could mention the marketing_angle parameter explicitly, but the schema covers it.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool returns proven hook patterns from a corpus of winning ads, including examples, templates, psychology, and performance data. It uses the verb 'Get' and resource 'hook intelligence', which distinguishes it from sibling tools like 'adformula_intelligence' or 'decode_ad'.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description tells the user to use it for writing scroll-stopping openers, mentions it's free, and provides filtering options. It does not explicitly state when not to use it or name alternatives, but the context is clear enough for an agent to decide.

get_powersource: Get PowerSource Result (A)
Read-only · Idempotent

Retrieve a completed PowerSource (full creative intelligence profile) or check status of a running scan. Pass the job_id from any create_powersource_* call. If status is processing, wait 3-5 seconds and call again. During synthesis, partial intelligence appears progressively — buyer archetype, tensions, selling points. Read each response.

Parameters (JSON Schema)
job_id (required): Job ID returned by any create_powersource_* call. Use this to poll status or retrieve completed results.
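The description above differs from get_decode in two ways: a shorter 3-5 second poll interval, and partial intelligence (buyer archetype, tensions, selling points) appearing progressively in each response. A generator makes it easy to read every response as it arrives. The call_tool helper and the response field names are assumed shapes, not a documented API.

```python
import time

def watch_powersource(call_tool, job_id, poll_seconds=4.0, max_polls=60):
    """Yield each poll's response so partial intelligence can be read progressively."""
    for _ in range(max_polls):
        result = call_tool("get_powersource", {"job_id": job_id})
        yield result  # may already contain partial fields (e.g. buyer archetype)
        if result.get("status") != "processing":
            return  # completed (or failed): stop polling
        time.sleep(poll_seconds)  # description suggests 3-5s between polls
```

Consumers can act on partial fields before the scan completes, matching the "Read each response" guidance.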
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (read-only, idempotent), description details polling behavior, partial progressive results, and examples of returned fields. No contradiction with annotations.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five concise sentences with key information front-loaded. No extraneous text.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-param retrieval tool without output schema. Describes polling and partial results, though could mention completion states or error handling.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers job_id fully, but description adds value by explaining its source (create_powersource_* calls) and usage direction.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it retrieves completed PowerSource or checks status, referencing job_id from create_powersource_* calls. Distinguishes from sibling creation tools and other retrieval tools like get_decode.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells to pass job_id and wait 3-5 seconds if processing. Could mention not to use before creation, but context implies that.
