Server Details

Viral-content intelligence for AI agents — 7 read-only MCP tools, evidence-layer scoring.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: khan-ashifur/hooklayer
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.5/5 across 7 of 7 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: analyze_account for deep creator analysis, find_viral_template for niche templates, match_voice for rewriting in a creator's voice, predict_virality for script scoring, score_hook for hook validation, trend_pulse for trending topics, and viral_remix for remixing viral videos. Descriptions include usage guidance, eliminating ambiguity.

Naming Consistency: 4/5

Names mostly follow a verb_noun pattern (analyze_account, match_voice, predict_virality, score_hook, viral_remix), but 'trend_pulse' lacks a verb. Overall the naming uses consistent snake_case with clear intent, with one minor deviation.

Tool Count: 5/5

Seven tools is well-scoped for a domain-specific content creation workflow—neither sparse nor bloated. Each tool addresses a distinct step in the process (analysis, template discovery, voice matching, virality prediction, hook scoring, trend detection, remixing).

Completeness: 4/5

The tools cover a complete content ideation/validation cycle: creator analysis, trend discovery, template finding, voice matching, script generation/remixing, hook scoring, and virality prediction. A minor gap is the lack of a 'generate from scratch' tool without any source, but viral_remix can accept a transcript directly, partially filling that need.

Available Tools

7 tools
analyze_account: A
Read-only · Idempotent

Deep intelligence on a TikTok creator by handle. Returns viral DNA scores (viral_dna_score, replicability_score, originality_score, consistency_score, audience_fatigue), format fingerprint, top 5 videos with transcripts, content gaps, AND a recommended_chain field with pre-filled next tool calls. USE WHEN the user references a creator by @handle, asks "analyze X", wants competitor research, or needs creator context before generating content. The recommended_chain suggests which tools to call next (match_voice, trend_pulse, viral_remix) with parameters pre-filled — review and execute them as appropriate. TikTok only in v1. Costs 5 credits. 1-hour cache per handle. TOOL HEALTH: Every response includes a quality field with a level (full | partial | degraded) and a reason. If quality.level is partial or degraded, you MUST flag this to the user explicitly in chat (e.g. "Heads up — this call returned partial data: ") before reporting any results. Never silently route around a degraded response. REPORTING: When you summarize this in chat, you MUST surface viral_dna.viral_dna_signals, viral_dna.replicability_signals, viral_dna.originality_signals (each as bullet lists with the cited evidence string verbatim) AND viral_dna.would_fail_because verbatim AND provenance.video_post_dates so the user can see freshness. Never hide the evidence array behind a paraphrase — these are the auditability layer.

Parameters (JSON Schema)
- handle (required): TikTok handle, with or without @ prefix (e.g. "@mrbeast" or "mrbeast")
- platform (optional): Platform — currently only "tiktok" is supported

Output Schema (all fields optional)
- profile: Creator profile (handle, bio, followers, following, total_likes, total_videos)
- from_payg: Whether credits came from pay-as-you-go balance
- steal_map: Actionable elements to replicate from this creator
- viral_dna: Five viral DNA scores (0-100 each)
- content_gaps: Untapped content opportunities the creator is missing
- recent_videos: Top recent videos with view counts, transcripts, and engagement
- pattern_analysis: Content pattern breakdown (posting cadence, topic clusters)
- credits_remaining: Credits remaining after this call
- from_subscription: Whether credits came from subscription
- recommended_chain: Suggested next tool calls with pre-filled parameters — advisory only, agent decides whether to execute
- format_fingerprint: Dominant content format patterns (durations, styles, edit types)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description covers costs (5 credits), caching (1-hour), platform restriction (TikTok only), and the automatic call pattern for sibling tools. It does not detail error states or rate limits, but is transparent about its core behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured, front-loaded with purpose and outputs, then usage guidelines and details. Every sentence adds value, though slightly lengthy for a single paragraph.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description fully explains return values (viral scores, format fingerprint, top videos, recommended_chain) and provides additional context like caching, costs, and platform. It is complete for an analysis tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description adds value by clarifying handle format (with/without @) and that platform is only tiktok, which goes beyond schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides deep intelligence on a TikTok creator, listing specific outputs like viral DNA scores and recommended_chain. It distinguishes from sibling tools by mentioning the recommended_chain that pre-fills calls to siblings like match_voice.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit usage scenarios are provided: when user references a creator by @handle, asks 'analyze X', wants competitor research, or needs creator context. It also mentions costs and cache, and implicitly guides towards sibling tools via recommended_chain.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_viral_template: A
Read-only · Idempotent

Find proven viral templates in a niche with example videos. Returns niche-fit ranked templates with hook pattern, format structure, average views, and example URLs. USE WHEN the user asks "what's working in [niche]", "give me templates I can copy", or wants concrete copyable structures rather than abstract trends. Pairs with viral_remix for end-to-end script generation. Costs 1 credit.

Parameters (JSON Schema)
- niche (required): Niche to search (e.g. "Beauty & Skincare")
- min_views (optional): Optional filter — only return examples above this view count

Output Schema (all fields optional)
- niche: The niche searched
- from_payg: Whether credits came from pay-as-you-go balance
- templates: Ranked viral templates with hook patterns and example URLs
- credits_remaining: Credits remaining after this call
- from_subscription: Whether credits came from subscription
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes output structure (ranked templates with hook pattern, format structure, average views, example URLs) and mentions credit cost. No annotations provided, so description carries full burden and does so well, though no side effects disclosed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise, front-loaded with primary action and output. Every sentence adds value: purpose, return description, usage scenarios, sibling pairing, and cost. No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers main aspects: purpose, output, usage scenarios, credit cost. Lacks error handling or authentication info, but given tool simplicity (2 params) and no output schema, it is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both params. Description adds usage context (e.g., 'example URLs' as part of output) but does not elaborate on parameter details beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states action ('Find') and resource ('proven viral templates in a niche'). Differentiates from sibling 'trend_pulse' by specifying it returns concrete copyable structures rather than abstract trends.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly provides use cases with example user queries ('what's working in [niche]', 'give me templates I can copy'). Mentions pairing with 'viral_remix' and credit cost, but lacks explicit when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

match_voice: A
Read-only

Extract a creator's voice DNA from 3+ reference samples (URLs or text) and rewrite a draft in their voice. Returns voice profile (energy, humor, vocabulary, signature moves), reusable prompt instructions, the rewritten draft, AND a deterministic voice_metrics block: vocab_diversity_ttr (type-token ratio), filler_rate_per_100_words, avg_sentence_length_words, total_words, and signature_phrases[] (top 5 recurring 2-3-grams with counts). USE WHEN the user wants to write in another creator's style, has reference content to match, or chained from analyze_account.recommended_chain (which pre-fills reference_samples from the analyzed creator's videos). Costs 2 credits. TOOL HEALTH: Every response includes a quality field (level: full | partial | degraded, plus a reason string). If quality.level is partial or degraded, you MUST flag this to the user explicitly in chat ('Heads up — this call returned partial data: ') before reporting any results. Never silently route around a degraded response. REPORTING: When you summarize this in chat, you MUST surface the voice_metrics block as numbers — TTR, filler rate, avg sentence length, and the top signature_phrases with their counts. The qualitative voice_profile labels (energy, personality) alone are vibes; the numeric metrics are the reproducible signature. Cite both.

Parameters (JSON Schema)
- draft (required): The text to rewrite
- reference_samples (required): At least 3 reference samples — TikTok/YouTube/Instagram URLs or raw text

Output Schema (all fields optional)
- from_payg: Whether credits came from pay-as-you-go balance
- voice_profile: Extracted voice DNA profile
- rewritten_draft: The input draft rewritten in the matched voice
- credits_remaining: Credits remaining after this call
- from_subscription: Whether credits came from subscription
- prompt_instructions: Reusable prompt to replicate this voice
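The deterministic voice_metrics block described above is built from standard text statistics. A rough local sketch of how such numbers can be computed (the filler-word list and tokenization rules here are assumptions; the server's exact method is not documented):

```python
import re
from collections import Counter

# Assumed filler vocabulary — the server's list is not published.
FILLERS = {"um", "uh", "like", "literally", "basically", "actually"}

def voice_metrics(text: str) -> dict:
    """Compute TTR, filler rate, sentence length, and recurring 2-3-grams
    in the spirit of match_voice's voice_metrics block."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    ngrams = Counter(
        " ".join(words[i:i + n])
        for n in (2, 3)
        for i in range(len(words) - n + 1)
    )
    return {
        "total_words": len(words),
        "vocab_diversity_ttr": round(len(set(words)) / len(words), 3) if words else 0.0,
        "filler_rate_per_100_words": round(100 * sum(w in FILLERS for w in words) / len(words), 2) if words else 0.0,
        "avg_sentence_length_words": round(len(words) / len(sentences), 1) if sentences else 0.0,
        # Top recurring 2-3-grams (count > 1), mirroring signature_phrases[].
        "signature_phrases": [p for p, c in ngrams.most_common(5) if c > 1],
    }
```

As the description notes, these numeric metrics are the reproducible signature; the qualitative voice_profile labels alone are vibes.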
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the burden of disclosing behavior. It details the output (voice profile, prompt instructions, rewritten draft) and mentions cost. It could additionally disclose if original drafts are stored or if there are rate limits, but the current level is sufficient for a transformation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, each adding distinct value: action, output, and usage guidance. It's front-loaded with the primary verb and resource, and every sentence earns its place without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately describes return values (voice profile, prompt instructions, rewritten draft). It also covers cost, chaining context, and prerequisite (3+ samples), making it complete for an agent to invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, both parameters already have descriptions. The tool description reinforces their meaning (e.g., 'reference samples (URLs or text)' matches the schema's description) but does not add new semantic value beyond clarifying the minimum count, which is already in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool extracts a creator's 'voice DNA' from 3+ reference samples and rewrites a draft. It distinguishes from siblings by explicitly mentioning chaining from analyze_account and specifying the unique action of voice extraction and rewriting.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'USE WHEN the user wants to write in another creator's style, has reference content to match, or chained from analyze_account.recommended_chain', providing clear use cases and prerequisite context. It also mentions cost, helping agents decide based on resource consumption.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

predict_virality: A
Read-only · Idempotent

Score a draft script for viral potential. Returns a structurally-independent virality_score (the adversarial/skeptical pass — what the script realistically earns after attack vectors land), an optimistic_score (the upstream forensic-analyzer pass), score_range [low, high] showing honest variance, viral DNA breakdown, retention diagnosis, target emotion, share reason, a 6-element signals[] array with evidence per sub-score, a would_fail_because counterfactual, a calibration_check (anchor + agreement), AND attack_vectors[] naming specific failure modes. The two scores being side-by-side is the WHOLE POINT — historical Hooklayer demos hit 87 every run because the upstream score was self-graded. Now virality_score is the skeptical number; cite it. USE WHEN the user has a finished draft and wants pre-publish verification, or chained after viral_remix to validate the generated script. Pass either a script string or a video URL. Costs 2 credits. TOOL HEALTH: response includes a quality field (level: full | partial | degraded, plus reason). If level is partial or degraded, flag it in chat before reporting results. Never silently route around degraded data. REPORTING: When you summarize this in chat, you MUST surface (a) virality_score as the headline number with the score_range, (b) optimistic_score named explicitly as "upstream / self-graded" so the user sees both, (c) the calibration_check.anchor and agreement verbatim, (d) the signals[] array as a bullet list with each evidence string, (e) would_fail_because verbatim, (f) attack_vectors[] as a bullet list with severity. NEVER report optimistic_score alone — that recreates the self-grading loop the v3 ship was designed to break.

Parameters (JSON Schema)
- niche (optional): Niche context
- script (required): The draft script to predict on (or a video URL — auto-extracts transcript)

Output Schema (all fields optional)
- blueprint: Retention diagnosis with weak seconds identified
- from_payg: Whether credits came from pay-as-you-go balance
- viral_dna: Hook type, structure, emotional triggers breakdown
- share_reason: Why viewers would share this content
- target_emotion: Primary emotion the content targets
- virality_score: Overall virality prediction (0-100)
- credits_remaining: Credits remaining after this call
- from_subscription: Whether credits came from subscription
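The REPORTING rule above (never cite optimistic_score alone) can be encoded as a small formatting helper. A sketch assuming the field names from the description; note that score_range and optimistic_score appear in the description but not in the listed output schema:

```python
def virality_summary(resp: dict) -> str:
    """Headline per the REPORTING contract: the skeptical virality_score
    leads with its range, and the upstream number is always labelled
    self-graded so the two passes appear side by side."""
    low, high = resp["score_range"]
    return (
        f"virality_score: {resp['virality_score']} (range {low}-{high}); "
        f"optimistic_score: {resp['optimistic_score']} (upstream / self-graded)"
    )

# Hypothetical response values for illustration.
resp = {"virality_score": 62, "score_range": [55, 71], "optimistic_score": 87}
```

Because the helper raises a KeyError if any of the three fields is missing, it cannot silently fall back to reporting the optimistic number alone.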
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses the cost (2 credits) and the ability to auto-extract transcripts from video URLs. It implies read-only behavior (scoring), which is consistent. It does not mention side effects or auth needs, but for a scoring tool, this is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is four sentences, each serving a distinct purpose: output enumeration, usage guidance, input specification, and cost. No repetitive or extraneous information. Highly structured and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description thoroughly explains the tool's return values (viral DNA breakdown, score, etc.). It covers purpose, usage, input options, cost, and integration with sibling tools. Complete for its complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds value by clarifying that the 'script' parameter accepts both plain text and video URLs (with auto-extraction). For the 'niche' parameter, the description only says 'Niche context', which is already in the schema. Overall, it enriches parameter meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Score a draft script for viral potential.' It enumerates specific outputs including viral DNA breakdown, score, retention diagnosis, emotion, and share reason. It also differentiates itself by suggesting usage after viral_remix, distinguishing from siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage contexts: 'USE WHEN the user has a finished draft and wants pre-publish verification, or chained after viral_remix.' It also specifies input options ('script string or a video URL'). However, it does not explicitly state when not to use this tool or mention alternatives for cases where the draft is not finished.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

score_hook: A
Read-only · Idempotent

Score a TikTok/Reels/Shorts hook against proven viral patterns. Returns a 0-100 score, percentile rank, matched viral pattern, three rewritten versions at higher quality, a one-sentence verdict naming the closest calibration anchor (10/30/50/70/85/95), a 6-element signals[] array with evidence per sub-score, and a would_fail_because counterfactual. USE WHEN the user has a draft hook to validate, wants to A/B between alternatives, or needs to catch AI-generated slop before publishing. Pairs after viral_remix to verify the generated hook. Costs 1 credit. TOOL HEALTH: Every response includes a quality: { level: "full" | "partial" | "degraded", reason?: string } field. If quality.level is "partial" or "degraded", you MUST flag this to the user explicitly in chat ("Heads up — this call returned partial data: ") before reporting any results. Never silently route around a degraded response. REPORTING: When you summarize this in chat, you MUST surface (a) the score paired with the closest anchor from the why field, (b) the signals[] array as a bullet list with each evidence string verbatim — never paraphrase or drop signals, and (c) the would_fail_because field verbatim. If you score multiple hooks in sequence, also explicitly compare their signals[], not just the numeric scores — that's how the user judges which one wins.

Parameters (JSON Schema)
- text (required): The hook to score (3-200 chars typical)
- niche (optional): Optional niche context (e.g. "Beauty & Skincare", "Finance & Business")
- platform (required): Target platform — affects pattern matching

Output Schema (all fields optional)
- why: One-sentence verdict explaining the score
- score: Hook quality score (0-100)
- rewrites: Three rewritten versions at higher quality
- breakdown: Sub-scores (0-100) for five quality dimensions
- from_payg: Whether credits came from pay-as-you-go balance
- strengths: What the hook does well
- percentile: Percentile rank vs all scored hooks
- weaknesses: What could be improved
- pattern_match: Matched viral pattern name
- credits_remaining: Credits remaining after this call
- from_subscription: Whether credits came from subscription
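When several hooks are scored in sequence, the REPORTING rule asks for a comparison of their signals[], not just the numbers. A sketch assuming each signal element carries name and evidence fields (the exact element shape is not shown in the schema):

```python
def compare_hooks(scored: dict[str, dict]) -> str:
    """List hooks best-first, each with its signals[] evidence verbatim,
    so the user judges the winner on evidence rather than raw scores."""
    lines = []
    for hook, resp in sorted(scored.items(), key=lambda kv: -kv[1]["score"]):
        lines.append(f"{resp['score']:>3}  {hook}")
        for sig in resp.get("signals", []):
            # Evidence strings are surfaced verbatim, never paraphrased.
            lines.append(f"      - {sig['name']}: {sig['evidence']}")
    return "\n".join(lines)
```

A usage example: two scored hooks yield a ranked block where the higher-scoring hook and its evidence bullets come first.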
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses credit cost, return details (score, percentile, pattern, rewrites, verdict). No contradictions, but could mention if any side effects exist (though none expected for a scoring tool).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences plus one usage guideline and cost note. No fluff, every sentence earns its place, front-loaded with primary purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 params and no output schema, description is thorough. Lists five return items, usage context, and sibling relationship. Slight gap: no detail on return format, but acceptable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and description adds value beyond schema: length constraint for text, examples for niche, explicit platforms. Enhances understanding without redundancy.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Score' and the resource 'hook', specifying it scores against viral patterns and lists returned items. It distinguishes from sibling tools by mentioning it pairs after viral_remix.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases: validate a draft hook, A/B alternatives, catch AI slop. Also notes it pairs after viral_remix, giving clear context and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

trend_pulse: A
Read-only · Idempotent

Surface what is actually peaking in short-form video right now for a niche. Returns 3 rising opportunities (format/hook/style/topic) with growth rates, per-entry signal_strength (0-1), sources[] (Google Trends + YouTube velocity + Reddit hot + internal corpus), signal_window, plus 2 saturated patterns to avoid AND top-level provenance with cache_age_hours and cache_status. USE WHEN the user asks "what should I post about", "what's trending in [niche]", or before generating content for the first time. Pairs after analyze_account to validate a creator's formula against current trends. Costs 1 credit. 12-hour cache per niche. TOOL HEALTH: Every response includes a quality: { level: "full" | "partial" | "degraded", reason?: string } field. If quality.level is "partial" or "degraded", you MUST flag this to the user explicitly in chat ("Heads up — this call returned partial data: ") before reporting any results. Never silently route around a degraded response. REPORTING: When you summarize this in chat, you MUST cite the data_sources array verbatim and surface cache_status (fresh|stale) — the user needs to know if they're looking at live data. For each rising/saturated entry, surface the growth percentage AND the signal_window verbatim. Never round growth percentages: if the response says "+178.4%", report "+178.4%" — never "+180%".

Parameters (JSON Schema)
- niche (optional): Niche filter (e.g. "Beauty & Skincare", "Fitness & Health"). Omit for broadly applicable trends.

Output Schema (all fields optional)
- niche: The niche these trends apply to
- rising: Rising opportunities (format/hook/style) with growth rates and suggestions
- from_payg: Whether credits came from pay-as-you-go balance
- saturated: Saturated patterns to avoid
- credits_remaining: Credits remaining after this call
- from_subscription: Whether credits came from subscription
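The never-round rule in the REPORTING contract is easiest to honor by treating growth as an opaque string rather than parsing it to a float. A sketch (the per-entry field names here are assumptions based on the description's wording):

```python
def trend_line(entry: dict, cache_status: str) -> str:
    """Format a rising/saturated entry for chat: the growth string passes
    through verbatim (never parsed and re-rounded), alongside the
    signal_window and the cache freshness the user needs to see."""
    return (
        f"{entry['name']}: {entry['growth']} over {entry['signal_window']} "
        f"(cache: {cache_status})"
    )

# Hypothetical entry — "+178.4%" must survive to the output unrounded.
entry = {"name": "POV mini-vlogs", "growth": "+178.4%", "signal_window": "last 7 days"}
```

Keeping growth as a string means a value like "+178.4%" can never silently become "+180%" through float round-tripping.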
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description fully covers behavior: returns 3 rising and 2 saturated patterns, costs 1 credit, has 12-hour cache per niche. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, no wasted words, front-loaded with output description, then usage, then pairing, then constraints. Efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a trend tool with 1 optional param. Describes output structure, usage context, and constraints. No output schema, but return values are described adequately. Could mention format of growth rates but not necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 1 parameter (niche) with 100% description coverage. Description adds 'Omit for broadly applicable trends', which provides context beyond the schema's description. Scores above baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Surface what is actually peaking in short-form video right now for a niche' with specific output (3 rising opportunities, 2 saturated patterns). Distinguishes from siblings like analyze_account by noting it pairs after that tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'USE WHEN the user asks "what should I post about"...' and 'Pairs after analyze_account'. Provides clear context for when to invoke, including timing relative to other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

viral_remix: A
Read-only

Take a viral source video and produce a fresh script that mirrors its viral DNA — same scene structure, same energy pattern, different topic. Returns the extracted formula, scene-by-scene script, camera shots, text overlays, and a verify_hook block prompting you to score the generated hook via score_hook. USE WHEN the user finds a video they want to copy the structure of, or chained from analyze_account.recommended_chain. Pass EITHER source_url (auto-extracts transcript) OR transcript directly — one is required. Costs 3 credits. NO SELF-RATING: viral_remix deliberately does NOT return a self-rated hook score. The script generator rating its own hook is structurally invalid (cardinal coupling). After every viral_remix call, you MUST call score_hook with the verify_hook.hook_text to get a structurally-independent quality signal before reporting to the user. Skipping this step is hiding the self-grading loop.

Parameters

| Name | Required | Description | Default |
|---|---|---|---|
| niche | No | Optional niche hint | |
| my_topic | No | What the remix should be about | same niche as original |
| source_url | No | TikTok/YouTube/Instagram URL; transcript will be auto-extracted | |
| transcript | No | Pre-extracted transcript (alternative to source_url, faster) | |
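The either/or input constraint can be sketched as a small client-side argument builder (a hypothetical helper, not part of the server's API; parameter names come from the schema above). Because the server's behavior when both inputs are given is undocumented, the check conservatively rejects that case:

```python
def build_remix_args(source_url=None, transcript=None, my_topic=None, niche=None):
    """Build the arguments for a viral_remix call, enforcing that
    exactly one of source_url / transcript is supplied."""
    if (source_url is None) == (transcript is None):
        raise ValueError("pass EITHER source_url OR transcript")
    args = {}
    if source_url is not None:
        args["source_url"] = source_url  # transcript auto-extracted server-side
    else:
        args["transcript"] = transcript  # faster: skips extraction
    if my_topic is not None:
        args["my_topic"] = my_topic      # default: same niche as the original
    if niche is not None:
        args["niche"] = niche            # optional niche hint
    return args
```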

Output Schema

Parameters

| Name | Required | Description |
|---|---|---|
| overlays | No | Text overlay suggestions per scene |
| from_payg | No | Whether credits came from pay-as-you-go balance |
| hook_score | No | Self-rated hook quality score (0-100) |
| camera_shots | No | Phone-native camera direction per scene |
| fresh_script | No | Complete scene-by-scene script mirroring the source structure |
| credits_remaining | No | Credits remaining after this call |
| extracted_formula | No | The viral DNA formula extracted from the source |
| from_subscription | No | Whether credits came from subscription |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the credit cost, the returned items (formula, script, shots, etc.), and the input constraint (one of source_url/transcript is required). However, it omits details such as what happens if both inputs are provided and how invalid URLs are handled.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Each sentence of the description adds essential information. It front-loads the main action and uses a clear structure, with no redundant or irrelevant details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description sufficiently covers the return values (formula, script, shots, overlays, verify_hook block), the credit cost, and the input constraints. It is missing some edge-case behavior (e.g., handling when both inputs are provided) but is complete for typical use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all 4 parameters. The description adds value by explaining the relationship between source_url and transcript, the niche hint's optionality, and the default behavior for my_topic. This goes beyond the schema's basic descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('produce a fresh script') and clearly identifies the resource ('viral source video') and outcome (mirror viral DNA). It distinguishes from siblings like 'find_viral_template' by focusing on remixing rather than template discovery.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'USE WHEN the user finds a video they want to copy the structure of, or chained from analyze_account.recommended_chain'. Also clarifies input requirements (source_url or transcript) and costs. Does not explicitly exclude alternative tools, but context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
