Vizzy — Arabic AI Creative Platform

Server Details

Arabic-first AI creative platform for Egyptian and Arab businesses. Generate social media designs, write marketing copy in Egyptian dialect, build content calendars, produce Sora-2 videos, AI photoshoots, music tracks, and business documents — with your brand identity automatically applied. Requires a Grow or Business subscription at vizzy.space.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 13 of 13 tools scored. Lowest: 3.4/5.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct creative service—ads analysis, content planning, design generation, music, video, etc. There is no overlapping functionality.

Naming Consistency: 3/5

Most tool names are lowercase with underscores, but they mix verb-noun patterns (e.g., create_design) with noun phrases (e.g., ads_analysis, linkedin_post). This inconsistency could cause confusion.

Tool Count: 5/5

13 tools is appropriate for the breadth of creative services (from copywriting to video generation). It's not overwhelming yet covers key areas.

Completeness: 4/5

The tool set covers the core creative workflow—ideation, copy, design, video, music, analysis. Missing features like asset management or versioning would be nice but not essential for a content generation platform.

Available Tools

13 tools
ads_analysis: A

Create a complete digital advertising strategy for the Egyptian/Arab market. Returns target audience profile, messaging, platform recommendations, example ad headlines, and KPIs. Costs 80 credits.

Parameters
- topic (required): Brand or campaign (e.g. 'مطعم سوشي في القاهرة', 'fashion e-commerce targeting Egyptian women 18-35')
- language (optional): Language (default: arabic)
- platform (optional): Ad platform(s): facebook / instagram / tiktok / google / all
- extra_instructions (optional): Budget range, campaign goal (awareness/sales/leads), competitors to mention
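The parameters above can be sketched as a hypothetical MCP `tools/call` payload. The JSON-RPC envelope follows the MCP convention; the tool name and argument names come from the table, while the argument values are illustrative assumptions:

```python
# Hypothetical tools/call request for ads_analysis; only "topic" is required.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ads_analysis",
        "arguments": {
            "topic": "fashion e-commerce targeting Egyptian women 18-35",
            "platform": "instagram",  # facebook / instagram / tiktok / google / all
            "extra_instructions": "goal: awareness; mid-range budget",
            # "language" omitted -> server default: arabic
        },
    },
}
```

Omitting optional fields lets the server apply its documented defaults; the call consumes 80 credits regardless.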
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavior fully. It mentions credit cost and returned items but lacks details on side effects, limitations, or whether the operation is read-only. Adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: first defines purpose, second lists outputs and cost. No redundant information, efficient and scannable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description enumerates return values (target audience, messaging, etc.) and notes credit cost. This provides sufficient context for a generation tool, though output format remains unspecified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with each parameter having a description. The tool description adds market focus and output context but does not significantly enhance understanding of individual parameters beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool creates a complete digital advertising strategy for a specific market (Egyptian/Arab), listing specific outputs. This distinguishes it from sibling tools like marketing_ideas or write_copy.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for Egyptian/Arab market strategies and mentions a credit cost, but does not explicitly state when to use it over alternatives or provide exclusions. Usage context is implied, not explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

content_calendar: A

Create a full content calendar with post ideas, captions, visual briefs, and hashtags. Built for Egyptian/Arab brands — aware of Ramadan, Eid, national holidays, and local culture. Returns a structured calendar array. Costs 80 credits.

Parameters
- topic (required): Brand name or campaign (e.g. 'براند فشار مصري', 'Ramadan campaign for clothing brand')
- period (optional): Calendar period. Default: monthly
- language (optional): Language (default: arabic)
- platform (optional): Target platforms (default: instagram and facebook)
- extra_instructions (optional): Industry, campaign theme, tone of voice
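The defaults in the table above can be sketched as a small client-side argument builder. The defaults are taken from the parameter descriptions; `build_arguments` is a hypothetical helper, not part of the server API:

```python
# Documented defaults for content_calendar (from the parameter table).
defaults = {"period": "monthly", "language": "arabic",
            "platform": ["instagram", "facebook"]}

def build_arguments(topic, **overrides):
    # Required topic plus caller overrides layered on top of the defaults.
    args = {**defaults, "topic": topic}
    args.update(overrides)
    return args

args = build_arguments("Ramadan campaign for clothing brand", period="weekly")
```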
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the credit cost (80 credits) and the nature of the output (structured calendar array), but it does not detail side effects or limitations beyond that. This is good but could mention whether modifications are possible.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no redundant information. Every sentence adds value: the first states the core function, the second adds cultural context and cost. Exceptional conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters and no output schema, the description covers purpose, audience, output type, and cost. It does not elaborate on what 'visual briefs' entails, but overall it is complete enough for an AI agent to invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and parameter descriptions are clear. The description adds value by mentioning the output structure and credit cost, but does not significantly enhance understanding of parameters beyond what the schema already provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool creates a content calendar with post ideas, captions, visual briefs, and hashtags, and it specifies the target audience (Egyptian/Arab brands) and cultural awareness (Ramadan, Eid, etc.), which distinguishes it from sibling tools like marketing_ideas or write_copy.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (for culturally relevant content calendars) but does not explicitly state when not to use it or mention alternatives like marketing_ideas for more generic ideas. However, the cultural focus provides clear context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_design: A

Generate a social media image or marketing design using AI. Optimized for Arabic text and Egyptian/Arab market aesthetics. Brand identity (colors, logo, style) from the user's Vizzy profile is automatically applied. Returns a public image URL. Costs 160 credits.

Parameters
- style (optional): Visual style of the design
- topic (required): What the design is about — in Arabic or English (e.g. 'بوست عيد الفطر لمطعم', 'Eid sale for a coffee brand')
- language (optional): Text language on the design. Default: arabic
- platform (optional): Target social media platform (affects dimensions and style)
- aspect_ratio (optional): Image dimensions — square (1:1), portrait (4:5), landscape (16:9), wide (2:1)
- extra_instructions (optional): Additional design guidance (e.g. 'خلفية ذهبية، نص أبيض، احتفالي')
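The aspect_ratio options above can be sketched as a lookup a client might validate against before calling. The ratio names come from the table; the tuple values simply restate the documented ratios, and the example arguments are assumptions:

```python
# Documented aspect_ratio options for create_design, expressed as ratios.
ASPECT_RATIOS = {
    "square": (1, 1),      # 1:1
    "portrait": (4, 5),    # 4:5
    "landscape": (16, 9),  # 16:9
    "wide": (2, 1),        # 2:1
}

arguments = {
    "topic": "Eid sale for a coffee brand",  # required, Arabic or English
    "aspect_ratio": "portrait",
    "platform": "instagram",  # affects dimensions and style
}
# Reject unknown ratio names before spending 160 credits on the call.
assert arguments["aspect_ratio"] in ASPECT_RATIOS
```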
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description covers key behaviors: automatic brand application, public image URL output, and credit cost. It does not explicitly label the operation as a creation (as opposed to read-only), but the nature is clear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences plus cost statement, front-loaded with purpose. Every sentence provides value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains the output (public URL), automations (brand identity), and cost (160 credits). Nearly complete for a design generator; missing potential caveats about content restrictions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers all parameters with descriptions (100% coverage). The description adds no parameter-level meaning beyond context, such as the 'Arabic text' note aligning with the language parameter. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly specifies 'generate a social media image or marketing design using AI', distinguishing it from sibling tools like generate_document or generate_video. Adds unique value by noting Arabic/Egyptian market focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States automatic brand identity application and credit cost, implying use when a branded design is needed. Does not explicitly exclude other scenarios or name alternatives, but the context is sufficient for an AI agent to decide.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_document: B

Generate a professional business document — proposal, report, presentation, or spreadsheet. Returns a downloadable URL (PDF, DOCX, or PPTX). Supports Arabic and English documents. Costs 80 credits.

Parameters
- language (optional): Document language (default: en)
- company_name (optional): Company or brand name
- user_request (required): Describe the document (e.g. 'عمل عرض تقديمي لبراند ملابس للمستثمرين', 'create a business proposal for a coffee shop in Cairo')
- document_type (optional): Type of document
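A client-side sketch of the argument rules above: user_request is the only required field, and the default language is English rather than Arabic (unlike most sibling tools). `document_arguments` and the company name are hypothetical:

```python
# Hypothetical guard for generate_document arguments.
def document_arguments(user_request, language="en", **optional):
    if not user_request:
        raise ValueError("user_request is required")
    return {"user_request": user_request, "language": language, **optional}

args = document_arguments(
    "create a business proposal for a coffee shop in Cairo",
    document_type="proposal",
    company_name="Example Cafe",  # hypothetical brand name
)
```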
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries the burden. It discloses that the tool costs 80 credits, returns a URL in PDF/DOCX/PPTX, and supports Arabic and English. However, it does not mention permissions, rate limits, or side effects like resource consumption.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with the core action and key details. No fluff; every sentence adds essential information (document types, output, language support, cost).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description covers main functionality, output format, language support, and cost. Missing details about processing time, error handling, or file persistence, but adequate for basic usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with good descriptions. The description adds value by mentioning output format and credit cost, but it omits the 'contract' document type present in the enum, causing slight misalignment.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool generates professional business documents like proposals, reports, presentations, or spreadsheets and returns a downloadable URL. This purpose is distinct from siblings (e.g., ads_analysis, write_copy), though not explicitly differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like write_copy or create_design. It mentions cost (80 credits) but does not specify prerequisites, scenarios, or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_music: A

Generate an original music track — Arabic, Egyptian, or any genre. Returns a public audio URL. Takes 2-4 minutes to generate. ⚠️ Costs 160 credits — confirm with the user.

Parameters
- style (optional): Genre/style (e.g. 'arabic pop', 'cinematic orchestral', 'lo-fi', 'oriental', 'electronic')
- topic (required): Theme of the music (e.g. 'موسيقى رمضانية هادئة', 'upbeat Egyptian pop for a product launch', 'cinematic ambient for a luxury brand')
- language (optional): Language for lyrics if vocal (default: arabic)
- instrumental (optional): True = music only (default), False = include vocals
- extra_instructions (optional): Mood, tempo, instruments (e.g. 'عود وكمان، إيقاع بطيء وهادئ')
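The instrumental/language interaction above can be sketched as a hypothetical argument builder: instrumental defaults to True (music only), and the lyrics language only matters when vocals are requested. `music_arguments` is an assumption, not the server API:

```python
# Hypothetical argument builder for generate_music.
def music_arguments(topic, instrumental=True, **optional):
    args = {"topic": topic, "instrumental": instrumental, **optional}
    if instrumental:
        # Lyrics language is irrelevant for an instrumental track.
        args.pop("language", None)
    return args

vocal = music_arguments("upbeat Egyptian pop for a product launch",
                        instrumental=False, language="arabic",
                        style="arabic pop")
background = music_arguments("cinematic ambient for a luxury brand",
                             language="arabic")  # dropped: instrumental
```

Given the 2-4 minute latency and 160-credit cost noted in the description, a client should confirm with the user before sending either call.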
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses latency (2-4 minutes), cost (160 credits), and the need for user confirmation. This adds valuable behavioral context beyond what the schema provides.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each serving a clear purpose: first defines the tool, second clarifies output format, third provides critical usage warnings. No wasted words; front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters (all documented in schema), no output schema, and no annotations, the description covers key behavioral aspects (latency, cost, output format). It could mention the default value for 'instrumental' but that is in schema. Adequate for a straightforward generation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters. The description briefly mentions genres but doesn't add additional syntax or format details beyond the schema. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Generate an original music track' with specific genres mentioned ('Arabic, Egyptian, or any genre'), and indicates it returns a public audio URL. This distinguishes it from sibling tools like generate_document or generate_video.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit usage guidance: 'Takes 2-4 minutes to generate' and '⚠️ Costs 160 credits — confirm with the user.' This informs the agent about time and cost, though it doesn't explicitly state when to use this tool over alternatives or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_video: A

Generate a short cinematic marketing video using Sora-2. Returns a public video URL. Takes 1-2 minutes. ⚠️ EXPENSIVE — Costs 600 credits. Always confirm with the user before calling.

Parameters
- topic (required): What the video is about (e.g. 'فيديو إعلاني لمنتج عطر فاخر', 'cinematic ad for a premium coffee brand')
- language (optional): Language (default: arabic)
- aspect_ratio (optional): Dimensions — portrait (9:16 for Reels/TikTok), landscape (16:9), square (1:1). Default: portrait
- extra_instructions (optional): Mood, style, scene description (e.g. 'غروب الشمس، موسيقى هادئة، ألوان دافئة')
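The confirm-before-calling requirement in the description can be sketched as a client-side wrapper. The 600-credit cost comes from the description; `call_generate_video` and the `confirm` callback are hypothetical client-side constructs:

```python
# Sketch of the confirm-before-calling pattern the description asks for.
COST_CREDITS = 600

def call_generate_video(arguments, confirm):
    if not confirm(f"generate_video costs {COST_CREDITS} credits. Proceed?"):
        return None  # user declined; no credits spent
    # ...the actual MCP tools/call would go here...
    return {"name": "generate_video", "arguments": arguments}

result = call_generate_video(
    {"topic": "cinematic ad for a premium coffee brand",
     "aspect_ratio": "portrait"},  # 9:16, the default, for Reels/TikTok
    confirm=lambda prompt: True,   # stand-in for a real user prompt
)
```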
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes latency (1-2 minutes) and output type (public URL). No annotations exist, so the description carries the burden. It lacks details on credit deduction, idempotency, and error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise: two sentences, with an emoji highlighting the cost warning. Front-loaded with purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema or annotations, the description covers the key aspects: what it does, duration, cost, and the confirmation requirement. It could mention error conditions or limitations, but it is sufficient overall.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with examples. The description adds no additional meaning beyond the schema, but the baseline is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb (generate), resource (video), model (Sora-2), and output (public URL). Distinguishes from sibling tools like generate_music or generate_document.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly warns of high cost (600 credits) and instructs to confirm with user. Does not compare to alternatives but provides strong contextual guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

linkedin_post: A

Write a professional LinkedIn post in formal Arabic (فصحى) or English. Returns full post text, hashtags, and opening hook. Costs 160 credits.

Parameters
- topic (required): What to write about (e.g. 'إطلاق منتج جديد', 'company milestone announcement', 'leadership insight about the Egyptian startup scene')
- language (optional): Language — arabic uses formal Arabic (فصحى), not dialect. Default: arabic
- extra_instructions (optional): Tone, achievement to highlight, call to action
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description discloses return values (post text, hashtags, hook) and cost (160 credits). However, it does not clarify that it only generates text (not posts to LinkedIn) or mention any side effects, which is a minor gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core action, and contains no superfluous information. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains return values. It also mentions cost. However, it lacks details about constraints (e.g., character limit) and the distinction between generation and actual posting, leaving some gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear property descriptions. The description adds context about return values but does not enhance parameter meaning beyond the schema, earning a baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool writes a professional LinkedIn post in specific languages (formal Arabic or English) and returns the post text, hashtags, and opening hook. This distinguishes it from sibling tools like write_copy or marketing_ideas, which are more general.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for generating LinkedIn posts but does not explicitly provide when-to-use or when-not-to-use guidance. No alternatives are mentioned, leaving it to the agent to infer context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

marketing_ideas: A

Generate creative marketing campaign ideas for the Egyptian/Arab market. Deep understanding of Ramadan, Eid, Egyptian consumer behavior, and local culture. Returns campaign concepts with hooks, channels, and content types. Costs 80 credits.

Parameters
- topic (required): Brand, product, or campaign (e.g. 'براند ملابس مصري عصري', 'F&B restaurant Ramadan campaign')
- language (optional): Language (default: arabic)
- platform (optional): Platform(s): instagram / tiktok / facebook / multi-platform
- extra_instructions (optional): Target audience, campaign season, budget level, tone
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided. The description only mentions a credit cost (80 credits) but does not disclose any other behavioral traits such as destructive actions, authentication needs, or rate limits. For a generation tool, more transparency about non-destructive nature would be helpful.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the primary purpose, and includes key output details and cost without extraneous information. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description provides a good overview of the tool's purpose, market focus, output components, and cost. It is fairly complete for a creative idea generation tool, though some additional usage context could be added.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with descriptions for each parameter. The description adds meaning by explaining the output (hooks, channels, content types) and the cultural focus, which enriches understanding beyond the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it generates creative marketing campaign ideas for the Egyptian/Arab market, with understanding of local culture. It specifies returning campaign concepts with hooks, channels, and content types. This distinguishes it from siblings like ads_analysis or content_calendar.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for generating ideas but does not explicitly state when to use it versus alternatives or provide exclusions. The sibling tools hint at alternatives, but no direct guidance is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

media_spending: A

Create a monthly media spending plan with channel budget allocation. Returns a breakdown by channel with rationale, expected reach, and KPIs. Designed for Egyptian market ad budgets (EGP). Costs 80 credits.

Parameters
- topic (required): Brand or campaign name
- language (optional): Language (default: arabic)
- platform (optional): Focus platforms if specific
- extra_instructions (optional): STRONGLY RECOMMENDED: Monthly budget (e.g. 'ميزانية 20,000 جنيه شهرياً', 'budget is 20,000 EGP/month')
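The table marks extra_instructions as "STRONGLY RECOMMENDED" for carrying the budget; a hypothetical client-side helper can fold a budget into that field before calling. `media_spending_arguments` and the example topic are assumptions:

```python
# Hypothetical helper: put the monthly budget (EGP) into extra_instructions,
# the field the table recommends for it.
def media_spending_arguments(topic, budget_egp=None, **optional):
    args = {"topic": topic, **optional}
    if budget_egp is not None:
        args["extra_instructions"] = f"budget is {budget_egp:,} EGP/month"
    return args

args = media_spending_arguments("local fashion brand", budget_egp=20000)
```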
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries the burden. It states the tool creates a plan and costs 80 credits, implying mutation and a cost. However, it does not disclose side effects, required permissions, or whether the plan is persisted elsewhere.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences efficiently convey purpose, output, and key constraints (market, cost). No redundancy. Front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the description covers basic purpose and output, it lacks guidance on when to use this tool vs. siblings, and does not elaborate on the output structure or how to interpret the breakdown. For a tool with no output schema and 4 parameters, additional context would help.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% so parameters are already well-described. The description adds context about Egyptian market focus and credit cost, but does not significantly enhance parameter understanding beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool creates a monthly media spending plan with channel budget allocation, specifying output details and target market (Egyptian, EGP). It is distinct from siblings like ads_analysis or content_calendar, which serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., ads_analysis for analysis, content_calendar for scheduling). There is no explicit 'when to use' or 'when not to use' indication.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

photoshoot: A

Apply AI fashion or product photoshoot styling to an existing image. Upload a product or clothing image → get a professional-looking photoshoot output. Returns a styled image URL. ⚠️ Costs 250 credits — confirm before calling. REQUIRED: image_url must be a publicly accessible URL.

Parameters

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| topic | Yes | Style of photoshoot (e.g. 'elegant outdoor fashion shoot', 'studio product photography white background') | |
| image_url | Yes | Public URL of the product or clothing image to style (must be accessible without login) | |
| aspect_ratio | No | Output dimensions | portrait (4:5) |
| extra_instructions | No | Background, lighting, mood (e.g. 'خلفية طبيعية خضراء، إضاءة ناعمة', i.e. "natural green background, soft lighting") | |
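The parameter table above maps directly onto a standard MCP `tools/call` request. A minimal sketch, assuming a JSON-RPC 2.0 transport as used by MCP; the image URL and argument values here are hypothetical examples, not real assets:

```python
import json

# Hypothetical tools/call request for the photoshoot tool (250 credits).
# The payload shape follows the MCP tools/call convention; values are
# illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "photoshoot",
        "arguments": {
            # Required: style of the shoot
            "topic": "studio product photography white background",
            # Required: must be publicly accessible (no login wall)
            "image_url": "https://example.com/product.jpg",
            # Optional: omitting this falls back to portrait (4:5)
            "aspect_ratio": "1:1",
        },
    },
}

print(json.dumps(request, ensure_ascii=False, indent=2))
```

Note that `image_url` is validated server-side for public accessibility, so a URL behind authentication would fail even though the request itself is well-formed.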
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses cost (250 credits), required URL format, and that output is an image URL. It does not mention rate limits or error handling but covers primary behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences: purpose, process/output, warnings/requirements. Each sentence serves a distinct purpose with no wasted words. Front-loaded with key action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description states return type ('styled image URL'). Covers input, cost, and constraints. Missing details on failure cases or styling process, but sufficient for tool usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so parameters are well-documented. The description adds context about the credit cost and reinforces the requirement for a publicly accessible URL, adding value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Apply'), resource ('AI fashion or product photoshoot styling'), input ('existing image'), and output ('styled image URL'). It distinguishes well from sibling tools, which are analysis, generation, and suggestion tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description tells when to use (for product/clothing images) and what is required (publicly accessible URL). It warns about the 250-credit cost and to confirm before calling, but does not explicitly exclude alternative uses.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

suggest: A

Get free post idea suggestions based on the user's brand profile — no credits deducted. Call this FIRST to show users content options before generating. Ideas are culturally tuned for the Egyptian/Arab market. Costs 0 credits.

Parameters

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| count | No | Number of suggestions | 10 |
| theme | No | Theme: ramadan / eid / summer / product launch / back-to-school / national-day | |
| language | No | Language | arabic |
| platform | No | Platform | instagram |
| extra_instructions | No | Specific focus or requirements | |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses that the tool is free (0 credits), non-destructive (get suggestions), and culturally tuned. It does not mention authentication needs, rate limits, or response format, but the core behavior is well-covered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences front-loaded with the main action, followed by usage instruction and key differentiators. No redundant information; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers purpose, usage, and cost well, but lacks details on output format (e.g., what the suggestions look like) and prerequisites for the brand profile. Given the absence of an output schema and the presence of sibling tools, it is mostly complete but could be more transparent about what the results contain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with all parameters described in the input schema. The description adds the context of 'based on the user's brand profile' but does not elaborate on parameter usage beyond the schema. Baseline 3 is appropriate as the description does not significantly augment parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides free post idea suggestions based on brand profile, culturally tuned for Egyptian/Arab market. It distinguishes from siblings by emphasizing 'Call this FIRST' before generating content, making it a unique preliminary step.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Call this FIRST to show users content options before generating,' providing clear when-to-use guidance. It also specifies cultural tuning and zero cost, helping the agent decide when to invoke this tool over others like generate_document or write_copy.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
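The "call this FIRST" guidance above lends itself to a simple two-step agent flow: fetch free ideas, let the user pick one, and only then spend credits on generation. A hedged sketch, assuming JSON-RPC `tools/call` requests; the `make_call` helper and all argument values are hypothetical:

```python
def make_call(name, arguments, request_id):
    """Build an MCP tools/call request body for the named tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Step 1: ideation is free (suggest costs 0 credits), so call it first.
ideas_req = make_call("suggest", {"count": 5, "platform": "instagram"}, 1)

# Step 2: only after the user picks an idea should a paid tool run
# (write_copy costs 80 credits per the listing above).
copy_req = make_call("write_copy", {"topic": "summer sale for a fashion brand"}, 2)

print(ideas_req["params"]["name"], "->", copy_req["params"]["name"])
```

Ordering the flow this way keeps the credit-bearing call behind an explicit user decision, which is exactly the misuse the usage guidance tries to prevent.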

write_copy: A

Write Arabic marketing copy, social media captions, or ad text. Uses Egyptian dialect by default — professional, relatable, culturally relevant to the Arab market. Returns copy text and hashtags. Costs 80 credits.

Parameters

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| topic | Yes | What to write about (e.g. 'عرض رمضان لمطعم كوفي', i.e. "a Ramadan offer for a café"; 'summer sale for a fashion brand') | |
| language | No | Language; arabic uses Egyptian dialect (عامية مصرية) by default | |
| platform | No | Target platform (affects copy length and tone) | |
| extra_instructions | No | Tone, audience, special requirements (e.g. 'فكاهي وخفيف', i.e. "humorous and light"; 'urgent, expires tonight') | |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the burden of behavioral disclosure. It states the default Egyptian dialect, professional tone, output format, and credit cost. While it doesn't cover error handling or limitations, it sufficiently describes the tool's core behavior for a generative copywriting tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, front-loading the main purpose, followed by default behavior, output, and cost. Every sentence adds value with no redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description includes the return type (copy text and hashtags) and cost. It covers the essential context for using the tool, though it could mention variations like handling multiple paragraphs or length constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the parameters are already well-documented. The description adds the default dialect but this is also noted in the schema's language enum description. No significant additional meaning is provided beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool writes Arabic marketing copy, social media captions, or ad text, specifying the dialect and target market. It also mentions the output includes copy text and hashtags, making it distinct from sibling tools like ads_analysis or marketing_ideas.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use cases (marketing copy, social media, ads) but does not explicitly state when to use this tool versus alternatives, nor does it provide exclusions or when-not-to-use guidance. Sibling tools exist but no comparison is made.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

youtube_ideas: A

Generate YouTube video ideas with titles, thumbnail concepts, and hooks. Tailored for Arabic YouTube creators and Egyptian businesses. Returns a list of ideas with tags. Costs 80 credits.

Parameters

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| topic | Yes | Channel topic or brand niche (e.g. 'قناة طبخ مصري', i.e. "an Egyptian cooking channel"; 'business & entrepreneurship'; 'skincare brand Egypt') | |
| language | No | Language | arabic |
| extra_instructions | No | Target audience, video length, style preferences | |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It mentions the cost (80 credits) but fails to describe other important aspects like whether the generation is deterministic, what the max number of ideas is, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences concisely cover purpose, outputs, target audience, and cost. No wasted words, and the most important information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is complete for a simple generative tool: it states inputs, outputs, and cost. However, it lacks mention of how many ideas are returned or differentiation from the marketing_ideas sibling, which would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers all three parameters with descriptions, so the baseline is 3. The description adds that the tool returns a list of ideas with tags and states the cost, but provides no parameter-specific meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates YouTube video ideas with specific outputs (titles, thumbnail concepts, hooks). It also specifies the target audience (Arabic YouTube creators, Egyptian businesses), distinguishing it from general idea tools like marketing_ideas.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for YouTube content creation but does not explicitly specify when to use versus alternatives or provide exclusions. The mention of Arabic and Egyptian focus gives context but lacks direct guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
