
Brandomica Lab

Ownership verified

Server Details

Brand name verification across domains, social handles, trademarks, app stores, and SaaS channels.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL
Repository: BRNDMK/brandomica-mcp-server
GitHub Stars: 0
Server Listing: brandomica-mcp-server


Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

12 tools
brandomica_assess_safety: Brand Safety Assessment (Grade: A)
Read-only · Idempotent

Return only the brand safety block for fast agent decisions. Uses the same check pipeline as brandomica_check_all and outputs overall risk, 0-100 safety score, blockers, signal breakdown, and recommended actions.

Parameters (JSON Schema)
Name | Required | Description | Default
mode | No | Check mode: 'quick' (default) for faster safety decisions, 'full' for complete evidence coverage | quick
brand_name | Yes | The brand name to check |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond annotations: it specifies the tool is optimized for speed ('fast agent decisions'), mentions it uses a 'check pipeline' similar to another tool, and details the output structure (risk, score, blockers, etc.). Annotations cover safety aspects (readOnly, non-destructive, idempotent), but the description enhances this with performance and output information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded, with two sentences that efficiently convey purpose, usage context, and output details without redundancy. Every sentence adds value, making it easy for an agent to quickly understand the tool's role.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (brand safety assessment) and lack of output schema, the description does well by detailing the output components (risk, score, blockers, etc.). Annotations provide safety context, but the description could slightly improve by mentioning any limitations or error handling. Overall, it's nearly complete for informed agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents both parameters. The description does not add any parameter-specific semantics beyond what's in the schema (e.g., it doesn't explain 'mode' options further or 'brand_name' constraints). Baseline score of 3 is appropriate as the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Return') and resource ('brand safety block'), distinguishing it from siblings by specifying it's for 'fast agent decisions' and comparing it to 'brandomica_check_all' which suggests a more comprehensive alternative. It explicitly identifies what it outputs (overall risk, safety score, blockers, etc.).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('for fast agent decisions') and references an alternative ('brandomica_check_all'), indicating it's a streamlined version. It implies this is for quick safety assessments rather than comprehensive checks, helping the agent choose between this and other brand-related tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
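
The parameter table above maps directly onto an MCP `tools/call` request. A minimal sketch of building that JSON-RPC 2.0 payload (the transport and server URL are out of scope here; `build_tool_call` and the brand name "Acme" are illustrative, not part of the server's API):

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 `tools/call` request body for an MCP server."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Quick safety-only check for a single candidate name.
payload = build_tool_call(
    "brandomica_assess_safety",
    {"brand_name": "Acme", "mode": "quick"},
)
print(json.dumps(payload, indent=2))
```

The same builder works for every tool on this page; only `name` and `arguments` change.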

brandomica_batch_check: Batch Brand Check (Grade: A)
Read-only · Idempotent

Check 2-10 brand names in a single call. Runs checks concurrently (5 at a time) and returns results sorted by score descending. Each result includes availability score and safety assessment.

Parameters (JSON Schema)
Name | Required | Description | Default
mode | No | Check mode: 'quick' (default) for speed, 'full' for complete checks | quick
brand_names | Yes | Array of 2-10 brand names to check |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies the concurrency limit ('5 at a time'), the sorting behavior ('sorted by score descending'), and the output content ('Each result includes availability score and safety assessment'). Annotations cover safety aspects (readOnly, non-destructive, idempotent, open-world), so the description complements them with operational details without contradiction, earning a high score for transparency.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and concise, with three sentences that efficiently convey purpose, behavior, and output. Every sentence adds value: the first defines the scope, the second explains concurrency and sorting, and the third details the result content. There is no wasted text, making it highly structured and easy to parse.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (batch processing with concurrency), rich annotations (covering safety and idempotency), and no output schema, the description is mostly complete. It explains the input range, concurrency, sorting, and result structure. However, it lacks details on error handling or what 'availability score' and 'safety assessment' entail, which could be useful for an agent. This minor gap prevents a perfect score.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description does not add meaning beyond the input schema, which has 100% coverage and fully documents both parameters ('mode' and 'brand_names'). It mentions '2-10 brand names' and 'check mode', but these are already covered in the schema's descriptions and constraints. With high schema coverage, the baseline is 3, as the description provides no extra parameter insights.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check 2-10 brand names in a single call'), the resource ('brand names'), and distinguishes it from siblings by emphasizing batch processing with concurrent execution. It explicitly mentions the output format ('returns results sorted by score descending'), which helps differentiate it from single-check tools like 'brandomica_check_all' or specialized checks like 'brandomica_check_domains'.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for checking multiple brand names (2-10) efficiently with concurrent processing. It implicitly suggests alternatives by mentioning the scope ('availability score and safety assessment'), which might differ from siblings like 'brandomica_assess_safety' (focused only on safety) or 'brandomica_check_trademarks' (specialized checks). However, it does not explicitly state when not to use it or name specific alternatives, keeping it at a 4.

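
Because the server enforces the 2-10 name range, an agent can validate arguments locally before spending a call. A sketch under that assumption (`build_batch_args` and the sample names are hypothetical helpers, not part of the server API):

```python
def build_batch_args(brand_names, mode="quick"):
    """Validate the 2-10 constraint locally, then build the arguments dict
    for brandomica_batch_check. The server runs checks 5 at a time and
    returns results sorted by score descending."""
    names = list(brand_names)
    if not 2 <= len(names) <= 10:
        raise ValueError("brandomica_batch_check accepts 2-10 brand names")
    return {"brand_names": names, "mode": mode}

# Three illustrative candidate names; 'quick' is the documented default mode.
args = build_batch_args(["Acme", "Bolt", "Cirro"])
```

Failing fast on a single name (or eleven) saves a round trip that the server would reject anyway.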
brandomica_brand_report: Brand Safety Report (Grade: A)
Read-only · Idempotent

Generate a comprehensive Brand Safety Report with timestamped evidence for due diligence. Includes availability score, safety assessment, filing readiness, linguistic/phonetic screening, all evidence, domain costs, trademark filing estimates, and limitations. Returns full JSON report.

Parameters (JSON Schema)
Name | Required | Description | Default
brand_name | Yes | The brand name to check |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent/destructive status, lowering the bar. The description adds valuable behavioral context: reports contain 'timestamped evidence,' returns 'full JSON report' (compensating for missing output schema), and enumerates specific report sections (linguistic/phonetic screening, limitations). No contradictions with annotations—'generate' is acceptable terminology for read-only report generation.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences efficiently structured: purpose/use-case first, specific contents second, return format third. Minor redundancy exists ('all evidence' is vague alongside specific evidence types like 'availability score'), but overall every sentence earns its place by conveying distinct information about scope, contents, and output format.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of annotations covering safety/behavioral traits and a simple single-parameter schema, the description provides sufficient completeness. It compensates for the missing output schema by explicitly stating 'Returns full JSON report' and details the report's comprehensive contents, leaving minimal gaps for an agent to invoke this correctly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single brand_name parameter, the baseline score is 3. The description does not add syntax details, format examples, or semantic constraints beyond the schema's pattern and description, meeting but not exceeding the baseline.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Generate[s] a comprehensive Brand Safety Report'—specific verb and resource. It distinguishes from siblings implicitly by listing comprehensive outputs (availability score, safety assessment, filing readiness, domain costs, trademark estimates) that aggregate what individual check_* tools likely do separately, though it lacks explicit comparison language.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides an implied use case ('for due diligence') and suggests comprehensive scope via the included components. However, it lacks explicit guidance on when to use this aggregated report versus the individual brandomica_check_* siblings or brandomica_assess_safety, and mentions no prerequisites beyond the obvious brand_name parameter.

brandomica_check_all: Full Brand Check (Grade: A)
Read-only · Idempotent

Check brand name availability across domains (with pricing), social handles, trademarks, app stores, and SaaS channels. Returns structured JSON with a 0-10 availability score and a 0-100 safety assessment. Use mode='quick' for faster results with fewer checks (domains without pricing, GitHub only, npm only, trademarks, no app stores or web presence).

Parameters (JSON Schema)
Name | Required | Description | Default
mode | No | Check mode: 'full' runs all checks with pricing, 'quick' runs essential checks only (~3-4 API calls) | full
brand_name | Yes | The brand name to check |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains the performance implications of different modes ('quick' uses ~3-4 API calls vs. full), specifies what checks are included or excluded in each mode, and describes the structured JSON output format with scoring details. While annotations cover safety (readOnlyHint, destructiveHint), the description enriches this with operational details without contradicting annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the tool's comprehensive purpose and output format, and the second provides specific usage guidance for the 'mode' parameter. Every sentence adds critical value with zero wasted words, making it easy for an agent to parse and apply.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple check types, two operational modes) and lack of an output schema, the description provides complete context: it specifies the scope of checks, explains mode differences, details the JSON output structure (availability score and safety assessment), and relates to sibling tools implicitly through its comprehensive coverage. This compensates well for the missing output schema.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents both parameters well. The description adds meaningful context by explaining the practical differences between 'full' and 'quick' modes (e.g., 'quick' excludes pricing, app stores, and web presence checks), which helps the agent understand the semantic impact of parameter choices beyond the schema's technical definitions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check brand name availability') and resources involved ('across domains, social handles, trademarks, app stores, and SaaS channels'), distinguishing it from sibling tools like 'brandomica_check_domains' or 'brandomica_check_social' which focus on specific aspects. It provides a comprehensive scope that differentiates it from narrower sibling tools.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides usage guidance by specifying when to use the 'quick' mode ('for faster results with fewer checks') and detailing what it includes versus excludes compared to 'full' mode. It distinguishes this comprehensive check from more specialized sibling tools by listing all the areas it covers, helping the agent choose this over alternatives like 'brandomica_check_appstores' or 'brandomica_check_trademarks'.

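
The tool documents a 0-10 availability score and a 0-100 safety assessment but publishes no output schema, so the field names below are assumptions for illustration only. A hedged sketch of a downstream decision rule an agent might apply:

```python
def triage(result: dict) -> str:
    """Triage a brandomica_check_all result into a next action.
    The keys 'availability_score' (0-10) and 'safety_score' (0-100) are
    assumed names; the page documents the ranges, not the field names.
    The thresholds are arbitrary examples."""
    availability = result["availability_score"]
    safety = result["safety_score"]
    if availability >= 7 and safety >= 70:
        return "shortlist"
    if availability >= 4:
        return "review"
    return "reject"

# Hypothetical result values for illustration.
print(triage({"availability_score": 8, "safety_score": 85}))  # shortlist
```

An agent would inspect the real response once to confirm the key names before hard-coding a rule like this.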
brandomica_check_appstores: App Store Search (Grade: A)
Read-only · Idempotent

Search iOS App Store and Google Play for apps matching the brand name.

Parameters (JSON Schema)
Name | Required | Description | Default
brand_name | Yes | The brand name to check |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key behavioral traits (read-only, open-world, idempotent, non-destructive), so the bar is lower. The description adds value by specifying the search scope (iOS App Store and Google Play), which is useful context not captured in annotations. However, it doesn't mention potential rate limits, authentication needs, or result format details.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without any fluff. It is front-loaded with the core action and resources, making it easy to parse quickly.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input schema (1 parameter, 100% coverage) and rich annotations, the description is adequate for a basic search tool. However, without an output schema, it doesn't explain what the search returns (e.g., app names, ratings, links), which is a gap. The lack of usage guidelines also limits completeness.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'brand_name' well-documented in the schema. The description adds no additional parameter semantics beyond implying it's used for matching apps, which is already clear from the schema. This meets the baseline score when schema coverage is high.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Search') and the target resources ('iOS App Store and Google Play for apps matching the brand name'), making the purpose immediately understandable. It distinguishes itself from sibling tools like 'check_domains' or 'check_social' by specifying the app store context.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'brandomica_check_all' or 'brandomica_batch_check'. It lacks any mention of prerequisites, limitations, or specific scenarios where this search is most appropriate, leaving the agent with no contextual usage cues.

brandomica_check_domains: Domain Availability (Grade: B)
Read-only · Idempotent

Check domain availability across 6 TLDs (.com, .io, .co, .app, .dev, .ai) with purchase and renewal pricing.

Parameters (JSON Schema)
Name | Required | Description | Default
brand_name | Yes | The brand name to check |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds value by specifying the TLDs checked and that pricing information is included, which is useful context beyond annotations. However, it doesn't disclose other behavioral traits like rate limits, error handling, or response format.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core functionality ('Check domain availability') and adds essential details (TLDs and pricing). There is no wasted verbiage, and every part of the sentence contributes meaning.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (checking availability across multiple TLDs with pricing), annotations cover safety and idempotency well, and the schema fully documents the single parameter. However, without an output schema, the description doesn't explain return values (e.g., structure of availability results or pricing details), leaving a gap. It's adequate but not fully complete.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'brand_name' fully documented in the schema. The description doesn't add any parameter-specific information beyond what the schema provides (e.g., it doesn't clarify how the brand name is used in domain checks). Baseline 3 is appropriate when the schema handles parameter documentation adequately.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Check domain availability across 6 TLDs (.com, .io, .co, .app, .dev, .ai) with purchase and renewal pricing.' This specifies the verb ('Check'), resource ('domain availability'), and scope (6 specific TLDs with pricing). However, it doesn't explicitly differentiate from sibling tools like 'brandomica_check_all' or 'brandomica_batch_check', which likely have overlapping functionality.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'brandomica_check_all' (which might check more TLDs) or 'brandomica_batch_check' (which might handle multiple names), nor does it specify prerequisites or exclusions. The context is implied but not explicit.

brandomica_check_google: Web Presence (Google Search) (Grade: A)
Read-only · Idempotent

Search Google for existing companies or products using a brand name. Detects competitor overlap that may not appear in formal registries.

Parameters (JSON Schema)
Name | Required | Description | Default
brand_name | Yes | The brand name to check |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds value by specifying it searches Google and detects competitor overlap, but does not disclose additional behavioral traits like rate limits, authentication needs, or result format. No contradiction with annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and concise with two sentences that efficiently convey the tool's purpose and unique value. Every sentence earns its place by specifying the action and differentiating from formal registries.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple search with one parameter), rich annotations covering safety and behavior, and no output schema, the description is mostly complete. It explains the tool's purpose and context well, but could improve by mentioning result format or limitations to fully compensate for the lack of output schema.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'brand_name' fully documented in the schema. The description does not add meaning beyond the schema, such as examples or usage notes for the brand name. Baseline score of 3 is appropriate as the schema handles parameter documentation.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Search Google') and resource ('existing companies or products using a brand name'), and distinguishes it from siblings by specifying it detects 'competitor overlap that may not appear in formal registries', unlike tools like 'brandomica_check_trademarks' which likely check formal registries.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for detecting competitor overlap beyond formal registries, but does not explicitly state when to use this tool versus alternatives like 'brandomica_check_all' or 'brandomica_check_social'. It provides some context but lacks clear exclusions or named alternatives.

brandomica_check_saas: Package Registry & SaaS Availability (Grade: A)
Read-only · Idempotent

Check package name availability across npm, PyPI, crates.io, RubyGems, NuGet, Homebrew, Docker Hub, and ProductHunt.

Parameters (JSON Schema)
Name | Required | Description | Default
brand_name | Yes | The brand name to check |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a read-only, non-destructive, idempotent, and open-world operation. The description adds value by specifying the exact platforms checked, which provides context beyond annotations. However, it does not mention potential rate limits, authentication needs, or detailed behavioral traits like response format or error handling.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Check package name availability') and lists all relevant platforms without unnecessary words. Every part of the sentence contributes directly to understanding the tool's function.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema) and rich annotations, the description is mostly complete. It clearly states what the tool does and across which platforms. However, it lacks details on output format or error scenarios, which could be helpful for an agent despite the annotations covering safety aspects.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'brand_name' clearly documented. The description does not add any parameter-specific semantics beyond what the schema provides, such as format examples or constraints. Baseline score of 3 is appropriate given the high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check package name availability') and enumerates the exact resources across which the check is performed (npm, PyPI, crates.io, RubyGems, NuGet, Homebrew, Docker Hub, and ProductHunt). It distinguishes this tool from siblings like 'brandomica_check_domains' or 'brandomica_check_social' by specifying the scope of package registries and SaaS platforms.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for checking package name availability across listed platforms, but it does not explicitly state when to use this tool versus alternatives like 'brandomica_check_all' or 'brandomica_batch_check'. No exclusions or prerequisites are mentioned, leaving the agent to infer context from the tool name and description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

brandomica_check_social: Social Handle Availability (A)
Read-only · Idempotent

Check social media handle availability on GitHub, Twitter/X, TikTok, LinkedIn, and Instagram.

Parameters (JSON Schema)
Name | Required | Description | Default
brand_name | Yes | The brand name to check | (none)
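
Assuming the standard MCP JSON-RPC `tools/call` envelope, a client-side call to this tool might be assembled as in the following minimal sketch (the helper name and request id are illustrative, not part of the server's API):

```python
import json

def build_check_social_call(brand_name: str, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 tools/call request for brandomica_check_social.

    Only the documented 'brand_name' argument is passed; the envelope shape
    follows the MCP tools/call convention.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "brandomica_check_social",
            "arguments": {"brand_name": brand_name},
        },
    }

# Serialize the request for sending over the Streamable HTTP transport.
payload = build_check_social_call("acmebrand")
print(json.dumps(payload, indent=2))
```

How the request is transported (headers, session handling) depends on the client; only the body shape is sketched here.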
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide excellent behavioral coverage (read-only, open-world, idempotent, non-destructive). The description adds useful context by specifying which platforms are checked, which isn't captured in annotations. No contradictions exist between description and annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with zero wasted words. Front-loaded with the core action ('Check social media handle availability') followed by specific platform enumeration. Every element serves a clear purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read-only tool with comprehensive annotations, the description provides adequate context about scope (which platforms). The main gap is lack of output format information (no output schema exists), but annotations cover the safety profile well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with a single well-documented parameter. The description doesn't add any parameter-specific information beyond what the schema provides (brand name checking), meeting the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check') and resource ('social media handle availability') with explicit platform enumeration (GitHub, Twitter/X, TikTok, LinkedIn, Instagram). It distinguishes from sibling tools like 'check_domains' or 'check_appstores' by focusing specifically on social media handles.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (checking social media handles for brand names) but provides no explicit guidance on when to use this tool versus alternatives like 'brandomica_check_all' or 'brandomica_batch_check'. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

brandomica_check_trademarks: Trademark Search (A)
Read-only · Idempotent

Check trademark registries for existing registrations of a brand name. USPTO uses Turso (hosted SQLite FTS5) as the primary provider with local bulk index as legacy fallback; EUIPO returns a manual search link.

Parameters (JSON Schema)
Name | Required | Description | Default
brand_name | Yes | The brand name to check | (none)
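
Because the tool publishes no output schema, a caller must decide how to interpret results. A hedged sketch of result handling, with an entirely hypothetical result shape (the field names `uspto_matches` and `euipo_search_url` are assumptions for illustration only):

```python
def summarize_trademark_result(result: dict) -> str:
    """Summarize a hypothetical brandomica_check_trademarks result.

    The field names below are illustrative assumptions -- the tool does not
    publish an output schema, so verify against a real response first.
    """
    matches = result.get("uspto_matches", [])
    lines = [f"USPTO: {len(matches)} potential match(es)"]
    for m in matches:
        lines.append(f"  - {m['mark']} (serial {m['serial']})")
    # Per the description, EUIPO is returned only as a manual search link.
    euipo = result.get("euipo_search_url")
    if euipo:
        lines.append(f"EUIPO: manual search required -> {euipo}")
    return "\n".join(lines)

sample = {
    "uspto_matches": [{"mark": "ACME", "serial": "97000000"}],
    "euipo_search_url": "https://euipo.europa.eu/eSearch/",
}
print(summarize_trademark_result(sample))
```

The split handling mirrors the description: USPTO results arrive as data (Turso-backed), while EUIPO coverage is a link the user must follow manually.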
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context by specifying the data sources (USPTO with Turso/legacy fallback, EUIPO manual link), which helps the agent understand the tool's operational characteristics beyond the annotations. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, with the first sentence stating the core purpose. The second sentence efficiently adds technical context without redundancy. Every sentence contributes value, and there is no wasted verbiage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, no output schema), the description is mostly complete. It covers purpose, data sources, and technical implementation. However, it lacks details on output format or result interpretation, which would be helpful since there's no output schema, leaving a minor gap in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'brand_name' fully documented in the schema. The description does not add any additional meaning or examples for the parameter beyond what the schema provides, such as format details or usage tips, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check trademark registries') and resource ('existing registrations of a brand name'), distinguishing it from siblings like domain or social media checks by focusing on trademark registries. It explicitly mentions USPTO and EUIPO as the registries being searched.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for trademark searches but does not explicitly state when to use this tool versus alternatives like 'brandomica_check_all' or 'brandomica_batch_check'. It mentions technical providers (Turso, local bulk index) but lacks clear guidance on scenarios or prerequisites for choosing this tool over others.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

brandomica_compare_brands: Compare Brand Names (A)
Read-only · Idempotent

Compare 2-5 brand name candidates side-by-side. Checks each across domains, social handles, trademarks, app stores, and SaaS channels. Returns availability score plus safety assessment per candidate and a highest-scoring recommendation.

Parameters (JSON Schema)
Name | Required | Description | Default
brand_names | Yes | Array of 2-5 brand names to compare | (none)
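
The documented 2-5 item constraint can be enforced client-side before the call, so invalid requests fail fast instead of round-tripping to the server. A minimal sketch (the helper name is illustrative):

```python
def validate_brand_names(brand_names: list[str]) -> list[str]:
    """Enforce the documented 2-5 entry constraint for
    brandomica_compare_brands before sending the request.
    """
    if not 2 <= len(brand_names) <= 5:
        raise ValueError(
            f"brand_names must contain 2-5 entries, got {len(brand_names)}"
        )
    # Normalize whitespace and reject blank entries.
    cleaned = [name.strip() for name in brand_names]
    if any(not name for name in cleaned):
        raise ValueError("brand_names must not contain empty entries")
    return cleaned
```

This only mirrors the array-length constraint stated in the schema; any pattern constraints on individual names would still be validated server-side.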
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond annotations by specifying what checks are performed (domains, social handles, trademarks, app stores, SaaS channels) and what is returned (availability score, safety assessment, recommendation), though it doesn't mention rate limits or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by specifics of checks and returns in a second sentence. Every sentence adds essential information with zero waste, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple checks across channels) and lack of an output schema, the description does a good job explaining what is returned (availability score, safety assessment, recommendation). However, it could be more complete by detailing the format of the safety assessment or specifying any limitations (e.g., geographic scope of trademark checks).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'brand_names' fully documented in the schema (an array of 2-5 strings with pattern constraints). The description echoes the '2-5 brand name candidates' constraint but adds little beyond the schema, such as format examples or usage context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Compare 2-5 brand name candidates side-by-side') and resource ('brand name candidates'), and distinguishes from siblings by specifying it checks multiple channels (domains, social handles, trademarks, app stores, SaaS channels) unlike single-channel tools like 'brandomica_check_domains' or 'brandomica_check_social'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Compare 2-5 brand name candidates side-by-side') and implicitly suggests alternatives by listing the specific channels it checks, but does not explicitly state when not to use it or name alternative tools like 'brandomica_batch_check' or 'brandomica_assess_safety'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

brandomica_filing_readiness: Filing Readiness Summary (A)
Read-only · Idempotent

Return a decision-focused filing readiness block with verdict, filing risk, top conflicts by jurisdiction/class, evidence links, confidence, and missing critical categories.

Parameters (JSON Schema)
Name | Required | Description | Default
mode | No | Check mode: 'full' (default) for filing decisions, 'quick' for faster directional output | full
brand_name | Yes | The brand name to check | (none)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations by specifying the output structure ('decision-focused filing readiness block with verdict, filing risk, top conflicts...'), which is not covered by annotations. Annotations already indicate it's read-only, non-destructive, idempotent, and open-world, so the description doesn't need to repeat those traits. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, dense sentence that front-loads the core purpose and lists output components efficiently. Every word contributes to understanding the tool's function without redundancy or fluff, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (decision-focused output with multiple components) and lack of output schema, the description provides a comprehensive overview of what the tool returns. However, it could be more complete by briefly mentioning the input parameters or linking to schema details, though annotations cover safety and behavioral aspects adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description does not mention parameters, but schema description coverage is 100%, with both parameters ('mode' and 'brand_name') well-documented in the schema. The baseline score of 3 is appropriate since the schema handles parameter semantics effectively, and the description focuses on output rather than inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('return a decision-focused filing readiness block') and resources ('verdict, filing risk, top conflicts by jurisdiction/class, evidence links, confidence, and missing critical categories'). It distinguishes itself from sibling tools like 'brandomica_assess_safety' or 'brandomica_brand_report' by focusing on filing readiness rather than general safety assessment or reporting.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for filing readiness decisions but does not explicitly state when to use this tool versus alternatives like 'brandomica_check_trademarks' or 'brandomica_compare_brands'. The schema mentions 'full (default) for filing decisions', but the description itself lacks explicit guidance on when (or when not) to use the tool and offers no direct comparison to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
