
Agentcy - Your AI Marketing Agent

Ownership verified

Server Details

A managed AI marketing agent that plugs into all your AI tools. Ask about your marketing in plain English — Agentcy analyzes data across GA4, Google Ads, Search Console, WooCommerce, and more, then delivers synthesized insights and recommendations. Not a data dump. Not a dashboard. An agent that thinks. Sign up at goagentcy.com to configure your domains and data sources. Free plan includes 50 requests/month — paid plans start at $29/mo.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

4 tools
get_current_date
Read-only · Idempotent

Get the current date, day of week, and timezone. Useful for constructing date ranges.

Parameters (JSON Schema)

No parameters
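The "constructing date ranges" use case can be sketched as follows. The response fields shown here are assumptions for illustration, not the server's documented schema, which this page does not specify:

```python
from datetime import date, timedelta

# Hypothetical result of a get_current_date call; actual field names
# are not documented on this page.
result = {"date": "2026-03-15", "day_of_week": "Sunday", "timezone": "UTC"}

# Derive an explicit 30-day range ending yesterday, the kind of
# bounds a follow-up marketing query typically needs.
today = date.fromisoformat(result["date"])
end = today - timedelta(days=1)
start = end - timedelta(days=29)

print(start.isoformat(), end.isoformat())  # 2026-02-13 2026-03-14
```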

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), but the description adds useful context about the output format ('date, day of week, and timezone') and its application ('constructing date ranges'), enhancing understanding beyond annotations without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence and adds a brief, relevant usage note in the second. Both sentences earn their place by providing essential information without waste, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no output schema) and rich annotations, the description is mostly complete, covering purpose and usage. However, it could slightly improve by hinting at the return format or error cases, though this is minor given the context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is high. The description appropriately does not discuss parameters, as none exist, and instead focuses on the tool's output and utility, adding value without redundancy.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get') and resources ('current date, day of week, and timezone'), making the purpose explicit. It distinguishes itself from sibling tools like 'list_sources' or 'marketing_data' by focusing on date/time retrieval rather than data listing or analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage ('Useful for constructing date ranges'), indicating when to use this tool. However, it does not explicitly state when not to use it or name alternatives among sibling tools, which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_sources
Read-only · Idempotent

List available data sources and configured domains. Call this to discover which services and domains are available before querying. If exactly one domain exists, use it automatically without asking.

Parameters (JSON Schema)
- domain (optional): Domain to list sources for. If omitted, lists all configured sources.
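The discovery-first flow the description recommends can be sketched as a JSON-RPC `tools/call` request. The envelope shape follows the MCP specification; the transport and client wiring are omitted and would depend on your MCP client:

```python
import json

# Hypothetical MCP "tools/call" request for list_sources.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_sources",
        # "domain" is optional; omitting it lists all configured sources.
        "arguments": {},
    },
}

print(json.dumps(request, indent=2))
```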
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the annotations: it explains that this tool is for discovery purposes and provides a heuristic for automatic usage when only one domain exists. The annotations already cover safety (readOnlyHint: true, destructiveHint: false) and idempotency, so the description appropriately supplements with practical usage insights without contradicting them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, consisting of only two sentences that directly address purpose and usage guidelines. Every sentence earns its place by providing essential information without any redundant or verbose language, making it highly efficient for an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 optional parameter, no output schema), the description is complete enough for an AI agent to understand when and how to use it. It covers purpose, guidelines, and behavioral context effectively. A 5 is reserved for more complex scenarios; here, the description adequately addresses the tool's needs, though it could hint at output format given the lack of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description does not explicitly mention the 'domain' parameter, but the input schema has 100% description coverage, clearly documenting the parameter's purpose. Since the schema fully describes the parameter, the description meets the baseline of 3 by not needing to repeat this information, though it could have added extra context about parameter implications.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List available data sources and configured domains') and distinguishes it from sibling tools like 'get_current_date', 'marketing_data', and 'research' by focusing on discovery of services and domains. It provides a precise verb+resource combination that is not tautological with the tool name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Call this to discover which services and domains are available before querying') and provides guidance on when not to use it ('If exactly one domain exists, use it automatically without asking'). It clearly defines the context and alternative actions, making it highly actionable for an AI agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

marketing_data
Read-only · Idempotent

Query marketing data and analyze any website — analytics, SEO, advertising, e-commerce, CRM, social media, site health & brand identity, competitive intelligence, content creation, and data visualization. Always use a single call, even when the question spans multiple data sources or channels (e.g., GA4 + Google Search Console + Google Ads + CRM). The server auto-routes internally to all needed sources and returns a combined response with the same depth and granularity as individual queries — do NOT split multi-source or multi-channel questions into separate calls.

Parameters (JSON Schema)
- domain (optional): Domain to query (e.g., 'example.com'). Required for analytics, ads, search, CRM, and e-commerce queries. Not needed for image generation or data visualization. If no domain is established in context, call list_sources first — if multiple domains exist, ask the user which one.
- request (required): Natural language question. Include everything you need in one question — all channels, metrics, date ranges, and data sources. For example, "Give me website traffic from GA4, organic search performance from GSC, and paid search results from Google Ads for March 2026" is a single valid request. Never break a multi-part question into separate calls.
- end_date (optional): End date: YYYY-MM-DD or relative ('today', 'yesterday'). Defaults to yesterday.
- start_date (optional): Start date: YYYY-MM-DD or relative ('30daysAgo'). Defaults to 30 days ago.
- source_hints (optional): Preferred data sources. If omitted, the server auto-selects via semantic routing.
- reference_images (optional): URLs of reference images (logos, product photos, style references) for image generation.
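The single-call rule above can be sketched as one JSON-RPC `tools/call` request spanning GA4, Search Console, and Google Ads. The envelope follows the MCP specification; the domain and question are placeholder values:

```python
import json

# Hypothetical single tools/call request covering three data sources at once.
# Per the tool description, multi-source questions go in ONE call; the server
# auto-routes internally rather than requiring separate per-source calls.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "marketing_data",
        "arguments": {
            "domain": "example.com",  # required for analytics/ads/search queries
            "request": (
                "Give me website traffic from GA4, organic search performance "
                "from GSC, and paid search results from Google Ads"
            ),
            "start_date": "30daysAgo",  # relative date forms are accepted
            "end_date": "yesterday",
        },
    },
}

print(json.dumps(request, indent=2))
```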
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains that the server 'auto-routes internally to all needed sources and returns a combined response,' describes the 'same depth and granularity as individual queries,' and warns against splitting calls. While annotations cover safety (readOnlyHint=true, destructiveHint=false) and idempotency, the description provides operational details about multi-source handling that aren't captured in structured fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by critical usage guidelines. Every sentence earns its place by providing essential operational instructions. While slightly longer than minimal, the length is justified by the complexity of the multi-source routing behavior that needs explanation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multi-source queries, 6 parameters) and rich annotations, the description provides good contextual completeness. It explains the key behavioral pattern (single-call multi-source queries) that isn't obvious from other fields. The main gap is lack of output schema, but the description compensates by describing the response characteristics ('combined response with the same depth and granularity').

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 6 parameters thoroughly. The description doesn't add significant parameter-specific information beyond what's in the schema descriptions (e.g., it doesn't explain parameter interactions or provide additional examples). The baseline of 3 is appropriate when the schema does the heavy lifting for parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Query marketing data and analyze any website' followed by a comprehensive list of domains (analytics, SEO, advertising, etc.). It distinguishes from siblings by emphasizing its multi-source query capability versus list_sources (which lists domains) and research (which likely has different scope). The verb 'query and analyze' is specific and the resource 'marketing data' is well-defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Always use a single call, even when the question spans multiple data sources or channels' and 'do NOT split multi-source or multi-channel questions into separate calls.' It also references the sibling tool list_sources for domain selection when needed. The guidelines clearly define the tool's intended usage pattern versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

research
Read-only · Idempotent

Research any topic — search Google, Bing, YouTube, X/Twitter, Amazon, Yelp, Google Trends, news, and 100+ more engines. Read webpages, extract video transcripts, find reviews, track competitors. Works without a domain.

Parameters (JSON Schema)
- domain (optional): Optional domain for context (e.g., 'example.com'). Helps focus competitor research. Not required for general queries.
- request (required): Natural language research question.
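Because the tool works without a domain, a general research query needs only the required `request` argument. A sketch of the JSON-RPC `tools/call` payload, with a placeholder question:

```python
import json

# Hypothetical tools/call request for research. No "domain" argument is
# passed: the description says general queries do not require one.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "research",
        "arguments": {
            "request": "What are the top-reviewed CRM tools for small e-commerce stores?",
        },
    },
}

print(json.dumps(request, indent=2))
```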
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide hints (readOnlyHint: true, openWorldHint: true, idempotentHint: true, destructiveHint: false), indicating safe, repeatable operations. The description adds valuable context beyond annotations by specifying the scope of search engines, capabilities like extracting transcripts, and the domain parameter's role in competitor research, though it lacks details on rate limits or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with key capabilities in a single, efficient sentence, followed by a clarifying statement. Every sentence adds value without redundancy, making it appropriately sized and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple search engines and capabilities) and rich annotations, the description is mostly complete. It covers purpose, usage, and behavioral context well. However, without an output schema, it does not explain return values or potential limitations, leaving a minor gap in full contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, documenting both parameters clearly. The description adds minimal semantic value beyond the schema, mentioning the domain parameter helps 'focus competitor research' and that it's 'Not required for general queries', but does not provide additional syntax or format details. Baseline 3 is appropriate as the schema handles most parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's purpose with specific verbs ('search', 'read', 'extract', 'find', 'track') and resources ('Google, Bing, YouTube, X/Twitter, Amazon, Yelp, Google Trends, news, and 100+ more engines', 'webpages', 'video transcripts', 'reviews', 'competitors'). It clearly distinguishes from sibling tools like 'get_current_date', 'list_sources', and 'marketing_data' by focusing on comprehensive web research capabilities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Research any topic', 'Works without a domain') and implies usage for competitor research with the domain parameter. However, it does not explicitly state when not to use it or name alternatives among sibling tools, such as using 'list_sources' for source enumeration instead of research.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
