eyepup
Server Details
Agentic visitor analytics — five tools that hand the next CRO fix to your coding agent.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5, with 5 of 5 tools scored.
Each tool serves a clearly distinct function: ask general questions, log changes, get top frictions, fetch individual visitor profiles, and list hot visitors. No overlapping purposes.
All tools share the 'eyepup_' prefix and use descriptive names. However, the pattern is not strictly verb_noun (e.g., 'eyepup_todo' is a noun, 'eyepup_visitors_hot' includes an adjective). The names remain consistent and readable nonetheless.
Five tools is the right size for the focused domain of visitor behavior analysis and friction optimization. Each tool earns its place; the set is neither too sparse nor too broad.
The tool surface covers key workflows: identifying frictions (todo, visitors_hot), deep investigation (visitor, ask), and logging fixes (log). Minor gaps like missing aggregate analytics or profile updates are acceptable for this scope.
Available Tools
5 tools

eyepup_ask: Ask Eyepup about your visitors
Ask a natural-language question about your site visitors. Use this when the user asks 'why are people bouncing from /pricing', 'who's hot right now', 'what changed this week', or any other free-text question about visitor behaviour. Returns an LLM-grounded answer with evidence and ranked actions.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | Lookback days (default 7). | |
| site | No | Optional apex domain to scope. | |
| question | Yes | The free-text question. | |
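For orientation, here is a minimal sketch of invoking this tool with the official MCP TypeScript SDK over Streamable HTTP. The endpoint URL is a placeholder (substitute the connector URL from this listing), and the question is taken from the description's own examples.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the connector URL from this listing.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));
const client = new Client({ name: "eyepup-demo", version: "1.0.0" });
await client.connect(transport);

// "days" defaults to 7 and "site" is optional, so only the required
// "question" is passed; the question is one of the description's examples.
const answer = await client.callTool({
  name: "eyepup_ask",
  arguments: { question: "why are people bouncing from /pricing" },
});
console.log(answer.content); // LLM-grounded answer with evidence and ranked actions
```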
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavior. It states that the response is 'LLM-grounded' with evidence and ranked actions, but does not mention potential latency, cost, or limitations of the LLM answer.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences pack purpose, usage guidance, and return description efficiently. No waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description adequately explains the return type. It could note that the answer is AI-generated and may need verification, but it is sufficient overall for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds example queries but no additional meaning beyond the schema's parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool answers natural-language questions about site visitors. It provides example queries and distinguishes from siblings by focusing on free-text questions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives explicit usage scenarios ('when the user asks...') and example queries, helping the agent decide when to use this tool. However, it does not explicitly mention when not to use it or name alternative sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
eyepup_log: Log a change you just shipped
Log a change you just shipped. The dossier agent reasons about every visitor profiled AFTER this log row, so it can grade whether the friction pattern recovered. CALL THIS after every UX edit.
| Name | Required | Description | Default |
|---|---|---|---|
| kind | No | | |
| site | No | | |
| paths | No | Comma-separated paths affected. | |
| title | Yes | One-line headline of what shipped. | |
| description | No | | |
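A hedged sketch of logging a shipped change, using the same SDK setup as the eyepup_ask example above. All argument values are invented for illustration, and 'kind' is omitted because its enum values are not documented in the schema.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "eyepup-demo", version: "1.0.0" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://example.com/mcp")), // placeholder URL
);

// "kind" is omitted: the schema exposes an enum but documents no values.
// The argument values below are invented for illustration.
await client.callTool({
  name: "eyepup_log",
  arguments: {
    title: "Shortened the /pricing signup form", // one-line headline of what shipped
    paths: "/pricing",                           // comma-separated paths affected
    description: "Dropped two optional fields from the trial signup form.",
  },
});
```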
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behaviors. It reveals that the dossier agent reasons about every visitor profiled after the log row and grades whether the friction pattern recovered, a useful disclosure of downstream behavior. However, it omits other traits, such as whether the action is destructive, authentication needs, and idempotency, leaving gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences with a clear front-loaded purpose ('Log a change you just shipped.'). Every sentence adds value: purpose, side effect, and usage instruction. No redundant or extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a simple logging tool with 5 parameters and no output schema, the description covers the core purpose and when to use it. However, it lacks parameter-level explanations and broader behavioral traits (e.g., effects on stored data, required permissions), so an agent may still have to guess parameter formats.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 40% (only 'paths' and 'title' have descriptions). The tool description adds zero explanation for parameters like 'kind', 'site', or 'description'. For example, 'kind' has an enum but no guidance on selection. Thus, the description fails to compensate for missing schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool logs a shipped change, with the explicit verb 'log' and resource 'change'. The context about the dossier agent's reasoning distinguishes it from sibling tools like eyepup_ask or eyepup_visitor. It is slightly vague, however, about which kinds of changes qualify beyond UX edits.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly instructs 'CALL THIS after every UX edit', providing clear when-to-use guidance. It does not mention when not to use or suggest alternatives, but the usage context is direct and unambiguous.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
eyepup_todo: List the friction-pattern To-Do queue
Get the top friction patterns ranked by impact score. Use BEFORE making any UX edit — the highest-impact pattern is usually a more valuable fix than whatever the user just asked about. Each row carries a paste-ready recommended_action.
| Name | Required | Description | Default |
|---|---|---|---|
| site | No | Apex-domain scope. | |
| limit | No | Max patterns (default 5). | |
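A sketch of pulling the To-Do queue before a UX edit, as the description advises. 'example.com' is a hypothetical apex domain, and the limit mirrors the schema default.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "eyepup-demo", version: "1.0.0" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://example.com/mcp")), // placeholder URL
);

// Fetch the top friction patterns before touching any UX.
const todo = await client.callTool({
  name: "eyepup_todo",
  arguments: {
    site: "example.com", // hypothetical apex-domain scope
    limit: 5,            // matches the schema default
  },
});
console.log(todo.content); // each row carries a paste-ready recommended_action
```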
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It implies a read-only operation ('Get') and mentions that rows include a 'recommended_action,' but does not disclose potential side effects, authorization requirements, or pagination behavior. Somewhat adequate but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences: purpose, usage instruction, and a key output detail. No filler or redundancy. Front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with well-documented parameters and no output schema, the description provides enough context: a ranked list, a recommended action per row, and usage timing. Minor details such as sort direction are missing, but it is complete for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds no additional meaning beyond the schema for the two parameters (site and limit). It does not elaborate on valid values or formatting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves top friction patterns ranked by impact score, which matches the title. It distinguishes itself from siblings like eyepup_ask or eyepup_visitor by focusing on friction patterns and a ranked to-do queue.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises to use before making UX edits, explaining why the highest-impact pattern is more valuable. This provides clear context for when to invoke this tool vs. alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
eyepup_visitor: Get the full dossier for one visitor
Fetch the full LLM-written profile for a single visitor by distinct_id. Use after eyepup_visitors_hot or eyepup_todo to dig into a specific visitor.
| Name | Required | Description | Default |
|---|---|---|---|
| distinct_id | Yes | UUID-shaped distinct_id. | |
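A sketch of fetching one visitor's dossier. The distinct_id below is a hypothetical UUID-shaped value; in practice it would come from an eyepup_visitors_hot or eyepup_todo result, per the description.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "eyepup-demo", version: "1.0.0" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://example.com/mcp")), // placeholder URL
);

// Hypothetical UUID-shaped distinct_id, e.g. copied from an
// eyepup_visitors_hot or eyepup_todo row.
const dossier = await client.callTool({
  name: "eyepup_visitor",
  arguments: { distinct_id: "123e4567-e89b-12d3-a456-426614174000" },
});
console.log(dossier.content); // full LLM-written profile for this visitor
```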
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It only mentions fetching a profile but does not disclose side effects, permission requirements, rate limits, or whether the operation is read-only. The description is insufficient for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two concise sentences that front-load the primary purpose and immediately provide usage context. Every sentence serves a clear purpose with no extraneous content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has one required parameter, no output schema, and no annotations, the description is largely sufficient. It explains the action, input, and usage context. However, it does not describe the output format beyond 'full LLM-written profile,' which is somewhat vague. Still, it is mostly complete for the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter, and the schema already describes 'distinct_id' as a UUID string. The description adds no new information about parameter meaning beyond 'by distinct_id.' Thus, it meets the baseline but does not enhance understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Fetch the full LLM-written profile'), the resource ('for a single visitor'), and the required parameter ('by distinct_id'). It also distinguishes itself from sibling tools by noting it is used after eyepup_visitors_hot or eyepup_todo to delve into a specific visitor, leaving no ambiguity about its purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly indicates when to use this tool: 'Use after eyepup_visitors_hot or eyepup_todo to dig into a specific visitor.' While it lacks explicit 'when not to use' statements, the context of being a follow-up tool is clear, providing strong guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
eyepup_visitors_hot: Top high-intent visitors right now
Return the top high-intent visitors currently active. Use when the user asks 'who's hot', 'who's about to convert', 'who should I focus on'.
| Name | Required | Description | Default |
|---|---|---|---|
| site | No | | |
| limit | No | | |
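A sketch of listing hot visitors. Both parameters are undocumented in the schema, so the values below are assumptions inferred from the sibling tools (apex-domain scope, result cap).

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "eyepup-demo", version: "1.0.0" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://example.com/mcp")), // placeholder URL
);

// "site" and "limit" lack schema descriptions; these values are
// assumptions based on how the sibling tools define the same names.
const hot = await client.callTool({
  name: "eyepup_visitors_hot",
  arguments: {
    site: "example.com", // assumed apex-domain scope
    limit: 5,            // assumed cap on returned visitors
  },
});
console.log(hot.content);
```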
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It only says 'return the top high-intent visitors currently active' but does not explain what 'high-intent' means, how visitors are determined, data freshness, or any side effects. The description is too minimal for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, directly stating the function and usage context. Every word earns its place; there is no unnecessary text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and 0% schema description coverage, the description is incomplete. It does not describe the output format (e.g., list of visitor names, IDs, scores), nor does it provide sufficient detail on parameters or behavioral transparency. The tool is simple but lacks necessary completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must explain parameters. It does not mention the 'site' or 'limit' parameters at all. The agent has no guidance on what 'site' refers to or how 'limit' affects results. This is a critical gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns 'top high-intent visitors currently active', which is a specific verb+resource. The title reinforces this. It distinguishes from sibling 'eyepup_visitor' by focusing on high-intent and current activity. The example natural language queries further clarify purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists example user queries ('who's hot', 'who's about to convert', 'who should I focus on') that trigger this tool, providing clear when-to-use guidance. This is direct and helpful for an agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.