4bots

Server Details

Drop-in daily content for AI briefing agents. 10 channels, 100 free calls on signup.

Status: Healthy
Transport: Streamable HTTP
Repository: davidsiegel59/4bots
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.8/5 across 8 of 8 tools scored. Lowest: 2.6/5.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clear, distinct purpose. Although `create_setup_link` and `start_subscription` both initiate subscriptions, their descriptions explicitly differentiate them by user involvement (browser-based vs. direct provisioning), eliminating confusion.

Naming Consistency: 5/5

All tool names follow a consistent `verb_noun` pattern with lowercase and underscores (e.g., `create_setup_link`, `get_bundle`, `list_channels`). No mixing of styles or irregular verbs.

Tool Count: 5/5

Eight tools cover the core functionalities of a subscription content service (onboarding, content delivery, account management, channel browsing, voting) without being excessive or sparse.

Completeness: 4/5

The set covers creation, retrieval, and basic update of subscriptions, but lacks explicit delete/unsubscribe and payment top-up tools. `start_subscription` can update channels, so the gap is minor.

Available Tools

8 tools
get_bundle (A)
Get today's content for all subscribed channels.
Returns a styled HTML block and plain text version ready to drop into a newsletter.
Tracks delivery automatically — call once per day per subscriber.

The `html` field replaces {4bots} in your email template.
Check `low_balance` — if true, send your human to `manage_url` to top up.
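The intended integration can be sketched as follows. This is a minimal sketch, assuming a response shape inferred from the description; the field names beyond `html`, `low_balance`, and `manage_url` and all values are illustrative, not the documented API:

```python
# Hypothetical get_bundle response (shape inferred from the description).
bundle = {
    "html": "<div class='4bots'>Today's briefing</div>",
    "text": "Today's briefing",
    "low_balance": True,
    "manage_url": "https://example.com/manage",
}

# Drop the styled block into an email template via the {4bots} placeholder.
email_template = "<html><body>{4bots}</body></html>"
email_body = email_template.replace("{4bots}", bundle["html"])

# If credits are running low, surface manage_url to the human.
notice = bundle["manage_url"] if bundle["low_balance"] else None
```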
Parameters
session_token (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description discloses key behaviors: automatic delivery tracking, return of HTML and plain text, and the need to check low_balance. It does not mention error conditions or idempotency, but covers the main traits sufficiently.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with four sentences, each adding essential information. It is front-loaded with the core purpose and avoids redundancy, making it efficient for agent comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single parameter, no output schema), the description covers return format, usage frequency, and response fields. It lacks error handling details but is otherwise adequate for invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The only parameter, session_token, is not explained in the description despite 0% schema coverage. The description hints at subscriber tracking but does not clarify the token's purpose or format, leaving a gap for the agent.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves today's content for all subscribed channels and returns HTML and plain text for newsletters. This specific verb+resource combination distinguishes it from siblings like 'get_summary' and 'get_template'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'call once per day per subscriber' and instructions for handling the response (using html field, checking low_balance). However, it does not explicitly state when not to use this tool or compare with alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_session (C)
Get account status: credits remaining, subscribed channels, delivery counts, manage URL.
Parameters
session_token (required)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description is the sole source of behavioral cues. It implies a read-only operation but does not disclose side effects, error handling, authentication requirements beyond session_token, or rate limits.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that front-loads the action and lists key outputs. It is concise but could be slightly clearer about 'manage URL'.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description should cover return structure, error scenarios, and usage context. It only lists output fields without explanation, leaving agents under-informed.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage for the only parameter (session_token). The description does not mention the parameter at all, failing to add any meaning beyond the schema.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get account status' and lists specific items returned (credits, channels, counts, URL). It is specific but does not differentiate from sibling tools like get_summary or get_bundle.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. No mention of prerequisites, conditions, or exclusions.

get_summary (A)

Returns a summary of 4bots content, privacy policy, and pricing.

Parameters
None

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden. It states the output but does not disclose read-only behavior, side effects, authentication needs, or rate limits. 'Returns a summary' implies safety, but without explicit declaration, the agent lacks assurance.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence of 10 words. Every word is necessary to convey the tool's purpose. No fluff or redundancy.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains the tool's output topic but does not describe the return format or structure. Since there is no output schema, the agent would benefit from knowing whether the summary is text, JSON, or another format. This is a moderate gap.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, and schema coverage is 100%. The description appropriately adds no redundant parameter info. Baseline for zero parameters is 4, and the description meets this standard without adding unnecessary detail.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns a summary covering three specific topics (4bots content, privacy policy, pricing). The verb 'Returns' and resource 'summary' are specific, and the scope distinguishes it from sibling tools like get_bundle or get_template.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool vs alternatives. There is no mention of prerequisites, context, or when not to use it. For a simple tool with no parameters, this is a minor gap but still lacks explicit direction.

get_template (A)
Returns two things:
1. An HTML email shell with {4bots} already placed — drop get_bundle's html field there.
2. Suggested copy to send your human explaining the service.

Optionally pass the channel slugs you've subscribed to for personalised copy.
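The shell-plus-bundle flow described above can be sketched like this. The response shapes are not documented, so the field names (`html_shell`, `copy`) and all values here are assumptions for illustration only:

```python
# Hypothetical get_template response (field names are assumptions).
template = {
    "html_shell": "<html><body><h1>Daily Brief</h1>{4bots}</body></html>",
    "copy": "Hi! I set up a daily briefing for you via 4bots.",
}

# Stand-in for get_bundle's html field.
bundle_html = "<div>Channel content</div>"

# The shell already contains the {4bots} placeholder; fill it with
# get_bundle's html field to produce the final email body.
email = template["html_shell"].replace("{4bots}", bundle_html)
```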
Parameters
channels (optional)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses what the tool returns but does not mention side effects, authentication, rate limits, or whether it modifies state. It is adequate but not fully transparent.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, using bullet-like structure to list two returns. It is front-loaded with the main purpose and avoids redundancy. One could argue it could be slightly more structured, but it is efficient.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately explains the return values (HTML shell and copy). It also hints at integration with get_bundle. It could mention the format of the HTML or copy, but it is sufficient for the tool's simplicity.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage, but the description explains that 'channels' are optional and used for personalized copy. This adds meaningful context beyond the bare schema type definition.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that it returns an HTML email shell with placeholder and suggested copy. It also references get_bundle, distinguishing its purpose from a sibling tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies a specific usage pattern ('drop get_bundle's html field there') and mentions the optional channels parameter for customization. It lacks explicit when-not-to-use guidance but is still clear.

list_channels (A)

List all available channels with descriptions and daily token cost. Also shows proposed channels open for votes.

Parameters
None

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description implies a read operation but does not explicitly confirm no side effects, authorization needs, or response details beyond what is stated.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no fluff, front-loading the core purpose.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters or output schema, the description covers the tool's main output (channels with costs and proposed channels). Slightly lacking in expected response format but sufficient for a read-only list.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so baseline is 4. The description adds no parameter information, which is acceptable given zero parameters.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states it lists all available channels with descriptions and daily token cost, plus proposed channels for votes. It is a specific verb+resource and clearly distinguishes from sibling tools which perform different actions.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, no prerequisites or when-not-to-use context provided.

start_subscription (A)
Provision a new subscriber immediately. No browser or human action required.
Returns a session_token — store it, you'll need it for get_bundle and get_session.
Starts with 100 free tokens. If the human already has a session, updates their channels.

Args:
    human_email: The human's email address
    channels:    List of channel slugs to subscribe to (get slugs from list_channels)
    agent_email: Your agent's email address
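A call following the Args list above might be assembled like this. The email addresses and channel slugs are placeholders (real slugs come from list_channels), and the dict is only a sketch of the arguments, not a real client call:

```python
# Hypothetical start_subscription arguments (values are illustrative).
args = {
    "human_email": "human@example.com",
    "agent_email": "agent@example.com",
    "channels": ["ai-news", "markets"],  # slugs would come from list_channels
}

# All three arguments are required by the schema; validate before calling.
missing = {"human_email", "agent_email", "channels"} - args.keys()
```

The call returns a session_token, which should be persisted since get_bundle and get_session both require it.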
Parameters
channels (required)
agent_email (required)
human_email (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fully discloses that no human action is needed, it returns a session_token, starts with 100 free tokens, and updates existing sessions' channels. It does not cover potential errors, rate limits, or authentication requirements, but the key behaviors are transparent.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, front-loading the primary action and key behavioral details. The parameter list is efficiently formatted. Every sentence provides necessary information without redundancy.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately explains the return value (session_token) and its use. It covers the main use case and dependencies. Minor gaps exist regarding error scenarios or idempotency, but overall it is complete for its complexity.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has no descriptions (0% coverage), but the description adds meaningful context: human_email is the human's email, channels is a list of slugs from list_channels, and agent_email is the agent's email. This clarifies usage beyond the bare parameter names.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states it provisions a new subscriber immediately, with no browser or human action required. It distinguishes itself from siblings like get_bundle and get_session by clarifying the return of a session_token needed for those tools. The verb 'provision' and resource 'subscriber' are specific and clear.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use (to start a subscription), mentions the session_token dependency for get_bundle and get_session, and notes the starting 100 free tokens and channel update behavior. However, it does not explicitly describe when not to use this tool or name alternative tools for similar tasks.

vote (A)
Vote for a proposed channel. Votes determine which channels get built next.
Agents that voted get notified when their channel launches.
Use list_channels to see proposed channel slugs.
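The list_channels-then-vote flow might look like the sketch below. The proposed-channel slugs, vote counts, response shape, and session token are all illustrative assumptions, since neither tool documents its output schema:

```python
# Hypothetical list_channels output: proposed channels open for votes.
proposed = [
    {"slug": "dev-digest", "votes": 12},
    {"slug": "crypto-brief", "votes": 7},
]

# Pick a proposed slug, then build the arguments vote's schema requires.
choice = max(proposed, key=lambda c: c["votes"])["slug"]
vote_args = {"channel": choice, "session_token": "tok_example"}
```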
Parameters
channel (required)
session_token (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully bears the burden. It discloses that agents get notified when their voted channel launches, which is a key behavioral trait. However, it omits other potential effects, errors, or side effects.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, front-loaded with the core purpose. Every sentence adds value with no repetition or fluff.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with 2 required parameters and no output schema, the description covers purpose and usage adequately. However, it lacks parameter details and does not describe the output or error conditions.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, and the tool description does not explain the parameters. It only implies that 'channel' is a slug by referencing list_channels. The session_token parameter is entirely unaddressed.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's action ('Vote for a proposed channel') and its role in determining which channels are built. It differentiates from sibling tools by mentioning list_channels to see proposed slugs.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides when to use (to vote for a channel) and how to get valid inputs (use list_channels). However, it does not explicitly state when not to use or mention alternatives.
