Glama

Server Details

Stripe MCP Pack — read-only access to Stripe data via API key.

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-stripe_connect
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.6/5 across 11 of 11 tools scored. Lowest: 2.6/5.

Server Coherence: A
Disambiguation: 4/5

Tools are mostly distinct: Pipeworx tools handle queries and memory, while Stripe tools handle financial data. However, 'ask_pipeworx' and 'discover_tools' both serve discovery purposes, which could cause minor confusion.

Naming Consistency: 3/5

Naming is split across three conventions: memory tools use single verbs (remember, recall, forget), Pipeworx tools use verb_noun (ask_pipeworx, discover_tools), and Stripe tools use stripe_verb_noun. This inconsistency could hinder pattern recognition.

Tool Count: 5/5

With 11 tools, the set is well-scoped. The memory and discovery tools (4) complement the 7 Stripe tools without being excessive, covering core operations without overloading.

Completeness: 3/5

The Stripe subset provides basic CRUD for customers and listings for charges/invoices/subscriptions, but lacks update/delete operations and more advanced Stripe features (e.g., refunds, payment intents). The Pipeworx tools cover query, discovery, and memory, which is complete for their purpose.

Available Tools

11 tools
ask_pipeworx (A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
question (required): Your question or request in natural language
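As an illustration, a call to ask_pipeworx would look like the sketch below. The JSON-RPC envelope follows the MCP tools/call convention; the request id and the question text are invented for the example.

```python
import json

# Hypothetical MCP "tools/call" request for ask_pipeworx. The envelope shape
# follows the MCP convention; the id and question are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
print(json.dumps(request, indent=2))
```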
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fully discloses that Pipeworx picks the right tool, fills arguments, and returns results, making the autonomous behavior clear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is concise (4 sentences) and front-loaded with purpose, immediately followed by how it works and examples.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single parameter and no output schema, the description adequately explains the tool's autonomous behavior and provides examples, making it complete for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single 'question' parameter, and the description adds examples but no further semantics beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool takes a plain English question and returns an answer from the best available data source, distinguishing it from sibling tools that are specific to memory or Stripe operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'No need to browse tools or learn schemas' and provides examples, indicating this tool should be used for general queries instead of selecting specific tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
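Since the schema caps limit at 50, a client might clamp the value before calling. This sketch assumes the server would reject out-of-range limits, which the description does not actually state; the helper name is made up.

```python
# Build arguments for a discover_tools call, clamping limit to the documented
# 1-50 range (rejection of out-of-range values by the server is an assumption).
def build_discover_args(query: str, limit: int = 20) -> dict:
    limit = max(1, min(limit, 50))
    return {"name": "discover_tools", "arguments": {"query": query, "limit": limit}}

call = build_discover_args("look up FDA drug approvals", limit=100)
# call["arguments"]["limit"] is clamped to 50
```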
Behavior: 3/5

Annotations are absent, so the description must cover behavior. It states that it returns the 'most relevant tools' but doesn't specify the search algorithm, sorting, or whether it uses embeddings or keywords. However, it adds value by mentioning the tool-count context (500+ tools) and the natural-language search method.

Conciseness: 4/5

Three sentences, each serving a purpose: what it does, what it returns, when to call it. Could combine the last two sentences for tighter structure, but no waste.

Completeness: 4/5

Given no output schema, the description explains what is returned (tool names and descriptions). For a search tool with simple parameters, this is sufficient. Could mention return-limit behavior, but the schema covers that.

Parameters: 4/5

Schema coverage is 100%, so the baseline is 3. The description adds context by explaining the query parameter format with examples ('analyze housing market trends') and the default/max for limit. This goes beyond the schema's terse descriptions.

Purpose: 5/5

Clearly states the verb ('search'), resource ('Pipeworx tool catalog'), and how it works (natural-language query). Distinguishes itself from siblings like 'ask_pipeworx' by specifying that it returns tool names and descriptions, not general answers.

Usage Guidelines: 5/5

Explicitly says 'Call this FIRST when you have 500+ tools available', giving clear when-to-use guidance. No alternatives needed, as this is the primary discovery tool.

forget (A)

Delete a stored memory by key.

Parameters (JSON Schema)
key (required): Memory key to delete
Behavior: 3/5

No annotations provided, so the description must carry the full burden. It states the deletion behavior but lacks details on reversibility, confirmation, or side effects. Adequate, but could be more transparent.

Conciseness: 5/5

Single sentence with no wasted words. Front-loaded with verb and resource. Every word earns its place.

Completeness: 4/5

For a simple delete operation with one required parameter and no output schema, the description is complete enough. It explains the action and the required key.

Parameters: 4/5

Schema description coverage is 100%, and the description adds no extra parameter info. The baseline is 3, but with a single required parameter and a clear schema description, the tool is straightforward. The description is consistent and sufficient.

Purpose: 5/5

Clearly states it deletes a stored memory by key, specifying the verb (delete) and resource (stored memory). It distinguishes itself from sibling tools like remember (create) and recall (retrieve).

Usage Guidelines: 4/5

Implies when to use it (when a memory needs to be deleted) but does not explicitly exclude alternatives or provide when-not scenarios. However, the sibling names suggest complementary functions, making usage clear.

recall (A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

No annotations exist, so the description carries the full burden. It states that the tool retrieves memories across sessions and lists all when the key is omitted. However, it doesn't disclose potential size limits, persistence guarantees, or the performance implications of listing all memories.

Conciseness: 4/5

Two concise sentences that front-load the main purpose. No wasted words. The second sentence could arguably merge into the first, but it's clear.

Completeness: 4/5

Given only one optional parameter and no output schema, the description is sufficiently complete. It explains both use cases (single-key retrieval and listing all) and their context (across sessions).

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description adds context about the optional key parameter ('omit to list all keys') but nothing beyond what the schema's description already implies.

Purpose: 5/5

Uses specific verbs ('Retrieve', 'list') and a clear resource ('previously stored memory by key'), and distinguishes between two modes of operation. It contrasts with siblings like 'remember' and 'forget' by focusing on retrieval.

Usage Guidelines: 4/5

Explicitly states when to use the tool ('to retrieve context you saved earlier') and when to omit the key to list all memories. However, it does not provide when-not-to-use guidance or alternatives among siblings (e.g., preferring 'discover_tools' for other context).

remember (A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
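The three memory tools form a simple lifecycle. The sketch below shows the call payloads side by side, with a made-up key and value, and a minimal tool_call helper standing in for a real MCP client.

```python
# Minimal helper standing in for an MCP client's tools/call dispatch.
def tool_call(name: str, arguments: dict) -> dict:
    return {"method": "tools/call", "params": {"name": name, "arguments": arguments}}

store = tool_call("remember", {"key": "target_ticker", "value": "AAPL"})
fetch = tool_call("recall", {"key": "target_ticker"})
list_all = tool_call("recall", {})  # omitting key lists all stored keys
purge = tool_call("forget", {"key": "target_ticker"})
```

Note that for anonymous sessions the stored value only survives 24 hours, per the description above.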
Behavior: 4/5

No annotations provided, but the description covers the persistence difference between authenticated and anonymous sessions (persistent vs. 24-hour TTL). This is valuable behavioral context beyond the schema.

Conciseness: 5/5

Three concise sentences with key info front-loaded: purpose, use cases, and persistence behavior. No wasted words.

Completeness: 5/5

Given the simple schema (two string parameters) and no output schema needed, the description is complete. It covers purpose, use cases, the behavioral nuance (authenticated vs. anonymous), and parameter semantics sufficiently.

Parameters: 4/5

The schema has 100% coverage with clear descriptions for key and value. The description adds usage examples (e.g., 'subject_property') and clarifies that the value can be any text. Good context without redundancy.

Purpose: 5/5

Clearly states that the tool stores a key-value pair in session memory, with specific examples of what to store. Differentiates itself from siblings like 'forget' and 'recall' by its focus on storage.

Usage Guidelines: 4/5

Describes when to use it (save intermediate findings, user preferences, context across calls). No explicit when-not-to-use guidance or alternatives, but the sibling context makes it clear.

stripe_get_balance (A)

Get the current Stripe account balance.

Parameters (JSON Schema)
_apiKey (required): Stripe secret or restricted API key (sk_...)
Behavior: 3/5

Annotations are empty, so the description must carry the burden. It states the action (get balance) but does not disclose behavioral traits like whether it's read-only, whether it requires specific permissions beyond the API key, or what the response looks like. However, the verb 'Get' strongly implies a read operation, and the tool is simple, so a 3 is appropriate.

Conciseness: 5/5

A single, concise sentence that front-loads the purpose. Every word earns its place with no wasted content.

Completeness: 3/5

The tool is simple, with one parameter and no output schema. The description is complete enough for its purpose, but it could mention that the balance may span multiple currencies. Still, for a straightforward tool, a 3 is reasonable.

Parameters: 3/5

Schema description coverage is 100% for the one parameter (_apiKey), which is already well described in the schema as a 'Stripe secret or restricted API key'. The description adds no additional meaning beyond the schema, so the baseline 3 is appropriate.

Purpose: 5/5

Clearly states that the tool retrieves the current Stripe account balance, using the specific verb 'Get' and the resource 'current Stripe account balance'. It distinguishes itself from sibling tools that deal with other resources: customers, charges, invoices, or subscriptions.

Usage Guidelines: 3/5

Implies that the tool is for checking the overall balance but does not explicitly state when to use it versus alternatives. It does not mention when not to use it (e.g., for historical balances or other financial details), nor does it reference sibling tools for comparison.

stripe_get_customer (A)

Get a Stripe customer by ID.

Parameters (JSON Schema)
id (required): Customer ID (cus_...)
_apiKey (required): Stripe secret or restricted API key (sk_...)
Behavior: 3/5

The description indicates a read operation (get). No annotations are present, so the description must carry the burden. It doesn't mention idempotency, error conditions, or rate limits. Adequate but lacking depth.

Conciseness: 5/5

Extremely concise: one sentence of six words, no waste, front-loaded with the verb 'Get' and the resource.

Completeness: 3/5

Given the simplicity (two parameters, no output schema), the description is minimally adequate. It explains what the tool does but could mention the return structure or that it's a read operation. Still, it's sufficient for a straightforward retrieval tool.

Parameters: 3/5

Schema coverage is 100%, and the description adds no additional meaning beyond the schema, which already explains id and _apiKey. The baseline score of 3 is appropriate.

Purpose: 4/5

Clearly states that the tool retrieves a Stripe customer by ID. The verb 'Get' is specific, and the resource is a customer. It distinguishes itself from siblings like stripe_list_customers, which lists multiple customers.

Usage Guidelines: 3/5

Implies usage when you have a customer ID and need details. However, it provides no guidance on when not to use it and does not mention alternatives for listing customers.

stripe_list_charges (C)

List recent charges.

Parameters (JSON Schema)
limit (optional): Max results (1-100, default 10)
_apiKey (required): Stripe secret or restricted API key (sk_...)
customer (optional): Filter by customer ID
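For concreteness, a filtered call might pass arguments like these; the API key and customer ID are placeholders, not real credentials.

```python
# Illustrative arguments for stripe_list_charges; all values are placeholders.
args = {
    "_apiKey": "sk_test_placeholder",  # Stripe secret or restricted key (sk_...)
    "customer": "cus_example123",      # optional: restrict results to one customer
    "limit": 25,                       # 1-100, default 10
}
```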
Behavior: 2/5

Annotations are empty, so the description bears the full burden. It says only 'recent', which is vague. No disclosure of pagination, rate limits, data freshness, or what 'recent' means (e.g., the last 7 days?). No mention that this is a read-only operation.

Conciseness: 3/5

A single short phrase, which is concise but lacks structure. It fits in one sentence but could be more informative without adding much length.

Completeness: 2/5

For a list operation with three parameters and no output schema, the description is insufficient. It doesn't explain the default ordering, pagination, or that results are capped by the 'limit' parameter. Lacks completeness for a data-fetching tool.

Parameters: 3/5

Schema coverage is 100%, so the schema already documents all parameters. The description adds no additional meaning beyond the schema, but the schema itself is clear. The baseline 3 is appropriate.

Purpose: 4/5

'List recent charges' clearly states the verb (list) and resource (charges), with an implicit time scope (recent). It distinguishes itself from siblings like stripe_list_customers and stripe_list_invoices by specifying charges.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives. For example, it doesn't mention that the customer parameter should be used to list charges for a specific customer, or contrast itself with the other list tools. The description provides no when/when-not guidance or alternatives.

stripe_list_customers (C)

List Stripe customers. Supports pagination.

Parameters (JSON Schema)
limit (optional): Max results (1-100, default 10)
_apiKey (required): Stripe secret or restricted API key (sk_...)
starting_after (optional): Cursor for pagination (customer ID)
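Since starting_after is a cursor, fetching every customer means looping until a short page comes back. The sketch below rests on two assumptions the description never states: that the call returns a plain list of customer objects, and that a page shorter than the limit marks the end; call_tool is a stand-in for a real MCP client.

```python
# Cursor pagination over stripe_list_customers. call_tool is a stand-in for
# however your MCP client issues tools/call requests (an assumption).
def list_all_customers(call_tool, api_key: str) -> list:
    customers, cursor = [], None
    while True:
        args = {"_apiKey": api_key, "limit": 100}
        if cursor:
            args["starting_after"] = cursor  # cursor = last customer ID seen
        page = call_tool("stripe_list_customers", args)
        customers.extend(page)
        if len(page) < 100:  # a short page signals the end (assumed)
            break
        cursor = page[-1]["id"]
    return customers
```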
Behavior: 2/5

No annotations provided; the description mentions only pagination and does not disclose the read-only nature, rate limits, or auth requirements beyond the schema. The schema already documents _apiKey as required.

Conciseness: 3/5

Two short sentences with no wasted words, but it could be slightly more informative without becoming verbose.

Completeness: 2/5

Given no output schema and moderate complexity (three parameters, pagination), the description is too brief. It does not explain the return format, sorting, or filtering options.

Parameters: 3/5

Schema coverage is 100% and already describes all three parameters adequately. The description adds no additional meaning beyond 'Supports pagination'.

Purpose: 4/5

Clearly states that it lists Stripe customers and supports pagination, which distinguishes it from siblings like stripe_get_customer (singular) and stripe_list_charges/invoices/subscriptions (different resources).

Usage Guidelines: 2/5

No guidance on when to use this versus the other list tools or stripe_get_customer. Does not mention typical use cases or limitations.

stripe_list_invoices (C)

List invoices.

Parameters (JSON Schema)
limit (optional): Max results (1-100, default 10)
status (optional): Filter by status (draft, open, paid, void, uncollectible)
_apiKey (required): Stripe secret or restricted API key (sk_...)
customer (optional): Filter by customer ID
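Because status is an enum, a client can validate it before the call. The helper below is illustrative: the five states come from the schema, while the helper name and placeholder credentials are made up.

```python
from typing import Optional

# The five invoice states come from the schema; the helper name and the
# placeholder key/customer ID are illustrative, not part of the server.
VALID_STATUSES = {"draft", "open", "paid", "void", "uncollectible"}

def invoice_args(api_key: str, status: str, customer: Optional[str] = None) -> dict:
    if status not in VALID_STATUSES:
        raise ValueError(f"unknown invoice status: {status}")
    args = {"_apiKey": api_key, "status": status}
    if customer:
        args["customer"] = customer
    return args

open_invoices = invoice_args("sk_test_placeholder", "open", customer="cus_example123")
```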
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so description carries full burden. It lacks details on behavior: e.g., does it return all invoices or paginate? Is it read-only? The description only says 'List invoices' without disclosing the default limit or that it lists across all customers.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely short (two words), which could be seen as concise, but it omits essential context. It is not front-loaded with key information; it lacks the word 'Stripe' and any scope indication.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should explain return format, but it does not. The tool lists invoices, but the description is incomplete: it does not mention pagination, filtering capabilities, or that it requires an API key (though required in schema).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds no additional meaning beyond the schema, but it is not required to since the schema is comprehensive. However, no extra context is provided, so score is slightly above baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'List invoices' is too vague. It doesn't specify the resource (Stripe) or any scope. Sibling tools like 'stripe_list_charges' and 'stripe_list_customers' provide similar patterns, but the description does not distinguish 'stripe_list_invoices' from them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, and no pointer to the other listing tools. The description does not indicate that it lists all invoices by default or that results can be filtered by customer or status.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

stripe_list_subscriptions (Tool score: C)

List active subscriptions.

Parameters (JSON Schema)

Name      Required  Description                                          Default
limit     No        Max results (1-100, default 10)
status    No        Filter by status (active, canceled, past_due, etc.)
_apiKey   Yes       Stripe secret or restricted API key (sk_...)
customer  No        Filter by customer ID
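To make the schema concrete, here is a minimal sketch of assembling an arguments payload for this tool. The `build_args` helper is hypothetical (not part of the server); it simply encodes the constraints shown in the table: only `_apiKey` is required, `limit` is bounded to 1-100, and the other fields are optional filters.

```python
def build_args(api_key, limit=None, status=None, customer=None):
    """Assemble an arguments dict for stripe_list_subscriptions.

    Hypothetical helper; mirrors the parameter table above.
    """
    # Stripe secret keys start with sk_, restricted keys with rk_
    if not api_key.startswith(("sk_", "rk_")):
        raise ValueError("expected a Stripe secret or restricted API key")
    args = {"_apiKey": api_key}
    if limit is not None:
        if not 1 <= limit <= 100:
            raise ValueError("limit must be between 1 and 100")
        args["limit"] = limit
    if status is not None:
        args["status"] = status  # e.g. active, canceled, past_due
    if customer is not None:
        args["customer"] = customer  # Stripe customer ID
    return args
```

Omitted optional keys are left out of the payload entirely, so the server's defaults (limit 10, active-only filtering) apply.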
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It does not disclose behavioral traits such as whether the operation is read-only (safe), nor any authentication requirements beyond the parameter list. The word 'active' suggests default filtering, but this is never made explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
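For comparison, the MCP specification defines behavioral annotation hints (`readOnlyHint`, `destructiveHint`, `openWorldHint`) that a server can attach to a tool so the description need not carry this burden alone. A sketch of what this tool's metadata might look like with them filled in; the hint values are assumptions inferred from the description, not taken from the actual server:

```python
# Hypothetical tool metadata with MCP behavioral annotations filled in.
tool = {
    "name": "stripe_list_subscriptions",
    "description": "List active subscriptions.",
    "annotations": {
        "readOnlyHint": True,      # assumed: listing makes no writes to Stripe
        "destructiveHint": False,  # assumed: nothing is deleted or modified
        "openWorldHint": True,     # calls an external API (Stripe)
    },
}
```

With hints like these present, the description can focus on scope and defaults while the annotations disclose safety properties in a machine-readable way.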

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Very concise single sentence with no wasted words. Could be slightly improved by front-loading the default behavior, but it's efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description could mention that it returns a list of subscription objects. Parameter coverage is complete, but behavioral details are missing. It is adequate for a simple list tool but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds no additional meaning beyond what the schema provides for parameters. It does not mention that the default status is active, which would be useful.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that it lists active subscriptions, with the specific verb 'List' and the resource 'subscriptions'. It does not explicitly distinguish itself from sibling tools like 'stripe_list_customers' or 'stripe_list_invoices', but the resource name is distinctive enough on its own.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description mentions neither the available filters nor that it returns active subscriptions by default, and it gives no context for choosing between the listing tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
