
Lenny Rachitsky Podcast Transcripts MCP Server

Server Details

MCP server for structured access to Lenny Rachitsky podcast transcripts. For content creators.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL
Repository: la-rebelion/hapimcp
GitHub Stars: 7

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

8 tools
getEpisodes (Grade: C)

List episodes - Lists episodes with parsed frontmatter (guest, title, duration, youtube_url, etc.) and their canonical resource URIs.

Parameters (JSON Schema):
- sort (optional)
- limit (optional)
- cursor (optional): Opaque pagination cursor
- x-hapi-auth-state (optional)
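
The review below notes that pagination behavior is only hinted at by the cursor parameter. As a rough illustration of how an MCP client might invoke this tool, here is a minimal sketch that builds a standard JSON-RPC 2.0 `tools/call` request for getEpisodes. The request shape follows the MCP specification; treating the cursor as an opaque token to echo back unchanged is an assumption based on the parameter's description.

```python
def build_tool_call(cursor=None, limit=20, sort=None):
    """Build a JSON-RPC 2.0 tools/call request for getEpisodes.

    The cursor is opaque: pass back exactly the value the server
    returned on the previous page, or omit it for the first page.
    """
    args = {"limit": limit}
    if sort is not None:
        args["sort"] = sort
    if cursor is not None:
        args["cursor"] = cursor
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "getEpisodes", "arguments": args},
    }

# Request the second page using a cursor from a prior response.
req = build_tool_call(cursor="abc123", limit=10)
```

How the server sorts by default, and where the next cursor appears in the response, are not documented on this page, so a real client would need to discover those empirically.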
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a list operation with parsed data, implying read-only behavior, but doesn't mention pagination behavior (though the cursor parameter hints at it), rate limits, authentication needs (x-hapi-auth-state parameter suggests auth), or error handling. This leaves significant gaps for a tool with multiple parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action ('List episodes') and key details. There's no wasted verbiage, though it could be slightly more structured (e.g., separating purpose from data format).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, low schema coverage (25%), and 4 parameters, the description is incomplete. It doesn't address authentication, pagination behavior, error cases, or output format details (e.g., structure of parsed frontmatter). For a list tool with multiple parameters and siblings, this leaves the agent under-informed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low at 25%, with only the cursor parameter documented. The description adds no specific parameter semantics beyond implying listing functionality. It doesn't explain sort options (e.g., what 'views' means), limit usage, or auth requirements, failing to compensate for the schema's gaps. Baseline is 3 due to moderate parameter count but poor coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('episodes'), and specifies what data is returned ('parsed frontmatter' with specific fields and 'canonical resource URIs'). However, it doesn't explicitly differentiate from sibling tools like getEpisodesGuest or getSearch, which likely have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like getEpisodesGuest (which presumably filters by guest) or getSearch (which might search across episodes). The description mentions only what the tool does, not when it's appropriate compared to siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getEpisodesGuest (Grade: C)

Get an episode card (metadata + key URIs)

Parameters (JSON Schema):
- guest (required)
- x-hapi-auth-state (optional)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden but provides minimal behavioral context. It implies a read-only operation ('Get') but doesn't disclose authentication requirements (despite 'x-hapi-auth-state' parameter), rate limits, error handling, or what 'key URIs' entail. The description adds some value by hinting at returned content but lacks critical operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded in a single sentence, with zero wasted words. Every part ('Get an episode card', 'metadata + key URIs') directly contributes to understanding the tool's purpose without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, 0% schema coverage, no output schema, and 2 parameters (one required), the description is incomplete. It covers the basic purpose but lacks usage guidelines, parameter details, behavioral traits (e.g., auth needs), and output expectations, making it inadequate for effective tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate but provides no parameter information. It doesn't explain 'guest' (e.g., guest name or ID) or 'x-hapi-auth-state' (authentication token), leaving both parameters undocumented. This fails to add meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resource ('an episode card'), specifying it includes 'metadata + key URIs'. It distinguishes from generic 'getEpisodes' by focusing on guest-specific episodes, though it doesn't explicitly differentiate from other guest-related siblings like 'getEpisodesGuestChunks'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., authentication needs), compare to sibling tools like 'getEpisodes' (all episodes) or 'getEpisodesGuestChunks' (chunked data), or specify use cases like retrieving guest-specific episode summaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getEpisodesGuestChunks (Grade: C)

List chunk descriptors for an episode - Returns chunk boundaries and URIs for chunk retrieval. Chunks may be computed on-demand using size/overlap parameters.

Parameters (JSON Schema):
- size (optional)
- guest (required)
- overlap (optional)
- x-hapi-auth-state (optional)
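
The tool description says chunks "may be computed on-demand using size/overlap parameters" but does not define the boundary algorithm. As a hedged illustration of what size/overlap chunking typically means, here is a simple sliding-window sketch where each chunk starts (size - overlap) units after the previous one. The server's actual algorithm and units (characters vs. tokens) are not documented, so this is an assumption, not the server's implementation.

```python
def chunk_boundaries(total_len, size, overlap):
    """Sliding-window chunk boundaries: consecutive chunks share
    `overlap` units, so each new chunk advances by (size - overlap)."""
    step = size - overlap
    assert step > 0, "overlap must be smaller than size"
    return [
        (start, min(start + size, total_len))
        for start in range(0, total_len, step)
    ]

# A 25-unit transcript with size=10, overlap=3 yields four chunks,
# each (after the first) repeating the last 3 units of its predecessor.
bounds = chunk_boundaries(total_len=25, size=10, overlap=3)
```

Overlap exists so that a sentence split at a boundary still appears whole in at least one chunk, which matters for downstream retrieval over transcript text.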
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns chunk boundaries and URIs for retrieval, and that chunks may be computed on-demand, which hints at potential latency or processing behavior. However, it doesn't cover critical aspects like authentication needs (implied by 'x-hapi-auth-state' parameter), rate limits, error conditions, or whether this is a read-only operation. The description adds some context but is incomplete for a tool with parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences. The first sentence front-loads the core purpose, and the second adds useful behavioral context. There's no wasted text, though it could be slightly more structured for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (4 parameters, no annotations, no output schema), the description is incomplete. It covers the basic purpose and hints at some behavior, but lacks details on authentication, error handling, return format beyond 'chunk boundaries and URIs', and usage guidelines. For a tool with multiple parameters and no structured support, this leaves significant gaps for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'size/overlap parameters' for on-demand computation, providing some meaning for two of the four parameters. However, it doesn't explain 'guest' (required) or 'x-hapi-auth-state', leaving half the parameters without semantic context. The description adds partial value but doesn't fully address the coverage gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List chunk descriptors for an episode' specifies the verb (list) and resource (chunk descriptors for an episode). It distinguishes from siblings like 'getEpisodes' (list episodes) and 'getEpisodesGuestTranscriptformat' (get transcript), but doesn't explicitly differentiate from 'getEpisodesGuest' which might be similar. The description is specific but lacks explicit sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions that chunks may be computed on-demand using size/overlap parameters, but doesn't specify when to choose this tool over siblings like 'getEpisodesGuest' or 'getEpisodesGuestTranscriptformat'. No context, exclusions, or prerequisites are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getEpisodesGuestChunksChunkIdtxt (Grade: C)

Get a specific transcript chunk as plain text

Parameters (JSON Schema):
- guest (required)
- chunkId (required)
- x-hapi-auth-state (optional)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states it's a read operation ('Get'), but doesn't mention authentication needs (implied by 'x-hapi-auth-state' parameter), rate limits, error handling, or what constitutes a 'chunk'. This leaves significant gaps for an agent to understand how to invoke it correctly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It's front-loaded with the core action and resource, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (3 parameters, no annotations, no output schema), the description is incomplete. It doesn't cover parameter meanings, authentication requirements, return format details (beyond 'plain text'), or how this tool relates to siblings. This leaves the agent with insufficient context for reliable use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'specific transcript chunk' but doesn't explain what 'guest' or 'chunkId' represent, their formats, or the optional 'x-hapi-auth-state'. This adds minimal meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resource ('specific transcript chunk as plain text'), making the purpose understandable. However, it doesn't differentiate from sibling tools like 'getEpisodesGuestChunks' or 'getEpisodesGuestTranscriptformat', which likely handle similar transcript-related data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as 'getEpisodesGuestChunks' or 'getEpisodesGuestTranscriptformat'. The description implies usage for retrieving a specific chunk, but lacks context on prerequisites, exclusions, or comparisons to siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getEpisodesGuestMetadatajson (Grade: C)

Get episode metadata as JSON (frontmatter)

Parameters (JSON Schema):
- guest (required)
- x-hapi-auth-state (optional)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states this is a 'Get' operation, implying it's likely read-only, but doesn't confirm this or provide any details on authentication needs, rate limits, error handling, or what the JSON output contains. The mention of 'frontmatter' is unclear and adds minimal context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise—a single phrase—and front-loaded with the core purpose. However, it's arguably too brief, as it omits necessary details like parameter explanations or usage context, which reduces its effectiveness despite the efficient structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (2 parameters, no annotations, no output schema), the description is incomplete. It doesn't explain the parameters, the return format beyond 'JSON', or behavioral aspects like authentication. The mention of 'frontmatter' is cryptic and doesn't add sufficient clarity for the agent to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the schema provides no parameter details. The description doesn't mention any parameters, leaving both 'guest' (required) and 'x-hapi-auth-state' undocumented. It fails to compensate for the lack of schema coverage, offering no guidance on what these parameters mean or how to use them.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Get episode metadata as JSON (frontmatter)' which provides a clear verb ('Get') and resource ('episode metadata'), but it's vague about what 'frontmatter' specifically means and doesn't distinguish this tool from its siblings like 'getEpisodes' or 'getEpisodesGuest'. The purpose is understandable but lacks specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. With siblings like 'getEpisodes', 'getEpisodesGuest', and 'getEpisodesGuestTranscriptformat', the description doesn't explain how this tool differs or when it's appropriate, leaving the agent to guess based on the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getEpisodesGuestTranscriptformat (Grade: B)

Get transcript in a specific format - Returns the transcript in the requested format:
- md: markdown (may include or exclude frontmatter based on include_frontmatter)
- txt: clean text (best for LLM ingestion)
- json: structured form (metadata + transcript text)

Parameters (JSON Schema):
- guest (required)
- format (required)
- x-hapi-auth-state (optional)
- include_frontmatter (optional)
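
Since the schema itself documents none of these parameters, a small argument builder can make the documented constraints concrete: format is one of md, txt, or json, and include_frontmatter only applies to md. This is a hypothetical helper based solely on the tool description above; the actual default for include_frontmatter is an assumption.

```python
def transcript_args(guest, fmt, include_frontmatter=False):
    """Build arguments for getEpisodesGuestTranscriptformat.

    fmt must be one of the documented enum values. The description
    ties include_frontmatter to md output only, so it is dropped
    for txt and json.
    """
    assert fmt in ("md", "txt", "json"), "unsupported format"
    args = {"guest": guest, "format": fmt}
    if fmt == "md":
        args["include_frontmatter"] = include_frontmatter
    return args

# txt is described as "best for LLM ingestion".
args = transcript_args("Marty Cagan", "txt")
```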
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses the tool returns transcripts in specified formats and mentions frontmatter behavior for md format, but lacks critical behavioral details such as authentication requirements (implied by 'x-hapi-auth-state' parameter), rate limits, error handling, or whether it's read-only or mutative. The description adds some context but is insufficient for a tool with authentication parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and efficiently lists format options with brief explanations. Each sentence adds value, but it could be slightly more structured (e.g., bullet points for clarity). There's no redundant information, making it appropriately concise for the complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, 0% schema description coverage, 4 parameters (2 undocumented), and no output schema, the description is incomplete. It covers format options and frontmatter behavior but misses authentication details, guest parameter meaning, and return value specifics. For a tool with authentication and multiple parameters, this leaves significant gaps for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains the 'format' parameter's enum values (md, txt, json) and their semantics, and mentions 'include_frontmatter' in relation to md format. However, it doesn't clarify the 'guest' parameter (required string) or 'x-hapi-auth-state' (string), leaving two parameters undocumented. The description adds value for some parameters but not all.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('transcript in a specific format'), specifying it returns transcripts in md, txt, or json formats. It distinguishes from siblings like 'getEpisodesGuest' (likely returns episodes) and 'getEpisodesGuestMetadatajson' (likely returns metadata only), but doesn't explicitly contrast with 'getEpisodesGuestChunks' or 'getEpisodesGuestChunksChunkIdtxt'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by listing format options and their purposes (e.g., 'txt: clean text (best for LLM ingestion)'), suggesting when to choose each format. However, it doesn't explicitly state when to use this tool versus alternatives like 'getEpisodesGuest' or 'getEpisodesGuestChunks', nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getSearch (Grade: A)

Search episodes and transcripts - Searches across metadata (D1) and transcript text (Vectorize). Returns matches as resources, including per-hit URIs pointing to episode cards and transcript chunks. Note: pagination cursor applies to metadata search only. Example (vector search with filters):

GET /search?q=pricing&mode=vector&guest=Marty%20Cagan&keywords=pricing,monetization&top_k=5

Parameters (JSON Schema):
- q (required)
- mode (optional)
- guest (optional)
- limit (optional)
- title (optional)
- top_k (optional)
- cursor (optional): Opaque pagination cursor
- keywords (optional)
- namespace (optional)
- episode_slug (optional)
- x-hapi-auth-state (optional)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that results include per-hit URIs and that pagination cursor applies only to metadata search, which adds useful behavioral context. However, it lacks details on authentication needs, rate limits, error handling, or response format, leaving gaps for a tool with 11 parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose. The example adds value by illustrating usage but could be more concise. Some sentences are informative without waste, though the example might be slightly verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (11 parameters, no annotations, no output schema), the description is incomplete. It covers the search scope and result format but lacks details on authentication, error cases, and full parameter explanations. The example helps but does not fully address the tool's richness, leaving the agent with significant gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (9%), with only the 'cursor' parameter having a description. The description mentions 'mode' values (e.g., vector) and filters like 'guest' and 'keywords' in the example, adding some semantics beyond the schema. However, it does not explain most parameters (e.g., 'namespace', 'episode_slug', 'x-hapi-auth-state'), failing to compensate for the poor schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Search episodes and transcripts') and resources ('metadata and transcript text'), distinguishing it from sibling tools like getEpisodes (which likely retrieves without search) and postSearch (which may be for different search methods). It specifies the dual search scope across metadata and transcripts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for searching episodes and transcripts, but does not explicitly state when to use this tool versus alternatives like getEpisodes (for listing) or postSearch (possibly for advanced searches). The example hints at vector search with filters, but no clear guidance on mode selection or tool differentiation is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

postSearch (Grade: B)

Search episodes and transcripts (POST body) - Same search as GET /search, but parameters are provided in the request body. This is useful for longer filter payloads.

Parameters (JSON Schema):
- body (required)
- x-hapi-auth-state (optional)
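
Since postSearch is described as "same search as GET /search" with parameters moved into the request body, the documented GET example translates directly into a JSON body. The field names below mirror the GET example; whether the server expects them at the top level of the body or nested further is not documented, so the flat shape is an assumption.

```python
import json

# The GET /search example expressed as a POST body: useful when
# filter payloads are too long for a URL query string.
body = {
    "q": "pricing",
    "mode": "vector",
    "guest": "Marty Cagan",
    "keywords": "pricing,monetization",
    "top_k": 5,
}

payload = json.dumps(body)
```

A client would send this with Content-Type: application/json; unlike the GET form, none of the values need URL encoding.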
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. The description mentions it's a POST method with body parameters, which implies a write operation (though likely idempotent search). It doesn't disclose authentication needs (despite 'x-hapi-auth-state' parameter), rate limits, pagination behavior (cursor parameter hints at it), or what happens with invalid inputs. For a search tool with 2 parameters and nested objects, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: two sentences that directly state the tool's function and when to use it. Every sentence earns its place by providing essential information without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (2 parameters with nested objects, 0% schema coverage, no annotations, no output schema), the description is incomplete. It doesn't explain the search behavior, result format, error handling, or parameter details. For a search tool with many filter options and authentication hints, this leaves significant gaps for an AI agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for undocumented parameters. The description only mentions that parameters are provided in the request body, but doesn't explain any of the 11 nested properties (like 'q', 'mode', 'limit', 'cursor') or the 'x-hapi-auth-state' parameter. This leaves most parameter meanings unclear, failing to add value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search episodes and transcripts (POST body)' specifies the verb (search) and resources (episodes, transcripts). It distinguishes from the sibling 'getSearch' by noting the POST method and body parameter format. However, it doesn't fully differentiate from other search-related siblings like 'getEpisodes' which might retrieve episodes without searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: 'This is useful for longer filter payloads' and contrasts it with 'GET /search' (presumably 'getSearch' sibling). It implies usage when parameters are too long for URL query strings. However, it doesn't explicitly state when NOT to use it or mention alternatives beyond the GET version.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
