
Sleeper Hit Studio Public Discovery

Server Details

Read-only discovery for opted-in storytellers and approved public project assets.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server
Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: B)

Average 3.5/5 across 9 of 9 tools scored. Lowest: 2.9/5.

Server Coherence (Grade: A)
Disambiguation: 4/5

Tools are mostly distinct, but 'search' and 'find_writers' overlap in searching writer profiles, and 'list_writer_assets' and 'get_public_pitch_assets' both list assets, though descriptions clarify scope.

Naming Consistency: 3/5

Naming mixes 'find_', 'get_', 'list_' prefixes inconsistently (e.g., 'find_projects' vs 'get_project_summary') and 'fetch' and 'search' are standalone verbs, breaking the pattern.

Tool Count: 4/5

9 tools is appropriate for the discovery domain, covering search, listing, and retrieval without being excessive or too sparse.

Completeness: 5/5

The tool set covers the full read-only discovery workflow: searching, listing writers and projects, retrieving summaries, profiles, and assets, with no obvious gaps.

Available Tools

9 tools
fetch (Grade: A)
Read-only · Idempotent

Fetch the full public discovery document for a search result by ID. Compatible with ChatGPT deep research and company knowledge.

Parameters
- id (required): Search result ID returned by the search tool.
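As an illustration only (the exact wire format depends on the MCP client, and the result ID here is a placeholder), a JSON-RPC `tools/call` request for `fetch` could be assembled like this:

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, as used by MCP transports."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(payload)

# The ID would come from a prior `search` call; "res_123" is hypothetical.
request = build_tool_call("fetch", {"id": "res_123"})
```

A gateway such as Glama would log this request body and the tool's response verbatim.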
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint, idempotentHint, and destructiveHint. The description adds that the document is 'public' and 'full,' and mentions compatibility, which enriches context. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with action and resource, followed by helpful context. No extraneous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter fetch tool with no output schema, the description adequately specifies what is returned ('full public discovery document') and usage context. Lacks detail on response structure but sufficient given low complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (one parameter fully described). The description only reinforces 'by ID,' adding no new meaning beyond what the schema already provides for the 'id' parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Fetch the full public discovery document for a search result by ID,' specifying both verb and resource. It distinguishes from sibling tools like get_project_summary by focusing on discovery documents, but does not explicitly differentiate from other fetch-like tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Mentions compatibility with ChatGPT deep research, implying a usage context, but provides no guidance on when to use this tool versus siblings (e.g., search, get_project_summary) or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_projects (Grade: A)
Read-only · Idempotent

List public projects for a writer. Results include project hub URLs and public asset summaries.

Parameters
- query (optional): Optional title, genre, logline, or synopsis text.
- writerSlug (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds that results include project hub URLs and public asset summaries, providing behavioral context beyond annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose and result details. No redundant words; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple tool (2 params, no output schema, clear annotations), the description covers purpose and result contents. Could mention that the data is publicly accessible, but openWorldHint already implies that.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 50%: 'query' has a description, but 'writerSlug' does not. The description implies 'writerSlug' is the identifier but adds no explicit detail. It does not compensate for the missing schema description on 'writerSlug'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists public projects for a writer and specifies result contents (hub URLs, asset summaries). It uses a specific verb ('List') and resource ('public projects'), distinguishing it from siblings like 'find_writers'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for listing a writer's public projects but does not explicitly state when to use it over alternatives like 'fetch' or 'find_writers'. No guidance on when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_writers (Grade: B)
Read-only · Idempotent

Search opt-in public Sleeper Hit Studio writer profiles.

Parameters
- genre (optional): Filter by public discovery genre.
- limit (optional): Maximum writers to return.
- query (optional): Name, location, headline, specialty, genre, or background text.
- format (optional): Filter by public discovery format.
- specialty (optional): Filter by public writer specialty.
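Since every parameter of `find_writers` is optional, an agent should send only the filters it actually sets. A minimal sketch (the filter values are hypothetical; the server's matching behavior is not documented):

```python
def find_writers_args(query=None, genre=None, format=None,
                      specialty=None, limit=None) -> dict:
    """Assemble the `find_writers` arguments dict, omitting unset filters."""
    candidates = {
        "query": query,
        "genre": genre,
        "format": format,
        "specialty": specialty,
        "limit": limit,
    }
    # Drop keys left at None so the payload contains only supplied filters.
    return {k: v for k, v in candidates.items() if v is not None}

# Hypothetical search: writers matching "noir thriller", capped at 5 results.
args = find_writers_args(query="noir thriller", limit=5)
```

Sending a sparse arguments object like this avoids relying on how the server interprets explicit nulls.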
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. Description adds no behavioral details beyond 'opt-in public'—no mention of result limitations, pagination, or what 'opt-in' implies. With annotations present, description fails to add meaningful behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, highly concise with no redundancy. However, the brevity sacrifices usefulness; a slightly longer description could improve clarity without being verbose. Scores 4 for efficiency, with room for more detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, and description does not hint at return format or behavior (e.g., result ordering, partial matches, default limit). With 5 optional parameters and no required fields, more context on filtering logic or expected output is needed. Incomplete for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for all 5 parameters. Description does not add any additional meaning or context beyond the schema descriptions. Baseline of 3 is appropriate since the schema already provides adequate parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool searches 'opt-in public Sleeper Hit Studio writer profiles', specifying both the resource (writer profiles) and scope (opt-in, public). This distinguishes it from siblings like 'find_projects' and 'get_writer_profile' which target different resources or actions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage for searching public writer profiles but provides no explicit guidance on when to use this tool versus alternatives (e.g., 'find_projects' or 'search'). No mention of exclusions or specific context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_project_summary (Grade: C)
Read-only · Idempotent

Get one public project summary for a writer.

Parameters
- writerSlug (required)
- projectSlug (required)
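Neither slug parameter is documented, so an agent must infer their format. One common convention is lowercase hyphen-separated slugs; the sketch below assumes that convention, which the server does not confirm, and uses hypothetical names:

```python
import re

def to_slug(name: str) -> str:
    """Best-guess slug normalization (lowercase, hyphen-separated).
    The server's actual slug rules are not documented; this is an assumption."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

# Hypothetical writer and project names mapped to guessed slugs.
arguments = {
    "writerSlug": to_slug("Jane Doe"),
    "projectSlug": to_slug("Midnight Run!"),
}
```

In practice the safer path is to take slugs verbatim from `find_writers` or `find_projects` results rather than constructing them.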
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and openWorldHint. The description adds 'public' indicating access scope, but does not provide significant additional behavioral context beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no fluff. It is concise but could benefit from slightly more detail without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and 0% schema coverage, the description omits what the summary contains, its format, or any side effects. This incompleteness risks agent confusion, especially among sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% (no parameter descriptions). The description does not explain what writerSlug or projectSlug represent, nor any constraints or format, leaving the agent to infer from parameter names alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get', the resource 'one public project summary', and the context 'for a writer'. It is specific and distinguishes from siblings like get_writer_profile, but does not explicitly contrast with them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as get_writer_profile or find_projects. There are no exclusions or context for appropriate use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_public_pitch_assets (Grade: B)
Read-only · Idempotent

List writer-approved public pitch deck assets for a writer.

Parameters
- writerSlug (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint=false. The description adds context about the assets being 'writer-approved' and 'public', which is valuable but does not significantly expand on behavioral traits beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no extraneous words. However, it could be restructured to include usage hints or parameter details without increasing length significantly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter read-only tool with good annotations, the description is mostly adequate. However, it lacks any mention of return format or pagination, and does not help the agent distinguish from sibling tools in specific scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage for the sole parameter 'writerSlug'. The description implies it identifies a writer but does not explain format or provide examples, leaving the agent to infer meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb 'List' and identifies the resource as 'writer-approved public pitch deck assets', clearly distinguishing from siblings like list_project_assets or list_writer_assets.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., list_writer_assets). The context signals include sibling tools with overlapping functionality, but the description does not provide any selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_writer_profile (Grade: A)
Read-only · Idempotent

Get a public writer profile by slug, including public projects and shared assets.

Parameters
- slug (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare the tool is read-only, idempotent, and non-destructive. The description adds value by specifying the output includes 'public projects and shared assets', providing behavioral context beyond the annotations. However, it could detail the response structure further.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence that front-loads the purpose (verb and resource) and adds key context. Every word contributes to clarity, and there is no superfluous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has one parameter and no output schema, the description covers the essential return scope (profile, projects, assets). It could be more complete by describing the profile fields or asset types, but it is sufficient for basic invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It clarifies that the 'slug' parameter identifies a writer profile by its URL-friendly slug, adding meaning beyond the raw type. However, it provides no additional validation or source hints, making it minimally adequate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a public writer profile by slug, including associated projects and assets. It uses a specific verb ('Get') and specific resource ('writer profile'), and its focus on a single writer differentiates it from sibling tools like find_writers or get_project_summary.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you have a writer slug and need the full profile, but it does not explicitly state when not to use this tool or suggest alternatives (e.g., find_writers for searching). The guidance is adequate but lacks exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_project_assets (Grade: B)
Read-only · Idempotent

List writer-approved public share URLs for a specific project.

Parameters
- writerSlug (required)
- projectSlug (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint, idempotentHint, and openWorldHint. The description adds value by specifying that the URLs are 'writer-approved' and 'public share', providing context beyond the annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 10 words, front-loaded with the verb and resource, and contains no unnecessary words. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should hint at the return format, but it does not. It lacks information about pagination, structure of returned URLs, or error cases. For a simple list tool, more context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% and the description does not elaborate on the parameters writerSlug or projectSlug. It only mentions 'for a specific project', which is already implied by the parameter names. No additional meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists writer-approved public share URLs for a specific project. It specifies the verb 'list', the resource 'public share URLs', and the scope 'writer-approved' and 'for a specific project', effectively distinguishing it from sibling tools like list_writer_assets which likely list all assets.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, exclusions, or compare with sibling tools such as get_public_pitch_assets or list_writer_assets.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_writer_assets (Grade: B)
Read-only · Idempotent

List all writer-approved public share URLs for a writer.

Parameters
- writerSlug (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this as a read-only, open-world, idempotent operation. The description adds that it returns 'writer-approved public share URLs', which is useful but does not provide additional behavioral details like pagination or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence with no wasted words. It is front-loaded and appropriately sized for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (1 parameter, no output schema), the description covers the basic purpose. However, it lacks details on return format or potential pagination, which are helpful for a listing tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The parameter writerSlug has no description in the schema (0% coverage) and the tool description does not explain it. Optional values, format, or examples are missing.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List' and the resource 'writer-approved public share URLs for a writer'. It is specific and distinctly describes a listing operation for a particular resource.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidelines are provided. The description does not explain when to use this tool over siblings like list_project_assets or search.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
