Sleeper Hit Studio Public Discovery
Server Details
Read-only discovery for opted-in storytellers and approved public project assets.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
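The endpoint URL is not shown above, so the sketch below substitutes a placeholder. Assuming the server speaks MCP over Streamable HTTP as listed, a minimal client connection with the official MCP TypeScript SDK might look like this (the URL and client name are illustrative assumptions, not values from this listing):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the URL shown on this listing.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.com/mcp"),
);

const client = new Client({ name: "discovery-client", version: "1.0.0" });
await client.connect(transport);

// Enumerate the nine read-only discovery tools described below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```

The per-tool sketches that follow reuse this `client`.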
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 9 of 9 tools scored. Lowest: 2.9/5.
Tools are mostly distinct, but 'search' and 'find_writers' overlap in searching writer profiles, and 'list_writer_assets' and 'get_public_pitch_assets' both list assets, though descriptions clarify scope.
Naming mixes 'find_', 'get_', and 'list_' prefixes inconsistently (e.g., 'find_projects' vs. 'get_project_summary'), and 'fetch' and 'search' are bare standalone verbs, breaking the pattern.
9 tools is appropriate for the discovery domain, covering search, listing, and retrieval without being excessive or too sparse.
The tool set covers the full read-only discovery workflow: searching, listing writers and projects, retrieving summaries, profiles, and assets, with no obvious gaps.
Available Tools
9 tools

fetch (Grade A) · Read-only · Idempotent
Fetch the full public discovery document for a search result by ID. Compatible with ChatGPT deep research and company knowledge.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Search result ID returned by the search tool. | |
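Reusing the `client` from the connection sketch under Server Details, a call might look like the following; the `id` value is a hypothetical placeholder for an ID returned by a prior `search` call:

```typescript
// Hypothetical ID: in practice, use an id from a prior search result.
const doc = await client.callTool({
  name: "fetch",
  arguments: { id: "result-123" },
});
```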
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, idempotentHint, and destructiveHint. The description adds that the document is 'public' and 'full,' and mentions compatibility, which enriches context. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with action and resource, followed by helpful context. No extraneous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter fetch tool with no output schema, the description adequately specifies what is returned ('full public discovery document') and usage context. Lacks detail on response structure but sufficient given low complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (one parameter fully described). The description only reinforces 'by ID,' adding no new meaning beyond what the schema already provides for the 'id' parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Fetch the full public discovery document for a search result by ID,' specifying both verb and resource. It distinguishes from sibling tools like get_project_summary by focusing on discovery documents, but does not explicitly differentiate from other fetch-like tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions compatibility with ChatGPT deep research, implying a usage context, but provides no guidance on when to use this tool versus siblings (e.g., search, get_project_summary) or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_projects (Grade A) · Read-only · Idempotent
List public projects for a writer. Results include project hub URLs and public asset summaries.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Optional title, genre, logline, or synopsis text. | |
| writerSlug | Yes | | |
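A sketch with the same `client`; both values below are hypothetical examples, and `query` may be omitted entirely:

```typescript
// Hypothetical values; query optionally filters by title, genre,
// logline, or synopsis text.
const projects = await client.callTool({
  name: "find_projects",
  arguments: { writerSlug: "jane-doe", query: "heist thriller" },
});
```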
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds that results include project hub URLs and public asset summaries, providing behavioral context beyond annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose and result details. No redundant words; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple tool (2 params, no output schema, clear annotations), the description covers purpose and result contents. Could mention that the data is publicly accessible, but openWorldHint already implies that.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50%: 'query' has a description, but 'writerSlug' does not. The description implies 'writerSlug' is the identifier but adds no explicit detail. It does not compensate for the missing schema description on 'writerSlug'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists public projects for a writer and specifies result contents (hub URLs, asset summaries). It uses a specific verb ('List') and resource ('public projects'), distinguishing it from siblings like 'find_writers'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing a writer's public projects but does not explicitly state when to use it over alternatives like 'fetch' or 'find_writers'. No guidance on when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_writers (Grade B) · Read-only · Idempotent
Search opt-in public Sleeper Hit Studio writer profiles.
| Name | Required | Description | Default |
|---|---|---|---|
| genre | No | Filter by public discovery genre. | |
| limit | No | Maximum writers to return. | |
| query | No | Name, location, headline, specialty, genre, or background text. | |
| format | No | Filter by public discovery format. | |
| specialty | No | Filter by public writer specialty. | |
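Every parameter is optional; the filter values in this sketch are hypothetical examples, again reusing the `client` from above:

```typescript
// All arguments are optional; these values are illustrative only.
const writers = await client.callTool({
  name: "find_writers",
  arguments: { query: "noir", genre: "thriller", limit: 5 },
});
```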
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. Description adds no behavioral details beyond 'opt-in public'—no mention of result limitations, pagination, or what 'opt-in' implies. With annotations present, description fails to add meaningful behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, highly concise with no redundancy. However, the brevity sacrifices usefulness; a slightly longer description could improve clarity without becoming verbose. Scored 4 for efficiency, with room for more detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, and description does not hint at return format or behavior (e.g., result ordering, partial matches, default limit). With 5 optional parameters and no required fields, more context on filtering logic or expected output is needed. Incomplete for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for all 5 parameters. Description does not add any additional meaning or context beyond the schema descriptions. Baseline of 3 is appropriate since the schema already provides adequate parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool searches 'opt-in public Sleeper Hit Studio writer profiles', specifying both the resource (writer profiles) and scope (opt-in, public). This distinguishes it from siblings like 'find_projects' and 'get_writer_profile' which target different resources or actions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage for searching public writer profiles but provides no explicit guidance on when to use this tool versus alternatives (e.g., 'find_projects' or 'search'). No mention of exclusions or specific context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_project_summary (Grade C) · Read-only · Idempotent
Get one public project summary for a writer.
| Name | Required | Description | Default |
|---|---|---|---|
| writerSlug | Yes | | |
| projectSlug | Yes | | |
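Since the schema describes neither parameter, both slugs in this sketch are hypothetical placeholders (same `client` as above):

```typescript
// Both slugs are hypothetical; pair a writer slug with one of that
// writer's project slugs, e.g. from a find_projects result.
const summary = await client.callTool({
  name: "get_project_summary",
  arguments: { writerSlug: "jane-doe", projectSlug: "harbor-lights" },
});
```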
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and openWorldHint. The description adds 'public' indicating access scope, but does not provide significant additional behavioral context beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no fluff. It is concise but could benefit from slightly more detail without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and 0% schema coverage, the description omits what the summary contains, its format, or any side effects. This incompleteness risks agent confusion, especially among sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% (no parameter descriptions). The description does not explain what writerSlug or projectSlug represent, nor any constraints or format, leaving the agent to infer from parameter names alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get', the resource 'one public project summary', and the context 'for a writer'. It is specific and distinguishes from siblings like get_writer_profile, but does not explicitly contrast with them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as get_writer_profile or find_projects. There are no exclusions or context for appropriate use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_public_pitch_assets (Grade B) · Read-only · Idempotent
List writer-approved public pitch deck assets for a writer.
| Name | Required | Description | Default |
|---|---|---|---|
| writerSlug | Yes | | |
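A minimal sketch with the same `client`; the slug is hypothetical, as the schema does not document its format:

```typescript
// Hypothetical slug; the schema leaves its format undocumented.
const pitchAssets = await client.callTool({
  name: "get_public_pitch_assets",
  arguments: { writerSlug: "jane-doe" },
});
```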
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint=false. The description adds context about the assets being 'writer-approved' and 'public', which is valuable but does not significantly expand on behavioral traits beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no extraneous words. However, it could be restructured to include usage hints or parameter details without increasing length significantly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter read-only tool with good annotations, the description is mostly adequate. However, it lacks any mention of return format or pagination, and does not help the agent distinguish from sibling tools in specific scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage for the sole parameter 'writerSlug'. The description implies it identifies a writer but does not explain format or provide examples, leaving the agent to infer meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'List' and identifies the resource as 'writer-approved public pitch deck assets', clearly distinguishing from siblings like list_project_assets or list_writer_assets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (e.g., list_writer_assets). The context signals include sibling tools with overlapping functionality, but the description does not provide any selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_writer_profile (Grade A) · Read-only · Idempotent
Get a public writer profile by slug, including public projects and shared assets.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | | |
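Reusing the same `client`, with a hypothetical slug such as one taken from a `find_writers` result:

```typescript
// Hypothetical slug, e.g. from a find_writers result.
const profile = await client.callTool({
  name: "get_writer_profile",
  arguments: { slug: "jane-doe" },
});
```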
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare the tool is read-only, idempotent, and non-destructive. The description adds value by specifying the output includes 'public projects and shared assets', providing behavioral context beyond the annotations. However, it could detail the response structure further.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that front-loads the purpose (verb and resource) and adds key context. Every word contributes to clarity, and there is no superfluous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has one parameter and no output schema, the description covers the essential return scope (profile, projects, assets). It could be more complete by describing the profile fields or asset types, but it is sufficient for basic invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It clarifies that the 'slug' parameter identifies a writer profile by its URL-friendly slug, adding meaning beyond the raw type. However, it provides no additional validation or source hints, making it minimally adequate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a public writer profile by slug, including associated projects and assets. It uses a specific verb ('Get') and specific resource ('writer profile'), and its focus on a single writer differentiates it from sibling tools like find_writers or get_project_summary.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you have a writer slug and need the full profile, but it does not explicitly state when not to use this tool or suggest alternatives (e.g., find_writers for searching). The guidance is adequate but lacks exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_project_assets (Grade B) · Read-only · Idempotent
List writer-approved public share URLs for a specific project.
| Name | Required | Description | Default |
|---|---|---|---|
| writerSlug | Yes | | |
| projectSlug | Yes | | |
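A sketch scoped to a single project, with the same `client` and hypothetical slugs:

```typescript
// Hypothetical slugs; results are scoped to one project.
const projectAssets = await client.callTool({
  name: "list_project_assets",
  arguments: { writerSlug: "jane-doe", projectSlug: "harbor-lights" },
});
```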
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, idempotentHint, and openWorldHint. The description adds value by specifying that the URLs are 'writer-approved' and 'public share', providing context beyond the annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence of 10 words, front-loaded with the verb and resource, and contains no unnecessary words. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should hint at the return format, but it does not. It lacks information about pagination, structure of returned URLs, or error cases. For a simple list tool, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% and the description does not elaborate on the parameters writerSlug or projectSlug. It only mentions 'for a specific project', which is already implied by the parameter names. No additional meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists writer-approved public share URLs for a specific project. It specifies the verb 'list', the resource 'public share URLs', and the scope 'writer-approved' and 'for a specific project', effectively distinguishing it from sibling tools like list_writer_assets which likely list all assets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, exclusions, or compare with sibling tools such as get_public_pitch_assets or list_writer_assets.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_writer_assets (Grade B) · Read-only · Idempotent
List all writer-approved public share URLs for a writer.
| Name | Required | Description | Default |
|---|---|---|---|
| writerSlug | Yes | | |
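The same `client` again, with a hypothetical slug; per the description, this spans all of the writer's projects rather than a single one:

```typescript
// Hypothetical slug; unlike list_project_assets, this lists approved
// share URLs across all of the writer's projects.
const writerAssets = await client.callTool({
  name: "list_writer_assets",
  arguments: { writerSlug: "jane-doe" },
});
```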
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this as a read-only, open-world, idempotent operation. The description adds that it returns 'writer-approved public share URLs', which is useful but does not provide additional behavioral details like pagination or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence with no wasted words. It is front-loaded and appropriately sized for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (1 parameter, no output schema), the description covers the basic purpose. However, it lacks details on return format or potential pagination, which are helpful for a listing tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The parameter writerSlug has no description in the schema (0% coverage) and the tool description does not explain it. Optional values, format, or examples are missing.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'writer-approved public share URLs for a writer'. It is specific and distinctly describes a listing operation for a particular resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No usage guidelines are provided. The description does not explain when to use this tool over siblings like list_project_assets or search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search (Grade B) · Read-only · Idempotent
Search opted-in public writer profiles, project summaries, and writer-approved public assets. Compatible with ChatGPT deep research and company knowledge.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query for writers, projects, genres, formats, accolades, table reads, pitch assets, or trailers. | |
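A free-text sketch with the same `client`; the query string is illustrative only:

```typescript
// Free-text query; the value here is only an example.
const results = await client.callTool({
  name: "search",
  arguments: { query: "sci-fi pilot with a table read" },
});
```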
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds context about compatibility with ChatGPT and company knowledge, which is useful but does not disclose additional behavioral traits beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no redundant content. It front-loads the purpose and adds compatibility context efficiently. The tool's title field is unset, a minor missed opportunity for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and one required parameter, the description covers the search scope and integration context. It lacks details on result format or pagination, but is adequate for a straightforward search tool with strong annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the parameter. The description repeats the same examples as the schema, adding no new semantic meaning. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches 'opted-in public writer profiles, project summaries, and writer-approved public assets.' The verb 'search' and resource scope are explicit. Siblings like 'find_projects' and 'find_writers' are more specific, but the description does not explicitly differentiate beyond scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through compatibility notes but does not explicitly state when to use search over sibling tools like 'find_writers' or 'find_projects'. No exclusion or alternative guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.