streambridge
Server Details
Search Chinese TV drama scenes with second-level timestamps by character, emotion, or scene type.
- Status: Healthy
- Transport: Streamable HTTP
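Since the listing shows the transport but not the endpoint itself, here is a minimal sketch of opening a session against a Streamable HTTP MCP server with a raw JSON-RPC POST. `SERVER_URL` is a placeholder, not the real connector URL, and the protocol version shown is the revision that introduced Streamable HTTP; both are assumptions to adjust.

```typescript
// Hypothetical endpoint: the listing above does not expose the real URL.
const SERVER_URL = "https://example.com/mcp";

// Open an MCP session over Streamable HTTP; returns the session ID the
// server may assign via the Mcp-Session-Id response header.
async function initialize(): Promise<string | null> {
  const res = await fetch(SERVER_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Streamable HTTP servers may answer with plain JSON or an SSE stream.
      "Accept": "application/json, text/event-stream",
    },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "initialize",
      params: {
        protocolVersion: "2025-03-26",
        capabilities: {},
        clientInfo: { name: "streambridge-demo", version: "0.0.1" },
      },
    }),
  });
  return res.headers.get("mcp-session-id");
}
```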
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 4 of 4 tools scored.
Each tool targets a distinct aspect of the system: catalog overview, episode details, character list, and scene search. There is no overlap in functionality.
All tool names follow a consistent verb_noun pattern: get_content_catalog, get_episode_info, list_characters, search_scenes. The naming is clear and predictable.
With 4 tools, the count is slightly on the lower side but still reasonable for a focused media server. The tools cover core read operations without being overly sparse.
The tools only support read operations; there are no create, update, or delete functions. Additionally, missing tools like listing all episodes of a show or filtering catalog by genre represent notable gaps for a streaming service.
Available Tools
4 tools

get_content_catalog
Get StreamBridge content catalog with titles, episode counts, genres.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
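To make the call shape concrete, here is a hedged sketch of the JSON-RPC tools/call request an agent would send for this parameterless tool. It reuses `SERVER_URL` and `initialize()` from the connection sketch above; since the server publishes no output schema, the result is left untyped.

```typescript
// Generic tools/call wrapper, reusing SERVER_URL from the sketch above.
async function callTool(sessionId: string, name: string, args: object) {
  const res = await fetch(SERVER_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Accept": "application/json, text/event-stream",
      "Mcp-Session-Id": sessionId, // from the initialize() sketch
    },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: Date.now(),
      method: "tools/call",
      params: { name, arguments: args },
    }),
  });
  return res.json(); // result.content shape is undocumented by this server
}

const sessionId = (await initialize()) ?? "";
// get_content_catalog takes no arguments at all.
const catalog = await callTool(sessionId, "get_content_catalog", {});
```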
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states 'Get', implying a read operation with no side effects. However, it does not disclose whether the result is paginated, ordered, or filtered, nor whether it requires authentication or has rate limits. Basic disclosure is present but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence with no redundant words. Every part adds value: verb, resource, and the three fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given there is no output schema, the description should explain the return format. It lists fields but does not specify whether they come back as an array, an object, or some other structure. Sibling tools (e.g., get_episode_info) might imply a relational context, but the description does not address pagination, size limits, or error states.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, and the schema has 100% coverage (empty object). Per guidelines, baseline for 0 parameters is 4. The description adds meaning beyond the schema by specifying what the catalog contains, so no deduction.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Get' plus resource 'StreamBridge content catalog' and explicitly lists the fields (titles, episode counts, genres). This clearly states what the tool does and distinguishes it from siblings like get_episode_info (specific episode) and list_characters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance is provided. The purpose implies it is for fetching the overall catalog, but there is no mention of alternatives or context for selecting this over siblings. The usage is implied but not explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_episode_info
Get info for a specific episode: title, duration, scene count.
| Name | Required | Description | Default |
|---|---|---|---|
| episode | Yes | Episode number | |
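Reusing the `callTool` sketch from get_content_catalog, the only required argument is the episode number. The value below is arbitrary, and since the description does not say what an unknown episode returns, that path should be treated as unverified.

```typescript
// episode is the sole (required) parameter; 3 is an arbitrary example.
const info = await callTool(sessionId, "get_episode_info", { episode: 3 });
```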
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the output fields (title, duration, scene count), which adds value beyond the input schema. It does not mention potential errors or required permissions, but for a simple getter this is acceptable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, front-loaded with purpose, and contains no unnecessary words. Every word contributes value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, no output schema), the description is sufficient. It covers purpose and output, though it could mention what happens if episode is not found. Overall adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the only parameter 'episode', which already explains it's an integer episode number. The description adds context about the return fields but not about the parameter itself. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get info' and resource 'a specific episode', listing the returned fields (title, duration, scene count). It distinguishes from sibling tools like get_content_catalog (which likely lists episodes) and search_scenes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies using this tool when needing details for one episode, but does not explicitly state when not to use it or mention alternatives. The sibling context helps but the description itself lacks guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_characters
List main characters in Taiping Nian and their scene counts.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
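As with get_content_catalog, the call is an empty-argument tools/call via the earlier sketch; the ordering and scope of the returned list are undocumented.

```typescript
// No arguments: the tool is hard-scoped to Taiping Nian's main cast.
const characters = await callTool(sessionId, "list_characters", {});
```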
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden for behavioral disclosure. It only states the output (list of characters with scene counts) without mentioning any constraints, side effects, or performance considerations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that is front-loaded with key information. Every word is necessary and adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless list tool with no output schema, the description is adequate. It tells exactly what the tool returns. However, it could be improved by mentioning default ordering or scope (e.g., all characters from the entire show).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters in the schema, so the description does not need to add parameter semantics. The description sufficiently clarifies what the tool outputs, earning a baseline of 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists main characters and their scene counts, using a specific verb and resource. It distinguishes from siblings like get_content_catalog or search_scenes, which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide explicit guidance on when to use this tool versus alternatives or when not to use it. The usage is implied but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_scenes
Search scenes in StreamBridge video library with second-level timestamps. Supports search by dialogue, character name, emotion, and scene type. Returns results with direct watch links. Example: 'Qian Hongchu strategy', 'fight scene', 'touching moment'
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | Search keywords | |
| episode | No | Filter by episode (optional) | |
| character | No | Filter by character name (optional) | |
| scene_type | No | Scene type filter (optional) | |
| action_only | No | | |
| romance_only | No | | |
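A combined call through the earlier `callTool` sketch might look like the following. All argument values are illustrative, and the semantics of action_only, romance_only, and limit are not documented, so those flags are best verified against real responses before relying on them.

```typescript
// Required query plus optional filters; every value here is illustrative.
const scenes = await callTool(sessionId, "search_scenes", {
  query: "Qian Hongchu strategy", // example taken from the tool description
  episode: 3,                     // optional: restrict results to one episode
  character: "Qian Hongchu",      // optional: filter by character name
  scene_type: "fight",            // optional: accepted values undocumented
  limit: 5,                       // optional: presumably caps result count
});
```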
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the behavioral burden. It mentions returning results with timestamps and watch links but does not explain pagination, limit effects, error handling, or the behavior of boolean filters (action_only, romance_only). This leaves gaps for correct invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short sentences plus example queries, all essential. It is front-loaded with the core purpose, then lists supported search types and outputs. No redundant text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 7 parameters and no output schema or annotations. The description covers the main search feature but omits details on the limit parameter and pagination, boolean filter semantics, and result format beyond 'watch links.' For a search tool, it is adequate for basic use but incomplete for full parameter handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 57%. The description adds context for query, character, scene_type, and introduces emotion (likely part of query). However, it does not clarify the limit parameter's role or the boolean flags (action_only, romance_only). The schema already documents query, episode, character, scene_type, so the description provides marginal extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches scenes in the StreamBridge video library, with specific features like second-level timestamps and direct watch links. It distinguishes from siblings (get_content_catalog, get_episode_info, list_characters) by focusing on scene-level search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides example queries and mentions supported search dimensions (dialogue, character, emotion, scene type), which implies when to use. However, it does not explicitly contrast with sibling tools or state when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.