
Server Details

Search Chinese TV drama scenes with second-level timestamps by character, emotion, or scene type.

Status: Healthy
Transport: Streamable HTTP

Tool Descriptions: A

Average 3.9/5 across 4 of 4 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool targets a distinct aspect of the system: catalog overview, episode details, character list, and scene search. There is no overlap in functionality.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern: get_content_catalog, get_episode_info, list_characters, search_scenes. The naming is clear and predictable.

Tool Count: 4/5

With 4 tools, the count is slightly on the lower side but still reasonable for a focused media server. The tools cover core read operations without being overly sparse.

Completeness: 3/5

The tools only support read operations; there are no create, update, or delete functions. In addition, missing tools, such as listing all episodes of a show or filtering the catalog by genre, represent notable gaps for a streaming service.

Available Tools

4 tools
get_content_catalog (Grade: A)

Get StreamBridge content catalog with titles, episode counts, genres.

Parameters: none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states 'Get', implying a read operation with no side effects. However, it does not disclose whether the result is paginated, ordered, or filtered, nor whether it requires authentication or has rate limits. Basic disclosure is present but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence with no redundant words. Every part adds value: verb, resource, and the three fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there is no output schema, the description should explain the return format. It lists fields but does not specify whether they are returned as an array, an object, or some nested structure. Sibling tools (e.g., get_episode_info) might imply a relational context, but the description does not address pagination, size limits, or error states.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
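Since the review flags the missing output schema and the array-vs-object ambiguity, here is a purely illustrative sketch of what such a schema could look like, written as a JSON Schema in Python dict form. The field names (`title`, `episode_count`, `genres`) mirror the tool description; the array-of-objects wrapping is an assumption, not the server's actual shape.

```python
# Illustrative only: a possible JSON Schema for get_content_catalog's output.
# The "items" array wrapping is an assumption the tool description leaves open.
output_schema = {
    "type": "object",
    "properties": {
        "items": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "episode_count": {"type": "integer"},
                    "genres": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["title"],
            },
        }
    },
}
```

Publishing even a sketch like this would resolve the return-format question the review raises.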

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, and the schema has 100% coverage (empty object). Per guidelines, baseline for 0 parameters is 4. The description adds meaning beyond the schema by specifying what the catalog contains, so no deduction.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb 'Get' plus resource 'StreamBridge content catalog' and explicitly lists the fields (titles, episode counts, genres). This clearly states what the tool does and distinguishes it from siblings like get_episode_info (specific episode) and list_characters.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance is provided. The purpose implies it is for fetching the overall catalog, but there is no mention of alternatives or context for selecting this over siblings. The usage is implied but not explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_episode_info (Grade: A)

Get info for a specific episode: title, duration, scene count.

Parameters:
- episode (required): Episode number
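To make the call shape concrete, here is a sketch of the JSON-RPC `tools/call` request an MCP client would send for this tool. The envelope fields (`jsonrpc`, `method`, `params.name`, `params.arguments`) come from the MCP specification; the `id` and episode values are arbitrary.

```python
import json

# Hypothetical MCP tools/call request for get_episode_info.
# "episode" is the tool's only (required) parameter.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_episode_info",
        "arguments": {"episode": 3},
    },
}
print(json.dumps(request, indent=2))
```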
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the output fields (title, duration, scene count), which adds value beyond the input schema. It does not mention potential errors or required permissions, but for a simple getter this is acceptable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, front-loaded with purpose, and contains no unnecessary words. Every word contributes value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description is sufficient. It covers purpose and output, though it could mention what happens if the requested episode is not found. Overall adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the only parameter 'episode', which already explains it's an integer episode number. The description adds context about the return fields but not about the parameter itself. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get info' and resource 'a specific episode', listing the returned fields (title, duration, scene count). It distinguishes from sibling tools like get_content_catalog (which likely lists episodes) and search_scenes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies using this tool when needing details for one episode, but does not explicitly state when not to use it or mention alternatives. The sibling context helps but the description itself lacks guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_characters (Grade: A)

List main characters in Taiping Nian and their scene counts.

Parameters: none

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden for behavioral disclosure. It only states the output (list of characters with scene counts) without mentioning any constraints, side effects, or performance considerations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that is front-loaded with key information. Every word is necessary and adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameterless list tool with no output schema, the description is adequate. It tells exactly what the tool returns. However, it could be improved by mentioning default ordering or scope (e.g., all characters from the entire show).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters in the schema, so the description does not need to add parameter semantics. The description sufficiently clarifies what the tool outputs, earning a baseline of 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists main characters and their scene counts, using a specific verb and resource. It distinguishes from siblings like get_content_catalog or search_scenes, which serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit guidance on when to use this tool versus alternatives or when not to use it. The usage is implied but not explicitly stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_scenes (Grade: A)

Search scenes in StreamBridge video library with second-level timestamps. Supports search by dialogue, character name, emotion, and scene type. Returns results with direct watch links. Example: 'Qian Hongchu strategy', 'fight scene', 'touching moment'

Parameters:
- query (required): Search keywords
- limit (optional)
- episode (optional): Filter by episode
- character (optional): Filter by character name
- scene_type (optional): Scene type filter
- action_only (optional)
- romance_only (optional)
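As a sketch of how an agent might assemble arguments for this tool, the helper below (a hypothetical name, not part of the server) includes only the optional filters that are actually set:

```python
import json

def build_search_arguments(query, limit=None, episode=None, character=None,
                           scene_type=None, action_only=None, romance_only=None):
    """Build the arguments object for search_scenes, omitting unset optionals."""
    args = {"query": query}
    optionals = [("limit", limit), ("episode", episode), ("character", character),
                 ("scene_type", scene_type), ("action_only", action_only),
                 ("romance_only", romance_only)]
    for key, value in optionals:
        if value is not None:
            args[key] = value
    return args

# Example from the tool description: a character-plus-keyword query.
args = build_search_arguments("Qian Hongchu strategy", episode=5, scene_type="fight")
print(json.dumps(args, ensure_ascii=False))
```

Dropping unset optionals keeps the payload minimal and avoids sending null for the boolean flags, whose server-side defaults the description does not document.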
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the behavioral burden. It mentions returning results with timestamps and watch links but does not explain pagination, limit effects, error handling, or the behavior of boolean filters (action_only, romance_only). This leaves gaps for correct invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a few short sentences plus example queries, all essential. It is front-loaded with the core purpose, then lists supported search types and outputs. No redundant text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has 7 parameters and no output schema or annotations. The description covers the main search feature but omits details on the limit parameter, pagination, boolean filter semantics, and result format beyond 'watch links.' For a search tool, it is adequate for basic use but incomplete for full parameter handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 57%. The description adds context for query, character, scene_type, and introduces emotion (likely part of query). However, it does not clarify the limit parameter's role or the boolean flags (action_only, romance_only). The schema already documents query, episode, character, scene_type, so the description provides marginal extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
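The 57% figure is consistent with 4 of the 7 parameters carrying a schema description. A minimal sketch of that arithmetic (the coverage formula is an assumption about how the metric is computed):

```python
# Parameters of search_scenes and whether the input schema describes them.
described = {
    "query": True, "episode": True, "character": True, "scene_type": True,
    "limit": False, "action_only": False, "romance_only": False,
}
coverage = sum(described.values()) / len(described)
print(f"{coverage:.0%}")  # 4/7 rounds to 57%
```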

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches scenes in the StreamBridge video library, with specific features like second-level timestamps and direct watch links. It distinguishes from siblings (get_content_catalog, get_episode_info, list_characters) by focusing on scene-level search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides example queries and mentions supported search dimensions (dialogue, character, emotion, scene type), which implies when to use. However, it does not explicitly contrast with sibling tools or state when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

