suche
Server Details
Search Blu-ray and 4K UHD movies, featured titles, and detailed movie information.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
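The listing does not publish the server URL. For orientation only, an MCP client entry for a Streamable HTTP server often looks roughly like the sketch below; the URL here is a hypothetical placeholder, and exact key names vary by client.

```json
{
  "mcpServers": {
    "suche": {
      "type": "http",
      "url": "https://example.com/mcp"
    }
  }
}
```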
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated.
Available Tools
3 tools

get_featured_movies (Quality: A)
Get currently featured and recommended Blu-ray movies.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum results (1-20, default 8) | |
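For illustration, an MCP tools/call request for this tool might look like the following; the limit value simply exercises the documented default, and the response shape is not specified by the listing.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_featured_movies",
    "arguments": { "limit": 8 }
  }
}
```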
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable temporal context with 'currently,' indicating rotating or time-sensitive content. However, it omits details about return format, pagination behavior, or caching that would aid invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that leads with the action verb. There is zero waste or redundancy; every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter tool, the description adequately covers the domain (Blu-ray movies) and operation type. While no output schema exists, the description implies a collection return through the plural 'movies.' It could be improved by hinting at the data structure, but it is sufficient for tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the 'limit' parameter fully documented as 'Maximum results (1-20, default 8).' The description does not mention the parameter, but given the high schema coverage, this meets the baseline expectation without adding redundant information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Get' and clearly identifies the resource as 'currently featured and recommended Blu-ray movies.' The terms 'featured' and 'recommended' effectively distinguish it from the sibling tools 'search_movies' (query-based) and 'get_movie_details' (specific lookup).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies discovery/browsing use cases through 'featured and recommended,' it lacks explicit guidance on when to use this versus the search or details alternatives. No prerequisites or exclusions are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_movie_details (Quality: B)
Get detailed information about a specific movie including cast, trailer, and external links.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Movie slug (e.g., 'matrix-1999') | |
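As a sketch, a tools/call request using the schema's own example slug could look like this; whether an unknown slug returns an error or an empty result is not documented.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_movie_details",
    "arguments": { "slug": "matrix-1999" }
  }
}
```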
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It adds value by disclosing return content (cast, trailer, external links) which compensates partially for the missing output schema. However, it lacks operational details such as idempotency, error handling for invalid slugs, or rate limiting considerations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, efficiently structured sentence that is front-loaded with the action ('Get detailed information...'). Every clause earns its place by either specifying the resource ('about a specific movie') or detailing the return payload ('including cast, trailer, and external links'). No redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (single string parameter, no nested objects) and absence of an output schema, the description adequately covers the return value structure. It is complete enough for an agent to understand the tool's scope, though it could benefit from mentioning error cases (e.g., slug not found).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('Movie slug (e.g., 'matrix-1999')'), the baseline is 3. The description does not mention the 'slug' parameter or add semantic context about how to identify the movie, relying entirely on the schema. It neither adds to nor detracts from the schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Get') and resource ('detailed information about a specific movie'). The phrase 'specific movie' implicitly distinguishes this from sibling tools 'search_movies' and 'get_featured_movies' (which handle queries and lists respectively). Listing specific return fields (cast, trailer, external links) clarifies what 'detailed information' entails.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus the 'search_movies' or 'get_featured_movies' siblings. While 'specific movie' implies usage when a movie identifier is known, there are no explicit when/when-not conditions or prerequisites mentioned (e.g., 'use this when you have a movie slug').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_movies (Quality: B)
Search the Blu-ray movie database by title, director, actor, or keywords.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum results (1-25, default 10) | |
| query | Yes | Search query (movie title, director, actor, or keywords) | |
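A hypothetical request showing a director search (the query value is purely illustrative); in a typical flow, a slug from the results would then feed get_movie_details.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "search_movies",
    "arguments": { "query": "Christopher Nolan", "limit": 10 }
  }
}
```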
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It establishes the domain (Blu-ray database) and search capabilities but omits behavioral details like fuzzy vs. exact matching, pagination behavior, or what data structure is returned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently conveys the tool's function without redundancy or filler. Information is front-loaded with the action and resource clearly stated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple two-parameter search tool with complete schema coverage, but lacks crucial context given the absence of an output schema—specifically, it omits that results likely contain movie IDs intended for use with the sibling get_movie_details tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, documenting both query and limit parameters fully. The description reinforces the query parameter's purpose but adds no additional semantic context (e.g., query syntax, examples) beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Provides a specific verb (Search), resource (Blu-ray movie database), and search scope (title, director, actor, keywords). Implicitly distinguishes itself from get_featured_movies (browsing) and get_movie_details (retrieval by slug), though it lacks an explicit sibling comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Lists searchable fields but provides no guidance on when to use this versus get_featured_movies or get_movie_details, nor does it mention prerequisites like minimum query length or search syntax.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.