Integrates with a local Ollama instance to facilitate multi-agent workflows and provide tools for comparing prompt responses across different large language models.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@greenroom Recommend some highly-rated Spanish-language horror films from the 2010s."
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
greenroom
A simple Python package containing an MCP (Model Context Protocol) server that provides entertainment recommender utilities to agents. This server integrates with TMDB, a free and community-driven database of entertainment content.
Use the tools
The greenroom MCP server can be used to answer a wide range of questions related to entertainment. Below are some example prompts that will trigger the use of multiple MCP tools, but these are just examples.
Recommendations
What kinds of entertainment can you recommend?
I'm in the mood for something serious. Recommend some entertainment content.
Recommend Spanish-language documentary films from the 2010s.
I loved Arrested Development and Atlanta. Recommend other entertainment options that I would like.
Event Planning
I'm hosting a French film night. Recommend highly-rated French films across genres.
Plan a binge-watching weekend including recent dramas and comedies.
Let's host a sci-fi movie marathon. Recommend 5 sci-fi films from different decades.
Industry Analysis
Analyze which genres have the highest average ratings in film vs television.
Compare action films made in the 1980s to those made in the 2020s.
What are the top-rated Spanish-language television shows in each genre?
Compare the output of multiple agents
Using compare_llm_responses, what makes a great science fiction film?
Using the compare_llm_responses tool, how is machine learning used in modern filmmaking?
Features
Tools
These tools are callable actions, analogous to POST requests. An agent executes these operations, which may have side effects.
Tools are annotated with @mcp.tool() in the FastMCP framework.
list_genres - Fetches all entertainment genres for films and television, returning a unified map showing which media types support each genre
discover_films - Retrieves films based on discovery criteria (genre, year, language) with essential metadata including title, release date, rating, and overview
discover_television - Retrieves television shows based on discovery criteria (genre, year, language) with essential metadata including title, first air date, rating, and overview
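The unified map returned by list_genres can be sketched as follows. This is an illustrative sketch, not the server's actual implementation: the helper name is hypothetical, and the sample data only mimics the shape of TMDB's movie and TV genre lists.

```python
# Hypothetical sketch of the unified genre map built by list_genres.
# The input data mimics the shape of TMDB genre lists; IDs and names
# are illustrative.

def merge_genre_maps(movie_genres, tv_genres):
    """Merge per-medium genre lists into {genre name: [media types]}."""
    unified = {}
    for medium, genres in (("film", movie_genres), ("television", tv_genres)):
        for genre in genres:
            unified.setdefault(genre["name"], []).append(medium)
    return unified

movie = [{"id": 28, "name": "Action"}, {"id": 18, "name": "Drama"}]
tv = [{"id": 18, "name": "Drama"}, {"id": 10764, "name": "Reality"}]
print(merge_genre_maps(movie, tv))
# {'Action': ['film'], 'Drama': ['film', 'television'], 'Reality': ['television']}
```

The map makes it easy for an agent to see at a glance which media types support each genre.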
NB: The
Resources
These resources provide read-only data, analogous to GET requests. An agent reads the information but does not perform actions.
Resources are annotated with @mcp.resource() in the FastMCP framework.
config://version - Get server version
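The read-only nature of resources can be sketched with a minimal URI-keyed registry. The FastMCP wiring is omitted here (in the real server, @mcp.resource("config://version") performs the registration), and the version string is a placeholder.

```python
# Minimal sketch of URI-keyed, read-only resources; FastMCP's
# @mcp.resource decorator is approximated by a plain registry.
RESOURCES = {}

def resource(uri):
    def register(fn):
        RESOURCES[uri] = fn
        return fn
    return register

@resource("config://version")
def get_version() -> str:
    return "0.1.0"  # placeholder version string

# Reading a resource is a side-effect-free lookup, like a GET request.
print(RESOURCES["config://version"]())  # 0.1.0
```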
Contexts
Context-aware tools use FastMCP's Context parameter to access advanced MCP features like LLM sampling.
list_genres_simplified - Returns a simplified list of genre names by using ctx.sample() to leverage the agent's LLM capabilities for data transformation: it asks the current client's LLM to reformat the data. If sampling is not supported by the current client, the method falls back to direct extraction of genres using Python code.
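The sample-then-fall-back pattern can be sketched as below. Here `ctx` stands in for FastMCP's Context object (stubbed to anything exposing an async sample() method), and the prompt text is illustrative.

```python
import asyncio

# Sketch of the sampling-with-fallback pattern used by
# list_genres_simplified; `ctx` stands in for FastMCP's Context.
async def list_genres_simplified(ctx, genres):
    try:
        # Ask the client's LLM to reformat the data (LLM sampling).
        reply = await ctx.sample(
            f"Return a comma-separated list of genre names from: {genres}"
        )
        return [name.strip() for name in reply.text.split(",")]
    except Exception:
        # Fallback: the client does not support sampling, so extract
        # the names directly in Python.
        return [genre["name"] for genre in genres]

class NoSamplingClient:
    async def sample(self, prompt):
        raise RuntimeError("client does not support sampling")

genres = [{"id": 18, "name": "Drama"}, {"id": 35, "name": "Comedy"}]
print(asyncio.run(list_genres_simplified(NoSamplingClient(), genres)))
# ['Drama', 'Comedy']
```

The fallback keeps the tool usable with clients (like Claude Desktop, per the note below) that do not support sampling.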
How It Works
Client Support
Sampling requires the MCP client to support callbacks to its LLM, which is a security-sensitive feature:
Claude Desktop: Does NOT currently support sampling
Claude Code: Also unlikely to support it currently
Use Cases
While using a context merely to simplify the format of list_genres is overkill, the pattern demonstrates agent-to-agent communication, which is useful for:
Summarizing large documents
Analyzing sentiment
Making recommendations based on data
Multi-step workflows with decision points
Multiple Agents
This server includes configuration and tools to use multiple agents to work on a single task.
compare_llm_responses - Receives a prompt and fans it out to two agents (defaults to Claude and Ollama), constraining the responses by temperature and token limit.
How It Works
*Generally, Claude's response in this case will be null because we are asking to resample the existing Claude agent, which is not permitted by Anthropic.
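The fan-out shape of compare_llm_responses can be sketched with injectable backends. The function signature, parameter defaults, and stub backends below are assumptions for illustration; the real tool calls Claude and a local Ollama instance.

```python
# Hypothetical sketch of compare_llm_responses: send one prompt to
# several backends and collect their constrained responses side by side.
def compare_llm_responses(prompt, backends, temperature=0.2, max_tokens=256):
    results = {}
    for name, ask in backends.items():
        try:
            results[name] = ask(prompt, temperature, max_tokens)
        except Exception:
            # e.g. Claude returning null when asked to resample itself
            results[name] = None
    return results

def claude_stub(prompt, temperature, max_tokens):
    raise RuntimeError("resampling the current agent is not permitted")

def ollama_stub(prompt, temperature, max_tokens):
    return "A great sci-fi film balances spectacle with ideas."

backends = {"claude": claude_stub, "ollama": ollama_stub}
print(compare_llm_responses("What makes a great science fiction film?", backends))
# {'claude': None, 'ollama': 'A great sci-fi film balances spectacle with ideas.'}
```

Returning None rather than raising lets the caller see partial results when one backend fails, which matches the null-Claude behavior described above.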
Architecture
Project Structure
This project follows the Python src/ package layout to support convenient packaging and testing. Below is a simplified diagram of the project.
Dependencies
Python 3.12
FastMCP >=2.13.0 - MCP server framework; requires Python 3.10+
uv - package manager; installation instructions
Hatchling - build system
httpx - for API calls to TMDB
python-dotenv - for API key management
Ollama (optional) - local LLM runtime for multi-agent tools like compare_llm_responses; installation instructions
Setup
Create local development environment
Add TMDB API key as an environment variable
Get a free API key at TMDB by creating an account, going to account settings, and navigating to the API section.
Create a file called .env at the top level of the project. (This file is gitignored to prevent committing secrets.)
Copy the content of .env.example to your new file.
Replace your_tmdb_api_key_here in .env with the actual TMDB API key.
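Assuming the variable name used by .env.example is TMDB_API_KEY (check the example file for the actual name), the resulting .env looks like:

```shell
# .env — gitignored; never commit real secrets
TMDB_API_KEY=your_real_tmdb_api_key
```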
(optional) Setup Ollama
Ollama can be used as a second agent (in addition to Claude); the compare_llm_responses tool is one example of this usage.
Install Ollama
Start Ollama service
Pull the default model
Test Ollama is working
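The steps above might look like the following; these commands are assumptions (the install method assumes Homebrew on macOS, and the model name is illustrative — check Ollama's installation instructions and this project's configuration for the actual default model):

```shell
brew install ollama               # or use the installer from ollama.com
ollama serve                      # start the local Ollama service
ollama pull llama3.2              # pull the default model (name assumed)
ollama run llama3.2 "Say hello"   # quick smoke test
```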
Development
Run the MCP Server Locally
The server will start and communicate via stdio (standard input/output), which is the standard transport for local MCP servers.
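A typical invocation might look like the following; the entry-point path is an assumption about this repository's layout, so substitute the project's actual run command:

```shell
# Hypothetical: run the server under uv so dependencies resolve;
# FastMCP defaults to the stdio transport.
uv run fastmcp run src/greenroom/server.py
```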
NB: You should not run the server directly (e.g.
Inspect using MCP Inspector (web ui)
Run tests
Interacting with the MCP Server
This project does not yet include a frontend with which to exercise the server, but you can use Anthropic tooling to interact with it.
via Claude Code
Claude Code has native MCP client support so it can connect to your MCP server using the stdio transport, which the FastMCP server already uses.
Run the setup command
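The registration command has roughly this shape; the paths and run command are assumptions, so adapt them to your checkout:

```shell
# Hypothetical: register the server with Claude Code over stdio.
claude mcp add greenroom -- uv run --directory /path/to/greenroom fastmcp run src/greenroom/server.py
```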
Open Claude Code
Enter /mcp to view available MCP servers. Confirm that greenroom is one of them.
Exercise the server
Resources can be referenced with @ mentions
Tools will automatically be used during the conversation
Prompts show up as / slash commands
To explicitly test a tool, ask Claude to call the tool, e.g.
Call the <name-of-tool> tool from the MCP server called greenroom.
When you update the methods on the MCP server, you must rerun all of the above steps for the updates to be available to the Claude session.
Claude Code troubleshooting
When you run the setup command (claude mcp add), a configuration for that MCP server is added to your local Claude settings.
On my local machine, MCP configurations are stored at /Users/$USER_NAME/.claude.json.
Manual configuration of the MCP server in Claude settings:
Remove the server from Claude settings on your local machine. This might be useful if the configuration is incorrect; removing and then re-adding the server can be a good way to resolve configuration issues.
How It Works
The pyproject.toml file declares the fastmcp dependency, managed by uv.
When an agent (e.g. Claude Code) starts, it launches this MCP server as a subprocess using the configured command.
uv automatically manages the virtual environment and dependencies.
The server advertises its available resources and tools (e.g. via the tools/list JSON-RPC method).
During conversations, the agent can automatically call these tools when relevant.
The server executes the requested tool and returns results to the agent
The agent incorporates the results into its response to you
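For illustration, the advertisement step is a JSON-RPC exchange: the client sends {"jsonrpc": "2.0", "id": 1, "method": "tools/list"} and the server replies with its tool catalog. A response might look roughly like this (the tool's input schema is abbreviated and illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "discover_films",
        "description": "Retrieves films based on discovery criteria",
        "inputSchema": {
          "type": "object",
          "properties": {"genre": {"type": "string"}, "year": {"type": "integer"}}
        }
      }
    ]
  }
}
```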
Future Development
Add more media types (e.g., podcasts, books)
Add providers to augment data sources
Create an entertainment concierge experience (e.g., manager agent flow)