Integrates with the Google Gemini API to translate natural-language user queries into structured tool calls. The LLM analyzes user intent and generates appropriate function calls to tools exposed by the MCP server.
Leverages Pydantic models to define input/output schemas for tools, enabling automatic validation of tool parameters and return values. A short sketch of how these two pieces fit together follows.
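The sketch below is illustrative only, not the repo's actual code (see mcp_client_llm.py for the real client). It assumes the google-generativeai package with its automatic function declarations, Pydantic v2, and a hypothetical "average" tool; the model name and schemas are assumptions for this example.

```python
# Illustrative sketch: Gemini turns a natural-language query into a
# structured function call, and a Pydantic model validates the
# arguments before the tool runs. Names here are hypothetical.
import google.generativeai as genai
from pydantic import BaseModel, Field

genai.configure(api_key="YOUR_API_KEY")

class AverageArgs(BaseModel):
    """Input schema for the hypothetical 'average' tool."""
    numbers: list[float] = Field(min_length=1, description="Values to average")

def average(numbers: list[float]) -> float:
    """Compute the arithmetic mean of a list of numbers."""
    args = AverageArgs(numbers=list(numbers))  # raises ValidationError on bad input
    return sum(args.numbers) / len(args.numbers)

# Gemini derives a function declaration from the Python signature.
model = genai.GenerativeModel("gemini-1.5-flash", tools=[average])
chat = model.start_chat()
response = chat.send_message("Calculate the average of 15, 25, and 35")

# The model replies with a structured call rather than free text.
for part in response.parts:
    if part.function_call:
        name = part.function_call.name           # "average"
        kwargs = dict(part.function_call.args)   # {"numbers": [15.0, 25.0, 35.0]}
        print(name, kwargs, "->", average(**kwargs))
```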
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Model Context Protocol Demo calculate the average of 15, 25, and 35".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Model Context Protocol (MCP)
Modelcontextprotocol.io
Related MCP server: Yahoo Finance MCP Server
Reference Implementations
FastMCP
A high-level, fast, Pythonic way to build MCP servers and clients (a minimal server sketch follows this list)
Fast Agent
A framework with complete, end-to-end tested support for MCP features, including Sampling, that lets you create and interact with sophisticated agents and workflows in minutes. Both Anthropic models (Haiku, Sonnet, Opus) and OpenAI models (the gpt-4o/gpt-4.1 and o1/o3 families) are supported.
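As referenced above, here is a minimal FastMCP server sketch (fastmcp 2.x style). The server name and tool are illustrative, not the repo's actual mcp_server.py:

```python
# A minimal FastMCP server: register a tool with a decorator and run
# over the default stdio transport.
from fastmcp import FastMCP

mcp = FastMCP("Model Context Protocol Demo")

@mcp.tool()
def average(numbers: list[float]) -> float:
    """Return the arithmetic mean of a list of numbers."""
    return sum(numbers) / len(numbers)

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```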
Demos
FastMCP
A fastmcp tutorial
MCP server:
mcp_server.py: a collection of tools/resources such as a calculator, trigonometry functions, and a Yahoo Finance API call
MCP clients:
mcp_client_simple.py: simple client with rule-based query parsing
mcp_client_llm.py: client with LLM-based query parsing
mcp_client_llm_resource.py: client with LLM parsing and calling tool/resource (see readme_tool_resource.md for details)
see readme_gemini.md for more details (a minimal client sketch is shown below)
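A minimal FastMCP client sketch in the spirit of mcp_client_simple.py; the server script path and tool name are assumptions for this example:

```python
# Connect to a local server script over stdio, list its tools, and
# call one of them with structured arguments.
import asyncio
from fastmcp import Client

async def main():
    # Pointing the client at a .py path makes FastMCP infer the stdio transport.
    async with Client("mcp_server.py") as client:
        tools = await client.list_tools()
        print([t.name for t in tools])
        result = await client.call_tool("average", {"numbers": [15, 25, 35]})
        print(result)  # expected mean: 25.0

asyncio.run(main())
```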
Setup
Streamlit MCP app
see fastmcp/README.md
FastMCP - AWS
To Be Built
see readme_aws.md for more details
MCP SDK
see the mcp_sdk sub-folder (a low-level SDK sketch is shown below)
To Be Verified
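For orientation, here is a hedged sketch of a server built directly on the official MCP Python SDK's low-level API; it is independent of what the mcp_sdk sub-folder actually contains, and the tool is again hypothetical:

```python
# Low-level MCP SDK server: declare tools with explicit JSON Schemas
# and dispatch calls by name, running over stdio.
import anyio
import mcp.types as types
from mcp.server.lowlevel import Server
from mcp.server.stdio import stdio_server

server = Server("demo")

@server.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="average",
            description="Arithmetic mean of a list of numbers",
            inputSchema={
                "type": "object",
                "properties": {"numbers": {"type": "array", "items": {"type": "number"}}},
                "required": ["numbers"],
            },
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "average":
        numbers = arguments["numbers"]
        return [types.TextContent(type="text", text=str(sum(numbers) / len(numbers)))]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    async with stdio_server() as (read_stream, write_stream):
        await server.run(read_stream, write_stream, server.create_initialization_options())

if __name__ == "__main__":
    anyio.run(main)
```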