
MCP Server Gemini

by gurveeer

get_help

Access usage information and documentation for the Gemini MCP server, including tools, models, parameters, and examples.

Instructions

Get help and usage information for the Gemini MCP server

Input Schema

| Name  | Required | Description                          | Default  |
| ----- | -------- | ------------------------------------ | -------- |
| topic | No       | Help topic to get information about | overview |
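As a hedged illustration of how this schema is exercised, a `tools/call` request selecting the `models` topic might look like the sketch below (the envelope follows JSON-RPC 2.0; the `id` is arbitrary and transport framing varies by client):

```typescript
// Hypothetical JSON-RPC 2.0 request an MCP client might send to invoke
// get_help; omitting `arguments.topic` would yield the 'overview' default.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'get_help',
    arguments: { topic: 'models' }
  }
};

console.log(JSON.stringify(request, null, 2));
```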

Implementation Reference

  • The getHelp method is the main handler for the 'get_help' tool. It takes a topic parameter and returns the corresponding help content as an MCP response.
```typescript
private getHelp(id: any, args: any): MCPResponse {
  const topic = args?.topic || 'overview';
  let helpContent = '';

  switch (topic) {
    case 'overview':
      helpContent = this.getHelpContent('overview');
      break;

    case 'tools':
      helpContent = this.getHelpContent('tools');
      break;

    case 'models':
      helpContent = `# Available Gemini Models

## Thinking Models (Latest - 2.5 Series)
**gemini-2.5-pro**
- Most capable, best for complex reasoning
- 2M token context window
- Features: thinking, JSON mode, grounding, system instructions

**gemini-2.5-flash** ⭐ Recommended
- Best balance of speed and capability
- 1M token context window
- Features: thinking, JSON mode, grounding, system instructions

**gemini-2.5-flash-lite**
- Ultra-fast, cost-efficient
- 1M token context window
- Features: thinking, JSON mode, system instructions

## Standard Models (2.0 Series)
**gemini-2.0-flash**
- Fast and efficient
- 1M token context window
- Features: JSON mode, grounding, system instructions

**gemini-2.0-flash-lite**
- Most cost-efficient
- 1M token context window
- Features: JSON mode, system instructions

**gemini-2.0-pro-experimental**
- Excellent for coding
- 2M token context window
- Features: JSON mode, grounding, system instructions

## Model Selection Guide
- Complex reasoning: gemini-2.5-pro
- General use: gemini-2.5-flash
- Fast responses: gemini-2.5-flash-lite
- Cost-sensitive: gemini-2.0-flash-lite
- Coding tasks: gemini-2.0-pro-experimental`;
      break;

    case 'parameters':
      helpContent = this.getHelpContent('parameters');
      break;

    case 'examples':
      helpContent = this.getHelpContent('examples');
      break;

    case 'quick-start':
      helpContent = `# Quick Start Guide

## 1. Basic Usage
Just ask naturally:
- "Use Gemini to [your request]"
- "Ask Gemini about [topic]"

## 2. Common Tasks

**Text Generation:**
"Use Gemini to write a function that sorts arrays"

**Image Analysis:**
"What's in this image?" [attach image]

**Model Info:**
"List all Gemini models"

**Token Counting:**
"Count tokens for my prompt"

## 3. Advanced Features

**JSON Output:**
"Use Gemini in JSON mode to extract key points"

**Current Information:**
"Use Gemini with grounding to get latest news"

**Conversations:**
"Start a chat with Gemini about Python"

## 4. Tips
- Use gemini-2.5-flash for most tasks
- Lower temperature for facts, higher for creativity
- Enable grounding for current information
- Use conversation IDs to maintain context

## Need More Help?
- "Get help on tools" - Detailed tool information
- "Get help on parameters" - All parameters explained
- "Get help on models" - Model selection guide`;
      break;

    default:
      helpContent =
        'Unknown help topic. Available topics: overview, tools, models, parameters, examples, quick-start';
  }

  return {
    jsonrpc: '2.0',
    id,
    result: {
      content: [
        {
          type: 'text',
          text: helpContent
        }
      ]
    }
  };
}
```
  • Zod schema definition for validating 'get_help' tool parameters in ToolSchemas.
```typescript
getHelp: z.object({
  topic: z
    .enum(['overview', 'tools', 'models', 'parameters', 'examples', 'quick-start'])
    .optional()
})
```
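A minimal dependency-free sketch of the semantics this schema encodes (hypothetical helpers, not part of the server): a missing topic resolves to the documented 'overview' default, while an unrecognized topic is rejected rather than silently coerced — the handler's default branch reports it as an unknown topic.

```typescript
const HELP_TOPICS = [
  'overview', 'tools', 'models', 'parameters', 'examples', 'quick-start'
] as const;
type HelpTopic = (typeof HELP_TOPICS)[number];

// Hypothetical type guard mirroring the Zod enum.
function isHelpTopic(value: string): value is HelpTopic {
  return (HELP_TOPICS as readonly string[]).includes(value);
}

// Missing topic -> 'overview' default; unknown topic -> null, which the
// server surfaces as an unknown-topic message instead of falling back.
function resolveTopic(raw: string | undefined): HelpTopic | null {
  if (raw === undefined) return 'overview';
  return isHelpTopic(raw) ? raw : null;
}
```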
  • The tool registration in the getAvailableTools() method, including the name, description, and input schema.
```typescript
{
  name: 'get_help',
  description: 'Get help and usage information for the Gemini MCP server',
  inputSchema: {
    type: 'object',
    properties: {
      topic: {
        type: 'string',
        description: 'Help topic to get information about',
        enum: ['overview', 'tools', 'models', 'parameters', 'examples', 'quick-start'],
        default: 'overview'
      }
    }
  }
}
```
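Because the registration declares no `required` array, an empty arguments object is schema-valid and the declared default applies. A small client-side sketch (hypothetical plain-object shape, outside any MCP SDK):

```typescript
// Hypothetical: the registration entry as a client would receive it in a
// tools/list result.
const getHelpTool = {
  name: 'get_help',
  inputSchema: {
    type: 'object',
    properties: {
      topic: {
        type: 'string',
        enum: ['overview', 'tools', 'models', 'parameters', 'examples', 'quick-start'],
        default: 'overview'
      }
    }
  }
};

// No `required` array means every property is optional, so `{}` is a
// valid arguments object for this tool.
const allOptional = !('required' in getHelpTool.inputSchema);
const topicEnum = getHelpTool.inputSchema.properties.topic.enum;
console.log(allOptional, topicEnum.length);
```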
  • The getHelpContent helper method provides the static help text for each topic and is called by the get_help handler.
```typescript
private getHelpContent(topic: string): string {
  // Extract help content generation to a separate method
  switch (topic) {
    case 'overview':
      return `# Gemini MCP Server Help

Welcome to the Gemini MCP Server v4.1.0! This server provides access to Google's Gemini AI models through Claude Desktop.

## Available Tools
1. **generate_text** - Generate text with advanced features
2. **analyze_image** - Analyze images using vision models
3. **count_tokens** - Count tokens for cost estimation
4. **list_models** - List all available models
5. **embed_text** - Generate text embeddings
6. **get_help** - Get help on using this server

## Quick Start
- "Use Gemini to explain [topic]"
- "Analyze this image with Gemini"
- "List all Gemini models"
- "Get help on parameters"

## Key Features
- Latest Gemini 2.5 models with thinking capabilities
- JSON mode for structured output
- Google Search grounding for current information
- System instructions for behavior control
- Conversation memory for context
- Safety settings customization

Use "get help on tools" for detailed tool information.`;

    case 'tools':
      return `# Available Tools

## 1. generate_text
Generate text using Gemini models with advanced features.

**Parameters:**
- prompt (required): Your text prompt
- model: Choose from gemini-2.5-pro, gemini-2.5-flash, etc.
- temperature: 0-2 (default 0.7)
- maxTokens: Max output tokens (default 2048)
- systemInstruction: Guide model behavior
- jsonMode: Enable JSON output
- grounding: Enable Google Search
- conversationId: Maintain conversation context

**Example:** "Use Gemini 2.5 Pro to explain quantum computing"

## 2. analyze_image
Analyze images using vision-capable models.

**Parameters:**
- prompt (required): Question about the image
- imageUrl OR imageBase64 (required): Image source
- model: Vision-capable model (default gemini-2.5-flash)

**Example:** "Analyze this architecture diagram"

## 3. count_tokens
Count tokens for text with a specific model.

**Parameters:**
- text (required): Text to count
- model: Model for counting (default gemini-2.5-flash)

**Example:** "Count tokens for this paragraph"

## 4. list_models
List available models with optional filtering.

**Parameters:**
- filter: all, thinking, vision, grounding, json_mode

**Example:** "List models with thinking capability"

## 5. embed_text
Generate embeddings for semantic search.

**Parameters:**
- text (required): Text to embed
- model: text-embedding-004 or text-multilingual-embedding-002

**Example:** "Generate embeddings for similarity search"

## 6. get_help
Get help on using this server.

**Parameters:**
- topic: overview, tools, models, parameters, examples, quick-start

**Example:** "Get help on parameters"`;

    case 'parameters':
      return `# Parameter Reference

## generate_text Parameters

**Required:**
- prompt (string): Your text prompt

**Optional:**
- model (string): Model to use (default: gemini-2.5-flash)
- systemInstruction (string): System prompt for behavior
- temperature (0-2): Creativity level (default: 0.7)
- maxTokens (number): Max output tokens (default: 2048)
- topK (number): Top-k sampling (default: 40)
- topP (number): Nucleus sampling (default: 0.95)
- jsonMode (boolean): Enable JSON output
- jsonSchema (object): JSON schema for validation
- grounding (boolean): Enable Google Search
- conversationId (string): Conversation identifier
- safetySettings (array): Content filtering settings

## Temperature Guide
- 0.1-0.3: Precise, factual
- 0.5-0.8: Balanced (default 0.7)
- 1.0-1.5: Creative
- 1.5-2.0: Very creative

## JSON Mode Example
Enable jsonMode and provide jsonSchema:
{
  "type": "object",
  "properties": {
    "sentiment": {"type": "string"},
    "score": {"type": "number"}
  }
}

## Safety Settings
Categories: HARASSMENT, HATE_SPEECH, SEXUALLY_EXPLICIT, DANGEROUS_CONTENT
Thresholds: BLOCK_NONE, BLOCK_ONLY_HIGH, BLOCK_MEDIUM_AND_ABOVE, BLOCK_LOW_AND_ABOVE`;

    case 'examples':
      return `# Usage Examples

## Basic Text Generation
"Use Gemini to explain machine learning"

## With Specific Model
"Use Gemini 2.5 Pro to write a Python sorting function"

## With Temperature
"Use Gemini with temperature 1.5 to write a creative story"

## JSON Mode
"Use Gemini in JSON mode to analyze sentiment and return {sentiment, confidence, keywords}"

## With Grounding
"Use Gemini with grounding to research latest AI developments"

## System Instructions
"Use Gemini as a Python tutor to explain decorators"

## Conversation Context
"Start conversation 'chat-001' about web development"
"Continue chat-001 and ask about React hooks"

## Image Analysis
"Analyze this screenshot and describe the UI elements"

## Token Counting
"Count tokens for this document using gemini-2.5-pro"

## Complex Example
"Use Gemini 2.5 Pro to review this code with:
- System instruction: 'You are a security expert'
- Temperature: 0.3
- JSON mode with schema for findings
- Grounding for latest security practices"`;

    default:
      return 'Unknown help topic.';
  }
}
```
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves help information, implying a read-only operation, but doesn't specify if it requires authentication, has rate limits, returns structured or unstructured data, or handles errors. For a tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that efficiently conveys the core purpose without unnecessary words. It's front-loaded with the main action, making it easy to parse. It could hint at how the topic parameter shapes the output, but overall it is concise and well-formed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one optional parameter with full schema coverage) and lack of output schema, the description is minimally adequate. It states what the tool does but doesn't cover behavioral aspects like response format or error handling, which are important for a help tool. With no annotations, it should provide more context to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'topic' parameter fully documented including its enum values and default. The description doesn't add any semantic details beyond what the schema provides, such as explaining what each topic covers or how the help is formatted. Given the high schema coverage, the baseline score of 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get help and usage information for the Gemini MCP server.' It uses a specific verb ('Get') and identifies the resource ('help and usage information'), though it doesn't explicitly differentiate from sibling tools like 'list_models' which might provide model information. The purpose is unambiguous but lacks sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context for usage, or compare it to sibling tools like 'list_models' for model info or 'generate_text' for examples. The agent must infer usage from the purpose alone, which is insufficient for optimal tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

