
Gemini MCP Server

by aliargun

get_help

Access help and usage information for the Gemini MCP server, including tools, models, parameters, and examples.

Instructions

Get help and usage information for the Gemini MCP server

Input Schema

| Name  | Required | Description                         | Default  |
|-------|----------|-------------------------------------|----------|
| topic | No       | Help topic to get information about | overview |
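
Assuming standard MCP 'tools/call' semantics (the method name and JSON-RPC envelope come from the MCP specification, not from this page), a request invoking 'get_help' might look like this sketch:

```typescript
// Hypothetical tools/call request for the get_help tool. Only the
// "topic" argument is defined by this tool's schema; it is optional
// and defaults to "overview" when omitted.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'get_help',
    arguments: { topic: 'models' }
  }
};

console.log(JSON.stringify(request));
```

Omitting 'arguments' entirely should behave the same as passing '{ topic: "overview" }', per the schema default.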

Implementation Reference

  • The handler function that executes the 'get_help' tool logic. It extracts the topic from arguments, selects appropriate help content (either hardcoded or from helper), and returns it as an MCP text response.
```typescript
private getHelp(id: any, args: any): MCPResponse {
  const topic = args?.topic || 'overview';
  let helpContent = '';

  switch (topic) {
    case 'overview':
      helpContent = this.getHelpContent('overview');
      break;

    case 'tools':
      helpContent = this.getHelpContent('tools');
      break;

    case 'models':
      helpContent = `# Available Gemini Models

## Thinking Models (Latest - 2.5 Series)
**gemini-2.5-pro**
- Most capable, best for complex reasoning
- 2M token context window
- Features: thinking, JSON mode, grounding, system instructions

**gemini-2.5-flash** ⭐ Recommended
- Best balance of speed and capability
- 1M token context window
- Features: thinking, JSON mode, grounding, system instructions

**gemini-2.5-flash-lite**
- Ultra-fast, cost-efficient
- 1M token context window
- Features: thinking, JSON mode, system instructions

## Standard Models (2.0 Series)
**gemini-2.0-flash**
- Fast and efficient
- 1M token context window
- Features: JSON mode, grounding, system instructions

**gemini-2.0-flash-lite**
- Most cost-efficient
- 1M token context window
- Features: JSON mode, system instructions

**gemini-2.0-pro-experimental**
- Excellent for coding
- 2M token context window
- Features: JSON mode, grounding, system instructions

## Model Selection Guide
- Complex reasoning: gemini-2.5-pro
- General use: gemini-2.5-flash
- Fast responses: gemini-2.5-flash-lite
- Cost-sensitive: gemini-2.0-flash-lite
- Coding tasks: gemini-2.0-pro-experimental`;
      break;

    case 'parameters':
      helpContent = this.getHelpContent('parameters');
      break;

    case 'examples':
      helpContent = this.getHelpContent('examples');
      break;

    case 'quick-start':
      helpContent = `# Quick Start Guide

## 1. Basic Usage
Just ask naturally:
- "Use Gemini to [your request]"
- "Ask Gemini about [topic]"

## 2. Common Tasks

**Text Generation:**
"Use Gemini to write a function that sorts arrays"

**Image Analysis:**
"What's in this image?" [attach image]

**Model Info:**
"List all Gemini models"

**Token Counting:**
"Count tokens for my prompt"

## 3. Advanced Features

**JSON Output:**
"Use Gemini in JSON mode to extract key points"

**Current Information:**
"Use Gemini with grounding to get latest news"

**Conversations:**
"Start a chat with Gemini about Python"

## 4. Tips
- Use gemini-2.5-flash for most tasks
- Lower temperature for facts, higher for creativity
- Enable grounding for current information
- Use conversation IDs to maintain context

## Need More Help?
- "Get help on tools" - Detailed tool information
- "Get help on parameters" - All parameters explained
- "Get help on models" - Model selection guide`;
      break;

    default:
      helpContent = 'Unknown help topic. Available topics: overview, tools, models, parameters, examples, quick-start';
  }

  return {
    jsonrpc: '2.0',
    id,
    result: {
      content: [{
        type: 'text',
        text: helpContent
      }]
    }
  };
}
```
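
The defaulting and fallback behavior above can be sketched standalone (a minimal re-implementation for illustration, not the server's actual module):

```typescript
// Minimal sketch of the topic-dispatch logic: a missing topic falls
// back to 'overview', and anything outside the enum hits the default
// (unknown-topic) branch, exactly as in the handler above.
const KNOWN_TOPICS = ['overview', 'tools', 'models', 'parameters', 'examples', 'quick-start'];

function resolveTopic(args?: { topic?: string }): string {
  const topic = args?.topic || 'overview';  // missing topic -> default
  return KNOWN_TOPICS.includes(topic)
    ? topic
    : 'unknown';                            // unlisted topic -> error-text path
}

console.log(resolveTopic());                    // "overview"
console.log(resolveTopic({ topic: 'models' })); // "models"
console.log(resolveTopic({ topic: 'oops' }));   // "unknown"
```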
  • The input schema definition for the 'get_help' tool, including the optional 'topic' parameter with enum values, as part of the tool registration in getTools().
```typescript
name: 'get_help',
description: 'Get help and usage information for the Gemini MCP server',
inputSchema: {
  type: 'object',
  properties: {
    topic: {
      type: 'string',
      description: 'Help topic to get information about',
      enum: ['overview', 'tools', 'models', 'parameters', 'examples', 'quick-start'],
      default: 'overview'
    }
  }
}
```
  • Supporting helper method that provides the static markdown content for different help topics, called by the getHelp handler.
```typescript
private getHelpContent(topic: string): string {
  // Extract help content generation to a separate method
  switch (topic) {
    case 'overview':
      return `# Gemini MCP Server Help

Welcome to the Gemini MCP Server v4.1.0! This server provides access to Google's Gemini AI models through Claude Desktop.

## Available Tools
1. **generate_text** - Generate text with advanced features
2. **analyze_image** - Analyze images using vision models
3. **count_tokens** - Count tokens for cost estimation
4. **list_models** - List all available models
5. **embed_text** - Generate text embeddings
6. **get_help** - Get help on using this server

## Quick Start
- "Use Gemini to explain [topic]"
- "Analyze this image with Gemini"
- "List all Gemini models"
- "Get help on parameters"

## Key Features
- Latest Gemini 2.5 models with thinking capabilities
- JSON mode for structured output
- Google Search grounding for current information
- System instructions for behavior control
- Conversation memory for context
- Safety settings customization

Use "get help on tools" for detailed tool information.`;

    case 'tools':
      return `# Available Tools

## 1. generate_text
Generate text using Gemini models with advanced features.

**Parameters:**
- prompt (required): Your text prompt
- model: Choose from gemini-2.5-pro, gemini-2.5-flash, etc.
- temperature: 0-2 (default 0.7)
- maxTokens: Max output tokens (default 2048)
- systemInstruction: Guide model behavior
- jsonMode: Enable JSON output
- grounding: Enable Google Search
- conversationId: Maintain conversation context

**Example:** "Use Gemini 2.5 Pro to explain quantum computing"

## 2. analyze_image
Analyze images using vision-capable models.

**Parameters:**
- prompt (required): Question about the image
- imageUrl OR imageBase64 (required): Image source
- model: Vision-capable model (default gemini-2.5-flash)

**Example:** "Analyze this architecture diagram"

## 3. count_tokens
Count tokens for text with a specific model.

**Parameters:**
- text (required): Text to count
- model: Model for counting (default gemini-2.5-flash)

**Example:** "Count tokens for this paragraph"

## 4. list_models
List available models with optional filtering.

**Parameters:**
- filter: all, thinking, vision, grounding, json_mode

**Example:** "List models with thinking capability"

## 5. embed_text
Generate embeddings for semantic search.

**Parameters:**
- text (required): Text to embed
- model: text-embedding-004 or text-multilingual-embedding-002

**Example:** "Generate embeddings for similarity search"

## 6. get_help
Get help on using this server.

**Parameters:**
- topic: overview, tools, models, parameters, examples, quick-start

**Example:** "Get help on parameters"`;

    case 'parameters':
      return `# Parameter Reference

## generate_text Parameters

**Required:**
- prompt (string): Your text prompt

**Optional:**
- model (string): Model to use (default: gemini-2.5-flash)
- systemInstruction (string): System prompt for behavior
- temperature (0-2): Creativity level (default: 0.7)
- maxTokens (number): Max output tokens (default: 2048)
- topK (number): Top-k sampling (default: 40)
- topP (number): Nucleus sampling (default: 0.95)
- jsonMode (boolean): Enable JSON output
- jsonSchema (object): JSON schema for validation
- grounding (boolean): Enable Google Search
- conversationId (string): Conversation identifier
- safetySettings (array): Content filtering settings

## Temperature Guide
- 0.1-0.3: Precise, factual
- 0.5-0.8: Balanced (default 0.7)
- 1.0-1.5: Creative
- 1.5-2.0: Very creative

## JSON Mode Example
Enable jsonMode and provide jsonSchema:
{
  "type": "object",
  "properties": {
    "sentiment": {"type": "string"},
    "score": {"type": "number"}
  }
}

## Safety Settings
Categories: HARASSMENT, HATE_SPEECH, SEXUALLY_EXPLICIT, DANGEROUS_CONTENT
Thresholds: BLOCK_NONE, BLOCK_ONLY_HIGH, BLOCK_MEDIUM_AND_ABOVE, BLOCK_LOW_AND_ABOVE`;

    case 'examples':
      return `# Usage Examples

## Basic Text Generation
"Use Gemini to explain machine learning"

## With Specific Model
"Use Gemini 2.5 Pro to write a Python sorting function"

## With Temperature
"Use Gemini with temperature 1.5 to write a creative story"

## JSON Mode
"Use Gemini in JSON mode to analyze sentiment and return {sentiment, confidence, keywords}"

## With Grounding
"Use Gemini with grounding to research latest AI developments"

## System Instructions
"Use Gemini as a Python tutor to explain decorators"

## Conversation Context
"Start conversation 'chat-001' about web development"
"Continue chat-001 and ask about React hooks"

## Image Analysis
"Analyze this screenshot and describe the UI elements"

## Token Counting
"Count tokens for this document using gemini-2.5-pro"

## Complex Example
"Use Gemini 2.5 Pro to review this code with:
- System instruction: 'You are a security expert'
- Temperature: 0.3
- JSON mode with schema for findings
- Grounding for latest security practices"`;

    default:
      return 'Unknown help topic.';
  }
}
```
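
On the client side, unwrapping the text payload from the response shape built in getHelp can be sketched as follows (the content array and type/text field names are taken from the handler above; the function itself is illustrative):

```typescript
// Extract the help text from a tools/call result. The shape mirrors
// the MCPResponse result built by getHelp: a content array of items
// with type: 'text' and a text field.
interface ContentItem { type: string; text?: string }

function extractHelpText(result: { content: ContentItem[] }): string {
  return result.content
    .filter(c => c.type === 'text' && typeof c.text === 'string')
    .map(c => c.text as string)
    .join('\n');
}

console.log(extractHelpText({ content: [{ type: 'text', text: '# Quick Start Guide' }] }));
// "# Quick Start Guide"
```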
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'Get help and usage information' implies a read-only, non-destructive operation, it doesn't specify what form the help takes (e.g., structured documentation, examples, error messages), whether authentication is required, or whether any rate limits apply. Beyond the basic purpose, the description says too little about the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that efficiently conveys the core purpose without unnecessary words. It is appropriately sized for a simple tool and front-loads the essential information, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one optional parameter with full schema coverage) and lack of annotations/output schema, the description is minimally adequate. It states what the tool does but lacks details on behavioral context, usage guidance, or output format. For a help tool that might return varied documentation, more completeness would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'topic' fully documented in the schema (including description, enum values, and default). The description adds no additional parameter semantics beyond what the schema provides, so it meets the baseline score of 3 for high schema coverage without compensating value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get help and usage information') and resource ('for the Gemini MCP server'), making it immediately understandable. However, it doesn't explicitly differentiate from sibling tools, which are all distinct operations (image analysis, token counting, text embedding, text generation, model listing) rather than help/documentation functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, timing considerations, or how it relates to the sibling tools (e.g., whether to use this before invoking other tools for guidance). The agent must infer usage context from the tool name and description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
