
get_help

Access help and usage information for the Gemini MCP Server, including tools, models, parameters, examples, and quick-start guides.

Instructions

Get help and usage information for the Gemini MCP server

Input Schema

| Name | Required | Description | Default |
|-------|----------|--------------------------------------|----------|
| topic | No | Help topic to get information about | overview |
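
For illustration, a client invokes this tool through a standard MCP tools/call request; the following is a minimal sketch of that JSON-RPC payload as a TypeScript object (the id value and chosen topic are arbitrary examples):

```typescript
// Illustrative JSON-RPC 2.0 payload for calling get_help over MCP.
// The id value and topic are arbitrary examples.
const getHelpRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'get_help',
    arguments: { topic: 'models' }, // omit 'topic' to fall back to 'overview'
  },
};
```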

Implementation Reference

  • The primary handler function for the 'get_help' tool. It takes a request ID and arguments, resolves the help topic, generates the matching content from helper methods or inline strings, and returns an MCPResponse containing the help text.
```typescript
private getHelp(id: any, args: any): MCPResponse {
  const topic = args?.topic || 'overview';
  let helpContent = '';
  switch (topic) {
    case 'overview':
      helpContent = this.getHelpContent('overview');
      break;
    case 'tools':
      helpContent = this.getHelpContent('tools');
      break;
    case 'models':
      helpContent = `# Available Gemini Models

## Thinking Models (Latest - 2.5 Series)

**gemini-2.5-pro**
- Most capable, best for complex reasoning
- 2M token context window
- Features: thinking, JSON mode, grounding, system instructions

**gemini-2.5-flash** ⭐ Recommended
- Best balance of speed and capability
- 1M token context window
- Features: thinking, JSON mode, grounding, system instructions

**gemini-2.5-flash-lite**
- Ultra-fast, cost-efficient
- 1M token context window
- Features: thinking, JSON mode, system instructions

## Standard Models (2.0 Series)

**gemini-2.0-flash**
- Fast and efficient
- 1M token context window
- Features: JSON mode, grounding, system instructions

**gemini-2.0-flash-lite**
- Most cost-efficient
- 1M token context window
- Features: JSON mode, system instructions

**gemini-2.0-pro-experimental**
- Excellent for coding
- 2M token context window
- Features: JSON mode, grounding, system instructions

## Model Selection Guide
- Complex reasoning: gemini-2.5-pro
- General use: gemini-2.5-flash
- Fast responses: gemini-2.5-flash-lite
- Cost-sensitive: gemini-2.0-flash-lite
- Coding tasks: gemini-2.0-pro-experimental`;
      break;
    case 'parameters':
      helpContent = this.getHelpContent('parameters');
      break;
    case 'examples':
      helpContent = this.getHelpContent('examples');
      break;
    case 'quick-start':
      helpContent = `# Quick Start Guide

## 1. Basic Usage
Just ask naturally:
- "Use Gemini to [your request]"
- "Ask Gemini about [topic]"

## 2. Common Tasks
**Text Generation:** "Use Gemini to write a function that sorts arrays"
**Image Analysis:** "What's in this image?" [attach image]
**Model Info:** "List all Gemini models"
**Token Counting:** "Count tokens for my prompt"

## 3. Advanced Features
**JSON Output:** "Use Gemini in JSON mode to extract key points"
**Current Information:** "Use Gemini with grounding to get latest news"
**Conversations:** "Start a chat with Gemini about Python"

## 4. Tips
- Use gemini-2.5-flash for most tasks
- Lower temperature for facts, higher for creativity
- Enable grounding for current information
- Use conversation IDs to maintain context

## Need More Help?
- "Get help on tools" - Detailed tool information
- "Get help on parameters" - All parameters explained
- "Get help on models" - Model selection guide`;
      break;
    default:
      helpContent = 'Unknown help topic. Available topics: overview, tools, models, parameters, examples, quick-start';
  }
  return {
    jsonrpc: '2.0',
    id,
    result: {
      content: [{ type: 'text', text: helpContent }]
    }
  };
}
```
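
Given the handler above, a successful call yields a JSON-RPC result whose content array carries the markdown help text. A sketch of that shape, with the text truncated for brevity:

```typescript
// Shape of the MCPResponse returned by getHelp (help text truncated here).
const getHelpResponse = {
  jsonrpc: '2.0',
  id: 1,
  result: {
    content: [
      { type: 'text', text: '# Quick Start Guide\n\n## 1. Basic Usage\n...' },
    ],
  },
};
```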
  • Registration of the 'get_help' tool in the getAvailableTools() method, whose result is served by tools/list. Includes the tool name, description, and input schema.
```typescript
{
  name: 'get_help',
  description: 'Get help and usage information for the Gemini MCP server',
  inputSchema: {
    type: 'object',
    properties: {
      topic: {
        type: 'string',
        description: 'Help topic to get information about',
        enum: ['overview', 'tools', 'models', 'parameters', 'examples', 'quick-start'],
        default: 'overview'
      }
    }
  }
}
```
  • The input schema defining the parameters accepted by the get_help tool, including the optional 'topic' parameter with enumerated values.
```typescript
inputSchema: {
  type: 'object',
  properties: {
    topic: {
      type: 'string',
      description: 'Help topic to get information about',
      enum: ['overview', 'tools', 'models', 'parameters', 'examples', 'quick-start'],
      default: 'overview'
    }
  }
}
```
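
Because the schema enumerates the accepted topics, it maps cleanly onto a TypeScript union type. The names below (HelpTopic, GetHelpArgs) are hypothetical and for illustration only; they are not part of the server code:

```typescript
// Hypothetical types mirroring the get_help input schema.
type HelpTopic =
  | 'overview'
  | 'tools'
  | 'models'
  | 'parameters'
  | 'examples'
  | 'quick-start';

interface GetHelpArgs {
  topic?: HelpTopic; // optional; the handler defaults to 'overview'
}
```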
  • Supporting helper method that generates markdown-formatted help content for specific topics, used by the getHelp handler and resource reads.
```typescript
private getHelpContent(topic: string): string {
  // Extract help content generation to a separate method
  switch (topic) {
    case 'overview':
      return `# Gemini MCP Server Help

Welcome to the Gemini MCP Server v4.1.0! This server provides access to Google's Gemini AI models through Claude Desktop.

## Available Tools
1. **generate_text** - Generate text with advanced features
2. **analyze_image** - Analyze images using vision models
3. **count_tokens** - Count tokens for cost estimation
4. **list_models** - List all available models
5. **embed_text** - Generate text embeddings
6. **get_help** - Get help on using this server

## Quick Start
- "Use Gemini to explain [topic]"
- "Analyze this image with Gemini"
- "List all Gemini models"
- "Get help on parameters"

## Key Features
- Latest Gemini 2.5 models with thinking capabilities
- JSON mode for structured output
- Google Search grounding for current information
- System instructions for behavior control
- Conversation memory for context
- Safety settings customization

Use "get help on tools" for detailed tool information.`;
    case 'tools':
      return `# Available Tools

## 1. generate_text
Generate text using Gemini models with advanced features.

**Parameters:**
- prompt (required): Your text prompt
- model: Choose from gemini-2.5-pro, gemini-2.5-flash, etc.
- temperature: 0-2 (default 0.7)
- maxTokens: Max output tokens (default 2048)
- systemInstruction: Guide model behavior
- jsonMode: Enable JSON output
- grounding: Enable Google Search
- conversationId: Maintain conversation context

**Example:** "Use Gemini 2.5 Pro to explain quantum computing"

## 2. analyze_image
Analyze images using vision-capable models.

**Parameters:**
- prompt (required): Question about the image
- imageUrl OR imageBase64 (required): Image source
- model: Vision-capable model (default gemini-2.5-flash)

**Example:** "Analyze this architecture diagram"

## 3. count_tokens
Count tokens for text with a specific model.

**Parameters:**
- text (required): Text to count
- model: Model for counting (default gemini-2.5-flash)

**Example:** "Count tokens for this paragraph"

## 4. list_models
List available models with optional filtering.

**Parameters:**
- filter: all, thinking, vision, grounding, json_mode

**Example:** "List models with thinking capability"

## 5. embed_text
Generate embeddings for semantic search.

**Parameters:**
- text (required): Text to embed
- model: text-embedding-004 or text-multilingual-embedding-002

**Example:** "Generate embeddings for similarity search"

## 6. get_help
Get help on using this server.

**Parameters:**
- topic: overview, tools, models, parameters, examples, quick-start

**Example:** "Get help on parameters"`;
    case 'parameters':
      return `# Parameter Reference

## generate_text Parameters

**Required:**
- prompt (string): Your text prompt

**Optional:**
- model (string): Model to use (default: gemini-2.5-flash)
- systemInstruction (string): System prompt for behavior
- temperature (0-2): Creativity level (default: 0.7)
- maxTokens (number): Max output tokens (default: 2048)
- topK (number): Top-k sampling (default: 40)
- topP (number): Nucleus sampling (default: 0.95)
- jsonMode (boolean): Enable JSON output
- jsonSchema (object): JSON schema for validation
- grounding (boolean): Enable Google Search
- conversationId (string): Conversation identifier
- safetySettings (array): Content filtering settings

## Temperature Guide
- 0.1-0.3: Precise, factual
- 0.5-0.8: Balanced (default 0.7)
- 1.0-1.5: Creative
- 1.5-2.0: Very creative

## JSON Mode Example
Enable jsonMode and provide jsonSchema:
{
  "type": "object",
  "properties": {
    "sentiment": {"type": "string"},
    "score": {"type": "number"}
  }
}

## Safety Settings
Categories: HARASSMENT, HATE_SPEECH, SEXUALLY_EXPLICIT, DANGEROUS_CONTENT
Thresholds: BLOCK_NONE, BLOCK_ONLY_HIGH, BLOCK_MEDIUM_AND_ABOVE, BLOCK_LOW_AND_ABOVE`;
    case 'examples':
      return `# Usage Examples

## Basic Text Generation
"Use Gemini to explain machine learning"

## With Specific Model
"Use Gemini 2.5 Pro to write a Python sorting function"

## With Temperature
"Use Gemini with temperature 1.5 to write a creative story"

## JSON Mode
"Use Gemini in JSON mode to analyze sentiment and return {sentiment, confidence, keywords}"

## With Grounding
"Use Gemini with grounding to research latest AI developments"

## System Instructions
"Use Gemini as a Python tutor to explain decorators"

## Conversation Context
"Start conversation 'chat-001' about web development"
"Continue chat-001 and ask about React hooks"

## Image Analysis
"Analyze this screenshot and describe the UI elements"

## Token Counting
"Count tokens for this document using gemini-2.5-pro"

## Complex Example
"Use Gemini 2.5 Pro to review this code with:
- System instruction: 'You are a security expert'
- Temperature: 0.3
- JSON mode with schema for findings
- Grounding for latest security practices"`;
    default:
      return 'Unknown help topic.';
  }
}
```
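
Since the description notes that getHelpContent also backs resource reads, the following is a hypothetical sketch of how it could serve an MCP resources/read result. The 'help://' URI scheme, function name, and wiring are illustrative only, not the server's actual resource implementation:

```typescript
// Hypothetical sketch: reusing getHelpContent for a resources/read response.
// The 'help://' scheme and readHelpResource name are assumptions.
type HelpContentFn = (topic: string) => string;

function readHelpResource(id: number, uri: string, getHelpContent: HelpContentFn) {
  const topic = uri.replace('help://', ''); // e.g. 'help://models' -> 'models'
  return {
    jsonrpc: '2.0' as const,
    id,
    result: {
      contents: [
        { uri, mimeType: 'text/markdown', text: getHelpContent(topic) },
      ],
    },
  };
}
```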

MCP directory API

We provide all the information about MCP servers via our MCP API.

```bash
curl -X GET 'https://glama.ai/api/mcp/v1/servers/aliargun/mcp-server-gemini'
```
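
A rough TypeScript equivalent of the curl request above, using the fetch built into Node 18+ and browsers; the shape of the returned JSON is defined by the Glama directory API and not shown here:

```typescript
// Fetch the directory entry for this server and print the JSON response.
async function fetchServerInfo(): Promise<unknown> {
  const res = await fetch(
    'https://glama.ai/api/mcp/v1/servers/aliargun/mcp-server-gemini'
  );
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return res.json();
}

fetchServerInfo().then(info => console.log(info));
```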

If you have feedback or need assistance with the MCP directory API, please join our Discord server.