# get_help
Access help and usage information for the Gemini MCP server, including tools, models, parameters, and examples.
## Instructions
Get help and usage information for the Gemini MCP server
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| topic | No | Help topic to get information about | overview |
## Implementation Reference
- **src/enhanced-stdio-server.ts:1104-1227 (handler)** — The handler that executes the `get_help` tool logic. It extracts the topic from the arguments, selects the appropriate help content (either hardcoded inline or from the `getHelpContent` helper), and returns it as an MCP text response.

```typescript
private getHelp(id: any, args: any): MCPResponse {
  const topic = args?.topic || 'overview';
  let helpContent = '';

  switch (topic) {
    case 'overview':
      helpContent = this.getHelpContent('overview');
      break;
    case 'tools':
      helpContent = this.getHelpContent('tools');
      break;
    case 'models':
      helpContent = `# Available Gemini Models

## Thinking Models (Latest - 2.5 Series)

**gemini-2.5-pro**
- Most capable, best for complex reasoning
- 2M token context window
- Features: thinking, JSON mode, grounding, system instructions

**gemini-2.5-flash** ⭐ Recommended
- Best balance of speed and capability
- 1M token context window
- Features: thinking, JSON mode, grounding, system instructions

**gemini-2.5-flash-lite**
- Ultra-fast, cost-efficient
- 1M token context window
- Features: thinking, JSON mode, system instructions

## Standard Models (2.0 Series)

**gemini-2.0-flash**
- Fast and efficient
- 1M token context window
- Features: JSON mode, grounding, system instructions

**gemini-2.0-flash-lite**
- Most cost-efficient
- 1M token context window
- Features: JSON mode, system instructions

**gemini-2.0-pro-experimental**
- Excellent for coding
- 2M token context window
- Features: JSON mode, grounding, system instructions

## Model Selection Guide
- Complex reasoning: gemini-2.5-pro
- General use: gemini-2.5-flash
- Fast responses: gemini-2.5-flash-lite
- Cost-sensitive: gemini-2.0-flash-lite
- Coding tasks: gemini-2.0-pro-experimental`;
      break;
    case 'parameters':
      helpContent = this.getHelpContent('parameters');
      break;
    case 'examples':
      helpContent = this.getHelpContent('examples');
      break;
    case 'quick-start':
      helpContent = `# Quick Start Guide

## 1. Basic Usage
Just ask naturally:
- "Use Gemini to [your request]"
- "Ask Gemini about [topic]"

## 2. Common Tasks
**Text Generation:** "Use Gemini to write a function that sorts arrays"
**Image Analysis:** "What's in this image?" [attach image]
**Model Info:** "List all Gemini models"
**Token Counting:** "Count tokens for my prompt"

## 3. Advanced Features
**JSON Output:** "Use Gemini in JSON mode to extract key points"
**Current Information:** "Use Gemini with grounding to get latest news"
**Conversations:** "Start a chat with Gemini about Python"

## 4. Tips
- Use gemini-2.5-flash for most tasks
- Lower temperature for facts, higher for creativity
- Enable grounding for current information
- Use conversation IDs to maintain context

## Need More Help?
- "Get help on tools" - Detailed tool information
- "Get help on parameters" - All parameters explained
- "Get help on models" - Model selection guide`;
      break;
    default:
      helpContent = 'Unknown help topic. Available topics: overview, tools, models, parameters, examples, quick-start';
  }

  return {
    jsonrpc: '2.0',
    id,
    result: {
      content: [{ type: 'text', text: helpContent }]
    }
  };
}
```
- **src/enhanced-stdio-server.ts:360-372 (schema and registration)** — Registration of the `get_help` tool in the tools list returned by `getTools()`, specifying its name, description, and input schema; the optional `topic` parameter is constrained to an enum and defaults to `overview`.

```typescript
{
  name: 'get_help',
  description: 'Get help and usage information for the Gemini MCP server',
  inputSchema: {
    type: 'object',
    properties: {
      topic: {
        type: 'string',
        description: 'Help topic to get information about',
        enum: ['overview', 'tools', 'models', 'parameters', 'examples', 'quick-start'],
        default: 'overview'
      }
    }
  }
}
```
- **getHelpContent (helper)** — Supporting helper method that provides the static markdown content for the remaining help topics, called by the `getHelp` handler.

```typescript
private getHelpContent(topic: string): string {
  // Extract help content generation to a separate method
  switch (topic) {
    case 'overview':
      return `# Gemini MCP Server Help

Welcome to the Gemini MCP Server v4.1.0! This server provides access to Google's Gemini AI models through Claude Desktop.

## Available Tools
1. **generate_text** - Generate text with advanced features
2. **analyze_image** - Analyze images using vision models
3. **count_tokens** - Count tokens for cost estimation
4. **list_models** - List all available models
5. **embed_text** - Generate text embeddings
6. **get_help** - Get help on using this server

## Quick Start
- "Use Gemini to explain [topic]"
- "Analyze this image with Gemini"
- "List all Gemini models"
- "Get help on parameters"

## Key Features
- Latest Gemini 2.5 models with thinking capabilities
- JSON mode for structured output
- Google Search grounding for current information
- System instructions for behavior control
- Conversation memory for context
- Safety settings customization

Use "get help on tools" for detailed tool information.`;
    case 'tools':
      return `# Available Tools

## 1. generate_text
Generate text using Gemini models with advanced features.

**Parameters:**
- prompt (required): Your text prompt
- model: Choose from gemini-2.5-pro, gemini-2.5-flash, etc.
- temperature: 0-2 (default 0.7)
- maxTokens: Max output tokens (default 2048)
- systemInstruction: Guide model behavior
- jsonMode: Enable JSON output
- grounding: Enable Google Search
- conversationId: Maintain conversation context

**Example:** "Use Gemini 2.5 Pro to explain quantum computing"

## 2. analyze_image
Analyze images using vision-capable models.

**Parameters:**
- prompt (required): Question about the image
- imageUrl OR imageBase64 (required): Image source
- model: Vision-capable model (default gemini-2.5-flash)

**Example:** "Analyze this architecture diagram"

## 3. count_tokens
Count tokens for text with a specific model.

**Parameters:**
- text (required): Text to count
- model: Model for counting (default gemini-2.5-flash)

**Example:** "Count tokens for this paragraph"

## 4. list_models
List available models with optional filtering.

**Parameters:**
- filter: all, thinking, vision, grounding, json_mode

**Example:** "List models with thinking capability"

## 5. embed_text
Generate embeddings for semantic search.

**Parameters:**
- text (required): Text to embed
- model: text-embedding-004 or text-multilingual-embedding-002

**Example:** "Generate embeddings for similarity search"

## 6. get_help
Get help on using this server.

**Parameters:**
- topic: overview, tools, models, parameters, examples, quick-start

**Example:** "Get help on parameters"`;
    case 'parameters':
      return `# Parameter Reference

## generate_text Parameters

**Required:**
- prompt (string): Your text prompt

**Optional:**
- model (string): Model to use (default: gemini-2.5-flash)
- systemInstruction (string): System prompt for behavior
- temperature (0-2): Creativity level (default: 0.7)
- maxTokens (number): Max output tokens (default: 2048)
- topK (number): Top-k sampling (default: 40)
- topP (number): Nucleus sampling (default: 0.95)
- jsonMode (boolean): Enable JSON output
- jsonSchema (object): JSON schema for validation
- grounding (boolean): Enable Google Search
- conversationId (string): Conversation identifier
- safetySettings (array): Content filtering settings

## Temperature Guide
- 0.1-0.3: Precise, factual
- 0.5-0.8: Balanced (default 0.7)
- 1.0-1.5: Creative
- 1.5-2.0: Very creative

## JSON Mode Example
Enable jsonMode and provide jsonSchema:
{
  "type": "object",
  "properties": {
    "sentiment": {"type": "string"},
    "score": {"type": "number"}
  }
}

## Safety Settings
Categories: HARASSMENT, HATE_SPEECH, SEXUALLY_EXPLICIT, DANGEROUS_CONTENT
Thresholds: BLOCK_NONE, BLOCK_ONLY_HIGH, BLOCK_MEDIUM_AND_ABOVE, BLOCK_LOW_AND_ABOVE`;
    case 'examples':
      return `# Usage Examples

## Basic Text Generation
"Use Gemini to explain machine learning"

## With Specific Model
"Use Gemini 2.5 Pro to write a Python sorting function"

## With Temperature
"Use Gemini with temperature 1.5 to write a creative story"

## JSON Mode
"Use Gemini in JSON mode to analyze sentiment and return {sentiment, confidence, keywords}"

## With Grounding
"Use Gemini with grounding to research latest AI developments"

## System Instructions
"Use Gemini as a Python tutor to explain decorators"

## Conversation Context
"Start conversation 'chat-001' about web development"
"Continue chat-001 and ask about React hooks"

## Image Analysis
"Analyze this screenshot and describe the UI elements"

## Token Counting
"Count tokens for this document using gemini-2.5-pro"

## Complex Example
"Use Gemini 2.5 Pro to review this code with:
- System instruction: 'You are a security expert'
- Temperature: 0.3
- JSON mode with schema for findings
- Grounding for latest security practices"`;
    default:
      return 'Unknown help topic.';
  }
}
```