
mcp_gemini_chat_completion

Complete chat conversations using the Gemini AI model. Provide a list of messages and tune parameters such as temperature, max_tokens, and topK to tailor responses to different use cases.

Instructions

Completes chat conversations using the Gemini AI model.

Input Schema

Name         Required  Description                                                 Default
max_tokens   No        Maximum number of tokens to generate                        1024
messages     Yes       List of conversation messages                               —
model        Yes       Gemini model ID to use (e.g., gemini-pro, gemini-1.5-pro)   —
temperature  No        Degree of randomness in generation (0.0 - 2.0)              0.7
topK         No        Number of top tokens to consider at each position           40
topP         No        Threshold selecting the top fraction of probability mass    0.95
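
For reference, a tool-call arguments object satisfying this schema might look like the following (model and message values are illustrative):

    {
      "model": "gemini-1.5-pro",
      "messages": [
        { "role": "system", "content": "Answer in one sentence." },
        { "role": "user", "content": "What is the Model Context Protocol?" }
      ],
      "temperature": 0.7,
      "max_tokens": 1024
    }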

Implementation Reference

  • MCP tool handler for mcp_gemini_chat_completion. It calls geminiService.chatCompletion with the tool arguments, extracts the assistant message content, and formats it as an MCP ToolResponse; errors are returned as text content rather than thrown.
    async handler(args: any): Promise<ToolResponse> {
      try {
        const result = await geminiService.chatCompletion(args);
        return {
          content: [{ type: 'text', text: result.message.content }],
        };
      } catch (error) {
        return {
          content: [
            {
              type: 'text',
              text: `Gemini chat completion error: ${error instanceof Error ? error.message : String(error)}`,
            },
          ],
        };
      }
    }
  • Input schema for the mcp_gemini_chat_completion tool, defining and validating parameters such as model, the messages array, temperature, max_tokens, topK, and topP.
    inputSchema: {
      type: 'object',
      required: ['model', 'messages'],
      properties: {
        model: {
          type: 'string',
          description: 'Gemini model ID to use (e.g., gemini-pro, gemini-1.5-pro)',
        },
        messages: {
          type: 'array',
          description: 'List of conversation messages',
          items: {
            type: 'object',
            required: ['role', 'content'],
            properties: {
              role: {
                type: 'string',
                description: 'Role of the message author (system, user, assistant)',
                enum: ['system', 'user', 'assistant'],
              },
              content: {
                type: 'string',
                description: 'Message content',
              },
            },
          },
        },
        temperature: {
          type: 'number',
          description: 'Degree of randomness in generation (0.0 - 2.0)',
          default: 0.7,
        },
        max_tokens: {
          type: 'number',
          description: 'Maximum number of tokens to generate',
          default: 1024,
        },
        topK: {
          type: 'number',
          description: 'Number of top tokens to consider at each position',
          default: 40,
        },
        topP: {
          type: 'number',
          description: 'Threshold selecting the top fraction of probability mass',
          default: 0.95,
        },
      },
    },
  • Core implementation in GeminiService.chatCompletion: converts OpenAI-style chat messages to the Gemini contents format (system messages become user turns with a "[system]" prefix, consecutive same-role messages are merged, and assistant turns map to Gemini's 'model' role), calls the Gemini generateContent API, and parses the response into an OpenAI-like shape with message, model, and usage fields. A worked example of the conversion follows this list.
    async chatCompletion({
      model,
      messages,
      temperature = 0.7,
      max_tokens = 1024,
      topK = 40,
      topP = 0.95,
    }: {
      model: string;
      messages: Array<{ role: string; content: string }>;
      temperature?: number;
      max_tokens?: number;
      topK?: number;
      topP?: number;
    }) {
      try {
        const config = this.getRequestConfig();
        const url = `${this.baseUrl}/models/${model}:generateContent`;

        // Convert OpenAI-style messages to the Gemini contents format
        const geminiContents = [];
        let currentRole = null;
        let currentParts: any[] = [];

        for (const message of messages) {
          // Convert 'system' messages to the 'user' role, adding a prefix
          if (message.role === 'system') {
            geminiContents.push({
              role: 'user',
              parts: [{ text: `[system] ${message.content}` }],
            });
            continue;
          }

          // Start a new entry when the role changes
          if (message.role !== currentRole && currentParts.length > 0) {
            geminiContents.push({
              role: currentRole === 'assistant' ? 'model' : 'user',
              parts: currentParts,
            });
            currentParts = [];
          }
          currentRole = message.role;
          currentParts.push({ text: message.content });
        }

        // Push the last accumulated message
        if (currentParts.length > 0) {
          geminiContents.push({
            role: currentRole === 'assistant' ? 'model' : 'user',
            parts: currentParts,
          });
        }

        const response = await axios.post(
          url,
          {
            contents: geminiContents,
            generationConfig: { temperature, maxOutputTokens: max_tokens, topK, topP },
          },
          config
        );

        // Extract the generated text from the response
        const generatedContent =
          response.data.candidates?.[0]?.content?.parts?.[0]?.text || '';

        return {
          message: { role: 'assistant', content: generatedContent },
          model: model,
          usage: {
            completion_tokens: response.data.usageMetadata?.candidatesTokenCount || 0,
            prompt_tokens: response.data.usageMetadata?.promptTokenCount || 0,
            total_tokens:
              (response.data.usageMetadata?.promptTokenCount || 0) +
              (response.data.usageMetadata?.candidatesTokenCount || 0),
          },
        };
      } catch (error) {
        throw this.formatError(error);
      }
    }
  • src/index.ts:25-53 (registration)
    MCP server capabilities registration enabling the mcp_gemini_chat_completion tool; a sample tools/call invocation follows this list.
    tools: {
      mcp_sparql_execute_query: true,
      mcp_sparql_update: true,
      mcp_sparql_list_repositories: true,
      mcp_sparql_list_graphs: true,
      mcp_sparql_get_resource_info: true,
      mcp_ollama_run: true,
      mcp_ollama_show: true,
      mcp_ollama_pull: true,
      mcp_ollama_list: true,
      mcp_ollama_rm: true,
      mcp_ollama_chat_completion: true,
      mcp_ollama_status: true,
      mcp_http_request: true,
      mcp_openai_chat: true,
      mcp_openai_image: true,
      mcp_openai_tts: true,
      mcp_openai_transcribe: true,
      mcp_openai_embedding: true,
      mcp_gemini_generate_text: true,
      mcp_gemini_chat_completion: true,
      mcp_gemini_list_models: true,
      mcp_gemini_generate_images: false,
      mcp_gemini_generate_image: false,
      mcp_gemini_generate_videos: false,
      mcp_gemini_generate_multimodal_content: false,
      mcp_imagen_generate: false,
      mcp_gemini_create_image: false,
      mcp_gemini_edit_image: false,
    },
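
The role mapping in chatCompletion is the subtle part: system prompts become standalone user turns with a "[system]" prefix, consecutive same-role messages are merged into a single Gemini content entry, and assistant turns map to Gemini's 'model' role. A worked example of what the conversion loop produces, with illustrative message values:

    // Illustrative input in OpenAI-style chat format
    const messages = [
      { role: 'system', content: 'You are terse.' },
      { role: 'user', content: 'Hi.' },
      { role: 'user', content: 'What is MCP?' },
      { role: 'assistant', content: 'A protocol.' },
    ];
    // geminiContents produced by the conversion loop:
    // [
    //   { role: 'user',  parts: [{ text: '[system] You are terse.' }] },
    //   { role: 'user',  parts: [{ text: 'Hi.' }, { text: 'What is MCP?' }] },  // merged
    //   { role: 'model', parts: [{ text: 'A protocol.' }] },
    // ]

Once registered, an MCP client invokes the tool through the standard JSON-RPC tools/call method. A minimal sketch of such a request (argument values are illustrative):

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "tools/call",
      "params": {
        "name": "mcp_gemini_chat_completion",
        "arguments": {
          "model": "gemini-1.5-pro",
          "messages": [
            { "role": "user", "content": "Summarize MCP in one sentence." }
          ]
        }
      }
    }

The handler replies with the assistant text wrapped in a single text content item, { "content": [{ "type": "text", "text": "..." }] }; on failure it returns the error message in the same shape rather than throwing.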
