mcp_gemini_chat_completion

Complete chat conversations using the Gemini AI model. Supply messages and configure parameters such as temperature, max tokens, and topK to generate tailored responses for diverse use cases.

Instructions

Completes chat conversations using the Gemini AI model.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| max_tokens | No | Maximum number of tokens to generate | 1024 |
| messages | Yes | List of conversation messages | |
| model | Yes | Gemini model ID to use (e.g. gemini-pro, gemini-1.5-pro) | |
| temperature | No | Degree of generation randomness (0.0 - 2.0) | 0.7 |
| topK | No | Number of top tokens to consider at each position | 40 |
| topP | No | Threshold selecting the top portion of probability mass | 0.95 |
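
A request that satisfies this schema might look like the following (the model name and message contents are illustrative, not prescribed by the tool):

```json
{
  "model": "gemini-1.5-pro",
  "messages": [
    { "role": "system", "content": "You are a concise assistant." },
    { "role": "user", "content": "Summarize MCP in one sentence." }
  ],
  "temperature": 0.7,
  "max_tokens": 1024,
  "topK": 40,
  "topP": 0.95
}
```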

Implementation Reference

  • MCP tool handler for mcp_gemini_chat_completion. Calls geminiService.chatCompletion with args, extracts the assistant message content, formats as MCP ToolResponse, and handles errors.
    async handler(args: any): Promise<ToolResponse> {
      try {
        const result = await geminiService.chatCompletion(args);
        return {
          content: [{
            type: 'text',
            text: result.message.content
          }]
        };
      } catch (error) {
        return {
          content: [{
            type: 'text',
            text: `Gemini chat completion error: ${error instanceof Error ? error.message : String(error)}`
          }]
        };
      }
    }
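  • Both branches of the handler above reduce to the same response-shaping step. A minimal sketch that factors that shaping out into pure functions (the ToolResponse shape is assumed from the handler above; the helper names are hypothetical):

```typescript
// Minimal ToolResponse shape, as used by the handler above.
interface ToolResponse {
  content: Array<{ type: 'text'; text: string }>;
}

// Wrap a successful completion's text in an MCP ToolResponse.
function toToolResponse(text: string): ToolResponse {
  return { content: [{ type: 'text', text }] };
}

// Wrap any thrown value in an error ToolResponse, mirroring the catch branch.
function toErrorResponse(error: unknown): ToolResponse {
  const message = error instanceof Error ? error.message : String(error);
  return { content: [{ type: 'text', text: `Gemini chat completion error: ${message}` }] };
}
```

    Because the error branch returns a normal ToolResponse rather than rethrowing, agents always receive a well-formed tool result and can read the failure reason from the text content.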
  • Input schema validation for mcp_gemini_chat_completion tool, defining parameters like model, messages array, temperature, etc.
    inputSchema: {
      type: 'object',
      required: ['model', 'messages'],
      properties: {
        model: {
          type: 'string',
          description: 'Gemini model ID to use (e.g. gemini-pro, gemini-1.5-pro)',
        },
        messages: {
          type: 'array',
          description: 'List of conversation messages',
          items: {
            type: 'object',
            required: ['role', 'content'],
            properties: {
              role: {
                type: 'string',
                description: 'Role of the message author (system, user, assistant)',
                enum: ['system', 'user', 'assistant'],
              },
              content: {
                type: 'string',
                description: 'Message content',
              },
            },
          },
        },
        temperature: {
          type: 'number',
        description: 'Degree of generation randomness (0.0 - 2.0)',
          default: 0.7,
        },
        max_tokens: {
          type: 'number',
        description: 'Maximum number of tokens to generate',
          default: 1024,
        },
        topK: {
          type: 'number',
        description: 'Number of top tokens to consider at each position',
          default: 40,
        },
        topP: {
          type: 'number',
        description: 'Threshold selecting the top portion of probability mass',
          default: 0.95,
        },
      },
    },
  • Core implementation in GeminiService.chatCompletion: converts chat messages to Gemini contents format (handling system as user prefix), calls Gemini generateContent API, parses response into OpenAI-like format with message, model, usage.
    async chatCompletion({
      model,
      messages,
      temperature = 0.7,
      max_tokens = 1024,
      topK = 40,
      topP = 0.95,
    }: {
      model: string;
      messages: Array<{ role: string; content: string }>;
      temperature?: number;
      max_tokens?: number;
      topK?: number;
      topP?: number;
    }) {
      try {
        const config = this.getRequestConfig();
        const url = `${this.baseUrl}/models/${model}:generateContent`;
    
    // Convert OpenAI-style messages to the Gemini contents format
        const geminiContents = [];
    let currentRole: string | null = null;
        let currentParts: any[] = [];
    
        for (const message of messages) {
      // Convert 'system' messages to the 'user' role, with a prefix added
          if (message.role === 'system') {
            geminiContents.push({
              role: 'user',
              parts: [{ text: `[system] ${message.content}` }],
            });
            continue;
          }
    
      // Start a new entry when the role changes
          if (message.role !== currentRole && currentParts.length > 0) {
            geminiContents.push({
              role: currentRole === 'assistant' ? 'model' : 'user',
              parts: currentParts,
            });
            currentParts = [];
          }
    
          currentRole = message.role;
          currentParts.push({ text: message.content });
        }
    
    // Flush the last accumulated message
        if (currentParts.length > 0) {
          geminiContents.push({
            role: currentRole === 'assistant' ? 'model' : 'user',
            parts: currentParts,
          });
        }
    
        const response = await axios.post(
          url,
          {
            contents: geminiContents,
            generationConfig: {
              temperature,
              maxOutputTokens: max_tokens,
              topK,
              topP,
            },
          },
          config
        );
    
    // Extract the generated text from the response
        const generatedContent = response.data.candidates?.[0]?.content?.parts?.[0]?.text || '';
        
        return {
          message: {
            role: 'assistant',
            content: generatedContent,
          },
          model: model,
          usage: {
            completion_tokens: response.data.usageMetadata?.candidatesTokenCount || 0,
            prompt_tokens: response.data.usageMetadata?.promptTokenCount || 0,
            total_tokens: 
              (response.data.usageMetadata?.promptTokenCount || 0) + 
              (response.data.usageMetadata?.candidatesTokenCount || 0),
          },
        };
      } catch (error) {
        throw this.formatError(error);
      }
    }
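  • The role-grouping logic above can be isolated as a standalone function, which makes its behavior easy to check. A minimal sketch, with message and content types assumed from the surrounding code (the function name is hypothetical):

```typescript
type ChatMessage = { role: string; content: string };
type GeminiContent = { role: 'user' | 'model'; parts: Array<{ text: string }> };

// Convert OpenAI-style messages to Gemini contents, mirroring the logic above:
// system messages become prefixed user turns, and consecutive same-role
// messages are merged into a single contents entry.
function toGeminiContents(messages: ChatMessage[]): GeminiContent[] {
  const contents: GeminiContent[] = [];
  let currentRole: string | null = null;
  let currentParts: Array<{ text: string }> = [];

  for (const message of messages) {
    if (message.role === 'system') {
      contents.push({ role: 'user', parts: [{ text: `[system] ${message.content}` }] });
      continue;
    }
    if (message.role !== currentRole && currentParts.length > 0) {
      contents.push({ role: currentRole === 'assistant' ? 'model' : 'user', parts: currentParts });
      currentParts = [];
    }
    currentRole = message.role;
    currentParts.push({ text: message.content });
  }
  if (currentParts.length > 0) {
    contents.push({ role: currentRole === 'assistant' ? 'model' : 'user', parts: currentParts });
  }
  return contents;
}
```

    For a system/user/assistant sequence this produces three entries: a prefixed user turn for the system message, a user turn, and a model turn, matching Gemini's alternating user/model convention.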
  • src/index.ts:25-53 (registration)
    MCP server capabilities registration enabling the mcp_gemini_chat_completion tool.
    tools: {
      mcp_sparql_execute_query: true,
      mcp_sparql_update: true,
      mcp_sparql_list_repositories: true,
      mcp_sparql_list_graphs: true,
      mcp_sparql_get_resource_info: true,
      mcp_ollama_run: true,
      mcp_ollama_show: true,
      mcp_ollama_pull: true,
      mcp_ollama_list: true,
      mcp_ollama_rm: true,
      mcp_ollama_chat_completion: true,
      mcp_ollama_status: true,
      mcp_http_request: true,
      mcp_openai_chat: true,
      mcp_openai_image: true,
      mcp_openai_tts: true,
      mcp_openai_transcribe: true,
      mcp_openai_embedding: true,
      mcp_gemini_generate_text: true,
      mcp_gemini_chat_completion: true,
      mcp_gemini_list_models: true,
      mcp_gemini_generate_images: false,
      mcp_gemini_generate_image: false,
      mcp_gemini_generate_videos: false,
      mcp_gemini_generate_multimodal_content: false,
      mcp_imagen_generate: false,
      mcp_gemini_create_image: false,
      mcp_gemini_edit_image: false
    }

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. 'Completes chat conversations' signals a generative operation but doesn't disclose rate limits, authentication requirements, cost implications, response formats, or error behaviors. For a complex AI model interaction tool, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient Korean sentence that states the core purpose without any wasted words. It's appropriately sized for a tool with comprehensive schema documentation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex AI model interaction tool with no annotations and no output schema, the description is insufficient. It doesn't explain what 'completes chat conversations' means in practice, what the response format looks like, error conditions, or how this differs from similar chat completion tools. The 100% schema coverage helps with parameters but doesn't compensate for missing behavioral context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description adds no additional parameter context beyond what's in the schema. The baseline of 3 is appropriate when the schema does all the parameter documentation work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (completes chat conversations) and the resource (the Gemini AI model). It distinguishes itself from the image/video generation siblings but doesn't explicitly differentiate from other chat completion tools such as mcp_ollama_chat_completion or mcp_openai_chat, which is why it's not a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple chat completion tools available (Gemini, Ollama, OpenAI), there's no indication of when this specific Gemini implementation should be chosen over others, nor any prerequisites or constraints mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
