mcp_gemini_generate_text

Generate text with a Gemini AI model by supplying a prompt, a model ID, and optional sampling parameters (temperature, max tokens, topK, and topP).

Instructions

Generates text using the Gemini AI model.

Input Schema

Name         Required  Description                                                Default
max_tokens   No        Maximum number of tokens to generate                       1024
model        Yes       Gemini model ID to use (e.g. gemini-pro, gemini-1.5-pro)   -
prompt       Yes       Prompt for text generation                                 -
temperature  No        Degree of randomness in generation (0.0 - 2.0)             0.7
topK         No        Number of top tokens considered at each position           40
topP         No        Threshold selecting the top share of the probability mass  0.95

Defaults for the optional parameters are taken from the JSON Schema below.
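
For orientation, here is an illustrative arguments object that satisfies this schema; the model and prompt values are placeholders, and the optional fields repeat their documented defaults.

    // Illustrative arguments for mcp_gemini_generate_text; the model and
    // prompt are placeholders, and the optional fields show the schema defaults.
    const args = {
      model: 'gemini-1.5-pro',
      prompt: 'Summarize the benefits of unit testing in two sentences.',
      temperature: 0.7,  // randomness (0.0 - 2.0)
      max_tokens: 1024,  // output token cap
      topK: 40,          // top-K sampling cutoff
      topP: 0.95,        // nucleus-sampling threshold
    };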

Implementation Reference

  • The MCP tool handler: it receives the tool arguments, calls the underlying Gemini service's generateText method, wraps the result as a ToolResponse, and converts errors into an error-message response (a sketch of the ToolResponse shape follows this list).
    async handler(args: any): Promise<ToolResponse> {
      try {
        const result = await geminiService.generateText(args);
        return {
          content: [{ type: 'text', text: result.text }],
        };
      } catch (error) {
        return {
          content: [
            {
              type: 'text',
              text: `Gemini text generation error: ${error instanceof Error ? error.message : String(error)}`,
            },
          ],
        };
      }
    }
  • The core service method that calls Gemini's generateContent endpoint: it builds the request body from the parameters, sends a POST via axios, extracts the generated text, and returns a structured response with token-usage information (a sketch of the getRequestConfig helper follows this list).
    async generateText({
      model,
      prompt,
      temperature = 0.7,
      max_tokens = 1024,
      topK = 40,
      topP = 0.95,
    }: {
      model: string;
      prompt: string;
      temperature?: number;
      max_tokens?: number;
      topK?: number;
      topP?: number;
    }) {
      try {
        const config = this.getRequestConfig();
        const url = `${this.baseUrl}/models/${model}:generateContent`;
        const response = await axios.post(
          url,
          {
            contents: [{ parts: [{ text: prompt }] }],
            generationConfig: {
              temperature,
              maxOutputTokens: max_tokens,
              topK,
              topP,
            },
          },
          config
        );
        // Extract the generated text from the response
        const generatedText = response.data.candidates?.[0]?.content?.parts?.[0]?.text || '';
        return {
          text: generatedText,
          model: model,
          usage: {
            completion_tokens: response.data.usageMetadata?.candidatesTokenCount || 0,
            prompt_tokens: response.data.usageMetadata?.promptTokenCount || 0,
            total_tokens:
              (response.data.usageMetadata?.promptTokenCount || 0) +
              (response.data.usageMetadata?.candidatesTokenCount || 0),
          },
        };
      } catch (error) {
        throw this.formatError(error);
      }
    }
  • The JSON Schema declaring the tool's input parameters: types, descriptions, defaults, and required fields (an end-to-end client invocation sketch follows this list).
    inputSchema: {
      type: 'object',
      required: ['model', 'prompt'],
      properties: {
        model: {
          type: 'string',
          description: 'Gemini model ID to use (e.g. gemini-pro, gemini-1.5-pro)',
        },
        prompt: {
          type: 'string',
          description: 'Prompt for text generation',
        },
        temperature: {
          type: 'number',
          description: 'Degree of randomness in generation (0.0 - 2.0)',
          default: 0.7,
        },
        max_tokens: {
          type: 'number',
          description: 'Maximum number of tokens to generate',
          default: 1024,
        },
        topK: {
          type: 'number',
          description: 'Number of top tokens considered at each position',
          default: 40,
        },
        topP: {
          type: 'number',
          description: 'Threshold selecting the top share of the probability mass',
          default: 0.95,
        },
      },
    },
  • src/index.ts:44-44 (registration)
    MCP server capabilities registration enabling the mcp_gemini_generate_text tool (a sketch of the surrounding structure follows this list).
    mcp_gemini_generate_text: true,
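
The ToolResponse type used by the handler is not included in this reference. A minimal sketch consistent with how the handler constructs its return value (an inference, not the server's actual definition) would be:

    // Hypothetical ToolResponse shape, inferred from the handler above;
    // the server's real type definition may carry additional fields.
    interface ToolResponse {
      content: Array<{
        type: 'text';
        text: string;
      }>;
    }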
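
Similarly, getRequestConfig is referenced but not shown. A plausible sketch, assuming the service stores an API key and sends it in the x-goog-api-key header (one of the documented authentication mechanisms for the Gemini API), might be:

    // Hypothetical sketch of getRequestConfig; assumes header-based
    // authentication. The real helper may instead append the key as a
    // ?key= query parameter.
    import type { AxiosRequestConfig } from 'axios';

    class GeminiServiceSketch {
      constructor(private apiKey: string) {}

      getRequestConfig(): AxiosRequestConfig {
        return {
          headers: {
            'Content-Type': 'application/json',
            'x-goog-api-key': this.apiKey,
          },
        };
      }
    }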
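
To exercise the input schema end to end, an MCP client could invoke the tool as sketched below. This assumes an already-connected Client from @modelcontextprotocol/sdk and is illustrative rather than taken from this server's source.

    // Illustrative invocation from an MCP client; `client` is assumed to be
    // a connected @modelcontextprotocol/sdk Client instance.
    import { Client } from '@modelcontextprotocol/sdk/client/index.js';

    async function run(client: Client) {
      const result = await client.callTool({
        name: 'mcp_gemini_generate_text',
        arguments: {
          model: 'gemini-1.5-pro',
          prompt: 'Write a haiku about the sea.',
          temperature: 0.7,
        },
      });
      console.log(result); // content: [{ type: 'text', text: '...' }]
    }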
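
Finally, the registration flag above appears as a single line at src/index.ts:44. A sketch of the kind of capabilities declaration such a flag typically sits in (entirely hypothetical except for that one line) might look like:

    // Hypothetical surrounding structure; only the
    // `mcp_gemini_generate_text: true` line is confirmed by the source.
    import { Server } from '@modelcontextprotocol/sdk/server/index.js';

    const server = new Server(
      { name: 'agent_mcp', version: '0.1.0' },
      {
        capabilities: {
          tools: {
            mcp_gemini_generate_text: true,
          },
        },
      }
    );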
