gemini_sendFunctionResult

Transmits the results of executed functions to an ongoing Gemini chat session using its sessionId, enabling the model to generate a response based on the provided function outcomes.

Instructions

Sends the result(s) of function execution(s) back to an existing Gemini chat session, identified by its sessionId. Returns the model's subsequent response.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| functionResponses | Yes | Required. An array containing the results of the function calls executed by the client. Each item must include the function 'name' and its 'response' object. | |
| generationConfig | No | Optional. Per-request generation configuration settings to override session defaults for this turn. | |
| safetySettings | No | Optional. Per-request safety settings to override session defaults for this turn. | |
| sessionId | Yes | Required. The unique identifier of the chat session. | |
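
For concreteness, a call to this tool might carry arguments shaped like the sketch below. The session id and the `get_weather` result are invented for illustration; only the field shapes follow the schema above.

```typescript
// Hypothetical arguments for a gemini_sendFunctionResult call.
// "session-1234" and the get_weather payload are invented examples.
const args = {
  sessionId: "session-1234",
  functionResponses: [
    {
      name: "get_weather",
      response: { temperatureC: 21, conditions: "partly cloudy" },
    },
  ],
};

// The tool handler stringifies the array before passing it to the service layer.
console.log(JSON.stringify(args.functionResponses));
```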

Implementation Reference

  • Handler logic within the gemini_chat tool for the 'send_function_result' operation, which sends function execution results back to a Gemini chat session by calling the service method.
```typescript
case "send_function_result": {
  // Send function results to an existing chat session
  // Note: The service expects a string, so we stringify the array of function responses
  const response: GenerateContentResponse =
    await serviceInstance.sendFunctionResultToSession({
      sessionId: typedArgs.sessionId!,
      functionResponse: JSON.stringify(typedArgs.functionResponses),
      functionCall: undefined, // Could be enhanced to pass original function call
    });
  // Process the response
  return processGenerateContentResponse(response, typedArgs.sessionId!, true);
}
```
  • TypeScript interface defining the input structure for function responses, explicitly noted as used by the gemini_sendFunctionResult tool.
```typescript
/**
 * Represents the input structure for a function response sent from the client to the server.
 * Used by the gemini_sendFunctionResult tool.
 */
export interface FunctionResponseInput {
  /** The name of the function that was called by the model. */
  name: string;
  /** The JSON object result returned by the function execution. */
  response: Record<string, unknown>;
}
```
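
A value conforming to this interface might be built as follows; the function name and response payload here are hypothetical:

```typescript
// Local copy of the interface shown above, so this sketch is self-contained.
interface FunctionResponseInput {
  name: string;
  response: Record<string, unknown>;
}

// Hypothetical result of a client-side function execution.
const result: FunctionResponseInput = {
  name: "lookup_order",
  response: { orderId: "A-1001", status: "shipped" },
};

console.log(result.name);
```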
  • Zod schema (functionResponseInputSchema) for validating function response inputs in the send_function_result operation of gemini_chat tool, matching the FunctionResponseInput type.
```typescript
const functionResponseInputSchema = z
  .object({
    name: z
      .string()
      .min(1)
      .describe(
        "Required. The name of the function that was called by the model."
      ),
    response: z
      .record(z.unknown())
      .describe(
        "Required. The JSON object result returned by the function execution."
      ),
  })
  .describe(
    "Represents the result of a single function execution to be sent back to the model."
  );
```
  • Core service method that implements sending a function result to a Gemini chat session by constructing the function response content, appending to session history, and generating the model's next response via the Gemini API.
```typescript
public async sendFunctionResultToSession(
  params: SendFunctionResultParams
): Promise<GenerateContentResponse> {
  const { sessionId, functionResponse, functionCall } = params;

  // Get the chat session
  const session = this.chatSessions.get(sessionId);
  if (!session) {
    throw new GeminiApiError(`Chat session not found: ${sessionId}`);
  }

  // Create function response message
  const responseContent: Content = {
    role: "function",
    parts: [
      {
        functionResponse: {
          name: functionCall?.name || "function",
          response: { content: functionResponse },
        },
      },
    ],
  };

  // Add the function response to the session history
  session.history.push(responseContent);

  try {
    // Prepare the request configuration
    const requestConfig: {
      model: string;
      contents: Content[];
      generationConfig?: GenerationConfig;
      safetySettings?: SafetySetting[];
      tools?: Tool[];
      toolConfig?: ToolConfig;
      systemInstruction?: Content;
      cachedContent?: string;
      thinkingConfig?: ThinkingConfig;
    } = {
      model: session.model,
      contents: session.history,
    };

    // Add configuration from the session
    if (session.config.systemInstruction) {
      requestConfig.systemInstruction = session.config.systemInstruction;
    }
    if (session.config.generationConfig) {
      requestConfig.generationConfig = session.config.generationConfig;
      // Use thinking config from session if available
      if (session.config.thinkingConfig) {
        requestConfig.thinkingConfig = processThinkingConfig(
          session.config.thinkingConfig
        );
      }
    }
    if (session.config.safetySettings) {
      requestConfig.safetySettings = session.config.safetySettings;
    }
    if (session.config.tools) {
      requestConfig.tools = session.config.tools;
    }
    if (session.config.cachedContent) {
      requestConfig.cachedContent = session.config.cachedContent;
    }

    logger.debug(
      `Sending function result to session ${sessionId} using model ${session.model}`
    );

    // Call the generateContent API directly
    const response = await this.genAI.models.generateContent(requestConfig);

    // Process the response
    if (response.candidates && response.candidates.length > 0) {
      const assistantMessage = response.candidates[0].content;
      if (assistantMessage) {
        // Add the assistant response to the session history
        session.history.push(assistantMessage);
      }
    }

    return response;
  } catch (error: unknown) {
    logger.error(
      `Error sending function result to session ${sessionId}:`,
      error
    );
    throw new GeminiApiError(
      `Failed to send function result to session ${sessionId}: ${(error as Error).message}`,
      error
    );
  }
}
```
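
To make the history mechanics concrete, the sketch below mimics the `Content` entry the method appends for a function result. The `Part` and `Content` types here are simplified local stand-ins for the SDK types, and the function name and payload are hypothetical.

```typescript
// Simplified stand-ins for the SDK's Part and Content types.
type Part = {
  functionResponse: { name: string; response: Record<string, unknown> };
};
type Content = { role: string; parts: Part[] };

// Mirrors the entry sendFunctionResultToSession pushes onto session.history:
// the stringified client result is wrapped under a "content" key.
const responseContent: Content = {
  role: "function",
  parts: [
    {
      functionResponse: {
        name: "get_weather", // hypothetical function name
        response: { content: '[{"temperatureC":21}]' },
      },
    },
  ],
};

console.log(responseContent.role, responseContent.parts.length);
```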
