text_generation
Generate text completions using AI models via DeepInfra's API. Provide a prompt to create content, answer questions, or assist with writing tasks.
Instructions
Generate text completion using DeepInfra OpenAI-compatible API.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The text prompt to generate a completion for. | — |
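As a hedged illustration of calling this tool, the sketch below uses the official `mcp` Python SDK over stdio; the server launch command `mcp-deepinfra` is an assumption, not taken from this reference:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Assumption: the server is exposed as a console script named "mcp-deepinfra".
    params = StdioServerParameters(command="mcp-deepinfra")
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # "prompt" is the only input defined by the schema above.
            result = await session.call_tool(
                "text_generation",
                {"prompt": "Write a one-sentence summary of MCP."},
            )
            print(result.content)


asyncio.run(main())
```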
Implementation Reference
- src/mcp_deepinfra/server.py:66-81 (handler) — The handler function that implements the core logic of the text_generation tool, using the AsyncOpenAI client to call DeepInfra's completions API with a configurable model and a fixed max_tokens=256 and temperature=0.7. (A sketch of the client construction follows this list.)

  ```python
  async def text_generation(prompt: str) -> str:
      """Generate text completion using DeepInfra OpenAI-compatible API."""
      model = DEFAULT_MODELS["text_generation"]
      try:
          response = await client.completions.create(
              model=model,
              prompt=prompt,
              max_tokens=256,
              temperature=0.7,
          )
          if response.choices:
              return response.choices[0].text
          else:
              return "No text generated"
      except Exception as e:
          return f"Error generating text: {type(e).__name__}: {str(e)}"
  ```
- src/mcp_deepinfra/server.py:64-65 (registration) — Conditional registration of the text_generation tool via FastMCP's @app.tool() decorator, enabled when "all" or "text_generation" is in ENABLED_TOOLS. (A sketch of how ENABLED_TOOLS might be populated follows this list.)

  ```python
  if "all" in ENABLED_TOOLS or "text_generation" in ENABLED_TOOLS:
      @app.tool()
  ```
- src/mcp_deepinfra/server.py:33 (helper) — Helper configuration defining the default model ID for the text_generation tool, overridable via the MODEL_TEXT_GENERATION environment variable. (An override example follows this list.)

  ```python
  "text_generation": os.getenv("MODEL_TEXT_GENERATION", "meta-llama/Llama-2-7b-chat-hf"),
  ```
- src/mcp_deepinfra/server.py:66 (schema) — The input schema is inferred from the function signature ('prompt' as str in, str out); the docstring provides the description for the MCP tool schema. (An approximate rendering of the inferred schema follows this list.)

  ```python
  async def text_generation(prompt: str) -> str:
  ```
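The handler above relies on a module-level `client` whose construction is not shown in this reference. A minimal sketch, assuming the standard `openai` package and DeepInfra's documented OpenAI-compatible endpoint; the env var name `DEEPINFRA_API_KEY` is an assumption:

```python
import os

from openai import AsyncOpenAI

# Assumed setup: API key read from DEEPINFRA_API_KEY, requests routed to
# DeepInfra's OpenAI-compatible endpoint.
client = AsyncOpenAI(
    api_key=os.getenv("DEEPINFRA_API_KEY"),
    base_url="https://api.deepinfra.com/v1/openai",
)
```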
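How ENABLED_TOOLS is populated is likewise not shown; one plausible sketch, assuming a comma-separated environment variable that defaults to "all":

```python
import os

# Hypothetical parsing: comma-separated tool names, defaulting to "all".
ENABLED_TOOLS = [name.strip() for name in os.getenv("ENABLED_TOOLS", "all").split(",")]
```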
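Because the default model is read from the environment at startup, swapping models requires no code change. The model ID below is a hypothetical example of another DeepInfra-hosted model:

```python
import os

# Hypothetical override: must be set before the server process reads DEFAULT_MODELS.
os.environ["MODEL_TEXT_GENERATION"] = "mistralai/Mistral-7B-Instruct-v0.3"
```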
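For reference, the input schema FastMCP derives from that signature should look roughly like the following; this is a reconstruction, not taken from the source, and FastMCP may add extra fields such as titles:

```python
# Approximate JSON schema inferred from `prompt: str`, expressed as a Python dict.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {"prompt": {"type": "string"}},
    "required": ["prompt"],
}
```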