# text_generation
Generate text completions using AI models to extend prompts, create content, or answer questions through DeepInfra's API.
## Instructions

Generate a text completion using DeepInfra's OpenAI-compatible API.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| model | No | Model used for the completion; configurable via the `MODEL_TEXT_GENERATION` environment variable. | `meta-llama/Llama-2-7b-chat-hf` |
| prompt | Yes | The text prompt to complete. | |
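MCP servers built on FastMCP typically derive the tool's JSON Schema from the handler's type hints. Assuming that, the schema for this tool would look roughly like the following sketch (the exact keys emitted by the framework may differ):

```python
# A sketch of the JSON Schema an MCP server would typically derive from
# the handler signature `text_generation(prompt: str) -> str`.
input_schema = {
    "type": "object",
    "properties": {
        "prompt": {"type": "string"},
    },
    "required": ["prompt"],
}
```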
## Implementation Reference
- **src/mcp_deepinfra/server.py:64-81 (handler)** — Conditional registration and implementation of the `text_generation` tool handler. Uses DeepInfra's OpenAI-compatible completions API to generate text from a prompt using a configurable model.

```python
if "all" in ENABLED_TOOLS or "text_generation" in ENABLED_TOOLS:

    @app.tool()
    async def text_generation(prompt: str) -> str:
        """Generate text completion using DeepInfra OpenAI-compatible API."""
        model = DEFAULT_MODELS["text_generation"]
        try:
            response = await client.completions.create(
                model=model,
                prompt=prompt,
                max_tokens=256,
                temperature=0.7,
            )
            if response.choices:
                return response.choices[0].text
            else:
                return "No text generated"
        except Exception as e:
            return f"Error generating text: {type(e).__name__}: {str(e)}"
```
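The handler's result handling (return the first choice's text, fall back to a fixed message when no choices come back) can be exercised without a live API call. `extract_text` and the stub responses below are illustrative names for this sketch, not part of server.py:

```python
from types import SimpleNamespace


def extract_text(response) -> str:
    """Mirror the handler's result handling: first choice's text, or a fallback."""
    if response.choices:
        return response.choices[0].text
    return "No text generated"


# Stub objects shaped like the OpenAI SDK's completion response.
ok = SimpleNamespace(choices=[SimpleNamespace(text="Hello!")])
empty = SimpleNamespace(choices=[])
```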
- **src/mcp_deepinfra/server.py:33 (helper)** — Configuration of the default model used by the `text_generation` tool.

```python
"text_generation": os.getenv("MODEL_TEXT_GENERATION", "meta-llama/Llama-2-7b-chat-hf"),
```
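To use a different default model, set `MODEL_TEXT_GENERATION` in the server's environment before startup. The lookup behaves like this small helper (`resolve_model` is a hypothetical name used here for illustration; server.py does the lookup inline):

```python
import os


def resolve_model(env_var: str = "MODEL_TEXT_GENERATION",
                  fallback: str = "meta-llama/Llama-2-7b-chat-hf") -> str:
    """Return the model named in the environment, or the built-in fallback."""
    return os.getenv(env_var, fallback)
```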
- **src/mcp_deepinfra/server.py:66-67 (schema)** — Function signature and docstring defining the input schema (`prompt: str`) and output (`str`) for the `text_generation` tool.

```python
async def text_generation(prompt: str) -> str:
    """Generate text completion using DeepInfra OpenAI-compatible API."""
```