Glama

text_generation

Generate text completions using AI models via DeepInfra's API. Input prompts to create content, answer questions, or assist with writing tasks.

Instructions

Generate text completion using DeepInfra OpenAI-compatible API.

Input Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| prompt | Yes      |             |         |

Implementation Reference

  • The handler function that implements the core logic of the text_generation tool, using the AsyncOpenAI client to call DeepInfra's completions API with a configurable model, a fixed max_tokens=256, and temperature=0.7.

    ```python
    async def text_generation(prompt: str) -> str:
        """Generate text completion using DeepInfra OpenAI-compatible API."""
        model = DEFAULT_MODELS["text_generation"]
        try:
            response = await client.completions.create(
                model=model,
                prompt=prompt,
                max_tokens=256,
                temperature=0.7,
            )
            if response.choices:
                return response.choices[0].text
            else:
                return "No text generated"
        except Exception as e:
            return f"Error generating text: {type(e).__name__}: {str(e)}"
    ```
  • The conditional registration of the text_generation tool via FastMCP's @app.tool() decorator, enabled if 'all' or 'text_generation' is in ENABLED_TOOLS.

    ```python
    if "all" in ENABLED_TOOLS or "text_generation" in ENABLED_TOOLS:
        @app.tool()
    ```
  • The configuration entry defining the default model ID for the text_generation tool, overridable via the MODEL_TEXT_GENERATION environment variable.

    ```python
    "text_generation": os.getenv("MODEL_TEXT_GENERATION", "meta-llama/Llama-2-7b-chat-hf"),
    ```
  • The schema is inferred from the function signature: input 'prompt' as str, output str. The docstring provides the description for the MCP tool schema.

    ```python
    async def text_generation(prompt: str) -> str:
    ```
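The configuration pieces above can be sketched together as one env-driven setup. The names ENABLED_TOOLS, DEFAULT_MODELS, and MODEL_TEXT_GENERATION come from the source; the comma-separated parsing of ENABLED_TOOLS is an assumption about how the server reads that variable.

```python
import os

# Hypothetical sketch: read the tool allowlist from an env var,
# defaulting to "all" (parsing format is an assumption).
ENABLED_TOOLS = [
    t.strip() for t in os.getenv("ENABLED_TOOLS", "all").split(",") if t.strip()
]

# Only the text_generation entry is confirmed by the source; the dict
# name matches the handler's DEFAULT_MODELS lookup.
DEFAULT_MODELS = {
    "text_generation": os.getenv(
        "MODEL_TEXT_GENERATION", "meta-llama/Llama-2-7b-chat-hf"
    ),
}

def tool_enabled(name: str) -> bool:
    # The same membership check used before registering each tool.
    return "all" in ENABLED_TOOLS or name in ENABLED_TOOLS
```

With neither variable set, every tool is enabled and text_generation falls back to the Llama-2-7b-chat-hf model ID.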
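To illustrate how a framework like FastMCP can derive the input schema from the signature and docstring above, here is a simplified sketch using only the standard library. The type mapping and the schema shape are assumptions for illustration, not FastMCP's actual implementation.

```python
import inspect
from typing import get_type_hints

async def text_generation(prompt: str) -> str:
    """Generate text completion using DeepInfra OpenAI-compatible API."""
    ...

# Simplified Python-type to JSON-schema-type mapping (an assumption).
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def infer_input_schema(fn):
    # Read parameter names and annotations from the signature,
    # and take the tool description from the docstring.
    hints = get_type_hints(fn)
    sig = inspect.signature(fn)
    props = {
        name: {"type": PY_TO_JSON.get(hints.get(name), "string")}
        for name in sig.parameters
    }
    required = [
        name for name, p in sig.parameters.items()
        if p.default is inspect.Parameter.empty
    ]
    return {
        "type": "object",
        "properties": props,
        "required": required,
        "description": inspect.getdoc(fn),
    }

schema = infer_input_schema(text_generation)
print(schema["required"])  # ['prompt']
```

This matches the Input Schema table above: a single required string parameter, prompt, with no default.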
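One notable design choice in the handler is that exceptions are converted into return strings rather than raised, so the MCP client always receives a text result. A minimal demonstration of that pattern, using a hypothetical stub in place of the real DeepInfra client:

```python
import asyncio

class _StubCompletions:
    """Hypothetical stand-in for client.completions that always fails,
    used only to show the handler's error-string behavior."""
    async def create(self, **kwargs):
        raise ConnectionError("simulated network failure")

async def text_generation_demo(prompt: str) -> str:
    # Mirrors the handler's pattern: errors are returned as strings,
    # never propagated to the MCP framework.
    try:
        response = await _StubCompletions().create(prompt=prompt)
        return response
    except Exception as e:
        return f"Error generating text: {type(e).__name__}: {str(e)}"

result = asyncio.run(text_generation_demo("hello"))
print(result)  # Error generating text: ConnectionError: simulated network failure
```

Whether swallowing all exceptions is desirable depends on the client; it trades structured error reporting for a response the model-facing caller can always display.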

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/phuihock/mcp-deeinfra'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.