fill_mask
Complete text with missing words by predicting the most appropriate replacements using AI language models. Enter text with masked tokens to generate coherent and contextually accurate completions.
Instructions
Fill masked tokens in text using DeepInfra OpenAI-compatible API.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| model | No | Model to use; falls back to the configured fill_mask default | `DEFAULT_MODELS["fill_mask"]` |
| text | Yes | Text containing the `[MASK]` token to fill | |
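Per the schema above, only `text` is required. A minimal sketch of a client-side payload check (the helper and payload are hypothetical, not part of the source):

```python
# Fields the fill_mask input schema marks as required.
REQUIRED_FIELDS = {"text"}

def validate_arguments(args: dict) -> bool:
    """Check that all schema-required fields are present in a call payload."""
    return REQUIRED_FIELDS.issubset(args)

# Hypothetical call payload for the fill_mask tool.
example_call = {"text": "The capital of France is [MASK]."}
```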
Implementation Reference
- `src/mcp_deepinfra/server.py:266-286` (handler) — the core handler for the `fill_mask` tool. It constructs a prompt instructing a language model to fill the `[MASK]` token via DeepInfra's OpenAI-compatible completions API and returns the model's response:

  ```python
  async def fill_mask(text: str) -> str:
      """Fill masked tokens in text using DeepInfra OpenAI-compatible API."""
      model = DEFAULT_MODELS["fill_mask"]
      prompt = f"""Fill in the [MASK] token in the following text with the most appropriate word.
  Provide the completed sentence and explain your choice.

  Text: {text}

  Response format: {{"filled_text": "completed sentence", "chosen_word": "word", "explanation": "reasoning"}}"""
      try:
          response = await client.completions.create(
              model=model,
              prompt=prompt,
              max_tokens=200,
              temperature=0.1,
          )
          if response.choices:
              return response.choices[0].text
          else:
              return "Unable to fill mask"
      except Exception as e:
          return f"Error filling mask: {type(e).__name__}: {str(e)}"
  ```
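The prompt asks the model to reply in a JSON envelope, but the handler returns the raw completion text, so the model may or may not comply. A defensive client-side parser might look like this (a sketch; `parse_fill_mask_response` is a hypothetical helper, not part of the source):

```python
import json

def parse_fill_mask_response(raw: str) -> dict:
    """Best-effort parse of the JSON envelope requested in the prompt.

    Falls back to wrapping the raw text when the model did not return JSON.
    """
    try:
        data = json.loads(raw.strip())
        if isinstance(data, dict) and "filled_text" in data:
            return data
    except json.JSONDecodeError:
        pass
    return {"filled_text": raw.strip(), "chosen_word": None, "explanation": None}
```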
- `src/mcp_deepinfra/server.py:265` (registration) — the `@app.tool()` decorator registers the `fill_mask` function as an MCP tool with FastMCP:

  ```python
  @app.tool()
  ```
- `src/mcp_deepinfra/server.py:41` (helper) — default-model configuration for the tool, read back via `DEFAULT_MODELS["fill_mask"]`:

  ```python
  "fill_mask": os.getenv("MODEL_FILL_MASK", "microsoft/DialoGPT-medium"),
  ```
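This entry follows an env-override pattern: each tool's model ID can be replaced through an environment variable, with a hard-coded fallback. A minimal sketch of how the surrounding `DEFAULT_MODELS` table might be built (only the fill_mask entry appears in the source; the dict shape is an assumption):

```python
import os

# Hypothetical reconstruction of the DEFAULT_MODELS table. Each entry reads
# an environment override (e.g. MODEL_FILL_MASK) and falls back to a
# hard-coded default model ID.
DEFAULT_MODELS = {
    "fill_mask": os.getenv("MODEL_FILL_MASK", "microsoft/DialoGPT-medium"),
}
```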
- `src/mcp_deepinfra/server.py:264` (helper) — conditional gate that defines and registers the `fill_mask` tool only when it is enabled via the `ENABLED_TOOLS` environment variable:

  ```python
  if "all" in ENABLED_TOOLS or "fill_mask" in ENABLED_TOOLS:
  ```
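The gate implies `ENABLED_TOOLS` is a collection of tool names where `"all"` enables everything. A sketch of how such a set could be derived from a comma-separated environment variable (the parsing helpers are assumptions; only the membership check is taken from the source):

```python
import os

def parse_enabled_tools(raw: str) -> set:
    """Split a comma-separated list like 'fill_mask,text_generation' into a set."""
    return {name.strip() for name in raw.split(",") if name.strip()}

def tool_enabled(name: str, enabled: set) -> bool:
    """Mirror the gate in server.py: 'all' enables every tool."""
    return "all" in enabled or name in enabled

# Hypothetical construction; 'all' as the default enables every tool.
ENABLED_TOOLS = parse_enabled_tools(os.getenv("ENABLED_TOOLS", "all"))
```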