call_model
Process user prompts to generate precise model responses using the FullScope-MCP server for content summarization and analysis.
Instructions
Invoke the model to answer a prompt.
Args:
prompt: The prompt to send to the model
Returns:
The model's response
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The prompt to send to the model | |
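Over the wire, an MCP client invokes this tool with a `tools/call` JSON-RPC request carrying the tool name and its arguments. A minimal sketch of the request shape (the prompt text itself is illustrative):

```python
import json

# Illustrative MCP tools/call request targeting the call_model tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "call_model",
        "arguments": {"prompt": "Summarize the key points of MCP in one sentence."},
    },
}

print(json.dumps(request, indent=2))
```

The server routes `params.name` to the registered handler and passes `params.arguments` as its keyword arguments.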
Implementation Reference
- The `call_model` tool handler function, registered via `@mcp.tool()`. It calls the OpenAI chat completions API through `summarizer.client` with the provided `prompt`, returning an error string instead of raising on failure.

```python
@mcp.tool()
async def call_model(prompt: str, ctx: Context) -> str:
    """
    Invoke the model to answer a prompt.

    Args:
        prompt: The prompt to send to the model

    Returns:
        The model's response
    """
    try:
        response = summarizer.client.chat.completions.create(
            model=OPENAI_MODEL,
            messages=[
                {"role": "user", "content": prompt}
            ],
            max_tokens=MAX_OUTPUT_TOKENS,
            temperature=0.7
        )
        return response.choices[0].message.content.strip()
    except Exception as e:
        logger.error(f"Model call failed: {e}")
        return f"Model call failed: {str(e)}"
```
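The handler's key design choice is to return an error string rather than raise, so the MCP client always receives text. That control flow can be exercised in isolation with a stub client (all names here are illustrative, not part of the server):

```python
class StubCompletions:
    """Stand-in for an OpenAI chat.completions endpoint: returns a canned
    reply, or raises to simulate an API failure."""

    def __init__(self, reply=None, error=None):
        self._reply = reply
        self._error = error

    def create(self, **kwargs):
        if self._error:
            raise RuntimeError(self._error)
        # Mimic the shape of a chat completion response object.
        class Msg: content = self._reply
        class Choice: message = Msg()
        class Resp: choices = [Choice()]
        return Resp()


def call_model_sketch(client, prompt: str) -> str:
    # Same control flow as the handler: text on success, error string on failure.
    try:
        response = client.create(
            model="some-model",  # placeholder for OPENAI_MODEL
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content.strip()
    except Exception as e:
        return f"Model call failed: {e}"
```

For example, `call_model_sketch(StubCompletions(reply="  hello  "), "hi")` returns the stripped reply, while a failing client yields a `"Model call failed: ..."` string instead of an exception.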
- src/fullscope_mcp_server/server.py:277 (registration) — the `@mcp.tool()` decorator registers the `call_model` function as an MCP tool.

```python
@mcp.tool()
```
- The function signature and docstring define the input schema (`prompt: str`, `ctx: Context`) and the output type (`str`).

```python
async def call_model(prompt: str, ctx: Context) -> str:
    """
    Invoke the model to answer a prompt.

    Args:
        prompt: The prompt to send to the model

    Returns:
        The model's response
    """
```