call_model

Sends a user prompt to the configured language model and returns the model's response. Part of the FullScope-MCP server's toolset for content summarization and analysis.

Instructions

Call the model to generate a response.

Args:
    prompt: The prompt to send to the model.

Returns:
    The model's response.

Input Schema

Name     Required   Description                          Default
prompt   Yes        The prompt to send to the model.     —
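
For illustration, here is a minimal sketch of invoking this tool from a Python MCP client over stdio, using the official MCP Python SDK. The launch command and file name are hypothetical placeholders; adjust them to however you actually run fullscope-mcp-server.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Hypothetical launch command; replace with your actual server entry point.
    server_params = StdioServerParameters(command="python", args=["server.py"])

    async def main() -> None:
        async with stdio_client(server_params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # "prompt" is the only required argument, per the schema above.
                result = await session.call_tool(
                    "call_model",
                    {"prompt": "Summarize the key points of MCP in one paragraph."},
                )
                print(result.content)

    asyncio.run(main())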

Implementation Reference

  • The `call_model` tool handler function, registered via `@mcp.tool()`. It calls the OpenAI chat completions API through `summarizer.client` with the provided `prompt`, and returns an error message string instead of raising if the call fails.
    @mcp.tool()
    async def call_model(prompt: str, ctx: Context) -> str:
        """
        Call the model to generate a response.

        Args:
            prompt: The prompt to send to the model.

        Returns:
            The model's response.
        """
        try:
            response = summarizer.client.chat.completions.create(
                model=OPENAI_MODEL,
                messages=[{"role": "user", "content": prompt}],
                max_tokens=MAX_OUTPUT_TOKENS,
                temperature=0.7,
            )
            return response.choices[0].message.content.strip()
        except Exception as e:
            logger.error(f"Model call failed: {e}")
            return f"Model call failed: {str(e)}"
  • The `@mcp.tool()` decorator registers the `call_model` function as an MCP tool.
    @mcp.tool()
  • Function signature and docstring define the input schema (prompt: str, ctx: Context) and output (str).
    async def call_model(prompt: str, ctx: Context) -> str:
        """
        Call the model to generate a response.

        Args:
            prompt: The prompt to send to the model.

        Returns:
            The model's response.
        """
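
The handler references `summarizer.client`, `OPENAI_MODEL`, and `MAX_OUTPUT_TOKENS` without showing their definitions. A minimal sketch of how that wiring might look, assuming the standard `openai` v1 Python SDK; the environment-variable names and defaults below are assumptions, not taken from the server source:

    import os

    from openai import OpenAI

    # Assumed configuration; the actual names and defaults may differ.
    OPENAI_MODEL = os.getenv("OPENAI_MODEL", "gpt-4o-mini")
    MAX_OUTPUT_TOKENS = int(os.getenv("MAX_OUTPUT_TOKENS", "1024"))

    class Summarizer:
        """Holds the OpenAI client shared by the tool handlers."""

        def __init__(self) -> None:
            self.client = OpenAI(
                api_key=os.getenv("OPENAI_API_KEY"),
                # base_url allows pointing at any OpenAI-compatible endpoint.
                base_url=os.getenv("OPENAI_BASE_URL"),
            )

    summarizer = Summarizer()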


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/yzfly/fullscope-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.