bw_write

Generate natural-sounding Chinese content in Markdown format based on your writing instructions and background context, with web search capabilities for current information.

Instructions

[Core function] Generates fluent, natural-sounding Chinese content without the telltale "AI flavor". Output is a Markdown-formatted document.

[When to call] Invoke this tool proactively whenever the user expresses any writing need, for example:

  • "Write an article about XX"

  • "Help me write an introduction to XX"

  • "Generate XX content"

  • "Explain XX in plain language"

The user does not need to mention the tool by name; prefer this tool for all writing-related requests.

Input Schema

  • backgroundContext (optional): Background information and guidelines. Better Writer knows nothing about your context, so supply as much background as you can in backgroundContext; the more context you provide, the more accurate the result. For rewriting or translation tasks, include the original content.

  • enableWebSearch (optional): Enable web search when up-to-date information is needed (e.g. industry trends, policy analysis).

  • instruction (required): Writing instruction: state the goal and key points of the content to be generated.

  • outputFilePath (optional): Output file path. If provided, the generated content is saved to this path automatically. Relative and absolute paths are supported, and missing directories are created. Examples: "/path/to/output.md" or "markdown/article.md". Once the file has been created there is no need to rewrite the full content into a new file; the created file can simply be copied to the desired location.

  • targetLength (optional): Desired output length (approximate character count).

  • webSearchEngine (optional): Web search engine: native (the model's built-in search) or exa (the Exa API); auto-selected by default.

  • webSearchMaxResults (optional): Maximum number of web search results to return (default 5).
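
To make the parameters concrete, here is a hypothetical set of arguments for a bw_write call. All values below are illustrative only and are not taken from this page.

    // Hypothetical example arguments for a bw_write call; every value is illustrative.
    const exampleArgs = {
      instruction: '写一篇介绍 TypeScript 类型系统的入门文章,面向有 JavaScript 基础的读者',
      backgroundContext: '目标读者是前端开发者;希望语气轻松,多用示例',
      targetLength: 2000,                     // roughly 2,000 characters
      enableWebSearch: true,                  // pull in current information
      webSearchEngine: 'exa',                 // or 'native'
      webSearchMaxResults: 5,
      outputFilePath: 'markdown/article.md',  // optional: also save the result to this file
    };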

Implementation Reference

  • The async handler function implementing 'bw_write' tool logic: builds prompts, selects the LLM model based on the backend, calls the LLM with optional web search, optionally saves output to a file, and returns formatted content or an error. (Assumed signatures for the helpers it calls are sketched after this list.)
    async ({ instruction, backgroundContext, targetLength, enableWebSearch, webSearchEngine, webSearchMaxResults, outputFilePath }) => {
      try {
        const systemPrompt = buildSystemPrompt(targetLength);
        const userContent = buildUserMessage({ instruction, backgroundContext });
        const messages = [
          { role: 'system' as const, content: systemPrompt },
          { role: 'user' as const, content: userContent },
        ];

        // Select the model according to the configured backend
        const backend = getLLMBackend();
        let selectedModel: string;
        if (backend === 'gemini') {
          selectedModel = process.env.GEMINI_MODEL || process.env.gemini_model || 'gemini-2.5-flash';
        } else {
          selectedModel = process.env.OPENROUTER_MODEL || process.env.openrouter_model || 'qwen/qwen3-next-80b-a3b-instruct';
        }

        // Prepare web search configuration
        const webSearch = enableWebSearch
          ? {
              enabled: true,
              ...(webSearchEngine && { engine: webSearchEngine }),
              ...(webSearchMaxResults && { maxResults: webSearchMaxResults }),
            }
          : undefined;

        const result = await callLLM({ model: selectedModel, messages, webSearch });

        // If outputFilePath is provided, write the content to file
        if (outputFilePath) {
          try {
            // Ensure the directory exists
            const dir = dirname(outputFilePath);
            await mkdir(dir, { recursive: true });
            // Write the file
            await writeFile(outputFilePath, result.content, 'utf-8');
            const output = { content: result.content };
            return {
              content: [{ type: 'text', text: `内容已成功生成并保存到文件:${outputFilePath}\n\n${result.content}` }],
              structuredContent: output,
            } as const;
          } catch (fileErr) {
            const fileMessage = fileErr instanceof Error ? fileErr.message : String(fileErr);
            throw new Error(`文件写入失败:${fileMessage}`);
          }
        }

        const output = { content: result.content };
        return {
          content: [{ type: 'text', text: result.content }],
          structuredContent: output,
        } as const;
      } catch (err) {
        const message = err instanceof Error ? err.message : String(err);
        return {
          content: [{ type: 'text', text: `Error: ${message}` }],
          isError: true,
        } as const;
      }
    }
    );
  • Zod inputSchema and outputSchema defining parameters for 'bw_write' tool including instruction, context, length, web search options, and optional output file path.
    inputSchema: {
      instruction: z.string().describe('写作指令'),
      backgroundContext: z.string().optional().describe('背景信息与规范'),
      targetLength: z.number().optional().describe('期望输出长度(字符数)'),
      enableWebSearch: z.boolean().optional().describe('是否开启联网搜索'),
      webSearchEngine: z.enum(['native', 'exa']).optional().describe('联网搜索引擎'),
      webSearchMaxResults: z.number().optional().describe('联网搜索最大结果数'),
      outputFilePath: z.string().optional().describe('输出文件路径'),
    },
    outputSchema: { content: z.string() },
  • Registers the 'bw_write' tool with the MCP server, providing description, input/output schemas, and inline handler function.
    server.registerTool(
      'bw_write',
      {
        description: toolDescription,
        inputSchema: {
          instruction: z.string().describe('写作指令'),
          backgroundContext: z.string().optional().describe('背景信息与规范'),
          targetLength: z.number().optional().describe('期望输出长度(字符数)'),
          enableWebSearch: z.boolean().optional().describe('是否开启联网搜索'),
          webSearchEngine: z.enum(['native', 'exa']).optional().describe('联网搜索引擎'),
          webSearchMaxResults: z.number().optional().describe('联网搜索最大结果数'),
          outputFilePath: z.string().optional().describe('输出文件路径'),
        },
        outputSchema: { content: z.string() },
      },
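
The handler above relies on a few project helpers (buildSystemPrompt, buildUserMessage, getLLMBackend, callLLM) and on Node's path and fs utilities, none of whose definitions appear on this page. The following is only a sketch of their assumed signatures, inferred from how the handler calls them; the real implementations in the repository may differ.

    // Sketch only: signatures inferred from the handler above, not the actual module.
    // dirname/mkdir/writeFile are standard Node.js APIs; the exact import specifiers
    // used by the project are an assumption.
    import { dirname } from 'node:path';
    import { mkdir, writeFile } from 'node:fs/promises';

    interface ChatMessage {
      role: 'system' | 'user';
      content: string;
    }

    interface WebSearchConfig {
      enabled: boolean;
      engine?: 'native' | 'exa';
      maxResults?: number;
    }

    interface LLMResult {
      content: string;
    }

    // Builds the system prompt, optionally tailored to the requested length.
    declare function buildSystemPrompt(targetLength?: number): string;

    // Merges the writing instruction and optional background context into one user message.
    declare function buildUserMessage(args: { instruction: string; backgroundContext?: string }): string;

    // Returns 'gemini' when the Gemini backend is configured; other values fall through to OpenRouter models.
    declare function getLLMBackend(): string;

    // Calls the selected LLM, optionally with web search, and returns the generated content.
    declare function callLLM(args: {
      model: string;
      messages: ChatMessage[];
      webSearch?: WebSearchConfig;
    }): Promise<LLMResult>;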

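As a usage illustration, a client built on the MCP TypeScript SDK could invoke the registered tool roughly as sketched below. This is a hypothetical example: the command and path used to launch the server are placeholders, and the instruction value is made up; check the server's own documentation for the actual launch command.

    // Hypothetical client-side call to bw_write via the MCP TypeScript SDK.
    import { Client } from '@modelcontextprotocol/sdk/client/index.js';
    import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

    // Placeholder launch command; replace with the server's real entry point.
    const transport = new StdioClientTransport({
      command: 'node',
      args: ['path/to/better-writer-mcp/dist/index.js'],
    });

    const client = new Client({ name: 'example-client', version: '1.0.0' });
    await client.connect(transport);

    const result = await client.callTool({
      name: 'bw_write',
      arguments: {
        instruction: '用通俗语言解释什么是向量数据库',
        enableWebSearch: true,
      },
    });

    // The generated Markdown comes back as text content (and as structuredContent.content).
    console.log(JSON.stringify(result.content, null, 2));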

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/oil-oil/better-writer-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.