bw_write

Generate natural-sounding Chinese content in Markdown format for writing tasks like articles, explanations, and translations, with options for web search and file saving.

Instructions

Core function: generate natural, fluent Chinese content without a telltale "AI flavor". Output is a Markdown-formatted document.

When to call: invoke this tool proactively whenever the user expresses any writing need. For example:

  • "Write an article about XX"

  • "Help me write an introduction to XX"

  • "Generate XX content"

  • "Explain XX in plain language"

The user does not need to mention the tool by name; prefer this tool for all writing-related requests.

Parameters:

  1. instruction (required): The writing instruction. State clearly the goal and focus of the content to generate.

  2. backgroundContext (optional): Background information and style guidelines. Better Writer knows nothing about your context; the more detail you provide, the better the output.

    • ⚠️ Important: for translation or rewriting tasks, you must supply the complete original text in this parameter. Do not provide only a summary or an excerpt; doing so severely degrades the quality and accuracy of the translation or rewrite.

  3. targetLength (optional): Desired output length (approximate character count), used to control how long the content is.

  4. enableWebSearch (optional): Whether to enable web search. Recommended (set to true) when up-to-date information is needed, such as industry trends, policy analysis, or real-time data.

  5. webSearchEngine (optional): Which web search engine to use: "native" (the model's built-in search) or "exa" (the Exa API). Selected automatically by default.

  6. webSearchMaxResults (optional): Maximum number of web search results to return. Defaults to 5.

  7. outputFilePath (optional): Output file path. If provided, the generated content is automatically saved to that path (relative and absolute paths are supported; missing directories are created automatically).
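To show how these parameters fit together, here is a sketch of a complete argument set for a bw_write call in TypeScript. The field names and types mirror the schema above; the concrete values are hypothetical.

```typescript
// Shape of the bw_write arguments, mirroring the input schema documented above.
interface BwWriteArgs {
  instruction: string;
  backgroundContext?: string;
  targetLength?: number;
  enableWebSearch?: boolean;
  webSearchEngine?: 'native' | 'exa';
  webSearchMaxResults?: number;
  outputFilePath?: string;
}

// Hypothetical example: an article request with web search enabled
// and the result saved to a Markdown file.
const args: BwWriteArgs = {
  instruction: 'Write an overview article on large language model agents',
  backgroundContext: 'Audience: engineers new to the topic. Tone: practical.',
  targetLength: 2000,
  enableWebSearch: true,
  webSearchEngine: 'exa',
  webSearchMaxResults: 5,
  outputFilePath: './output/llm-agents.md',
};

console.log(args.instruction.length > 0); // instruction is the only required field
```

Only `instruction` is required; every other field can be omitted and the server falls back to its defaults.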

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| instruction | Yes | Writing instruction | |
| backgroundContext | No | Background information and guidelines | |
| targetLength | No | Desired output length (character count) | |
| enableWebSearch | No | Whether to enable web search | |
| webSearchEngine | No | Web search engine | |
| webSearchMaxResults | No | Maximum number of web search results | 5 |
| outputFilePath | No | Output file path | |

Implementation Reference

  • Registers the 'bw_write' tool on the McpServer instance with its description, input/output schemas, and handler function. The handler builds the prompts, calls the LLM (Gemini or OpenRouter, with optional web search), handles file output, and returns the formatted content or an error.
    // Imports required by this snippet (helpers such as buildSystemPrompt, buildUserMessage,
    // getLLMBackend, and callLLM are defined elsewhere in this module).
    import { z } from 'zod';
    import { dirname } from 'node:path';
    import { mkdir, writeFile } from 'node:fs/promises';

    server.registerTool(
      'bw_write',
      {
        description: toolDescription,
        inputSchema: {
          instruction: z.string().describe('写作指令'),
          backgroundContext: z.string().optional().describe('背景信息与规范'),
          targetLength: z.number().optional().describe('期望输出长度(字符数)'),
          enableWebSearch: z.boolean().optional().describe('是否开启联网搜索'),
          webSearchEngine: z.enum(['native', 'exa']).optional().describe('联网搜索引擎'),
          webSearchMaxResults: z.number().optional().describe('联网搜索最大结果数'),
          outputFilePath: z.string().optional().describe('输出文件路径'),
        },
        outputSchema: { content: z.string() },
      },
      async ({ instruction, backgroundContext, targetLength, enableWebSearch, webSearchEngine, webSearchMaxResults, outputFilePath }) => {
        try {
          const systemPrompt = buildSystemPrompt(targetLength);
          const userContent = buildUserMessage({ instruction, backgroundContext });
          const messages = [
            { role: 'system' as const, content: systemPrompt },
            { role: 'user' as const, content: userContent },
          ];
    
          // Select the model based on the configured backend
          const backend = getLLMBackend();
          let selectedModel: string;
          
          if (backend === 'gemini') {
            selectedModel = process.env.GEMINI_MODEL || process.env.gemini_model || 'gemini-2.5-flash';
          } else {
            selectedModel = process.env.OPENROUTER_MODEL || process.env.openrouter_model || 'qwen/qwen3-next-80b-a3b-instruct';
          }
    
          // Prepare web search configuration
          const webSearch = enableWebSearch
            ? {
                enabled: true,
                ...(webSearchEngine && { engine: webSearchEngine }),
                ...(webSearchMaxResults && { maxResults: webSearchMaxResults }),
              }
            : undefined;
    
          const result = await callLLM({ 
            model: selectedModel, 
            messages,
            webSearch,
          });
    
          // If outputFilePath is provided, write the content to file
          if (outputFilePath) {
            try {
              // Ensure the directory exists
              const dir = dirname(outputFilePath);
              await mkdir(dir, { recursive: true });
              
              // Write the file
              await writeFile(outputFilePath, result.content, 'utf-8');
              
              const output = { content: result.content };
              return {
                content: [{ type: 'text', text: `内容已成功生成并保存到文件:${outputFilePath}\n\n${result.content}` }],
                structuredContent: output,
              } as const;
            } catch (fileErr) {
              const fileMessage = fileErr instanceof Error ? fileErr.message : String(fileErr);
              throw new Error(`文件写入失败:${fileMessage}`);
            }
          }
    
          const output = { content: result.content };
          return {
            content: [{ type: 'text', text: result.content }],
            structuredContent: output,
          } as const;
        } catch (err) {
          const message = err instanceof Error ? err.message : String(err);
          return {
            content: [{ type: 'text', text: `Error: ${message}` }],
            isError: true,
          } as const;
        }
      }
    );
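The model-selection fallback chain and the web-search option plumbing in the handler above can be isolated into small pure functions, which makes their behavior easy to verify in isolation. The sketch below uses hypothetical standalone names (`selectModel`, `buildWebSearch`) that are not part of the actual module; the logic itself mirrors the handler.

```typescript
type Backend = 'gemini' | 'openrouter';

// Mirrors the env-var fallback chain used in the handler:
// uppercase var, then lowercase var, then a hard-coded default model.
function selectModel(backend: Backend, env: Record<string, string | undefined>): string {
  if (backend === 'gemini') {
    return env.GEMINI_MODEL || env.gemini_model || 'gemini-2.5-flash';
  }
  return env.OPENROUTER_MODEL || env.openrouter_model || 'qwen/qwen3-next-80b-a3b-instruct';
}

interface WebSearchConfig {
  enabled: true;
  engine?: 'native' | 'exa';
  maxResults?: number;
}

// Mirrors the conditional-spread construction of the webSearch option:
// undefined when search is off; optional keys included only when provided.
function buildWebSearch(
  enable: boolean | undefined,
  engine?: 'native' | 'exa',
  maxResults?: number,
): WebSearchConfig | undefined {
  if (!enable) return undefined;
  return {
    enabled: true,
    ...(engine && { engine }),
    ...(maxResults && { maxResults }),
  };
}

console.log(selectModel('gemini', {}));        // gemini-2.5-flash
console.log(buildWebSearch(true, 'exa', 5));   // { enabled: true, engine: 'exa', maxResults: 5 }
```

Note that because the handler uses truthiness (`maxResults && …`), an explicit `webSearchMaxResults` of 0 is dropped rather than passed through; in practice 0 results is not a meaningful request, so the behavior is harmless.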