contentrain_apply

Apply normalization operations to extract content into structured files or patch source files with replacement expressions, using a preview-first workflow to validate changes before execution.

Instructions

Apply normalize operations. Two modes: "extract" writes agent-approved strings to Contentrain content files (source untouched), "reuse" patches source files with agent-provided replacement expressions. DRY RUN (default, dry_run:true): validates inputs, resolves conflicts, and returns a full preview — NO changes to disk or git. EXECUTE (dry_run:false): writes files to disk, commits to a branch, and requires branch health check to pass. Recommended workflow: always run dry_run first, review the preview, then call again with dry_run:false to execute. Normalize operations always use review workflow (never auto-merge).
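The preview-then-execute workflow described above can be sketched as two tool-call payloads. This is illustrative only: the tool name and `dry_run` flag come from the schema below, while the extraction contents are placeholders.

```typescript
// Step 1: preview call — validates and returns a preview, touches nothing.
const previewCall = {
  tool: 'contentrain_apply',
  arguments: {
    mode: 'extract',
    dry_run: true, // default: validate and preview only, no disk/git changes
    extractions: [] as unknown[], // placeholder; see the schema below
  },
}

// Step 2: after reviewing the preview, re-send the same arguments
// with dry_run:false to execute.
const executeCall = {
  ...previewCall,
  arguments: { ...previewCall.arguments, dry_run: false },
}
```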

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| mode | Yes | Apply mode: extract (content creation) or reuse (source patching) | |
| dry_run | No | Defaults to preview mode (dry_run:true). Set dry_run:false to execute after reviewing the preview. | true |
| extractions | No | Extract mode: content extractions | |
| scope | No | Reuse mode: scope (model or domain required) | |
| patches | No | Reuse mode: patches to apply (max 100) | |
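A hypothetical reuse-mode argument object matching this schema; the file path, replacement expression, and import line are illustrative, not real project code.

```typescript
const reuseArgs = {
  mode: 'reuse',
  dry_run: true,
  scope: { model: 'ui-texts' }, // model or domain is required
  patches: [
    {
      file: 'src/components/Hero.tsx', // illustrative path
      line: 12, // hint only; matching searches nearby lines
      old_value: 'Welcome back',
      new_expression: "t('ui-texts.welcome')",
      import_statement: "import { t } from './i18n'",
    },
  ],
}
```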

Implementation Reference

  • Handler for 'contentrain_apply' in "reuse" mode, which patches source files with replacement expressions.
    export async function applyReuse(
      projectRoot: string,
      input: ReuseInput,
    ): Promise<ReuseResult> {
      const config = await readConfig(projectRoot)
      if (!config) throw new Error('Project not initialized. Run contentrain_init first.')
    
      const { scope, patches, dry_run } = input
    
      // Validate scope
      if (!scope.model && !scope.domain) {
        throw new Error('Scope required: provide model or domain. Whole-project patching is not allowed.')
      }
    
      // Validate patch count
      if (patches.length > MAX_PATCHES) {
        throw new Error(`Too many patches (${patches.length}). Maximum ${MAX_PATCHES} per operation. Split into multiple calls.`)
      }
    
      // Check content exists for scope (soft warning)
      if (scope.model) {
        const model = await readModel(projectRoot, scope.model)
        if (!model) {
          throw new Error(`Model "${scope.model}" not found. Run extract phase first.`)
        }
      }
    
      const scopeWarnings: string[] = []
    
      // Guardrail #1: Scope Real Enforcement
      // Step 1: Path safety — every patch must target a valid, patchable source file
      for (const patch of patches) {
        const pathError = validatePatchPath(patch.file)
        if (pathError) {
          throw new Error(`Invalid patch path: ${pathError}`)
        }
      }
    
      // Step 2: Semantic scope — verify scope model/domain exists and cross-check patch files
      if (scope.model || scope.domain) {
        const models = await listModels(projectRoot)
        const scopeModels = scope.model
          ? models.filter(m => m.id === scope.model)
          : scope.domain
            ? models.filter(m => m.domain === scope.domain)
            : models
    
        if (scopeModels.length === 0) {
          throw new Error(`No models found for scope ${scope.model ? `model="${scope.model}"` : `domain="${scope.domain}"`}`)
        }
    
        // Step 3: Verify patch files are source files (not content/config/meta files)
        // and belong to detectable source directories (not random locations)
        const { autoDetectSourceDirs } = await import('./scan-config.js')
        const sourceDirs = await autoDetectSourceDirs(projectRoot)
    
        // Build allowed file prefixes from source dirs
        const allowedPrefixes = sourceDirs.map(d => d === '.' ? '' : d + '/')
    
        for (const patch of patches) {
          const normalizedPath = patch.file.replace(/\\/g, '/')
    
          // Reject patches targeting .contentrain/ directory (content files should never be patched by reuse)
          if (normalizedPath.startsWith('.contentrain/') || normalizedPath.includes('/.contentrain/')) {
            throw new Error(`Cannot patch content/config files directly: "${patch.file}". Reuse patches source files only.`)
          }
    
          // If source dirs were detected (not just "."), verify patch files are within them
          if (sourceDirs.length > 0 && sourceDirs[0] !== '.') {
            const inSourceDir = allowedPrefixes.some(prefix =>
              prefix === '' || normalizedPath.startsWith(prefix),
            )
            if (!inSourceDir) {
              throw new Error(
                `Patch file "${patch.file}" is outside detected source directories (${sourceDirs.join(', ')}). ` +
                `Reuse patches must target source files within the project's source tree.`,
              )
            }
          }
        }
    
        // Step 4: Semantic source→model cross-check via normalize-sources.json
        // Source map is written by extract into the review branch worktree.
        // It only exists on base after extract branch is merged.
        // If missing: dry_run proceeds with warning, execute is blocked.
        if (scope.model || scope.domain) {
          const sourcesPath = join(projectRoot, '.contentrain', 'normalize-sources.json')
          const sourcesRaw = await readText(sourcesPath)
    
          if (!sourcesRaw) {
            // Source map not found — check if scoped model is a dictionary
            // Dictionaries store all keys in one entry, so per-file source tracking
            // is not available. Allow reuse with a warning instead of blocking.
            const scopedModel = scopeModels.find(m => m.id === scope.model)
            const isDictionary = scopedModel?.kind === 'dictionary'
    
            if (isDictionary) {
              scopeWarnings.push(
                'normalize-sources.json not found. Dictionary models do not generate per-file source maps. ' +
                'Scope enforcement is based on source-tree locality only.',
              )
            } else if (dry_run !== false) {
              scopeWarnings.push(
                'normalize-sources.json not found. Semantic scope enforcement is unavailable. ' +
                'Merge the extract branch first, then reuse will have full scope protection.',
              )
            } else {
              throw new Error(
                'Cannot execute reuse: normalize-sources.json not found on base branch. ' +
                'The extract branch must be merged before reuse can execute. ' +
                'This ensures semantic scope enforcement protects against out-of-scope patching.',
              )
            }
          } else {
            const sourcesData = JSON.parse(sourcesRaw) as {
              models?: Record<string, { source_files?: string[] }>
            }
    
            // Collect all allowed source files for the scope
            let allowedSourceFiles: string[] = []
            let scopeLabel = ''
    
            if (scope.model) {
              const modelSources = sourcesData.models?.[scope.model]?.source_files
              if (modelSources) allowedSourceFiles = modelSources
              scopeLabel = `model "${scope.model}"`
            } else if (scope.domain) {
              const domainModelIds = scopeModels.map(m => m.id)
              for (const modelId of domainModelIds) {
                const modelSources = sourcesData.models?.[modelId]?.source_files
                if (modelSources) allowedSourceFiles.push(...modelSources)
              }
              allowedSourceFiles = [...new Set(allowedSourceFiles)]
              scopeLabel = `domain "${scope.domain}"`
            }
    
            if (allowedSourceFiles.length > 0) {
              const outOfScopePatches: string[] = []
              for (const patch of patches) {
                const normalizedPath = patch.file.replace(/\\/g, '/')
                if (!allowedSourceFiles.includes(normalizedPath)) {
                  outOfScopePatches.push(patch.file)
                }
              }
              if (outOfScopePatches.length > 0) {
                throw new Error(
                  `Scope enforcement: ${outOfScopePatches.length} patch file(s) are not associated with ${scopeLabel}. ` +
                  `Out-of-scope files: ${outOfScopePatches.join(', ')}. ` +
                  `Known source files: ${allowedSourceFiles.join(', ')}.`,
                )
              }
            }
          }
        }
      }
    
      // Group patches by file
      const patchesByFile = new Map<string, PatchEntry[]>()
      for (const patch of patches) {
        if (!patchesByFile.has(patch.file)) {
          patchesByFile.set(patch.file, [])
        }
        patchesByFile.get(patch.file)!.push(patch)
      }
    
      const filesToModify = [...patchesByFile.keys()]
      const importsToAdd = patches.filter(p => p.import_statement).length
    
      // Dry run — return preview only
      if (dry_run !== false) {
        return {
          dry_run: true,
          preview: {
            files_to_modify: filesToModify,
            patches_count: patches.length,
            imports_to_add: importsToAdd,
          },
          ...(scopeWarnings.length > 0 ? { scope_warnings: scopeWarnings } : {}),
          next_steps: [
            ...(scopeWarnings.length > 0 ? [`WARNING: ${scopeWarnings.length} patch file(s) not in extract source map — verify intent`] : []),
            'Review the files and patches above',
            'Call contentrain_apply with mode:reuse and dry_run:false to execute',
          ],
        }
      }
    
      // Branch health gate
      const reuseHealth = await checkBranchHealth(projectRoot)
      if (reuseHealth.blocked) {
        return {
          dry_run: false,
          error: `Branch blocked: ${reuseHealth.message}`,
          next_steps: ['Merge or delete old contentrain/* branches before executing reuse.'],
        }
      }
    
      // Execute — git transaction
      const scopeTarget = scope.model ?? scope.domain!
      const branchName = buildBranchName('normalize/reuse', scopeTarget)
      const tx = await createTransaction(projectRoot, branchName, { workflowOverride: 'review' })
    
      const filesModified: string[] = []
      let patchesApplied = 0
      let importsAdded = 0
      const patchesSkipped: Array<{ file: string; line: number; reason: string }> = []
      const frameworkWarnings: Array<{ file: string; warning: string }> = []
      const syntaxErrors: SyntaxError[] = []
    
      try {
        await tx.write(async (wt) => {
          for (const [relFile, filePatches] of patchesByFile) {
            const absPath = join(wt, relFile)
    
            if (!(await pathExists(absPath))) {
              for (const p of filePatches) {
                patchesSkipped.push({ file: relFile, line: p.line, reason: 'file not found' })
              }
              continue
            }
    
            const content = await readText(absPath)
            if (content === null) {
              for (const p of filePatches) {
                patchesSkipped.push({ file: relFile, line: p.line, reason: 'file unreadable' })
              }
              continue
            }
    
            // Sort patches by line DESC (bottom-up to avoid line shifts)
            const sorted = [...filePatches].toSorted((a, b) => b.line - a.line)
    
            const lines = content.split('\n')
            let fileModified = false
    
            for (const patch of sorted) {
              // Determine replacement context before applying
              const targetLine = lines[patch.line - 1] ?? ''
              const isTagTextContext = targetLine.includes(`>${patch.old_value}<`)
              const replacementContext = isTagTextContext ? 'tag_text' as const : 'other' as const
    
              // Guardrail #2: Framework-aware validation — only warn for tag text replacements
              if (isTagTextContext) {
                const fwWarning = validateFrameworkExpression(relFile, patch.new_expression, replacementContext)
                if (fwWarning) {
                  frameworkWarnings.push({ file: relFile, warning: fwWarning })
                }
              }
    
              const applied = applyPatchToLines(lines, patch)
              if (applied) {
                patchesApplied++
                fileModified = true
              } else {
                patchesSkipped.push({ file: relFile, line: patch.line, reason: 'old_value not found at or near specified line' })
              }
            }
    
            // Add imports (deduplicate)
            const importStatements = new Set(
              filePatches
                .filter(p => p.import_statement)
                .map(p => p.import_statement!),
            )
    
            if (importStatements.size > 0) {
              const added = addImportsToLines(lines, importStatements)
              importsAdded += added
              if (added > 0) fileModified = true
            }
    
            if (fileModified) {
              const newContent = lines.join('\n')
              await writeText(absPath, newContent)
              filesModified.push(relFile)
    
              // Guardrail #5: Syntax check after patching
              const syntaxError = checkSyntax(relFile, newContent)
              if (syntaxError) {
                syntaxErrors.push({ file: relFile, error: syntaxError })
              }
            }
          }
    
          // Update context
          await writeContext(wt, {
            tool: 'contentrain_apply',
            model: scopeTarget,
            locale: config.locales.default,
          })
        })
    
        if (filesModified.length === 0) {
          await tx.cleanup()
          return {
            dry_run: false,
            results: {
              files_modified: [],
              patches_applied: 0,
              patches_skipped: patchesSkipped,
              imports_added: 0,
              framework_warnings: frameworkWarnings.length > 0 ? frameworkWarnings : undefined,
            },
            next_steps: ['No files were modified. Check patch definitions and try again.'],
          }
        }
    
        const commitMsg = `[contentrain] normalize: reuse ${scopeTarget} — patch ${filesModified.length} files (${patchesApplied} replacements)`
        await tx.commit(commitMsg)
    
        const gitResult = { branch: branchName, action: 'pending-review', commit: '' }
        try {
          const completed = await tx.complete()
          gitResult.action = completed.action
          gitResult.commit = completed.commit
        } catch {
          gitResult.action = 'pending-review'
        } finally {
          await tx.cleanup()
        }
    
        return {
          dry_run: false,
          results: {
            files_modified: filesModified,
            patches_applied: patchesApplied,
            patches_skipped: patchesSkipped,
            imports_added: importsAdded,
            framework_warnings: frameworkWarnings.length > 0 ? frameworkWarnings : undefined,
            syntax_errors: syntaxErrors.length > 0 ? syntaxErrors : undefined,
          },
          ...(scopeWarnings.length > 0 ? { scope_warnings: scopeWarnings } : {}),
          git: gitResult,
          next_steps: [
            'Run contentrain_validate to verify the patched files',
            patchesSkipped.length > 0 ? `${patchesSkipped.length} patches were skipped — review and retry if needed` : '',
            syntaxErrors.length > 0 ? `WARNING: ${syntaxErrors.length} file(s) may have syntax errors after patching — review manually` : '',
            scopeWarnings.length > 0 ? `NOTE: ${scopeWarnings.length} patch file(s) not in extract source map` : '',
            'Run contentrain_submit to push the branch for review',
          ].filter(Boolean),
        }
      } catch (error) {
        await tx.cleanup()
        throw error
      }
    }
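The bottom-up ordering used in applyReuse above can be sketched generically. This is a minimal sketch, not the real applyPatchToLines (which is not shown in this listing): the Patch shape is simplified, and matching here is exact-line only rather than a ±10-line search.

```typescript
interface Patch {
  line: number        // 1-based line hint
  old_value: string   // literal text to replace
  new_expression: string
}

// Applying patches from the bottom of the file upward means earlier
// replacements cannot shift the line numbers of patches not yet applied.
function applyPatchesBottomUp(
  source: string,
  patches: Patch[],
): { text: string; applied: number } {
  const lines = source.split('\n')
  let applied = 0
  for (const p of [...patches].sort((a, b) => b.line - a.line)) {
    const idx = p.line - 1
    const line = lines[idx]
    if (line !== undefined && line.includes(p.old_value)) {
      lines[idx] = line.replace(p.old_value, p.new_expression)
      applied++
    }
  }
  return { text: lines.join('\n'), applied }
}
```

Sorting descending is what makes in-place line edits safe; a top-down pass would need to re-resolve every remaining line hint after any multi-line change.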
  • Handler for 'contentrain_apply' in "extract" mode, which writes strings to content files.
    export async function applyExtract(
      projectRoot: string,
      input: ExtractionInput,
    ): Promise<ExtractionResult> {
      const config = await readConfig(projectRoot)
      if (!config) throw new Error('Project not initialized. Run contentrain_init first.')
    
      const { extractions, dry_run } = input
    
      // Analyze what will happen
      const existingModels = await listModels(projectRoot)
      const existingIds = new Set(existingModels.map(m => m.id))
    
      const modelsToCreate: string[] = []
      const modelsToUpdate: string[] = []
      const contentFiles: string[] = []
      let totalEntries = 0
    
      const validationErrors: string[] = []
    
      for (const ext of extractions) {
        // Validate model definition with same rules as model_save
        const modelErrors = validateModelDefinition({
          id: ext.model,
          kind: ext.kind,
          fields: ext.fields as Record<string, unknown> | undefined,
        })
        if (modelErrors.length > 0) {
          validationErrors.push(...modelErrors.map(e => `[${ext.model}] ${e}`))
        }
    
        for (const entry of ext.entries) {
          if (ext.kind === 'dictionary') {
            if (entry.data['id'] !== undefined || entry.data['slug'] !== undefined) {
              validationErrors.push(`[${ext.model}] Dictionary entries should not have id or slug`)
            }
            for (const [key, val] of Object.entries(entry.data)) {
              if (typeof val !== 'string') {
                validationErrors.push(`[${ext.model}] Dictionary entry value for key "${key}" must be a string, got ${typeof val}`)
              }
            }
          } else if (ext.kind === 'document') {
            if (!entry.slug && !entry.data['slug']) {
              validationErrors.push(`[${ext.model}] Document entries must have a slug`)
            }
          } else if (ext.kind === 'collection') {
            if (entry.slug !== undefined || entry.data['slug'] !== undefined) {
              validationErrors.push(`[${ext.model}] Collection entries should not have slug`)
            }
          } else if (ext.kind === 'singleton') {
            if (entry.data['id'] !== undefined || entry.slug !== undefined || entry.data['slug'] !== undefined) {
              validationErrors.push(`[${ext.model}] Singleton entries should not have id or slug`)
            }
          }
        }
    
        if (existingIds.has(ext.model)) {
          modelsToUpdate.push(ext.model)
        } else {
          modelsToCreate.push(ext.model)
        }
        totalEntries += ext.entries.length
    
        // Guardrail #3: Preview-Execute Parity — use real model metadata if it exists
        let previewModel: ModelDefinition
        if (existingIds.has(ext.model)) {
          const real = await readModel(projectRoot, ext.model)
          if (real) {
            previewModel = real
          } else {
            previewModel = { id: ext.model, kind: ext.kind, domain: ext.domain, i18n: ext.i18n ?? true } as ModelDefinition
          }
        } else {
          previewModel = { id: ext.model, kind: ext.kind, domain: ext.domain, i18n: ext.i18n ?? true } as ModelDefinition
        }
    
        const cDir = resolveContentDir(projectRoot, previewModel)
        for (const entry of ext.entries) {
          const locale = entry.locale ?? config.locales.default
          if (ext.kind === 'document' && entry.slug) {
            contentFiles.push(resolveMdFilePath(cDir, previewModel, locale, entry.slug))
          } else {
            contentFiles.push(resolveJsonFilePath(cDir, previewModel, locale))
          }
        }
      }
    
      const preview: ExtractionPreview = {
        models_to_create: modelsToCreate,
        models_to_update: modelsToUpdate,
        total_entries: totalEntries,
        content_files: [...new Set(contentFiles)],
      }
    
      // Dry run — return preview only (include validation errors if any)
      if (dry_run !== false) {
        return {
          dry_run: true,
          preview,
          ...(validationErrors.length > 0 ? { validation_errors: validationErrors } : {}),
          next_steps: [
            ...(validationErrors.length > 0
              ? [`WARNING: ${validationErrors.length} validation error(s) found — fix before executing`]
              : []),
            'Review the preview above',
            'Call contentrain_apply with mode:extract and dry_run:false to execute',
          ],
        }
      }
    
      // Block execute if validation errors exist
      if (validationErrors.length > 0) {
        return {
          dry_run: false,
          error: 'Model validation failed — cannot execute extract with invalid model definitions',
          validation_errors: validationErrors,
          next_steps: ['Fix the validation errors and retry'],
        }
      }
    
      // Branch health gate
      const health = await checkBranchHealth(projectRoot)
      if (health.blocked) {
        return {
          error: health.message,
          action: 'blocked' as const,
          hint: 'Merge or delete old contentrain/* branches before executing normalize.',
        } as unknown as ExtractionResult
      }
    
      // Execute — git transaction (always review mode for normalize)
      const branchName = buildBranchName('normalize', 'extract')
      const tx = await createTransaction(projectRoot, branchName, { workflowOverride: 'review' })
      const sourceMap: Array<{ model: string; locale: string; value: string; file: string; line: number }> = []
      const modelsCreated: string[] = []
      const modelsUpdated: string[] = []
      let entriesWritten = 0
    
      try {
        await tx.write(async (wt) => {
          for (const ext of extractions) {
            // Create or merge model
            const existing = await readModel(wt, ext.model)
            if (existing) {
          // Merge fields: keep existing definitions, add only new ones
          if (ext.fields) {
            const merged = { ...ext.fields, ...existing.fields }
                // Only overwrite if new fields were actually added
                const newFieldNames = Object.keys(ext.fields).filter(k => !(k in (existing.fields ?? {})))
                if (newFieldNames.length > 0) {
                  existing.fields = merged
                  await writeModel(wt, existing)
                  modelsUpdated.push(ext.model)
                }
              }
            } else {
              // Create new model
              const newModel: ModelDefinition = {
                id: ext.model,
                name: ext.model.split('-').map(w => w.charAt(0).toUpperCase() + w.slice(1)).join(' '),
                kind: ext.kind,
                domain: ext.domain,
                i18n: ext.i18n ?? true,
                fields: ext.fields,
              }
              await writeModel(wt, newModel)
              modelsCreated.push(ext.model)
            }
    
            // Write content entries
            const model = (await readModel(wt, ext.model))!
            const entries: ContentEntry[] = ext.entries.map(e => ({
              locale: e.locale,
              slug: e.slug,
              data: e.data,
            }))
    
            const wtConfig = await readConfig(wt) ?? config
            await writeContent(wt, model, entries, wtConfig)
            entriesWritten += entries.length
    
            // Track source map
            for (const entry of ext.entries) {
              // For dictionary entries with per-key source tracking
              if (ext.kind === 'dictionary' && entry.sources) {
                for (const s of entry.sources) {
                  sourceMap.push({
                    model: ext.model,
                    locale: entry.locale ?? config.locales.default,
                    value: s.value,
                    file: s.file,
                    line: s.line,
                  })
                }
              } else if (entry.source) {
                sourceMap.push({
                  model: ext.model,
                  locale: entry.locale ?? config.locales.default,
                  value: entry.source.value,
                  file: entry.source.file,
                  line: entry.source.line,
                })
              }
            }
          }
    
          // Write source map for reuse scope enforcement
          if (sourceMap.length > 0) {
            const sourcesByModel: Record<string, { source_files: string[]; entry_count: number }> = {}
            for (const s of sourceMap) {
              if (!sourcesByModel[s.model]) {
                sourcesByModel[s.model] = { source_files: [], entry_count: 0 }
              }
              const modelEntry = sourcesByModel[s.model]!
              if (!modelEntry.source_files.includes(s.file)) {
                modelEntry.source_files.push(s.file)
              }
              modelEntry.entry_count++
            }
            const sourcesJson = JSON.stringify({
              version: 1,
              created_at: new Date().toISOString(),
              models: sourcesByModel,
            }, null, 2) + '\n'
            // Write to worktree only — merge brings it to main
            // Phase 2 (reuse) must wait for extract branch to be merged first
            await writeText(join(wt, '.contentrain', 'normalize-sources.json'), sourcesJson)
          }
    
          // Update context
          await writeContext(wt, {
            tool: 'contentrain_apply',
            model: extractions.map(e => e.model).join(','),
            locale: config.locales.default,
            entries: extractions.flatMap(e => e.entries.map(en => en.slug ?? 'entry')),
          })
        })
    
        const commitMsg = `[contentrain] normalize: extract ${entriesWritten} entries to ${extractions.length} models`
        await tx.commit(commitMsg)
    
        const gitResult = { branch: branchName, action: 'pending-review', commit: '' }
        try {
          const completed = await tx.complete()
          gitResult.action = completed.action
          gitResult.commit = completed.commit
        } catch {
          gitResult.action = 'pending-review'
        } finally {
          await tx.cleanup()
        }
    
        return {
          dry_run: false,
          results: {
            models_created: modelsCreated,
            models_updated: modelsUpdated,
            entries_written: entriesWritten,
            source_map: sourceMap,
          },
          git: gitResult,
          context_updated: true,
          next_steps: [
            'Run contentrain_validate to check the extracted content',
            'Run contentrain_submit to push the branch for review',
            'After review, proceed with mode:reuse to patch source files',
          ],
        }
      } catch (error) {
        await tx.cleanup()
        throw error
      }
    }
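The normalize-sources.json aggregation performed near the end of applyExtract can be sketched in isolation: flat source-map rows are grouped per model, with the file list deduplicated and every row counted. The types here are simplified stand-ins for the real ones.

```typescript
interface SourceRow { model: string; file: string }
interface ModelSources { source_files: string[]; entry_count: number }

function groupSources(rows: SourceRow[]): Record<string, ModelSources> {
  const byModel: Record<string, ModelSources> = {}
  for (const r of rows) {
    // Create the model bucket on first sight, then accumulate into it.
    const entry = (byModel[r.model] ??= { source_files: [], entry_count: 0 })
    if (!entry.source_files.includes(r.file)) entry.source_files.push(r.file)
    entry.entry_count++
  }
  return byModel
}
```

This grouped shape is what reuse-mode scope enforcement later reads back to decide which patch files belong to a given model or domain.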
  • Registration of the 'contentrain_apply' tool, which routes to the appropriate handler (applyExtract or applyReuse).
    // ─── contentrain_apply ───
    server.tool(
      'contentrain_apply',
      'Apply normalize operations. Two modes: "extract" writes agent-approved strings to Contentrain content files (source untouched), "reuse" patches source files with agent-provided replacement expressions. DRY RUN (default, dry_run:true): validates inputs, resolves conflicts, and returns a full preview — NO changes to disk or git. EXECUTE (dry_run:false): writes files to disk, commits to a branch, and requires branch health check to pass. Recommended workflow: always run dry_run first, review the preview, then call again with dry_run:false to execute. Normalize operations always use review workflow (never auto-merge).',
      {
        mode: z.enum(['extract', 'reuse']).describe('Apply mode: extract (content creation) or reuse (source patching)'),
        dry_run: z.boolean().optional().default(true).describe('Defaults to preview mode (dry_run:true). Set dry_run:false to execute after reviewing the preview.'),
    
        // Extract mode fields
        extractions: z.array(z.object({
          model: z.string().describe('Model ID (e.g. "ui-texts", "hero-section")'),
          kind: z.enum(['singleton', 'collection', 'dictionary', 'document']).describe('Model kind'),
          domain: z.string().describe('Content domain (e.g. "marketing", "app")'),
          i18n: z.boolean().optional().describe('Enable i18n. Default: true'),
          fields: fieldDefZodSchema.optional().describe('Field definitions — shared schema with model_save for full parity'),
          entries: z.array(z.object({
            locale: z.string().optional().describe('Locale. Default: project default'),
            slug: z.string().optional().describe('Document slug'),
            data: z.record(z.any()).describe('Content data'),
            source: z.object({
              file: z.string().describe('Source file path'),
              line: z.number().describe('Source line number'),
              value: z.string().describe('Original string value'),
            }).optional().describe('Source tracking for traceability'),
            sources: z.array(z.object({
              file: z.string().describe('Source file path'),
              line: z.number().describe('Source line number'),
              key: z.string().describe('Dictionary key this source maps to'),
              value: z.string().describe('Original string value'),
            })).optional().describe('Per-key source tracking for dictionary models'),
          })).describe('Content entries to create'),
        })).optional().describe('Extract mode: content extractions'),
    
        // Reuse mode fields
        scope: z.object({
          model: z.string().optional().describe('Target model ID'),
          domain: z.string().optional().describe('Target domain'),
        }).optional().describe('Reuse mode: scope (model or domain required)'),
        patches: z.array(z.object({
          file: z.string().describe('Relative file path to patch'),
          line: z.number().describe('Line number hint (±10 line search)'),
          old_value: z.string().describe('Original string to replace'),
          new_expression: z.string().describe('Replacement expression (agent-determined)'),
          import_statement: z.string().optional().describe('Import to add if needed'),
        })).optional().describe('Reuse mode: patches to apply (max 100)'),
      },
      async (input) => {
        const config = await readConfig(projectRoot)
        if (!config) {
          return {
            content: [{ type: 'text' as const, text: JSON.stringify({ error: 'Project not initialized. Run contentrain_init first.' }) }],
            isError: true,
          }
        }
    
        try {
          switch (input.mode) {
            case 'extract': {
              if (!input.extractions || input.extractions.length === 0) {
                return {
                  content: [{ type: 'text' as const, text: JSON.stringify({ error: 'Extract mode requires extractions array' }) }],
                  isError: true,
                }
              }

              const result = await applyExtract(projectRoot, {
                extractions: input.extractions.map(e => ({
                  ...e,
                  fields: e.fields as Record<string, FieldDef> | undefined,
                })),
                dry_run: input.dry_run,
              })

              // Check for branch-blocked response
              if (result.error !== undefined) {
                return {
                  content: [{ type: 'text' as const, text: JSON.stringify({
                    mode: 'extract',
                    ...result,
                  }, null, 2) }],
                  isError: true,
                }
              }

              return {
                content: [{ type: 'text' as const, text: JSON.stringify({
                  mode: 'extract',
                  ...result,
                }, null, 2) }],
              }
            }

            case 'reuse': {
              if (!input.scope) {
                return {
                  content: [{ type: 'text' as const, text: JSON.stringify({ error: 'Reuse mode requires scope (model or domain)' }) }],
                  isError: true,
                }
              }
              if (!input.patches || input.patches.length === 0) {
                return {
                  content: [{ type: 'text' as const, text: JSON.stringify({ error: 'Reuse mode requires patches array' }) }],
                  isError: true,
                }
              }

              const result = await applyReuse(projectRoot, {
                scope: input.scope,
                patches: input.patches,
                dry_run: input.dry_run,
              })

              // Check for branch-blocked response
              if (result.error !== undefined) {
                return {
                  content: [{ type: 'text' as const, text: JSON.stringify({
                    mode: 'reuse',
                    ...result,
                  }, null, 2) }],
                  isError: true,
                }
              }

              return {
                content: [{ type: 'text' as const, text: JSON.stringify({
                  mode: 'reuse',
                  ...result,
                }, null, 2) }],
              }
            }

            default:
              return {
                content: [{ type: 'text' as const, text: JSON.stringify({ error: `Unknown mode: ${input.mode}` }) }],
                isError: true,
              }
          }
        } catch (error) {
          return {
            content: [{ type: 'text' as const, text: JSON.stringify({
              error: `Apply failed: ${error instanceof Error ? error.message : String(error)}`,
            }) }],
            isError: true,
          }
        }
      },
    )
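The handler's pre-dispatch validation rules can be restated as a standalone function. The sketch below is illustrative only, not part of the server source; the error strings mirror those in the excerpt above.

```typescript
// Minimal re-statement of the handler's validation rules — illustrative,
// not part of the server source.
interface ApplyInput {
  mode: string
  extractions?: unknown[]
  scope?: { model?: string, domain?: string }
  patches?: unknown[]
}

// Returns the rejection message, or null when the input would be
// forwarded to applyExtract/applyReuse.
function validateApplyInput(input: ApplyInput): string | null {
  switch (input.mode) {
    case 'extract':
      if (!input.extractions || input.extractions.length === 0) {
        return 'Extract mode requires extractions array'
      }
      return null
    case 'reuse':
      if (!input.scope) {
        return 'Reuse mode requires scope (model or domain)'
      }
      if (!input.patches || input.patches.length === 0) {
        return 'Reuse mode requires patches array'
      }
      return null
    default:
      return `Unknown mode: ${input.mode}`
  }
}
```

Note that the real handler performs these checks only after confirming the project is initialized, and wraps the dispatched call in a try/catch that converts thrown errors into `isError` responses.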
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full behavioral disclosure burden and excels: specifies dry_run defaults to true, explains that execute mode 'writes files to disk, commits to a branch,' discloses the 'review workflow (never auto-merge)' constraint, and clarifies safety mechanisms (conflict resolution in dry run, branch health checks required).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Information-dense with four well-structured sentences covering modes, dry-run behavior, execute behavior, and workflow recommendations. The DRY RUN/EXECUTE labeling aids scanability. Minor quibble: 'normalize operations' should be 'normalization operations,' but overall efficiently packed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with nested objects and 5 parameters, the description comprehensively covers mode behaviors, safety mechanisms, and workflow. Minor gap: while it mentions 'returns a full preview,' it doesn't describe the output structure or error scenarios given the lack of output schema, though this is partially mitigated by the detailed behavioral descriptions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema coverage is 100% (baseline 3), the description adds valuable semantic context: it maps the 'extractions' parameter to 'extract' mode and 'patches'/'scope' to 'reuse' mode, explains the dry_run parameter's role in the two-phase workflow, and clarifies that extract mode leaves source files untouched while reuse patches them.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
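To make the parameter-to-mode mapping concrete, example inputs for each mode might look like the following (all field values here are hypothetical):

```typescript
// Hypothetical example inputs — the file paths, strings, and expressions
// are invented for illustration.

// Extract mode: writes agent-approved strings to content files; the
// source files are untouched. dry_run: true is the default preview mode.
const extractCall = {
  mode: 'extract',
  dry_run: true,
  extractions: [
    { /* agent-approved string plus its target model and fields */ },
  ],
}

// Reuse mode: patches source files, so a scope naming a model or domain
// is required (whole-project patching is not allowed; max 100 patches).
const reuseCall = {
  mode: 'reuse',
  dry_run: true,
  scope: { model: 'faq' },
  patches: [
    {
      file: 'src/components/Faq.vue',          // relative file path to patch
      line: 42,                                // hint only; search is ±10 lines
      old_value: 'Frequently asked questions', // original string to replace
      new_expression: "t('faq.title')",        // agent-determined replacement
      import_statement: "import { t } from '~/i18n'", // optional import to add
    },
  ],
}
```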

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly defines the tool's purpose with specific verbs ('writes', 'patches') and distinguishes two distinct modes ('extract' and 'reuse'). It differentiates from siblings like contentrain_content_save by specifying this is for 'normalize operations' with specific behaviors (source untouched vs source patching).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance: 'always run dry_run first, review the preview, then call again with dry_run:false to execute.' Clearly distinguishes when to use DRY RUN (validation/preview) versus EXECUTE (writing to disk), including the prerequisite that branch health checks must pass for execution.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
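The two-phase workflow the description recommends can be sketched as calling-agent logic. The `callTool` helper and the result shape below are assumptions for illustration, not part of the documented API:

```typescript
// Sketch of the preview-then-execute loop an agent would follow.
// `callTool` is a hypothetical MCP client helper — an assumption,
// not a documented API.
type ToolResult = { isError?: boolean }

async function applyWithPreview(
  callTool: (name: string, args: Record<string, unknown>) => Promise<ToolResult>,
  args: Record<string, unknown>,
): Promise<ToolResult> {
  // Phase 1: dry run (the default) — validates inputs and returns a
  // preview; nothing is written to disk or git.
  const preview = await callTool('contentrain_apply', { ...args, dry_run: true })
  if (preview.isError) return preview // fix inputs before executing

  // (A real agent would review the preview payload here.)

  // Phase 2: execute — writes files and commits to a review branch.
  return callTool('contentrain_apply', { ...args, dry_run: false })
}
```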
