update_project

Incrementally update a project's RAG index so semantic search across the codebase stays current: only changed files are reprocessed, and their embeddings are updated or removed.

Instructions

Update a project's index (incremental indexing) with RAG options

Input Schema

  • project_path (required): Absolute path to the project to update
  • file_patterns: File patterns to include (default: **/*.{js,ts,py,md,txt,json,yaml,yml,html,css,scss})
  • recursive: Traverse subdirectories recursively (default: true)
  • embedding_provider: Embedding provider (fake, ollama, sentence-transformers) (default: fake)
  • embedding_model: Embedding model (for Ollama: 'nomic-embed-text', 'all-minilm', etc.) (default: nomic-embed-text)
  • chunk_size: Chunk size used when splitting files, in tokens
  • chunk_overlap: Overlap between consecutive chunks, in tokens
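
A minimal call needs only project_path; every other field falls back to a schema or configuration default. An illustrative argument payload (the path and patterns here are hypothetical):

```json
{
  "project_path": "/home/user/projects/my-app",
  "file_patterns": ["**/*.{js,ts,md}"],
  "recursive": true
}
```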

Implementation Reference

  • Registers the update_project tool (along with other RAG tools) using toolRegistry.register(updateProjectTool, updateProjectHandler). This is called during initialization.
    export function registerRagTools() {
        console.log("📝 Registering RAG tools...");
        // Register index_project
        try {
            toolRegistry.register(indexProjectTool, indexProjectHandler);
            console.log(`✅ Tool registered: ${indexProjectTool.name}`);
        }
        catch (error) {
            console.error(`❌ Error registering ${indexProjectTool.name}:`, error);
        }
        // Register search_code
        try {
            toolRegistry.register(searchCodeTool, searchCodeHandler);
            console.log(`✅ Tool registered: ${searchCodeTool.name}`);
        }
        catch (error) {
            console.error(`❌ Error registering ${searchCodeTool.name}:`, error);
        }
        // Register manage_projects
        try {
            toolRegistry.register(manageProjectsTool, manageProjectsHandler);
            console.log(`✅ Tool registered: ${manageProjectsTool.name}`);
        }
        catch (error) {
            console.error(`❌ Error registering ${manageProjectsTool.name}:`, error);
        }
        // Register update_project
        try {
            toolRegistry.register(updateProjectTool, updateProjectHandler);
            console.log(`✅ Tool registered: ${updateProjectTool.name}`);
        }
        catch (error) {
            console.error(`❌ Error registering ${updateProjectTool.name}:`, error);
        }
        console.log(`🎉 RAG tools done. Total tools registered: ${toolRegistry.size()}`);
    }
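
The four near-identical try/catch blocks above could be factored into one helper. A minimal sketch, assuming only the registry contract visible in the calls above (register(tool, handler) and size()); the ToolRegistry class and safeRegister name here are illustrative, not the actual implementation:

```javascript
// Minimal stand-in for the registry contract the code above assumes.
class ToolRegistry {
  constructor() {
    this.tools = new Map();
  }
  register(tool, handler) {
    // Reject tools without a name, mirroring the `${tool.name}` usage above.
    if (!tool || !tool.name) throw new Error("Tool must have a name");
    this.tools.set(tool.name, { tool, handler });
  }
  size() {
    return this.tools.size;
  }
}

// Factor the repeated try/catch pattern into a single helper.
function safeRegister(registry, tool, handler) {
  try {
    registry.register(tool, handler);
    console.log(`✅ Tool registered: ${tool.name}`);
    return true;
  } catch (error) {
    console.error(`❌ Error registering ${tool.name}:`, error);
    return false;
  }
}
```

With this, the body of registerRagTools reduces to four safeRegister calls plus the final summary log.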
  • Tool schema definition for 'update_project', including input schema with project_path (required), file_patterns, and recursive options.
    export const updateProjectTool = {
        name: "update_project",
        description: "Update a project's index (incremental indexing) with RAG options",
        inputSchema: {
            type: "object",
            properties: {
                project_path: {
                    type: "string",
                    description: "Absolute path to the project to update"
                },
                file_patterns: {
                    type: "array",
                    items: { type: "string" },
                    description: "File patterns to include",
                    default: ["**/*.{js,ts,py,md,txt,json,yaml,yml,html,css,scss}"]
                },
                recursive: {
                    type: "boolean",
                    description: "Traverse subdirectories recursively",
                    default: true
                }
            },
            required: ["project_path"]
        },
    };
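
A client can pre-validate arguments against this schema before calling the tool. A minimal sketch of required-key checking and default filling; applySchema is a hypothetical helper, not part of the server, and it deliberately handles only the subset of JSON Schema used above:

```javascript
// Check required keys and fill declared defaults from a simplified
// JSON Schema object like updateProjectTool.inputSchema above.
function applySchema(schema, args) {
  for (const key of schema.required || []) {
    if (!(key in args)) {
      throw new Error(`Missing required parameter: ${key}`);
    }
  }
  const out = { ...args };
  for (const [key, prop] of Object.entries(schema.properties || {})) {
    // Only fill defaults the schema actually declares.
    if (!(key in out) && "default" in prop) {
      out[key] = prop.default;
    }
  }
  return out;
}
```

Calling applySchema(updateProjectTool.inputSchema, { project_path: "/some/path" }) would return the arguments with recursive and file_patterns filled from their schema defaults, and throw if project_path were missing.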
  • Main handler for the update_project tool. Validates project_path, loads RAG config defaults and limits, configures embeddings, calls core updateProject, and formats response as MCP content.
    export const updateProjectHandler = async (args) => {
        if (!args.project_path || typeof args.project_path !== 'string') {
            throw new Error("The 'project_path' parameter is required and must be a string");
        }
        // Load the configuration
        const configManager = getRagConfigManager();
        const defaults = configManager.getDefaults();
        // Use the configuration defaults
        const file_patterns = args.file_patterns || defaults.file_patterns;
        const recursive = args.recursive !== undefined ? args.recursive : defaults.recursive;
        const embedding_provider = defaults.embedding_provider;
        const embedding_model = defaults.embedding_model;
        // Clamp numeric values to the configured limits
        const chunk_size = configManager.applyLimits('chunk_size', defaults.chunk_size);
        const chunk_overlap = configManager.applyLimits('chunk_overlap', defaults.chunk_overlap);
        // Configure the embedding provider
        setEmbeddingProvider(embedding_provider, embedding_model);
        const options = {
            filePatterns: file_patterns,
            recursive: recursive,
            chunkSize: chunk_size,
            chunkOverlap: chunk_overlap
        };
        try {
            const result = await updateProject(args.project_path, options);
            return {
                content: [{
                        type: "text",
                        text: JSON.stringify({
                            ...result,
                            config_used: {
                                embedding_provider,
                                embedding_model,
                                chunk_size,
                                chunk_overlap,
                                recursive,
                                file_patterns_count: file_patterns.length
                            }
                        }, null, 2)
                    }]
            };
        }
        catch (error) {
            console.error("Error in update_project tool:", error);
            throw error;
        }
    };
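
The handler serializes the stats as JSON inside an MCP text-content item. A caller can recover the structured result like this; parseUpdateResult is a hypothetical client-side helper, and the field values in the usage note are illustrative:

```javascript
// Extract the stats object from an MCP tool response whose first
// content item is JSON-serialized text, as produced by the handler above.
function parseUpdateResult(response) {
  const item = response.content && response.content[0];
  if (!item || item.type !== "text") {
    throw new Error("Unexpected MCP response shape");
  }
  return JSON.parse(item.text);
}
```

The parsed object carries both the update stats and the config_used block, so a client can log which chunk_size and embedding settings were actually applied.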
  • Core helper function implementing incremental project update using Git status for changed/added/deleted files, intelligent chunking, embedding, and vector store updates/deletes.
    export async function updateProject(projectPath, options = {}) {
        const { filePatterns = ["**/*.{js,ts,py,md,txt,json,yaml,yml,html,css,scss}"], recursive = true, chunkSize = 1000, chunkOverlap = 200, } = options;
        const stats = {
            totalFiles: 0,
            indexedFiles: 0,
            ignoredFiles: 0,
            errors: 0,
            chunksCreated: 0,
            modifiedFiles: 0,
            deletedFiles: 0,
            unchangedFiles: 0,
        };
        try {
            // Verify that the project exists
            if (!fs.existsSync(projectPath)) {
                throw new Error(`Project path does not exist: ${projectPath}`);
            }
            // Check whether this is a Git repository
            const isGitRepo = await isGitRepository(projectPath);
            if (!isGitRepo) {
                console.error(`Project ${projectPath} is not a Git repository, performing full reindex`);
                const fullStats = await indexProject(projectPath, options);
                return {
                    ...fullStats,
                    modifiedFiles: fullStats.indexedFiles,
                    deletedFiles: 0,
                    unchangedFiles: 0,
                };
            }
            // Get the files changed since the last commit
            const changedFiles = await getChangedFiles(projectPath);
            // Collect all project files
            const allFiles = await fg(filePatterns, {
                cwd: projectPath,
                absolute: true,
                dot: false,
                onlyFiles: true,
                followSymbolicLinks: false,
                ...(recursive ? {} : { deep: 1 }),
            });
            stats.totalFiles = allFiles.length;
            // Handle deleted files
            const deletedFiles = changedFiles.deleted || [];
            for (const filePath of deletedFiles) {
                try {
                    await deleteFileFromIndex(projectPath, filePath);
                    stats.deletedFiles++;
                    console.error(`Deleted from index: ${filePath}`);
                }
                catch (error) {
                    console.error(`Error deleting file ${filePath} from index:`, error);
                    stats.errors++;
                }
            }
            // Handle modified and added files
            const filesToProcess = [...(changedFiles.modified || []), ...(changedFiles.added || [])];
            for (const filePath of filesToProcess) {
                try {
                    // Check whether the file should be ignored
                    if (shouldIgnoreFile(filePath, projectPath)) {
                        stats.ignoredFiles++;
                        continue;
                    }
                    // Check whether the file still exists
                    if (!fs.existsSync(filePath)) {
                        stats.deletedFiles++;
                        await deleteFileFromIndex(projectPath, filePath);
                        continue;
                    }
                    // Read the file contents
                    const content = fs.readFileSync(filePath, "utf8");
                    // Skip empty or very small files
                    if (content.trim().length < 10) {
                        stats.ignoredFiles++;
                        continue;
                    }
                    // Detect content type and language
                    const detection = detectContentType(filePath, content);
                    const contentType = detection.contentType;
                    const language = detection.language;
                    // Split into chunks intelligently
                    const chunks = chunkSize > 0
                        ? await chunkIntelligently(content, filePath, contentType, language, chunkSize, chunkOverlap)
                        : [content];
                    // Remove this file's old chunks
                    await deleteFileFromIndex(projectPath, filePath);
                    // Store each chunk in the vector store with metadata
                    for (let i = 0; i < chunks.length; i++) {
                        const chunk = chunks[i];
                        const chunkFilePath = chunks.length > 1 ? `${filePath}#chunk${i}` : filePath;
                        await embedAndStore(projectPath, chunkFilePath, chunk, {
                            chunkIndex: i,
                            totalChunks: chunks.length,
                            contentType: contentType,
                            language: language,
                            fileExtension: filePath.split('.').pop() || undefined,
                            linesCount: chunk.split('\n').length,
                            role: contentType === 'code' ? 'core' :
                                contentType === 'doc' ? 'example' :
                                    contentType === 'config' ? 'template' : 'other'
                        });
                        stats.chunksCreated++;
                    }
                    stats.indexedFiles++;
                    stats.modifiedFiles++;
                    // Log progress
                    if (stats.indexedFiles % 10 === 0) {
                        console.error(`Indexed ${stats.indexedFiles}/${filesToProcess.length} changed files, ${stats.chunksCreated} chunks...`);
                    }
                }
                catch (error) {
                    console.error(`Error processing file ${filePath}:`, error);
                    stats.errors++;
                }
            }
            // Count unchanged files
            stats.unchangedFiles = stats.totalFiles - (stats.modifiedFiles + stats.deletedFiles + stats.ignoredFiles);
            console.error(`Incremental reindex completed for ${projectPath}`);
            console.error(`  Total files: ${stats.totalFiles}`);
            console.error(`  Modified/added: ${stats.modifiedFiles}`);
            console.error(`  Deleted: ${stats.deletedFiles}`);
            console.error(`  Unchanged: ${stats.unchangedFiles}`);
            console.error(`  Chunks created: ${stats.chunksCreated}`);
            console.error(`  Ignored: ${stats.ignoredFiles}`);
            console.error(`  Errors: ${stats.errors}`);
            return stats;
        }
        catch (error) {
            console.error(`Error updating project ${projectPath}:`, error);
            throw error;
        }
    }
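
The getChangedFiles helper is not shown on this page. One common way to implement it is to parse `git status --porcelain` output into the { modified, added, deleted } shape consumed above. A sketch of the parsing step only, under that assumption; parsePorcelainStatus is a hypothetical name, and the actual implementation may differ (for example, by diffing against the last indexed commit instead):

```javascript
// Parse `git status --porcelain` output ("XY path" per line) into the
// { modified, added, deleted } shape used by updateProject above.
// Untracked files ("??") are treated as added.
function parsePorcelainStatus(output) {
  const changes = { modified: [], added: [], deleted: [] };
  for (const line of output.split("\n")) {
    if (line.length < 4) continue; // skip blank/malformed lines
    const status = line.slice(0, 2);
    const filePath = line.slice(3).trim();
    if (status.includes("D")) changes.deleted.push(filePath);
    else if (status === "??" || status.includes("A")) changes.added.push(filePath);
    else if (status.includes("M")) changes.modified.push(filePath);
  }
  return changes;
}
```

The paths in porcelain output are repository-relative, so a real getChangedFiles would also resolve them against projectPath before handing them to the indexer.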
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but offers minimal behavioral insight. 'Mettre à jour' implies a mutation operation, but the description doesn't disclose whether it requires specific permissions, what 'indexation incrémentale' entails in practice, how long it might take, whether it's idempotent, or what happens on failure. The mention of RAG options is vague without an explanation of their impact.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that states the core purpose. There's no fluff or redundancy. However, it could be more front-loaded with critical behavioral context given the mutation nature and lack of annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with 7 parameters, no annotations, and no output schema, the description is inadequate. It doesn't explain what 'indexation incrémentale' means operationally, what RAG options do, what the tool returns, or error conditions. The agent lacks sufficient context to use this tool confidently without trial and error.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 7 parameters. The description adds no parameter-specific information beyond what's in the schema—it doesn't explain relationships between parameters (e.g., how 'embedding_provider' interacts with 'embedding_model') or provide usage examples. Baseline 3 is appropriate when the schema does all the work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Mettre à jour l'indexation d'un projet') and specifies it's incremental indexing with RAG options. It distinguishes from siblings like 'manage_projects' or 'search_code' by focusing on indexing rather than general management or search. However, it doesn't explicitly differentiate from potential similar indexing tools if they existed.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, when incremental indexing is appropriate, or how it differs from other project-related tools like 'manage_projects' or 'injection_rag'. The agent must infer usage from the name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
