Skip to main content
Glama
smat-dev

Jinni: Bring Your Project Into Context

by smat-dev

read_context

Extract and analyze project context by focusing on specified files or directories within a root path. Provides a static view of relevant files, using default exclusions or custom rules for precise filtering.

Instructions

Reads context from a specified project root directory (absolute path). Focuses on the specified target files/directories within that root. Returns a static view of files with paths relative to the project root. Assume the user wants to read in context for the whole project unless otherwise specified - do not ask the user for clarification if just asked to read context. If the user just says 'jinni', interpret that as read_context. If the user asks to list context, use the list_only argument. Both targets and rules accept a JSON array of strings. The project_root, targets, and rules arguments are mandatory. You can ignore the other arguments by default. IMPORTANT NOTE ON RULES: Ensure you understand the rule syntax (details available via the usage tool) before providing specific rules. Using rules=[] is recommended if unsure, as this uses sensible defaults.

Guidance for AI Model Usage

When requesting context using this tool:

  • Default Behavior: If you provide an empty rules list ([]), Jinni uses sensible default exclusions (like .git, node_modules, __pycache__, common binary types) combined with any project-specific .contextfiles. This usually provides the "canonical context" - files developers typically track in version control. Assume this is what the users wants if they just ask to read context.

  • Targeting Specific Files: If you have a list of specific files you need (e.g., ["src/main.py", "README.md"]), provide them in the targets list. This is efficient and precise, quicker than reading one by one.

Input Schema

TableJSON Schema
NameRequiredDescriptionDefault
debug_explainNo
exclusionsNoOptional exclusion configuration. Object with 'global' (list of keywords), 'scoped' (object mapping paths to keyword lists), and 'patterns' (list of file patterns) fields.
list_onlyNo
project_rootYes**MUST BE ABSOLUTE PATH**. The absolute path to the project root directory.
rulesYes**Mandatory**. List of inline filtering rules. Provide `[]` if no specific rules are needed (uses defaults). It is strongly recommended to consult the `usage` tool documentation before providing a non-empty list.
size_limit_mbNo
targetsYes**Mandatory**. List of paths (absolute or relative to CWD) to specific files or directories within the project root to process. Must be a JSON array of strings. If empty (`[]`), the entire `project_root` is processed.

Output Schema

TableJSON Schema
NameRequiredDescriptionDefault
resultYes

Implementation Reference

  • MCP tool handler for 'read_context'. This async function defines the tool logic, input validation using Pydantic Fields (schema), path translation, exclusion handling, and delegates to core_read_context from core_logic.py. Registered via @server.tool decorator.
    @server.tool(description=(
        "Reads context from a specified project root directory (absolute path). "
        "Focuses on the specified target files/directories within that root. "
        "Returns a static view of files with paths relative to the project root. "
        "Assume the user wants to read in context for the whole project unless otherwise specified - "
        "do not ask the user for clarification if just asked to read context. "
        "If the user just says 'jinni', interpret that as read_context. "
        "If the user asks to list context, use the list_only argument. "
        "Both `targets` and `rules` accept a JSON array of strings. "
        "The `project_root`, `targets`, and `rules` arguments are mandatory. "
        "You can ignore the other arguments by default. "
        "IMPORTANT NOTE ON RULES: Ensure you understand the rule syntax (details available via the `usage` tool) before providing specific rules. "
        "Using `rules=[]` is recommended if unsure, as this uses sensible defaults.\n\n"
        "**Guidance for AI Model Usage**\n\n"
        "When requesting context using this tool:\n"
        "*   **Default Behavior:** If you provide an empty `rules` list (`[]`), Jinni uses sensible default exclusions (like `.git`, `node_modules`, `__pycache__`, common binary types) combined with any project-specific `.contextfiles`. This usually provides the \"canonical context\" - files developers typically track in version control. Assume this is what the users wants if they just ask to read context.\n"
        "*   **Targeting Specific Files:** If you have a list of specific files you need (e.g., `[\"src/main.py\", \"README.md\"]`), provide them in the `targets` list. This is efficient and precise, quicker than reading one by one.\n"
    ))
    
    
    
    
    async def read_context(
        project_root: str = Field(description="**MUST BE ABSOLUTE PATH**. The absolute path to the project root directory."),
        targets: List[str] = Field(description="**Mandatory**. List of paths (absolute or relative to CWD) to specific files or directories within the project root to process. Must be a JSON array of strings. If empty (`[]`), the entire `project_root` is processed."),
        rules: List[str] = Field(description="**Mandatory**. List of inline filtering rules. Provide `[]` if no specific rules are needed (uses defaults). It is strongly recommended to consult the `usage` tool documentation before providing a non-empty list."),
        list_only: bool = False,
        size_limit_mb: Optional[int] = None,
        debug_explain: bool = False,
        exclusions: Optional[dict] = Field(default=None, description="Optional exclusion configuration. Object with 'global' (list of keywords), 'scoped' (object mapping paths to keyword lists), and 'patterns' (list of file patterns) fields."),
    ) -> str:
        logger.info("--- read_context tool invoked ---")
        # Translate incoming paths *before* any validation or Path object creation
        translated_project_root = _translate_wsl_path(project_root)
        translated_targets = [_translate_wsl_path(t) for t in targets]
        logger.debug(f"Original paths: project_root='{project_root}', targets='{targets}'")
        logger.debug(f"Translated paths: project_root='{translated_project_root}', targets='{translated_targets}'")
    
        # Defensive NUL check on all incoming paths
        ensure_no_nul(translated_project_root, "project_root")
        for t in translated_targets:
            ensure_no_nul(t, "target path")
    
        logger.debug(f"Processing read_context request: project_root(orig)='{project_root}', targets(orig)='{targets}', list_only={list_only}, rules={rules}, debug_explain={debug_explain}")
        """
        Generates a concatenated view of relevant code files for a given target path.
    
        The 'project_root' argument must always be an absolute path.
        The optional 'targets' argument, if provided, must be a list of paths (JSON array of strings).
        Each path must be absolute or relative to the current working directory, and must resolve to a location
        *inside* the 'project_root'.
    
        If the server was started with a --root argument, the provided 'project_root' must be
        within that server root directory.
        
        Args:
            project_root: See Field description.
            targets: See Field description.
            rules: See Field description.
            list_only: Only list file paths found. Defaults to False.
            size_limit_mb: Override the maximum total context size in MB. Defaults to None (uses core_logic default).
            debug_explain: Print detailed explanation for file/directory inclusion/exclusion to server's stderr. Defaults to False.
        """
        # --- Input Validation ---
        # Use the translated project_root for validation
        if not os.path.isabs(translated_project_root):
             raise ValueError(f"Tool 'project_root' argument must be absolute (after translation), received: '{translated_project_root}' from original '{project_root}'")
        resolved_project_root_path = Path(translated_project_root).resolve()
        if not resolved_project_root_path.is_dir():
             raise ValueError(f"Tool 'project_root' path does not exist or is not a directory: {resolved_project_root_path} (translated from '{project_root}')")
        resolved_project_root_path_str = str(resolved_project_root_path) # Store translated path as string
        logger.debug(f"Using project_root (translated): {resolved_project_root_path_str}")
    
        # Validate mandatory targets list (can be empty)
        # No need for `is None` check, Pydantic/FastMCP ensures it's a list.
    
        resolved_target_paths_str: List[str] = []
        effective_targets_set: Set[str] = set() # Use set to handle duplicates implicitly
    
        # Process the provided targets list if it's not empty
        if translated_targets:
            logger.debug(f"Processing translated targets list: {translated_targets}")
            for idx, single_target in enumerate(translated_targets):
                if not isinstance(single_target, str):
                     raise TypeError(f"Tool 'targets' item at index {idx} must be a string, got {type(single_target)}")
    
                # Check if target is absolute. If not, resolve relative to project_root.
                target_path_obj = Path(single_target)
                if target_path_obj.is_absolute():
                    resolved_target_path = target_path_obj.resolve()
                else:
                    # Resolve relative path against the project root
                    resolved_target_path = (resolved_project_root_path / target_path_obj).resolve()
                    logger.debug(f"Resolved relative target '{single_target}' to '{resolved_target_path}' using project root '{resolved_project_root_path}'")
                if not resolved_target_path.exists():
                     raise FileNotFoundError(f"Tool 'targets' path '{single_target}' (resolved to {resolved_target_path}) does not exist.")
                # Check if target is within project_root AFTER resolving
                try:
                    resolved_target_path.relative_to(resolved_project_root_path)
                except ValueError:
                     raise ValueError(f"Tool 'targets' path '{resolved_target_path}' is outside the specified project root '{resolved_project_root_path}'")
    
                resolved_path_str = str(resolved_target_path)
                if resolved_path_str not in effective_targets_set:
                     resolved_target_paths_str.append(resolved_path_str)
                     effective_targets_set.add(resolved_path_str)
                     logger.debug(f"Validated target path from targets[{idx}]: {resolved_path_str}")
                else:
                     logger.debug(f"Skipping duplicate target path from targets[{idx}]: {resolved_path_str}")
    
        # If the initial targets list was empty OR it resulted in an empty list after validation,
        # default to processing the project root.
        if not resolved_target_paths_str:
            logger.debug("Targets list is empty or resulted in no valid paths. Defaulting to project root.")
            resolved_target_paths_str = [resolved_project_root_path_str]
    
        # Validate mandatory rules list (can be empty, but must be provided)
        if rules is None: # Should not happen if Pydantic enforces mandatory, but good practice
            raise ValueError("Tool 'rules' argument is mandatory. Provide an empty list [] if no specific rules are needed.")
        if not isinstance(rules, list):
            raise TypeError(f"Tool 'rules' argument must be a list, got {type(rules)}")
        for idx, rule in enumerate(rules):
            if not isinstance(rule, str):
                raise TypeError(f"Tool 'rules' item at index {idx} must be a string, got {type(rule)}")
        logger.debug(f"Using provided rules: {rules}")
    
    
        # --- Validate against Server Root (if set) ---
        # The *project_root* provided by the client must be within the server's root (if set)
        if SERVER_ROOT_PATH:
            logger.debug(f"Server root is set: {SERVER_ROOT_PATH}")
            try:
                resolved_project_root_path.relative_to(SERVER_ROOT_PATH)
                logger.debug(f"Client project_root {resolved_project_root_path} is within server root {SERVER_ROOT_PATH}")
            except ValueError:
                 raise ValueError(f"Tool project_root '{resolved_project_root_path}' is outside the allowed server root '{SERVER_ROOT_PATH}'")
    
        # --- Process Exclusions ---
        exclusion_parser = None
        exclusion_patterns = []
        if exclusions:
            from jinni.exclusion_parser import ExclusionParser
            
            # Extract exclusion components
            global_keywords = exclusions.get('global', [])
            scoped_exclusions = exclusions.get('scoped', {})
            file_patterns = exclusions.get('patterns', [])
            
            # Convert scoped dict to list format expected by ExclusionParser
            scoped_list = []
            for path, keywords in scoped_exclusions.items():
                if isinstance(keywords, list):
                    scoped_list.append(f"{path}:{','.join(keywords)}")
            
            # Create exclusion parser
            parser = ExclusionParser()
            
            # Parse different exclusion types
            exclusion_patterns.extend(parser.parse_not(global_keywords))
            exclusion_patterns.extend(parser.parse_not_in(scoped_list))
            exclusion_patterns.extend(parser.parse_not_files(file_patterns))
            
            if exclusion_patterns:
                exclusion_parser = parser
                logger.info(f"Configured {len(exclusion_patterns)} exclusion patterns")
    
        logger.info(f"Processing project_root: {resolved_project_root_path_str}")
        # Log the final list of targets being processed
        # Log the final list of targets being processed
        # Log the final list of targets being processed
        logger.info(f"Focusing on target(s): {resolved_target_paths_str}")
        # --- Call Core Logic ---
        log_capture_buffer = None
        temp_handler = None
        loggers_to_capture = []
        debug_output = ""
    
        try:
            if debug_explain:
                # Setup temporary handler to capture debug logs
                log_capture_buffer = io.StringIO()
                temp_handler = logging.StreamHandler(log_capture_buffer)
                temp_handler.setLevel(logging.DEBUG)
                # Simple formatter for captured logs
                formatter = logging.Formatter('%(name)s:%(levelname)s: %(message)s')
                temp_handler.setFormatter(formatter)
    
                # Add handler to relevant core logic loggers
                loggers_to_capture = [
                    logging.getLogger(name) for name in
                    ["jinni.core_logic", "jinni.context_walker", "jinni.file_processor", "jinni.config_system", "jinni.utils"]
                ]
                for core_logger in loggers_to_capture:
                    # Explicitly set level to DEBUG *before* adding handler
                    # This ensures messages are generated for the handler to capture
                    original_level = core_logger.level
                    core_logger.setLevel(logging.DEBUG)
                    core_logger.addHandler(temp_handler)
                    # Store original level? Not strictly necessary for this hack,
                    # as we remove the handler later, but good practice if we were restoring level.
    
    
            # Pass the validated list of target paths (or the project root if no target was given)
            # The variable resolved_target_paths_str already holds the correct list.
            effective_target_paths_str = resolved_target_paths_str
            
            # Combine rules with exclusion patterns
            effective_rules = rules.copy() if rules else []
            if exclusion_patterns:
                effective_rules.extend(exclusion_patterns)
            
            # Call the core logic function
            result_content = core_read_context(
                target_paths_str=effective_target_paths_str,
                project_root_str=resolved_project_root_path_str, # Pass the translated, validated root
                override_rules=effective_rules,
                list_only=list_only,
                size_limit_mb=size_limit_mb,
                debug_explain=debug_explain, # Pass flag down
                # include_size_in_list is False by default in core_logic if not passed
                exclusion_parser=exclusion_parser # Pass exclusion parser for scoped exclusions
            )
            logger.debug(f"Finished processing project_root: {resolved_project_root_path_str}, targets(s): {resolved_target_paths_str}. Result length: {len(result_content)}")
    
            if debug_explain and log_capture_buffer:
                debug_output = log_capture_buffer.getvalue()
    
            # Combine result and debug output if necessary
            if debug_output:
                return f"{result_content}\n\n--- DEBUG LOG ---\n{debug_output}"
            else:
                return result_content
    
        except (FileNotFoundError, ContextSizeExceededError, ValueError, DetailedContextSizeError) as e:
            # Let FastMCP handle converting these known errors
            logger.error(f"Error during read_context call for project_root='{resolved_project_root_path_str}', targets(s)='{resolved_target_paths_str}': {type(e).__name__} - {e}")
            raise e # Re-raise for FastMCP
        except Exception as e:
            # Log unexpected errors before FastMCP potentially converts to a generic 500
            logger.exception(f"Unexpected error processing project_root='{resolved_project_root_path_str}', targets(s)='{resolved_target_paths_str}': {type(e).__name__} - {e}")
            raise e
        finally:
            # --- Cleanup: Remove temporary handler ---
            if temp_handler and loggers_to_capture:
                logger.debug("Removing temporary debug log handler.")
                for core_logger in loggers_to_capture:
                    core_logger.removeHandler(temp_handler)
                temp_handler.close()
  • Core implementation logic delegated by the MCP handler. Orchestrates context processing: input validation, root determination, size limits, override rules, directory walking, file processing, and output formatting.
    def read_context(
        target_paths_str: List[str], # List of targets from CLI or constructed by Server
        project_root_str: Optional[str] = None, # Optional from CLI, Mandatory from Server (used as base)
        override_rules: Optional[List[str]] = None,
        list_only: bool = False,
        size_limit_mb: Optional[int] = None,
        debug_explain: bool = False,
        include_size_in_list: bool = False,
        exclusion_parser: Optional[Any] = None  # ExclusionParser instance for scoped exclusions
    ) -> str:
        """
        Orchestrates the context reading process, handling flexible inputs.
    
        Validates inputs, determines the effective roots for rule discovery and output,
        resolves targets, and delegates processing to file_processor or context_walker.
    
        Args:
            target_paths_str: List of target file/directory paths (relative or absolute).
            project_root_str: Optional path to the project root. If provided, it's used as the
                              base for rule discovery and output relativity. If None, it's
                              inferred from the common ancestor of targets.
            override_rules: Optional list of rule strings to use instead of .contextfiles.
            list_only: If True, only return a list of relative file paths.
            size_limit_mb: Optional override for the size limit in MB.
            debug_explain: If True, log inclusion/exclusion reasons.
            include_size_in_list: If True and list_only, prepend size to path.
    
        Returns:
            A formatted string (concatenated content or file list).
    
        Raises:
            FileNotFoundError: If any target path does not exist.
            ValueError: If paths have issues (e.g., target outside explicit root).
            DetailedContextSizeError: If context size limit is exceeded.
            ImportError: If pathspec is required but not installed.
        """
        # --- Initial Setup & Validation ---
    
        # Validate project_root_str FIRST if provided, and set roots
        output_rel_root: Path
        rule_discovery_root: Path
        project_root_path: Optional[Path] = None # Store resolved explicit project_root
    
        if project_root_str:
            project_root_path = Path(project_root_str).resolve()
            if not project_root_path.is_dir():
                # Raise ValueError immediately if explicit root is invalid
                raise ValueError(f"Provided project root '{project_root_str}' does not exist or is not a directory.")
            output_rel_root = project_root_path
            rule_discovery_root = project_root_path
            logger.debug(f"Using provided project root for output relativity and rule discovery boundary: {output_rel_root}")
        # else: Roots will be determined after resolving targets
    
        # Resolve target paths (relative to CWD by default)
        target_paths: List[Path] = []
        if not target_paths_str:
             # Handle case where CLI provides no paths (defaults to ['.'])
             # or Server provides no target (meaning process root)
             if project_root_str:
                  # If root is given but no targets, process the root
                  target_paths_str = [project_root_str]
                  logger.debug("No specific targets provided; processing project root.")
             else:
                  # If no root and no targets, default to current dir '.'
                  target_paths_str = ['.']
                  logger.debug("No specific targets or project root provided; processing current directory '.'")
    
        for p_str in target_paths_str:
            p = Path(p_str).resolve() # Resolve paths here to ensure they are absolute
            if not p.exists():
                raise FileNotFoundError(f"Target path does not exist: {p_str} (resolved to {p})")
            target_paths.append(p)
    
        if not target_paths:
            logger.warning("No valid target paths could be determined.")
            return ""
    
        # Determine roots IF project_root wasn't provided explicitly
        if not project_root_path:
            try:
                common_ancestor = Path(os.path.commonpath([str(p) for p in target_paths]))
                calculated_root = common_ancestor if common_ancestor.is_dir() else common_ancestor.parent
            except ValueError:
                logger.warning("Could not find common ancestor for targets. Using CWD as root.")
                calculated_root = Path.cwd().resolve()
            output_rel_root = calculated_root
            rule_discovery_root = calculated_root
            logger.debug(f"Using common ancestor/CWD as output relativity and rule discovery boundary root: {output_rel_root}")
        # else: Roots were already set from the valid project_root_path
    
        # Ensure roots are set (safeguard)
        if 'output_rel_root' not in locals() or 'rule_discovery_root' not in locals():
             logger.error("Critical error: Output/Rule discovery root could not be determined.")
             raise ValueError("Could not determine a root directory.")
    
        # Validate targets are within explicit root (if provided) AFTER resolving roots
        if project_root_path:
            for tp in target_paths:
                try:
                    # Use is_relative_to for Python 3.9+ or fallback
                    if sys.version_info >= (3, 9):
                        if not tp.is_relative_to(project_root_path):
                            raise ValueError(f"Target path {tp} is outside the specified project root {project_root_path}")
                    else:
                        tp.relative_to(project_root_path) # Check raises ValueError if not relative
                except ValueError:
                    raise ValueError(f"Target path {tp} is outside the specified project root {project_root_path}")
    
        # (Logic moved above)
    
        # --- Size Limit (Moved up slightly, no functional change) ---
        limit_mb_str = os.environ.get(ENV_VAR_SIZE_LIMIT)
        try:
            effective_limit_mb = size_limit_mb if size_limit_mb is not None \
                                 else int(limit_mb_str) if limit_mb_str else DEFAULT_SIZE_LIMIT_MB
        except ValueError:
            logger.warning(f"Invalid value for {ENV_VAR_SIZE_LIMIT} ('{limit_mb_str}'). Using default {DEFAULT_SIZE_LIMIT_MB}MB.")
            effective_limit_mb = DEFAULT_SIZE_LIMIT_MB
        size_limit_bytes = effective_limit_mb * 1024 * 1024
        logger.debug(f"Effective size limit: {effective_limit_mb}MB ({size_limit_bytes} bytes)")
    
        # --- Override Handling ---
        # Override rules are used only if the list is provided AND non-empty
        use_overrides = bool(override_rules) # bool([]) is False, bool(['rule']) is True
        override_spec: Optional['pathspec.PathSpec'] = None
        if use_overrides:
            if pathspec is None:
                 raise ImportError("pathspec library is required for override rules but not installed.")
            logger.info("Override rules provided as high-priority additions to normal rules.")
            # Store the override rules for later use in the walker
            override_spec = compile_spec_from_rules(override_rules, "Overrides")
            if debug_explain: logger.debug(f"Compiled override spec with {len(override_spec.patterns)} patterns.")
    
        # --- Processing State ---
        output_parts: List[str] = []
        processed_files_set: Set[Path] = set()
        total_size_bytes: int = 0
        # Use the resolved target_paths as the initial set for "always include" logic within walker/processor
        initial_target_paths_set: Set[Path] = set(target_paths)
    
        # --- Compute a root for each target ---
        roots_for_target: Dict[Path, Path] = {}
        
        # Determine the base for comparison (project root or CWD)
        comparison_base = project_root_path if project_root_path else Path.cwd().resolve()
        
        for tp in target_paths:
            try:
                # Check if target is within the comparison base
                if tp.is_relative_to(comparison_base):
                    root = comparison_base  # Use project root (or CWD) for targets within it
                else:
                    root = tp if tp.is_dir() else tp.parent  # External targets are self-contained
            except AttributeError:  # Python < 3.9 fallback
                if str(tp).startswith(str(comparison_base)):
                    root = comparison_base
                else:
                    root = tp if tp.is_dir() else tp.parent
            roots_for_target[tp] = root
            if debug_explain: logger.debug(f"Target {tp} will use rule root: {root}")
    
        # --- Delegate Processing ---
        try:
            # Group targets by their root and walk once per root
            for root in set(roots_for_target.values()):
                # Collect targets that belong to this root
                grouped = [t for t, r in roots_for_target.items() if r == root]
                
                # Build an "always include" set for this root
                initial_set = set(grouped)
                
                for current_target_path in grouped:
                    # Skip if already processed (e.g., listed twice or handled by a previous dir walk)
                    if current_target_path in processed_files_set:
                         if debug_explain: logger.debug(f"Skipping target {current_target_path} as it was already processed.")
                         continue
    
                    if current_target_path.is_file():
                        if debug_explain: logger.debug(f"Processing file target: {current_target_path}")
                        file_output, file_size_added = process_file(
                            file_path=current_target_path,
                            output_rel_root=output_rel_root,  # Use the original output root
                            size_limit_bytes=size_limit_bytes,
                            total_size_bytes=total_size_bytes,
                            list_only=list_only,
                            include_size_in_list=include_size_in_list,
                            debug_explain=debug_explain
                        )
                        if file_output is not None:
                            # Check if adding this file *content* exceeds limit (only if not list_only)
                            if not list_only and (total_size_bytes + file_size_added > size_limit_bytes):
                                 # Check if file alone exceeds limit
                                 if file_size_added > size_limit_bytes and total_size_bytes == 0:
                                      logger.warning(f"File {current_target_path} ({file_size_added} bytes) content exceeds size limit of {effective_limit_mb}MB. Skipping.")
                                      continue # Skip this file
                                 else:
                                      # Adding this file pushes over the limit
                                      raise ContextSizeExceededError(effective_limit_mb, total_size_bytes + file_size_added, current_target_path)
    
                            output_parts.append(file_output)
                            processed_files_set.add(current_target_path)
                            total_size_bytes += file_size_added # Add size only if content included
    
                    elif current_target_path.is_dir():
                        if debug_explain: logger.debug(f"Processing directory target: {current_target_path}")
                        dir_output_parts, dir_total_size, dir_processed_files = walk_and_process(
                            walk_target_path=current_target_path,
                            rule_root=root,  # Pass the rule root for this target
                            output_rel_root=output_rel_root, # Keep original output root for consistent paths
                            initial_target_paths_set=initial_set, # Pass initial targets for this root
                            use_overrides=use_overrides,
                            override_spec=override_spec,
                            size_limit_bytes=size_limit_bytes - total_size_bytes, # Pass remaining budget
                            list_only=list_only,
                            include_size_in_list=include_size_in_list,
                            debug_explain=debug_explain,
                            exclusion_parser=exclusion_parser # Pass exclusion parser for scoped exclusions
                        )
                        output_parts.extend(dir_output_parts)
                        processed_files_set.update(dir_processed_files)
                        total_size_bytes += dir_total_size # Accumulate size from walker
                    else:
                         logger.warning(f"Target path is neither a file nor a directory: {current_target_path}")
    
            # --- Final Output Formatting ---
            final_output = "\n".join(output_parts) if list_only else SEPARATOR.join(output_parts)
            logger.info(f"Processed {len(processed_files_set)} files, total size: {total_size_bytes} bytes.")
            return final_output
    
        except ContextSizeExceededError as e:
                # Catch error, format with large files list, and raise Detailed error
                logger.error(f"Context size limit exceeded: {e}")
                # Use output_rel_root as the base for finding large files
                large_files = get_large_files(str(output_rel_root))
                error_message = (
                    f"Error: Context size limit of {e.limit_mb}MB exceeded.\n"
                    f"Processing stopped near file: {e.file_path}\n\n"
                    "Consider excluding large files or directories using a `.contextfiles` file.\n"
                    "Consult README.md (or use `jinni doc`) for more details on exclusion rules.\n\n"
                    "Potential large files found (relative to project root):\n"
                )
                if large_files:
                    for fname, fsize in large_files:
                        size_mb = fsize / (1024 * 1024)
                        error_message += f" - {fname} ({size_mb:.2f} MB)\n"
                else:
                    error_message += " - Could not identify specific large files.\n"
    
                raise DetailedContextSizeError(error_message) from e
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the tool returns a 'static view' (implying read-only, non-destructive), uses 'sensible default exclusions' when rules=[], and provides guidance on default behavior and targeting efficiency. However, it doesn't explicitly mention permission requirements, rate limits, or error handling, leaving some behavioral aspects uncovered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately front-loaded with core functionality, but it contains some redundancy (e.g., repeating that targets and rules accept JSON arrays) and includes implementation details like 'You can ignore the other arguments by default' that could be streamlined. The 'Guidance for AI Model Usage' section is helpful but adds length. Overall, it's informative but could be more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, 57% schema coverage, no annotations, but with an output schema), the description is mostly complete. It covers the core purpose, usage guidelines, parameter semantics for key inputs, and behavioral context. The output schema exists, so return values needn't be explained. However, it lacks details on less critical parameters like 'debug_explain' and 'size_limit_mb', and doesn't mention error cases or performance implications.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 57%, so the description must compensate. It adds significant value beyond the schema: it explains that 'targets' and 'rules' accept JSON arrays, clarifies that empty rules ([]) use sensible defaults, provides examples of default exclusions, and gives practical guidance on when to use specific targets versus processing the entire root. However, it doesn't fully explain all 7 parameters, particularly 'debug_explain' and 'size_limit_mb'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Reads context from a specified project root directory' and 'Returns a static view of files with paths relative to the project root.' It specifies the verb (read), resource (context/files), and scope (project root directory), distinguishing it from the sibling 'usage' tool which provides documentation rather than file reading.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Assume the user wants to read in context for the whole project unless otherwise specified' and 'If the user just says 'jinni', interpret that as read_context.' It also specifies when to use the list_only argument: 'If the user asks to list context, use the list_only argument.' This gives clear usage rules and context for invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Install Server

Other Tools

Related Tools

Latest Blog Posts

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/smat-dev/jinni'

If you have feedback or need assistance with the MCP directory API, please join our Discord server