
KiCad MCP Server

by lamaalrajih

analyze_bom

Analyze KiCad project Bill of Materials to identify component counts, categories, and cost estimates for electronic design verification.

Instructions

Analyze a KiCad project's Bill of Materials.

This tool will look for BOM files related to a KiCad project and provide analysis including component counts, categories, and cost estimates if available.

Args:
    project_path: Path to the KiCad project file (.kicad_pro)
    ctx: MCP context for progress reporting

Returns: Dictionary with BOM analysis results
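
For illustration, a successful run might return a dictionary shaped roughly like the following; the paths, counts, and category names here are hypothetical values, not output from a real project:

```python
# Hypothetical shape of a successful analyze_bom result.
# All values are illustrative only.
example_result = {
    "success": True,
    "project_path": "/home/user/boards/amp/amp.kicad_pro",
    "bom_files": {
        "bom": {
            "path": "/home/user/boards/amp/amp_bom.csv",
            "format": {"file_type": ".csv", "detected_format": "kicad"},
            "analysis": {
                "unique_component_count": 42,
                "total_component_count": 57,
                "categories": {"Resistors": 25, "Capacitors": 20, "ICs": 12},
                "has_cost_data": False,
            },
        }
    },
    "component_summary": {
        "total_unique_components": 42,
        "total_components": 57,
        "categories": {"Resistors": 25, "Capacitors": 20, "ICs": 12},
    },
}
print(example_result["success"])  # True
```

On failure (missing project or no BOM files), the tool instead returns a dictionary with "success": False and an "error" message.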

Input Schema

Name | Required | Description | Default
project_path | Yes | |
ctx | Yes | |

Output Schema

No arguments

Implementation Reference

  • The @mcp.tool()-decorated async handler function that implements the core logic of the 'analyze_bom' tool. It finds BOM files in the project, parses them, analyzes components, and summarizes counts, categories, and costs.
    @mcp.tool()
    async def analyze_bom(project_path: str, ctx: Context | None = None) -> Dict[str, Any]:
        """Analyze a KiCad project's Bill of Materials.
        
        This tool will look for BOM files related to a KiCad project and provide
        analysis including component counts, categories, and cost estimates if available.
        
        Args:
            project_path: Path to the KiCad project file (.kicad_pro)
            ctx: MCP context for progress reporting
            
        Returns:
            Dictionary with BOM analysis results
        """
        print(f"Analyzing BOM for project: {project_path}")
        
        if not os.path.exists(project_path):
            print(f"Project not found: {project_path}")
            if ctx:
                await ctx.info(f"Project not found: {project_path}")
            return {"success": False, "error": f"Project not found: {project_path}"}
        
        # Report progress
        if ctx:
            await ctx.report_progress(10, 100)
            await ctx.info(f"Looking for BOM files related to {os.path.basename(project_path)}")
        
        # Get all project files
        files = get_project_files(project_path)
        
        # Look for BOM files
        bom_files = {}
        for file_type, file_path in files.items():
            if "bom" in file_type.lower() or file_path.lower().endswith(".csv"):
                bom_files[file_type] = file_path
                print(f"Found potential BOM file: {file_path}")
        
        if not bom_files:
            print("No BOM files found for project")
            if ctx:
                await ctx.info("No BOM files found for project")
            return {
                "success": False, 
                "error": "No BOM files found. Export a BOM from KiCad first.",
                "project_path": project_path
            }
        
        if ctx:
            await ctx.report_progress(30, 100)
        
        # Analyze each BOM file
        results = {
            "success": True,
            "project_path": project_path,
            "bom_files": {},
            "component_summary": {}
        }
        
        total_unique_components = 0
        total_components = 0
        
        for file_type, file_path in bom_files.items():
            try:
                if ctx:
                    await ctx.info(f"Analyzing {os.path.basename(file_path)}")
                
                # Parse the BOM file
                bom_data, format_info = parse_bom_file(file_path)
                
                if not bom_data:
                    print(f"Failed to parse BOM file: {file_path}")
                    continue
                
                # Analyze the BOM data
                analysis = analyze_bom_data(bom_data, format_info)
                
                # Add to results
                results["bom_files"][file_type] = {
                    "path": file_path,
                    "format": format_info,
                    "analysis": analysis
                }
                
                # Update totals
                total_unique_components += analysis["unique_component_count"]
                total_components += analysis["total_component_count"]
                
                print(f"Successfully analyzed BOM file: {file_path}")
                
            except Exception as e:
                print(f"Error analyzing BOM file {file_path}: {e}")
                results["bom_files"][file_type] = {
                    "path": file_path,
                    "error": str(e)
                }
        
        if ctx:
            await ctx.report_progress(70, 100)
        
        # Generate overall component summary
        if total_components > 0:
            results["component_summary"] = {
                "total_unique_components": total_unique_components,
                "total_components": total_components
            }
            
            # Calculate component categories across all BOMs
            all_categories = {}
            for file_type, file_info in results["bom_files"].items():
                if "analysis" in file_info and "categories" in file_info["analysis"]:
                    for category, count in file_info["analysis"]["categories"].items():
                        if category not in all_categories:
                            all_categories[category] = 0
                        all_categories[category] += count
            
            results["component_summary"]["categories"] = all_categories
            
            # Calculate total cost if available
            total_cost = 0.0
            cost_available = False
            for file_type, file_info in results["bom_files"].items():
                if "analysis" in file_info and "total_cost" in file_info["analysis"]:
                    if file_info["analysis"]["total_cost"] > 0:
                        total_cost += file_info["analysis"]["total_cost"]
                        cost_available = True
            
            if cost_available:
                results["component_summary"]["total_cost"] = round(total_cost, 2)
                currency = next((
                    file_info["analysis"].get("currency", "USD") 
                    for file_type, file_info in results["bom_files"].items() 
                    if "analysis" in file_info and "currency" in file_info["analysis"]
                ), "USD")
                results["component_summary"]["currency"] = currency
        
        if ctx:
            await ctx.report_progress(100, 100)
            await ctx.info(f"BOM analysis complete: found {total_components} components")
        
        return results
  • The call to register_bom_tools(mcp) in the main server setup, which registers the analyze_bom tool among others.
    register_bom_tools(mcp)
  • The register_bom_tools function definition that contains the @mcp.tool() decorators for BOM tools including analyze_bom.
    def register_bom_tools(mcp: FastMCP) -> None:
  • The analyze_bom_data helper function that performs detailed analysis on parsed BOM data, computing unique counts, categories, costs, etc.
    def analyze_bom_data(components: List[Dict[str, Any]], format_info: Dict[str, Any]) -> Dict[str, Any]:
        """Analyze component data from a BOM file.
        
        Args:
            components: List of component dictionaries
            format_info: Dictionary with format information
            
        Returns:
            Dictionary with analysis results
        """
        print(f"Analyzing {len(components)} components")
        
        # Initialize results
        results = {
            "unique_component_count": 0,
            "total_component_count": 0,
            "categories": {},
            "has_cost_data": False
        }
        
        if not components:
            return results
        
        # Try to convert to pandas DataFrame for easier analysis
        try:
            df = pd.DataFrame(components)
            
            # Clean up column names
            df.columns = [str(col).strip().lower() for col in df.columns]
            
            # Try to identify key columns based on format
            ref_col = None
            value_col = None
            quantity_col = None
            footprint_col = None
            cost_col = None
            category_col = None
            
            # Check for reference designator column
            for possible_col in ['reference', 'designator', 'references', 'designators', 'refdes', 'ref']:
                if possible_col in df.columns:
                    ref_col = possible_col
                    break
            
            # Check for value column
            for possible_col in ['value', 'component', 'comp', 'part', 'component value', 'comp value']:
                if possible_col in df.columns:
                    value_col = possible_col
                    break
            
            # Check for quantity column
            for possible_col in ['quantity', 'qty', 'count', 'amount']:
                if possible_col in df.columns:
                    quantity_col = possible_col
                    break
            
            # Check for footprint column
            for possible_col in ['footprint', 'package', 'pattern', 'pcb footprint']:
                if possible_col in df.columns:
                    footprint_col = possible_col
                    break
            
            # Check for cost column
            for possible_col in ['cost', 'price', 'unit price', 'unit cost', 'cost each']:
                if possible_col in df.columns:
                    cost_col = possible_col
                    break
            
            # Check for category column
            for possible_col in ['category', 'type', 'group', 'component type', 'lib']:
                if possible_col in df.columns:
                    category_col = possible_col
                    break
            
            # Count total components
            if quantity_col:
                # Try to convert quantity to numeric
                df[quantity_col] = pd.to_numeric(df[quantity_col], errors='coerce').fillna(1)
                results["total_component_count"] = int(df[quantity_col].sum())
            else:
                # If no quantity column, assume each row is one component
                results["total_component_count"] = len(df)
            
            # Count unique components
            results["unique_component_count"] = len(df)
            
            # Calculate categories
            if category_col:
                # Use provided category column
                categories = df[category_col].value_counts().to_dict()
                results["categories"] = {str(k): int(v) for k, v in categories.items()}
            elif footprint_col:
                # Use footprint as category
                categories = df[footprint_col].value_counts().to_dict()
                results["categories"] = {str(k): int(v) for k, v in categories.items()}
            elif ref_col:
                # Try to extract categories from reference designators (R=resistor, C=capacitor, etc.)
                def extract_prefix(ref):
                    if isinstance(ref, str):
                        import re
                        match = re.match(r'^([A-Za-z]+)', ref)
                        if match:
                            return match.group(1)
                    return "Other"
                
                if isinstance(df[ref_col].iloc[0], str) and ',' in df[ref_col].iloc[0]:
                    # Multiple references in one cell
                    all_refs = []
                    for refs in df[ref_col]:
                        all_refs.extend([r.strip() for r in refs.split(',')])
                    
                    categories = {}
                    for ref in all_refs:
                        prefix = extract_prefix(ref)
                        categories[prefix] = categories.get(prefix, 0) + 1
                    
                    results["categories"] = categories
                else:
                    # Single reference per row
                    categories = df[ref_col].apply(extract_prefix).value_counts().to_dict()
                    results["categories"] = {str(k): int(v) for k, v in categories.items()}
            
            # Map common reference prefixes to component types
            category_mapping = {
                'R': 'Resistors',
                'C': 'Capacitors',
                'L': 'Inductors',
                'D': 'Diodes',
                'Q': 'Transistors',
                'U': 'ICs',
                'SW': 'Switches',
                'J': 'Connectors',
                'K': 'Relays',
                'Y': 'Crystals/Oscillators',
                'F': 'Fuses',
                'T': 'Transformers'
            }
            
            mapped_categories = {}
            for cat, count in results["categories"].items():
                if cat in category_mapping:
                    mapped_name = category_mapping[cat]
                    mapped_categories[mapped_name] = mapped_categories.get(mapped_name, 0) + count
                else:
                    mapped_categories[cat] = count
            
            results["categories"] = mapped_categories
            
            # Calculate cost if available
            if cost_col:
                try:
                    # Try to extract numeric values from cost field
                    df[cost_col] = df[cost_col].astype(str).str.replace('$', '', regex=False).str.replace(',', '', regex=False)
                    df[cost_col] = pd.to_numeric(df[cost_col], errors='coerce')
                    
                    # Remove NaN values
                    df_with_cost = df.dropna(subset=[cost_col])
                    
                    if not df_with_cost.empty:
                        results["has_cost_data"] = True
                        
                        if quantity_col:
                            total_cost = (df_with_cost[cost_col] * df_with_cost[quantity_col]).sum()
                        else:
                            total_cost = df_with_cost[cost_col].sum()
                        
                        results["total_cost"] = round(float(total_cost), 2)
                        
                        # Try to determine currency
                        # Check first row that has cost for currency symbols
                        for _, row in df.iterrows():
                            cost_str = str(row.get(cost_col, ''))
                            if '$' in cost_str:
                                results["currency"] = "USD"
                                break
                            elif '€' in cost_str:
                                results["currency"] = "EUR"
                                break
                            elif '£' in cost_str:
                                results["currency"] = "GBP"
                                break
                        
                        if "currency" not in results:
                            results["currency"] = "USD"  # Default
                except Exception:
                    print("Failed to parse cost data")
            
            # Add extra insights
            if ref_col and value_col:
                # Check for common components by value
                value_counts = df[value_col].value_counts()
                most_common = value_counts.head(5).to_dict()
                results["most_common_values"] = {str(k): int(v) for k, v in most_common.items()}
        
        except Exception as e:
            print(f"Error analyzing BOM data: {e}")
            # Fallback to basic analysis
            results["unique_component_count"] = len(components)
            results["total_component_count"] = len(components)
        
        return results
  • The parse_bom_file helper function that detects BOM format and parses CSV, XML, JSON files into standardized component dictionaries.
    def parse_bom_file(file_path: str) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:
        """Parse a BOM file and detect its format.
        
        Args:
            file_path: Path to the BOM file
            
        Returns:
            Tuple containing:
                - List of component dictionaries
                - Dictionary with format information
        """
        print(f"Parsing BOM file: {file_path}")
        
        # Check file extension
        _, ext = os.path.splitext(file_path)
        ext = ext.lower()
        
        # Dictionary to store format detection info
        format_info = {
            "file_type": ext,
            "detected_format": "unknown",
            "header_fields": []
        }
        
        # Empty list to store component data
        components = []
        
        try:
            if ext == '.csv':
                # Try to parse as CSV
                with open(file_path, 'r', encoding='utf-8-sig') as f:
                    # Read a few lines to analyze the format
                    sample = ''.join([f.readline() for _ in range(10)])
                    f.seek(0)  # Reset file pointer
                    
                    # Try to detect the delimiter
                    if ',' in sample:
                        delimiter = ','
                    elif ';' in sample:
                        delimiter = ';'
                    elif '\t' in sample:
                        delimiter = '\t'
                    else:
                        delimiter = ','  # Default
                    
                    format_info["delimiter"] = delimiter
                    
                    # Read CSV
                    reader = csv.DictReader(f, delimiter=delimiter)
                    format_info["header_fields"] = reader.fieldnames if reader.fieldnames else []
                    
                    # Detect BOM format based on header fields
                    header_str = ','.join(format_info["header_fields"]).lower()
                    
                    if 'reference' in header_str and 'value' in header_str:
                        format_info["detected_format"] = "kicad"
                    elif 'designator' in header_str:
                        format_info["detected_format"] = "altium"
                    elif 'part number' in header_str or 'manufacturer part' in header_str:
                        format_info["detected_format"] = "generic"
                    
                    # Read components
                    for row in reader:
                        components.append(dict(row))
            
            elif ext == '.xml':
                # Basic XML parsing with security protection
                from defusedxml.ElementTree import parse as safe_parse
                tree = safe_parse(file_path)
                root = tree.getroot()
                
                format_info["detected_format"] = "xml"
                
                # Try to extract components based on common XML BOM formats
                component_elements = root.findall('.//component') or root.findall('.//Component')
                
                if component_elements:
                    for elem in component_elements:
                        component = {}
                        for attr in elem.attrib:
                            component[attr] = elem.attrib[attr]
                        for child in elem:
                            component[child.tag] = child.text
                        components.append(component)
            
            elif ext == '.json':
                # Parse JSON
                with open(file_path, 'r') as f:
                    data = json.load(f)
                
                format_info["detected_format"] = "json"
                
                # Try to find components array in common JSON formats
                if isinstance(data, list):
                    components = data
                elif 'components' in data:
                    components = data['components']
                elif 'parts' in data:
                    components = data['parts']
            
            else:
                # Unknown format, try generic CSV parsing as fallback
                try:
                    with open(file_path, 'r', encoding='utf-8-sig') as f:
                        reader = csv.DictReader(f)
                        format_info["header_fields"] = reader.fieldnames if reader.fieldnames else []
                        format_info["detected_format"] = "unknown_csv"
                        
                        for row in reader:
                            components.append(dict(row))
                except Exception:
                    print(f"Failed to parse unknown file format: {file_path}")
                    return [], {"detected_format": "unsupported"}
        
        except Exception as e:
            print(f"Error parsing BOM file: {e}")
            return [], {"error": str(e)}
        
        # Check if we actually got components
        if not components:
            print(f"No components found in BOM file: {file_path}")
        else:
            print(f"Successfully parsed {len(components)} components from {file_path}")
            
            # Add a sample of the fields found
            if components:
                format_info["sample_fields"] = list(components[0].keys())
        
        return components, format_info
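
The CSV parsing and reference-designator categorization performed by these helpers can be sketched in a self-contained form. The sample BOM data, the reduced CATEGORY_MAPPING, and the helper names below are illustrative assumptions, not part of the server's code:

```python
import csv
import io
import re
from collections import Counter

# Illustrative sample data; a real KiCad export may use different columns.
SAMPLE_BOM = """Reference,Value,Footprint,Quantity
R1,10k,R_0603,1
R2,4.7k,R_0603,1
C1,100nF,C_0402,1
U1,ATmega328P,TQFP-32,1
"""

# A small subset of the prefix-to-category mapping used above.
CATEGORY_MAPPING = {"R": "Resistors", "C": "Capacitors", "U": "ICs"}

def detect_delimiter(sample: str) -> str:
    """Pick the first plausible delimiter, defaulting to a comma."""
    for delim in (",", ";", "\t"):
        if delim in sample:
            return delim
    return ","

def categorize(rows):
    """Count components by reference-designator prefix (R, C, U, ...)."""
    counts = Counter()
    for row in rows:
        match = re.match(r"^([A-Za-z]+)", row.get("reference", ""))
        prefix = match.group(1) if match else "Other"
        counts[CATEGORY_MAPPING.get(prefix, prefix)] += 1
    return dict(counts)

delimiter = detect_delimiter(SAMPLE_BOM.splitlines()[0])
reader = csv.DictReader(io.StringIO(SAMPLE_BOM), delimiter=delimiter)
# Normalize column names the same way analyze_bom_data does.
components = [{k.strip().lower(): v for k, v in row.items()} for row in reader]
print(categorize(components))  # {'Resistors': 2, 'Capacitors': 1, 'ICs': 1}
```

Running the sketch on the sample data yields {'Resistors': 2, 'Capacitors': 1, 'ICs': 1}, mirroring how the real helpers turn raw BOM rows into category counts.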
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the tool will 'look for BOM files' and 'provide analysis', but doesn't specify whether this is read-only or has side effects, what permissions are needed, error handling for missing files, or performance characteristics. For a tool that reads and analyzes project files, this leaves significant behavioral questions unanswered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It opens with a clear purpose statement, provides specific details about what the analysis includes, and has separate sections for Args and Returns. Every sentence adds value, though the 'ctx' explanation could be slightly more specific about what progress gets reported.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (analyzing BOM files), the absence of annotations, and the presence of an output schema (which handles return values), the description is minimally adequate. It covers the basic purpose and parameters but lacks important context about behavioral traits, error conditions, and differentiation from sibling tools. The presence of an output schema prevents a lower score, but the description should do more given the tool's analytical nature.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context for both parameters beyond the schema's 0% coverage. It explains that 'project_path' should point to a '.kicad_pro' file and that 'ctx' is for 'progress reporting'. This provides practical guidance that the schema lacks, though it doesn't detail what specific progress information gets reported or format requirements for the path.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: analyzing a KiCad project's Bill of Materials. It specifies the action ('analyze'), resource ('KiCad project's Bill of Materials'), and scope (component counts, categories, cost estimates). However, it doesn't explicitly differentiate from sibling tools like 'export_bom_csv' or 'extract_project_netlist', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple sibling tools related to BOMs and project analysis (export_bom_csv, extract_project_netlist, analyze_project_circuit_patterns, etc.), there's no indication of when this analysis tool is preferred over exporting or extracting tools. The description only states what it does, not when to choose it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
