sheetsdata-mcp
Server Details
Electronic component datasheets for AI agents — specs, pinouts, package data on demand.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: octoco-ltd/sheetsdata-mcp
- GitHub Stars: 5
- Server Listing: SheetsData MCP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
10 tools

analyze_image (Read-only, Idempotent)
Analyze an image from a component's datasheet using vision AI. Use this when read_datasheet returns a section containing images and you need to extract data from a graph, package drawing, pin diagram, or circuit schematic. Pass the image_key from the read_datasheet response (the storage path in the image URL). Optionally pass a specific question to focus the analysis.
IMPORTANT: For precise numeric values (electrical specs, max ratings), prefer read_datasheet text tables first — they are more reliable than vision-extracted graph data. Use analyze_image for visual information not available in text: package dimensions from drawings, pin assignments from diagrams, graph trends, and approximate values from characteristic curves.
Examples:
analyze_image(part_number='IRFZ44N', image_key='images/abc123.png') -> classifies and describes the image
analyze_image(part_number='IRFZ44N', image_key='images/abc123.png', question='What is the drain current at Vgs=5V?')
| Name | Required | Description | Default |
|---|---|---|---|
| question | No | Optional specific question about the image (e.g. 'What are the package dimensions?') | |
| image_key | Yes | Image storage path from read_datasheet output (e.g. 'images/abc123.png') | |
| part_number | Yes | MPN of the component | |
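The image_key is described as "the storage path in the image URL" from a read_datasheet response. A minimal sketch of deriving it locally, assuming the key is the `images/<file>` suffix of the URL path (the exact URL layout is an assumption, not documented here):

```python
from urllib.parse import urlparse

def image_key_from_url(image_url: str) -> str:
    """Derive the image_key (storage path) from an image URL found in
    read_datasheet output. Assumes the key is the 'images/<file>' suffix
    of the URL path -- a guess at the layout, not a documented contract."""
    path = urlparse(image_url).path.lstrip("/")
    idx = path.find("images/")
    return path[idx:] if idx != -1 else path

key = image_key_from_url("https://cdn.example.com/bucket/images/abc123.png")
print(key)  # -> images/abc123.png
```

The derived key can then be passed straight into analyze_image as shown in the examples above.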
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent/destructive status, so the safety profile is covered. The description adds crucial operational context: vision AI's reliability limitations for precise numeric values, the dependency chain (image_key comes from read_datasheet output), and what types of visual content it handles (graphs, schematics, diagrams). It also implies idempotency through the examples showing consistent analysis.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure: purpose statement first, specific usage conditions second, important limitations highlighted with 'IMPORTANT:' callout, concrete examples last. Every sentence advances understanding. No tautology or repetition of schema fields. Length appropriate for a tool with complex sibling dependencies.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description adequately implies return value through examples ('classifies and describes the image'). The sibling relationship with read_datasheet is fully explained. Minor gap: could explicitly state output format (structured text vs raw description), but examples provide sufficient clues for an agent to proceed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds value by explaining the semantic relationship between tools: image_key is 'from the read_datasheet response (the storage path in the image URL)'—critical context for correct invocation. It also clarifies the question parameter's purpose ('to focus the analysis') with concrete examples showing how to formulate technical queries.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a precise action statement ('Analyze an image from a component's datasheet using vision AI'), identifying the specific resource (datasheet images) and mechanism (vision AI). It clearly distinguishes from sibling tool read_datasheet by stating exactly when to use each: analyze_image is for when read_datasheet 'returns a section containing images.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use conditions ('Use this when read_datasheet returns a section containing images') and when-not-to-use guidance ('IMPORTANT: For precise numeric values... prefer read_datasheet text tables first'). Lists specific appropriate use cases (package dimensions, pin assignments, graph trends) and explicitly names the alternative tool for text data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_design_fit (Read-only)
Validate whether a component will work within your operating conditions. Compares your design parameters against the datasheet's absolute maximum ratings and recommended operating conditions. Returns PASS/FAIL/WARNING per parameter with margin percentages.
Parameter mapping by component type:
Buck/boost converters: input_voltage, output_voltage, output_current, ambient_temp
MOSFETs: supply_voltage=VDS, output_current=ID (drain current), ambient_temp
LDOs: input_voltage, output_voltage, output_current, ambient_temp
Logic ICs: supply_voltage=VCC, ambient_temp
Result semantics:
PASS: user value is more than 10% below the datasheet limit — comfortable margin.
WARNING: user value is within 10% of the limit — part will work but with thin margin for part-to-part variation, temperature drift, and transients. Consider derating.
FAIL: user value exceeds the limit — part is out of spec and will be stressed or damaged.
Behavior:
Two-tier validation. For parameters in our structured database (Vin, Iout, operating temp, etc.), returns instantly and at no LLM cost. For parameters only found in the datasheet text, falls back to an LLM read of the absolute-max and recommended-operating-conditions sections. The 'validation_method' field in the response tells you which path was used.
If the part hasn't been extracted yet and the LLM fallback is needed, this call triggers extraction (30s-2min). Returns status='extracting' if so — poll check_extraction_status and retry.
When NOT to use:
You need power dissipation or junction-temperature rise — this tool only checks nameplate limits. Pull RthJA from read_datasheet and calculate yourself.
You need SOA (safe-operating-area) curve checks for MOSFETs — use analyze_image on the SOA graph.
You're checking a passive or mechanical part with no abs-max table — there's nothing for this tool to compare against.
Example: check_design_fit('TPS54302', input_voltage=24, output_current=2.5, ambient_temp=70)
| Name | Required | Description | Default |
|---|---|---|---|
| part_number | Yes | Specific manufacturer part number to validate. Not a value or description. | |
| ambient_temp | No | Ambient temperature TA (°C). | |
| input_voltage | No | Input voltage / VIN (V). For converters. | |
| output_current | No | Output current / IOUT (A). For MOSFETs, this is drain current ID. | |
| output_voltage | No | Output voltage / VOUT (V). For converters/LDOs. | |
| supply_voltage | No | Supply voltage / VCC / VDD (V). For MOSFETs, this is VDS (drain-source voltage). NOT used for power dissipation — compared against VDS(max) only. | |
| switching_frequency | No | Switching frequency FSW (kHz). For converters. | |
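The PASS/WARNING/FAIL semantics above can be mirrored locally, for example to sanity-check a result. A sketch in Python, assuming the 10% band is measured against the datasheet limit; the boundary handling (exactly at the limit, exactly 10% below) is a guess:

```python
def classify(user_value: float, limit: float, band: float = 0.10) -> str:
    """Mirror the documented result semantics:
    PASS    -> value more than 10% below the datasheet limit
    WARNING -> value within 10% of the limit
    FAIL    -> value exceeds the limit
    Exact boundary behavior is an assumption, not documented."""
    if user_value > limit:
        return "FAIL"
    if user_value > limit * (1.0 - band):
        return "WARNING"
    return "PASS"

# Illustrative 30 V input-voltage limit; check the real abs-max in the datasheet.
print(classify(24, 30))  # -> PASS (20% below the limit)
print(classify(28, 30))  # -> WARNING (within 10% of the limit)
print(classify(31, 30))  # -> FAIL
```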
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds substantial context beyond annotations: explains it checks 'absolute maximum ratings and recommended operating conditions' (specific validation logic), and discloses output format ('PASS/FAIL/WARNING per parameter with margins') which compensates for missing output schema. Consistent with readOnlyHint=true.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four efficiently structured sentences: purpose upfront, methodology, output format, and concrete example. Every sentence earns its place with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description fully explains return structure (pass/fail/warning with margins) and validation logic. Example provides complete usage pattern. Adequate for a 7-parameter validation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Baseline is 3 given 100% schema description coverage. The concrete example invocation ('TPS54302', input_voltage=24...) adds practical context for how electrical parameters map to real component validation, elevating above baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb (validate) and resource (component against operating conditions) clearly stated. Distinguished from siblings like 'get_part_details' (retrieval) or 'compare_parts' (comparison) by focusing specifically on design validation against datasheet ratings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides an explicit 'When NOT to use' list that names alternatives (read_datasheet for thermal calculations, analyze_image for SOA curves) and exclusions (passives with no abs-max table). It stops short of saying when to prefer this tool over 'get_part_details' for a simple spec lookup.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_extraction_status (Read-only, Idempotent)
Check the extraction status of one or more parts. Free. Returns per-part status: 'ready' (extracted and ready for read_datasheet), 'extracting' (in progress), 'pending' (queued), 'failed' (extraction failed — includes error reason), or 'not_extracted' (unknown part or never queued). Each entry also includes the current extraction step, elapsed seconds, and document ID.
Use after prefetch_datasheets or after read_datasheet triggers a new extraction.
Recommended polling cadence: every 5-10 seconds. Extraction typically takes 30s-2min for new parts, so polling faster than every 5s wastes calls. Stop polling once status is 'ready' or 'failed'.
| Name | Required | Description | Default |
|---|---|---|---|
| part_numbers | Yes | 1-20 MPNs to check. Must be specific manufacturer part numbers, not values or descriptions. | |
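The recommended polling pattern can be sketched as a small loop. Here `check_status` stands in for however your client invokes check_extraction_status, and is assumed to return a `{mpn: status}` map:

```python
import time

def poll_until_done(check_status, part_numbers, interval=5.0, timeout=180.0):
    """Poll until every part is 'ready' or 'failed', following the
    documented cadence (every 5-10 s; extraction takes 30s-2min).
    `check_status` wraps check_extraction_status in your client and
    is assumed to return a {mpn: status} dict."""
    deadline = time.monotonic() + timeout
    while True:
        statuses = check_status(part_numbers)
        if all(s in ("ready", "failed") for s in statuses.values()):
            return statuses
        if time.monotonic() >= deadline:
            raise TimeoutError(f"still extracting after {timeout}s: {statuses}")
        time.sleep(interval)
```

Stopping on both 'ready' and 'failed' matches the guidance above; a timeout guards against a part stuck in 'pending'.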
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, idempotent, non-destructive). Description adds substantial value: enumerates all possible status values with definitions ('ready', 'extracting', etc.), discloses additional return fields (step, elapsed time, document ID), and clarifies cost. It also recommends a polling cadence (every 5-10 seconds); only hard rate limits go unmentioned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, perfectly structured: purpose → return values → additional fields → usage guidance. Every sentence carries unique information. No redundancy with schema or annotations. 'Free' prefix in final sentence efficiently signals cost before giving workflow instruction.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but description compensates by detailing the status enum, tracking fields, and workflow context. Annotations are present and clear. Could be improved by mentioning error handling or rate limiting for the polling pattern, but adequate for tool selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions ('List of MPNs to check (max 20)'). Description mentions 'one or more parts' which aligns with the parameter but adds no semantic detail beyond what the schema already provides. Baseline 3 is appropriate given complete schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Check') + resource ('extraction status of one or more parts'). Explicitly distinguishes from siblings 'prefetch_datasheets' and 'read_datasheet' by stating this polls progress after those operations, establishing the workflow relationship.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit when-to-use guidance ('use this to poll progress after prefetch_datasheets or read_datasheet') and cost signal ('Free'). Names specific sibling tools as workflow predecessors, making the orchestration pattern unambiguous.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_parts (Read-only, Idempotent)
Compare 2-5 electronic components side by side in a single call. For each part, returns merged provider data (pricing, stock, structured parameters, package) plus the cached datasheet summary if one exists, plus datasheet_status ('ready', 'extracting', or 'not_extracted').
Use this instead of calling get_part_details in a loop — it fans out provider queries in parallel and merges by MPN. For discovering candidates, use search_parts or find_alternative first; compare_parts assumes you already know which MPNs you want to compare.
Behavior:
Uses only cached datasheet summaries — does not trigger extraction. Call prefetch_datasheets first if you need summaries for parts that haven't been extracted yet.
Validates every MPN upfront. If any input is not a real part number (value, description, reference designator), the whole call is rejected with a 'rejected' map listing the offenders — other parts are not compared. Filter your list before calling.
If a valid MPN is not found at any provider, that part still appears in the response with an 'error' field; the other parts are compared normally.
IMPORTANT — part_numbers must be specific manufacturer part numbers (e.g. 'TPS54302DDCR', 'STM32F446RCT6') or LCSC numbers (e.g. 'C2837938'). Do NOT pass component values ('100nF', '10K'), descriptions ('buck converter'), or reference designators ('U3', 'R1').
Example: compare_parts(['TPS54302', 'LM2596', 'MP2359'])
| Name | Required | Description | Default |
|---|---|---|---|
| part_numbers | Yes | 2-5 specific manufacturer part numbers or LCSC numbers to compare head-to-head. Must be real MPNs — the call is rejected upfront if any entry is a value, description, or reference designator. | |
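The 'rejected' map and per-part 'error' field described under Behavior suggest a simple triage step on the client side. A sketch: the field names 'rejected' and 'error' come from the description, but the surrounding response shape (a 'parts' map keyed by MPN) is a guess, not a documented contract:

```python
def triage_comparison(response: dict) -> dict:
    """Split a compare_parts response into usable vs problem parts.
    'rejected' (upfront validation failures) and 'error' (valid MPN
    not found at any provider) are documented; the 'parts' map keyed
    by MPN is an assumed layout."""
    usable, problems = {}, {}
    for mpn, reason in response.get("rejected", {}).items():
        problems[mpn] = f"rejected upfront: {reason}"
    for mpn, data in response.get("parts", {}).items():
        if "error" in data:
            problems[mpn] = data["error"]
        else:
            usable[mpn] = data
    return {"usable": usable, "problems": problems}
```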
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent context beyond annotations: discloses data sources ('merged details from all providers'), caching behavior ('cached datasheet summaries'), and specific return fields ('datasheet_status', 'specs, pricing, availability'). Complements readOnlyHint by explaining what data is returned without contradicting safety annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences followed by a concrete example. Front-loaded with critical constraints (2-5 parts). No redundancy—each sentence adds distinct information (purpose, data sources/return value, specific fields).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a single-parameter comparison tool. Compensates for missing output_schema by describing return contents (merged provider data, specs, pricing, availability). Safety profile covered by annotations. Minor gap: doesn't mention error cases (e.g., invalid part numbers).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. The description adds value by reinforcing the '2-5' constraint in natural language and providing a concrete example (['TPS54302', 'LM2596', 'MP2359']) that clarifies expected input format beyond the schema's description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Compare') + resource ('electronic components') with explicit scope ('2-5', 'side by side'). Distinguishes from siblings like get_part_details (single part) and search_parts (discovery) by emphasizing comparison of specific known part numbers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Gives explicit alternatives guidance: 'use this instead of calling get_part_details in a loop' and 'for discovering candidates, use search_parts or find_alternative first'. The workflow boundary is unambiguous: compare known MPNs here, discover candidates elsewhere.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_alternative (Read-only, Idempotent)
Find alternative / second-source components for a given MPN. Returns parts ranked by how closely their specs, package, and category match the reference part, with a match_score and match_notes explaining each match.
When to use:
Original part is out of stock or end-of-life.
You need a cheaper equivalent with similar specs.
You need a second source for supply-chain resilience.
You need a drop-in replacement on an existing PCB (use constraints=['same_package']).
When NOT to use:
You're browsing a category from scratch — use search_parts.
You already have candidates and want a head-to-head comparison — use compare_parts directly.
Behavior:
Eval boards, dev kits, and breakout modules are filtered out — results are IC-level alternatives only.
'same_package' uses fuzzy matching, so SOT-23 ≡ SOT23, SOIC-8 ≡ SOP-8, LQFP64 ≡ TQFP64, etc. Exact string match is not required.
If provider search returns few candidates, the tool falls back to semantic search across extracted datasheets to surface niche alternatives.
The response includes a 'hint' field pointing to compare_parts / read_datasheet for drilling into promising alternatives.
Example: find_alternative('TPS54302', constraints=['same_package', 'in_stock'])
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max alternatives to return (default 5, max 20). | |
| constraints | No | Optional filters to narrow alternatives. 'in_stock' = only parts with non-zero stock at any provider. 'same_package' = package footprint must match (fuzzy). 'jlcpcb' = must be available at JLCPCB (useful for JLCPCB-assembled boards — also restricts the provider search to JLCPCB). | |
| part_number | Yes | The MPN to find alternatives for. | |
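The fuzzy package matching described above (SOT-23 ≡ SOT23, SOIC-8 ≡ SOP-8, LQFP64 ≡ TQFP64) can be approximated locally with a small normalizer. The alias table below covers only the equivalences the description names; the server's actual matching rules are not documented:

```python
import re

# Family aliases the description calls out (SOP = SOIC, TQFP = LQFP);
# the server's full alias table is unknown.
ALIASES = {"SOP": "SOIC", "TQFP": "LQFP"}

def normalize_package(name: str) -> str:
    """Strip separators and fold known aliases so that
    'SOT-23' == 'SOT23', 'SOP-8' == 'SOIC-8', 'TQFP64' == 'LQFP64'."""
    compact = re.sub(r"[-_\s]+", "", name.upper())
    m = re.match(r"([A-Z]+)(\d*)$", compact)
    if not m:
        return compact
    family, pins = m.groups()
    return ALIASES.get(family, family) + pins

assert normalize_package("SOT-23") == normalize_package("SOT23")
assert normalize_package("SOP-8") == normalize_package("SOIC-8")
assert normalize_package("TQFP64") == normalize_package("LQFP64")
```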
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover the safety profile (readOnly, non-destructive, idempotent) and open-world nature. The description adds valuable behavioral context not in annotations: it specifies the search logic ('similar specs, same package, or compatible pinout') and confirms the cross-provider/external search implied by openWorldHint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tight sentences followed by a concrete example. Information is front-loaded (purpose → mechanism → usage context), with zero redundancy. Every sentence earns its place without verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 3-parameter schema, strong annotations, and lack of output schema, the description provides sufficient context: clear purpose, usage scenarios, matching criteria, and an example call. A minor gap is lack of indication about return value structure, though this is acceptable for a read-only search tool of this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents all three parameters including the constraints syntax. The description provides a concrete usage example, but this illustrates rather than adds semantic meaning beyond the comprehensive schema documentation. Baseline 3 is appropriate given the schema carries the burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Find alternative/substitute components') and resource (components for a given part number). It distinguishes from siblings like 'search_parts' (general discovery) and 'compare_parts' (side-by-side comparison) by emphasizing the substitute-finding use case and cross-provider search scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit 'When to use' and 'When NOT to use' lists that name sibling tools directly: search_parts for browsing a category from scratch, and compare_parts when you already have candidates for a head-to-head comparison. The orchestration guidance is complete.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_part_details (Read-only)
Get full details for a specific electronic component by manufacturer part number (MPN) or LCSC number. Returns specs, pricing, and stock from all configured providers, plus the cached datasheet summary if available. Includes datasheet_status ('ready', 'extracting', or 'not_extracted') and available_sections when ready. Set prefetch_datasheet=true to trigger background extraction — no extra charge. Use after search_parts to drill into a specific result.
The part_number must be a specific manufacturer part number (e.g. 'TPS54302DDCR', 'STM32F446RCT6') or LCSC number (e.g. 'C2837938'). Do NOT pass bare component values ('100nF', '10K'), descriptions ('buck converter'), or reference designators ('R1', 'U3').
| Name | Required | Description | Default |
|---|---|---|---|
| provider | No | Which provider to query: 'all' (default), 'jlcpcb', 'mouser', or 'digikey' | all |
| part_number | Yes | Specific manufacturer part number (MPN) or LCSC number (e.g. 'C2837938'). Not a value or description. | |
| prefetch_datasheet | No | Trigger background datasheet extraction (no extra charge). | false |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations establish read-only safety (readOnlyHint: true, destructiveHint: false). The description adds substantial behavioral context beyond these hints: specific return fields (specs, pricing, stock, datasheet_status with enum values 'ready'/'extracting'/'not_extracted', available_sections), and the side-effect behavior of prefetch_datasheet ('trigger background extraction', 'no extra charge').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two dense paragraphs with zero waste. The first sentence establishes purpose and return value; the second covers workflow context; the third paragraph provides critical input validation constraints. Every sentence conveys unique information not duplicated in the schema or annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description comprehensively details the return structure (specs, pricing, stock, datasheet summary, status fields) and behavioral states. Deducting one point only for lack of error handling or rate limit disclosure, which would be necessary for a complete 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds significant value by providing expanded positive examples ('TPS54302DDCR', 'STM32F446RCT6') and explicit negative constraints ('100nF', '10K', 'buck converter') for part_number that clarify the expected input format beyond the schema's basic example.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get full details'), resource ('electronic component'), and key inputs ('manufacturer part number or LCSC number'). It explicitly distinguishes from sibling tool 'search_parts' by stating 'Use after search_parts to drill into a specific result,' establishing the tool's position in the workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use ('Use after search_parts to drill into a specific result') and clear when-not-to-use guidance ('Do NOT pass bare component values... descriptions... or reference designators'). The negative examples ('100nF', 'R1') prevent common misuse patterns, effectively differentiating from the broad search capability of sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
prefetch_datasheets (Read-only)
Trigger background datasheet extraction for multiple parts at once (up to 20). Non-blocking — returns immediately with the status of each part: 'ready' (already extracted), 'queued' (extraction started), or 'error'. Use this to warm up datasheets for a BOM before calling read_datasheet. Example: prefetch_datasheets(['TPS54302', 'ADS1115', 'LP5907'])
IMPORTANT — only pass specific manufacturer part numbers (MPNs). Before calling, verify each part number:
Must be a real MPN like 'STM32F446RCT6', 'TPS54302DDCR', 'GRM155R71C104KA88D' — NOT a description or value.
Do NOT pass bare values like '100nF', '10K', '4.7uF', '300ohm' — these are component values, not part numbers.
Do NOT pass generic descriptions like 'LED red', 'capacitor 100nF', 'resistor 0603'.
Do NOT pass BOM reference designators like 'R1', 'C5', 'U3'.
Do NOT pass test points (TP1, TP_GPIO_1), fiducials (FID1), mounting holes (MH1), or other PCB features.
Do NOT pass board/module names like 'CM5IO_ComputeModule5' — these are boards, not components.
Do NOT concatenate MPNs with LCSC numbers (e.g. 'SMAJ24A-C148222'). Pass 'SMAJ24A' or 'C148222' separately.
If the BOM only has values/descriptions without MPNs, use search_parts first to find the actual MPN.
Passives from BOMs often lack MPNs — skip them rather than prefetching a bare value.
Focus on ICs, semiconductors, and components with substantial datasheets. Simple mechanical parts (headers, connectors, standoffs) rarely have useful datasheets.
| Name | Required | Description | Default |
|---|---|---|---|
| part_numbers | Yes | List of MPNs to prefetch (max 20). Must be specific manufacturer part numbers, not values or descriptions. | |
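The "do NOT pass" rules above can be turned into a rough client-side pre-filter. These regexes are illustrative guesses at the patterns, not the server's validation logic; note that LCSC numbers like 'C148222' must survive, so the reference-designator pattern only matches short digit runs:

```python
import re

# Component values: '100nF', '10K', '4.7uF', '300ohm', ...
VALUE_RE = re.compile(r"^\d+(\.\d+)?\s*(n|u|µ|p|m|k|M)?(F|H|ohm|Ω|R|V|A)?$", re.IGNORECASE)
# Reference designators and PCB features: 'R1', 'C5', 'U3', 'TP1', 'FID1', 'MH1'.
# Digit run capped at 3 so LCSC numbers ('C148222') are not caught.
REFDES_RE = re.compile(r"^(R|C|L|U|Q|D|J|TP|FID|MH)_?\d{1,3}$", re.IGNORECASE)

def looks_like_mpn(entry: str) -> bool:
    """Heuristic filter mirroring the description's rules; err on the
    side of calling search_parts when unsure."""
    entry = entry.strip()
    if VALUE_RE.match(entry) or REFDES_RE.match(entry):
        return False
    if " " in entry:              # descriptions like 'LED red'
        return False
    return len(entry) >= 5        # real MPNs are rarely shorter

bom = ["STM32F446RCT6", "100nF", "R1", "LED red", "C148222", "TPS54302DDCR"]
print([p for p in bom if looks_like_mpn(p)])
# -> ['STM32F446RCT6', 'C148222', 'TPS54302DDCR']
```

Entries the filter drops should go through search_parts first to resolve an actual MPN, per the guidance above.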
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (readOnlyHint=true), description discloses non-blocking behavior ('returns immediately'), specific return statuses ('ready', 'queued', 'error'), and side effects ('extraction started'). Missing rate limits or retry behavior prevents a 5.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with logical flow: purpose → return values → example → critical warnings. Bullet points efficiently convey validation rules. Slightly verbose but necessary given the complexity of MPN validation requirements.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with full schema coverage and no output schema, description is comprehensive: covers functionality, return semantics, workflow integration, input validation edge cases, and handling of passives without MPNs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage (baseline 3), description adds substantial semantic value with concrete examples of valid MPNs ('STM32F446RCT6') vs invalid inputs ('100nF', 'R1', 'LED red'), and workflow guidance to use search_parts first if MPNs are missing.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Trigger background datasheet extraction' with clear resource scope 'multiple parts at once (up to 20)' and distinguishes from sibling read_datasheet by explaining this is for warming up cache before reading.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('warm up datasheets for a BOM before calling read_datasheet') and names specific sibling alternatives (search_parts for finding MPNs, read_datasheet for actual reading). Detailed negative constraints specify when NOT to use (bare values, reference designators, descriptions).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
read_datasheet — A, Read-only
Read from a component's datasheet. Two modes:
Section mode (default): Returns a named section. Start with section='summary' to get an overview and a list of available_sections. Then request specific sections by name. Section names are dynamic — any heading in the actual datasheet works (e.g. 'register_map', 'i2c_interface', 'power_management'). If a section name isn't found, automatically falls back to search mode.
Search mode: Semantic search within the part's datasheet. Best for targeted questions (register bit fields, I2C config, specific specs). Use when you need to find specific information rather than a whole section.
First call for a new part triggers extraction (30s-2min); subsequent calls return cached results.
Datasheet vs Reference Manual: Manufacturer datasheets cover high-level specs, pinout, absolute maximum ratings, and package info. For microcontrollers (STM32, nRF52, RP2040), register-level programming details (I2C CR1/CR2, DMA config, interrupt bits) are in a separate Reference Manual, not the datasheet. The summary's available_sections will show what's actually present.
The part_number must be a specific manufacturer part number (e.g. 'TPS54302', 'STM32F446RCT6') or LCSC number (e.g. 'C2837938'). Do NOT pass bare component values ('100nF', '10K'), descriptions, or reference designators.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Reading mode: 'section' (default) returns a named section, 'search' does semantic search | section |
| limit | No | Max search results for search mode (default 10). Each result includes adjacent context chunks for continuity. | |
| query | No | Search query for search mode (e.g. 'charge voltage register', 'I2C address') | |
| section | No | Section name for section mode. Start with 'summary' to discover available sections. Common: summary, pinout, electrical, abs_max, register_map, timing, package. Any heading in the datasheet works (slugified). | summary |
| part_number | Yes | Specific manufacturer part number (MPN) or LCSC number. Not a value or description. | |
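The two calling modes can be sketched as argument builders for the tool call. The helper names here are illustrative (the actual invocation goes through your MCP client's tool-call mechanism); the argument keys match the parameter table above:

```python
def section_args(part_number: str, section: str = "summary") -> dict:
    """Section mode: request a named section, starting with 'summary'."""
    return {"part_number": part_number, "mode": "section", "section": section}

def search_args(part_number: str, query: str, limit: int = 10) -> dict:
    """Search mode: semantic search within one part's datasheet."""
    return {"part_number": part_number, "mode": "search",
            "query": query, "limit": limit}

# Typical workflow: discover sections first, then drill down.
first = section_args("TPS54302")                    # start with the summary
pins = section_args("TPS54302", section="pinout")   # then a specific section
bits = search_args("STM32F446RCT6", "I2C address")  # or a targeted search
```

Starting with `summary` matters because section names are dynamic per datasheet; the summary's available_sections list tells you which names will resolve.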
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds crucial behavioral details beyond annotations: mentions extraction latency (30s-2min) and caching behavior, explains automatic fallback from section mode to search mode when section not found, and notes that section names are dynamic. Annotations only cover read-only/destructive hints; description supplies operational timing and fallback logic.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear visual hierarchy using bold headers for modes. Information-dense without redundancy. Front-loaded with core purpose, followed by mode explanations, workflow guidance, and input constraints. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a read operation with no output schema. Covers both modes thoroughly, explains the extraction/caching lifecycle, and documents expected latency. Could briefly mention return format (string vs object), but completeness is strong given the schema and annotations provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3), but description adds significant value with concrete examples: section names ('register_map', 'i2c_interface'), query examples ('charge voltage register'), and valid/invalid part_number formats ('TPS54302' vs '100nF'). Also explains the workflow logic (start with summary, then specific sections).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Read') and resource ('component's datasheet'). Distinguishes two operational modes (section vs search) and implicitly differentiates from sibling tools like 'search_datasheets' (which likely searches across parts) by emphasizing this reads from a specific part's datasheet.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent explicit guidance: instructs to 'Start with section='summary' to get an overview', states when to use search mode ('Best for targeted questions...Use when you need to find specific information'), and provides explicit negative constraint ('Do NOT pass bare component values'). Includes latency expectations for first calls.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_datasheets — A, Read-only, Idempotent
Semantic search across all extracted datasheets. Finds components matching natural language queries about specifications, features, or capabilities. Best for broad spec-based discovery across all parts (e.g. 'low-noise LDO with PSRR above 70dB'). Only searches datasheets that have been previously extracted — not all parts that exist. For finding specific parts by number, use search_parts instead.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 15) | |
| query | Yes | Natural language search query | |
| section_type | No | Optional: limit search to a specific section type | all |
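The routing rule implied above (specific part numbers go to search_parts, natural-language spec queries go here) can be sketched as a rough dispatcher. This is a simplification under an invented name (`pick_search_tool` is not part of the server): keyword queries like 'buck converter 3A' also work with search_parts, so a real agent would weigh more signals than this:

```python
import re

def pick_search_tool(query: str) -> str:
    """Route part-number-like queries to search_parts, spec prose to search_datasheets."""
    tokens = query.split()
    # A lone alphanumeric token like 'STM32F103' reads as a part number.
    if (len(tokens) == 1
            and re.fullmatch(r"[A-Za-z0-9\-]+", query)
            and re.search(r"\d", query)):
        return "search_parts"
    return "search_datasheets"
```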
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly/destructive), while description adds critical behavioral context about data availability ('Only searches datasheets that have been previously extracted — not all parts that exist'). Does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences efficiently cover purpose, methodology, use case with example, scope limitations, and sibling alternatives. No redundant information; every sentence earns its place and content is well front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter search tool with full schema coverage and annotations present, the description adequately covers scope constraints, search methodology, sibling differentiation, and practical examples without requiring output schema documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. Description adds value by providing a concrete example query that illustrates the expected natural language format for technical specifications, enhancing understanding of the 'query' parameter semantics beyond the schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly identifies the action (semantic search), resource (extracted datasheets), and differentiates from siblings by contrasting with search_parts for specific part number lookups.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('Best for broad spec-based discovery'), provides a concrete example query ('low-noise LDO with PSRR above 70dB'), and explicitly names the alternative tool for different use cases ('use search_parts instead').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_parts — A, Read-only, Idempotent
Search for electronic components by part number, description, or keyword. Start here — this is the best entry point for finding components. Queries all configured providers in parallel. Results are merged by MPN with indicative pricing and stock from each source. Each result includes datasheet_status ('ready', 'extracting', or 'not_extracted') so you know which parts have datasheets available for read_datasheet. Best with specific part numbers or keywords (e.g. 'STM32F103', 'buck converter 3A'). For spec-based discovery in natural language, use search_datasheets instead.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results per provider (default 20). With providers='all', total results can be up to 3× this value (one set per provider). | |
| query | Yes | Search query (part number or keyword) | |
| providers | No | Which providers to query: 'all' (default), 'jlcpcb', 'mouser', or 'digikey' | all |
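The merge behavior described above ('results are merged by MPN with indicative pricing and stock from each source') might look roughly like this. The field names (`mpn`, `price`, `stock`, `offers`) are assumptions for illustration, not the server's actual response schema:

```python
def merge_by_mpn(provider_results: dict[str, list[dict]]) -> list[dict]:
    """Combine per-provider search hits into one record per MPN."""
    merged: dict[str, dict] = {}
    for provider, results in provider_results.items():
        for hit in results:
            entry = merged.setdefault(hit["mpn"], {"mpn": hit["mpn"], "offers": []})
            entry["offers"].append({
                "provider": provider,           # e.g. 'mouser', 'digikey', 'jlcpcb'
                "price": hit.get("price"),      # indicative pricing per source
                "stock": hit.get("stock"),
            })
    return list(merged.values())
```

This also explains the note on the limit parameter: with providers='all', each provider contributes its own result set before merging, so the total can be up to 3× the per-provider limit.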
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable execution context beyond annotations: explains parallel provider querying, result merging logic ('merged by MPN'), and output field semantics ('datasheet_status'). Annotations cover safety profile (readOnlyHint, destructiveHint), so description wisely focuses on operational behavior rather than repeating safety metadata.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Seven sentences with zero redundancy: purpose statement, entry-point claim, technical execution details, result structure, output field documentation, usage examples, and sibling differentiation. Information density is high with optimal front-loading.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description adequately compensates by documenting key result fields (datasheet_status, merged pricing/stock) and multi-provider behavior. Covers all necessary invocation context for a search tool with good input schema coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage (baseline 3), the description adds substantive value through concrete query examples ('STM32F103') and behavioral context ('Queries all configured providers in parallel') that clarifies the providers parameter's runtime behavior beyond the static enum values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb and resource ('Search for electronic components'), delineates searchable fields ('part number, description, or keyword'), and explicitly distinguishes itself from siblings like search_datasheets ('For spec-based discovery... use search_datasheets instead') and hints at integration with read_datasheet via datasheet_status field.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit entry-point guidance ('Start here — this is the best entry point'), clear alternative routing ('use search_datasheets instead'), and concrete query examples ('STM32F103', 'buck converter 3A') that clarify expected input patterns.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.