Server Details

Electronic component datasheets for AI agents — specs, pinouts, package data on demand.

Status: Healthy
Transport: Streamable HTTP
Repository: octoco-ltd/sheetsdata-mcp
GitHub Stars: 5
Server Listing: SheetsData MCP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.5/5 across 10 of 10 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but there is some overlap between search_parts and search_datasheets, as both handle component discovery with different scopes (provider-based vs. datasheet-based). Additionally, check_design_fit and analyze_image both involve datasheet analysis but target different aspects (limits vs. visual data). The descriptions clarify these boundaries, reducing confusion.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with snake_case, such as analyze_image, check_design_fit, and search_parts. There are no deviations in naming conventions, making the set predictable and easy to understand.

Tool Count: 5/5

With 10 tools, the server is well-scoped for electronic component and datasheet management. Each tool serves a specific role in the workflow, from discovery (search_parts) to analysis (read_datasheet) and validation (check_design_fit), without redundancy or obvious gaps in count.

Completeness: 5/5

The tool set provides comprehensive coverage for the domain, including component search, comparison, datasheet reading, image analysis, design validation, and extraction management. It supports full lifecycle tasks from discovery to validation, with no significant gaps that would hinder agent workflows.

Available Tools

12 tools
analyze_image: A
Read-only, Idempotent

Analyze an image from a component's datasheet using vision AI. Use this when read_datasheet returns a section containing images and you need to extract data from a graph, package drawing, pin diagram, or circuit schematic. Pass the image_key from the read_datasheet response (the storage path in the image URL). Optionally pass a specific question to focus the analysis.

IMPORTANT: For precise numeric values (electrical specs, max ratings), prefer read_datasheet text tables first — they are more reliable than vision-extracted graph data. Use analyze_image for visual information not available in text: package dimensions from drawings, pin assignments from diagrams, graph trends, and approximate values from characteristic curves.

Examples:

  • analyze_image(part_number='IRFZ44N', image_key='images/abc123.png') -> classifies and describes the image

  • analyze_image(part_number='IRFZ44N', image_key='images/abc123.png', question='What is the drain current at Vgs=5V?')

Parameters (JSON Schema):
  • question (optional): Optional specific question about the image (e.g. 'What are the package dimensions?')
  • image_key (required): Image storage path from read_datasheet output (e.g. 'images/abc123.png')
  • part_number (required): MPN of the component
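
A minimal sketch of the read_datasheet-to-analyze_image handoff described above. Here `call_tool(name, args)` is a stand-in for your MCP client's tool invocation (for example, `ClientSession.call_tool` in the Python MCP SDK) and is assumed to return the tool's parsed JSON result; the `images` and `analysis` response fields are likewise assumptions, not confirmed by this listing.

```python
# Hypothetical handoff: read a datasheet section, then run vision analysis on an
# image it contains. `call_tool` is an assumed MCP-client helper (see above).

def drain_current_at_vgs(part: str, vgs: float) -> str:
    section = call_tool("read_datasheet", {"part_number": part, "section": "electrical"})
    images = section.get("images", [])  # assumed field carrying image_key entries
    if not images:
        # Prefer text tables for precise numbers anyway, per the guidance above.
        return "no images in this section"
    result = call_tool("analyze_image", {
        "part_number": part,
        "image_key": images[0]["image_key"],  # storage path from read_datasheet
        "question": f"What is the drain current at Vgs={vgs}V?",
    })
    return result["analysis"]  # assumed response field

print(drain_current_at_vgs("IRFZ44N", 5.0))
```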
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent/destructive status, so the safety profile is covered. The description adds crucial operational context: vision AI's reliability limitations for precise numeric values, the dependency chain (image_key comes from read_datasheet output), and what types of visual content it handles (graphs, schematics, diagrams). It also implies idempotency through the examples showing consistent analysis.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure: purpose statement first, specific usage conditions second, important limitations highlighted with 'IMPORTANT:' callout, concrete examples last. Every sentence advances understanding. No tautology or repetition of schema fields. Length appropriate for a tool with complex sibling dependencies.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, the description adequately implies return value through examples ('classifies and describes the image'). The sibling relationship with read_datasheet is fully explained. Minor gap: could explicitly state output format (structured text vs raw description), but examples provide sufficient clues for an agent to proceed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds value by explaining the semantic relationship between tools: image_key is 'from the read_datasheet response (the storage path in the image URL)'—critical context for correct invocation. It also clarifies the question parameter's purpose ('to focus the analysis') with concrete examples showing how to formulate technical queries.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a precise action statement ('Analyze an image from a component's datasheet using vision AI'), identifying the specific resource (datasheet images) and mechanism (vision AI). It clearly distinguishes from sibling tool read_datasheet by stating exactly when to use each: analyze_image is for when read_datasheet 'returns a section containing images.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use conditions ('Use this when read_datasheet returns a section containing images') and when-not-to-use guidance ('IMPORTANT: For precise numeric values... prefer read_datasheet text tables first'). Lists specific appropriate use cases (package dimensions, pin assignments, graph trends) and explicitly names the alternative tool for text data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_design_fit: A
Read-only

Validate whether a component will work within your operating conditions. Compares your design parameters against the datasheet's absolute maximum ratings and recommended operating conditions. Returns PASS/FAIL/WARNING per parameter with margin percentages.

Parameter mapping by component type:

  • Buck/boost converters: input_voltage, output_voltage, output_current, ambient_temp

  • MOSFETs: supply_voltage=VDS, output_current=ID (drain current), ambient_temp

  • LDOs: input_voltage, output_voltage, output_current, ambient_temp

  • Logic ICs: supply_voltage=VCC, ambient_temp

Result semantics:

  • PASS: user value is more than 10% below the datasheet limit — comfortable margin.

  • WARNING: user value is within 10% of the limit — part will work but with thin margin for part-to-part variation, temperature drift, and transients. Consider derating.

  • FAIL: user value exceeds the limit — part is out of spec and will be stressed or damaged.

Behavior:

  • Two-tier validation. For parameters in our structured database (Vin, Iout, operating temp, etc.), returns instantly at no LLM cost. For parameters only found in the datasheet text, falls back to an LLM read of the absolute-max and recommended-operating-conditions sections. The 'validation_method' field in the response tells you which path was used.

  • If the part hasn't been extracted yet and the LLM fallback is needed, this call triggers extraction (30s-2min). Returns status='extracting' if so — poll check_extraction_status and retry.

When NOT to use:

  • You need power dissipation or junction-temperature rise — this tool only checks nameplate limits. Pull RthJA from read_datasheet and calculate yourself.

  • You need SOA (safe-operating-area) curve checks for MOSFETs — use analyze_image on the SOA graph.

  • You're checking a passive or mechanical part with no abs-max table — there's nothing for this tool to compare against.

Example: check_design_fit('TPS54302', input_voltage=24, output_current=2.5, ambient_temp=70)

Parameters (JSON Schema):
  • part_number (required): Specific manufacturer part number to validate. Not a value or description.
  • ambient_temp (optional): Ambient temperature TA (°C).
  • input_voltage (optional): Input voltage / VIN (V). For converters.
  • output_current (optional): Output current / IOUT (A). For MOSFETs, this is drain current ID.
  • output_voltage (optional): Output voltage / VOUT (V). For converters/LDOs.
  • supply_voltage (optional): Supply voltage / VCC / VDD (V). For MOSFETs, this is VDS (drain-source voltage). NOT used for power dissipation — compared against VDS(max) only.
  • switching_frequency (optional): Switching frequency FSW (kHz). For converters.
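
A minimal local illustration of the PASS/WARNING/FAIL semantics stated above (a 10% margin band against a datasheet limit). The server computes this itself; the sketch only shows how to interpret the returned margin percentages.

```python
# Local illustration of the documented verdict semantics, not the server's code.

def verdict(user_value: float, limit: float) -> str:
    margin_pct = (limit - user_value) / limit * 100
    if margin_pct < 0:
        return "FAIL"     # user value exceeds the limit: out of spec
    if margin_pct <= 10:
        return "WARNING"  # within 10% of the limit: thin margin, consider derating
    return "PASS"         # more than 10% below the limit: comfortable margin

# 24 V input against a hypothetical 28 V VIN(max): ~14.3% margin, so PASS.
print(verdict(24.0, 28.0))
```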
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds substantial context beyond annotations: explains it checks 'absolute maximum ratings and recommended operating conditions' (specific validation logic), and discloses output format ('PASS/FAIL/WARNING per parameter with margins') which compensates for missing output schema. Consistent with readOnlyHint=true.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four efficiently structured sentences: purpose upfront, methodology, output format, and concrete example. Every sentence earns its place with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, description fully explains return structure (pass/fail/warning with margins) and validation logic. Example provides complete usage pattern. Adequate for a 7-parameter validation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Baseline is 3 given 100% schema description coverage. The concrete example invocation ('TPS54302', input_voltage=24...) adds practical context for how electrical parameters map to real component validation, elevating above baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb (validate) and resource (component against operating conditions) clearly stated. Distinguished from siblings like 'get_part_details' (retrieval) or 'compare_parts' (comparison) by focusing specifically on design validation against datasheet ratings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage context through 'operating conditions' and design parameters, but lacks explicit guidance on when to choose this over 'get_part_details' for simple spec lookup or how it relates to 'read_datasheet'. No alternatives or exclusions named.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_extraction_status: A
Read-only, Idempotent

Check the extraction status of one or more parts. Free. Each entry includes the current extraction step, elapsed seconds, and document ID.

Use after prefetch_datasheets or after read_datasheet triggers a new extraction.

Recommended polling cadence: every 5-10 seconds. Extraction typically takes 30s-2min for new parts, so polling faster than every 5s wastes calls. Stop polling once status is 'ready', 'failed', 'no_source', or 'unsupported'.

DATASHEET STATUS VALUES:

  • 'ready' — extracted and indexed; call read_datasheet, search_datasheets, or analyze_image.

  • 'extracting' / 'in_progress' / 'queued' / 'pending' — extraction running or scheduled. Poll check_extraction_status every 5-10s until 'ready' or 'failed'. Typical time: 30s-2min.

  • 'not_extracted' — known part but datasheet hasn't been fetched yet. Trigger it via prefetch_datasheets (cheapest) or by calling read_datasheet (auto-triggers on first read).

  • 'no_source' — we couldn't find a public datasheet URL for this MPN. First, retry prefetch_datasheets in 10-30s (the URL resolver re-runs and often finds a source on the second pass). If still 'no_source', the agent can upload the PDF manually via request_datasheet_upload + confirm_datasheet_upload (see those tools). Org-uploaded datasheets are private to the org.

  • 'unsupported' — PDF exists but can't be extracted (scanned image-only, encrypted, or corrupted). Upload a clean text-based PDF via request_datasheet_upload to override.

  • 'failed' / 'error' — extraction errored. The response includes the error reason. Retry via prefetch_datasheets or escalate to support.

  • 'rejected' — input wasn't a real MPN (bare value like '100nF', description, or reference designator). Fix the input and re-call.

  • 'deduplicated' — another part in the family already has this datasheet; same content is returned under the primary MPN.

Parameters (JSON Schema):
  • part_numbers (required): 1-20 MPNs to check. Must be specific manufacturer part numbers, not values or descriptions.
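
A polling sketch that follows the cadence recommended above (every 5-10 s, stopping on a terminal status). `call_tool` is an assumed MCP-client helper, and the keyed-by-MPN response shape is an assumption based on the description.

```python
import time

# Terminal statuses per the description above; everything else means "keep polling".
TERMINAL = {"ready", "failed", "no_source", "unsupported"}

def wait_for_extraction(parts: list[str], interval: float = 7.0, timeout: float = 180.0) -> dict:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        statuses = call_tool("check_extraction_status", {"part_numbers": parts})
        # Assumed shape: {mpn: {"status": ..., "step": ..., "elapsed_seconds": ...}}
        if all(s["status"] in TERMINAL for s in statuses.values()):
            return statuses
        time.sleep(interval)  # polling faster than ~5 s just wastes calls
    raise TimeoutError(f"extraction still running for {parts}")
```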
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover the safety profile (readOnly, idempotent, non-destructive). The description adds substantial value: it enumerates all possible status values with definitions ('ready', 'extracting', etc.), discloses additional return fields (step, elapsed time, document ID), and clarifies cost and polling cadence. It does not mention rate limits, preventing a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, perfectly structured: purpose → return values → additional fields → usage guidance. Every sentence carries unique information. No redundancy with schema or annotations. 'Free' prefix in final sentence efficiently signals cost before giving workflow instruction.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description compensates by detailing the status enum, tracking fields, and workflow context. Annotations are present and clear. Could be improved by mentioning error handling or rate limiting for the polling pattern, but adequate for tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions ('List of MPNs to check (max 20)'). Description mentions 'one or more parts' which aligns with the parameter but adds no semantic detail beyond what the schema already provides. Baseline 3 is appropriate given complete schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Check') + resource ('extraction status of one or more parts'). Explicitly distinguishes from siblings 'prefetch_datasheets' and 'read_datasheet' by stating this polls progress after those operations, establishing the workflow relationship.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit when-to-use guidance ('use this to poll progress after prefetch_datasheets or read_datasheet') and cost signal ('Free'). Names specific sibling tools as workflow predecessors, making the orchestration pattern unambiguous.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_parts: A
Read-only, Idempotent

Compare 2-5 electronic components side by side in a single call. For each part, returns merged provider data (pricing, stock, structured parameters, package) plus the cached datasheet summary if one exists, plus datasheet_status ('ready', 'extracting', or 'not_extracted').

Use this instead of calling get_part_details in a loop — it fans out provider queries in parallel and merges by MPN. For discovering candidates, use search_parts or find_alternative first; compare_parts assumes you already know which MPNs you want to compare.

Behavior:

  • Uses only cached datasheet summaries — does not trigger extraction. Call prefetch_datasheets first if you need summaries for parts that haven't been extracted yet.

  • Validates every MPN upfront. If any input is not a real part number (value, description, reference designator), the whole call is rejected with a 'rejected' map listing the offenders — other parts are not compared. Filter your list before calling.

  • If a valid MPN is not found at any provider, that part still appears in the response with an 'error' field; the other parts are compared normally.

IMPORTANT — part_numbers must be specific manufacturer part numbers (e.g. 'TPS54302DDCR', 'STM32F446RCT6') or LCSC numbers (e.g. 'C2837938'). Do NOT pass component values ('100nF', '10K'), descriptions ('buck converter'), or reference designators ('U3', 'R1').

Example: compare_parts(['TPS54302', 'LM2596', 'MP2359'])

DATASHEET STATUS VALUES:

  • 'ready' — extracted and indexed; call read_datasheet, search_datasheets, or analyze_image.

  • 'extracting' / 'in_progress' / 'queued' / 'pending' — extraction running or scheduled. Poll check_extraction_status every 5-10s until 'ready' or 'failed'. Typical time: 30s-2min.

  • 'not_extracted' — known part but datasheet hasn't been fetched yet. Trigger it via prefetch_datasheets (cheapest) or by calling read_datasheet (auto-triggers on first read).

  • 'no_source' — we couldn't find a public datasheet URL for this MPN. First, retry prefetch_datasheets in 10-30s (the URL resolver re-runs and often finds a source on the second pass). If still 'no_source', the agent can upload the PDF manually via request_datasheet_upload + confirm_datasheet_upload (see those tools). Org-uploaded datasheets are private to the org.

  • 'unsupported' — PDF exists but can't be extracted (scanned image-only, encrypted, or corrupted). Upload a clean text-based PDF via request_datasheet_upload to override.

  • 'failed' / 'error' — extraction errored. The response includes the error reason. Retry via prefetch_datasheets or escalate to support.

  • 'rejected' — input wasn't a real MPN (bare value like '100nF', description, or reference designator). Fix the input and re-call.

  • 'deduplicated' — another part in the family already has this datasheet; same content is returned under the primary MPN.

Parameters (JSON Schema):
  • part_numbers (required): 2-5 specific manufacturer part numbers or LCSC numbers to compare head-to-head. Must be real MPNs — the call is rejected upfront if any entry is a value, description, or reference designator.
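
Since a single bad entry rejects the whole call, it can be worth pre-filtering obvious non-MPNs client-side before invoking the tool. The regexes below are rough illustrative heuristics, not the server's actual validation, and `call_tool` is an assumed MCP-client helper.

```python
import re

# Rough heuristics for entries that are values or reference designators, not MPNs.
LOOKS_LIKE_VALUE = re.compile(r"^\d+(\.\d+)?\s*(nF|uF|pF|mH|uH|[kKRM]|Ohm)$", re.IGNORECASE)
LOOKS_LIKE_REFDES = re.compile(r"^[RLCUQDJ]\d+$")

def compare(candidates: list[str]) -> dict:
    mpns = [c for c in candidates
            if not (LOOKS_LIKE_VALUE.match(c) or LOOKS_LIKE_REFDES.match(c))]
    if not 2 <= len(mpns) <= 5:
        raise ValueError("compare_parts takes 2-5 real part numbers")
    return call_tool("compare_parts", {"part_numbers": mpns})

comparison = compare(["TPS54302", "LM2596", "MP2359", "100nF", "U3"])  # drops the last two
```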
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Excellent context beyond annotations: discloses data sources ('merged details from all providers'), caching behavior ('cached datasheet summaries'), and specific return fields ('datasheet_status', 'specs, pricing, availability'). Complements readOnlyHint by explaining what data is returned without contradicting safety annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences followed by a concrete example. Front-loaded with critical constraints (2-5 parts). No redundancy—each sentence adds distinct information (purpose, data sources/return value, specific fields).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a single-parameter comparison tool. Compensates for missing output_schema by describing return contents (merged provider data, specs, pricing, availability). Safety profile covered by annotations. Minor gap: doesn't mention error cases (e.g., invalid part numbers).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. The description adds value by reinforcing the '2-5' constraint in natural language and providing a concrete example (['TPS54302', 'LM2596', 'MP2359']) that clarifies expected input format beyond the schema's description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Compare') + resource ('electronic components') with explicit scope ('2-5', 'side by side'). Distinguishes from siblings like get_part_details (single part) and search_parts (discovery) by emphasizing comparison of specific known part numbers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit 'when to use vs alternatives' guidance, but implies usage through the '2-5 specific part numbers' constraint and concrete example. Lacks explicit guidance like 'use search_parts to find parts first, then use this to compare specific candidates'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

confirm_datasheet_upload: A
Destructive

Confirm a datasheet upload started via request_datasheet_upload. Pass the upload_token you got back from the request step. The server downloads the uploaded bytes, re-hashes to verify integrity, validates that it's a real PDF with the MPN on the first page, creates the private Document + Component records, charges the upload fee (50¢), and queues extraction.

Success response: document_id, mpn, sha256, file_size_bytes, status='pending'. Poll check_extraction_status with the MPN to wait for extraction to finish (30s-2min typically).

Failure modes:

  • 'upload_not_found' — no bytes at the upload URL yet. Retry your curl upload.

  • 'sha256_mismatch' — uploaded bytes hash differs from expected_sha256. Re-compute the hash and re-request.

  • 'invalid_pdf' — bytes aren't a parseable PDF. No charge.

  • 'mpn_not_in_pdf' — MPN (or its stem) isn't on the first page. Either you uploaded the wrong file or it's a scanned image-only PDF. No charge.

  • 'token_expired' — upload token is older than 15 minutes. Restart via request_datasheet_upload.

Parameters (JSON Schema):
  • upload_token (required): Opaque token returned by request_datasheet_upload.
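
A sketch of branching on the failure modes listed above. `call_tool` is an assumed MCP-client helper, and the `error` field name is illustrative: the description enumerates the failure codes but not the exact response key.

```python
def confirm(upload_token: str) -> str:
    result = call_tool("confirm_datasheet_upload", {"upload_token": upload_token})
    error = result.get("error")  # illustrative field name
    if error is None:
        return result["document_id"]  # status='pending': poll check_extraction_status
    if error in ("upload_not_found", "sha256_mismatch"):
        raise RuntimeError("re-upload (or re-hash) the PDF, then confirm again")
    if error == "token_expired":
        raise RuntimeError("token older than 15 min: restart via request_datasheet_upload")
    # 'invalid_pdf' / 'mpn_not_in_pdf': wrong or unparseable file, no charge applied.
    raise RuntimeError(f"upload rejected: {error}")
```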
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=false (mutation) and destructiveHint=false (no destruction). The description goes far beyond annotations, detailing the entire server-side process: downloading bytes, re-hashing, validating PDF, charging fee, and queuing extraction. It also lists all possible failure modes and their meanings, providing full transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections for success response and failure modes. It is concise yet comprehensive, with every sentence providing necessary information. No unnecessary fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the tool (multi-step process with validation and charging), the description is complete. It covers all aspects: prerequisites, process, success output, and all failure modes. No output schema exists, but the success fields are described. All failure modes are enumerated, ensuring the agent can handle each case.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a single parameter upload_token having a clear description. The description adds value by stating that the token comes from request_datasheet_upload, which reinforces usage context. A slight improvement could be noting the format or length, but the current information is sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the purpose: confirming a datasheet upload after the request step. It specifies the action (confirm), the resource (datasheet upload), and distinguishes it from the sibling request_datasheet_upload by requiring the upload_token from that step.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly explains when to use this tool (after request_datasheet_upload, by passing the upload_token). It does not need to mention alternatives as the sibling set is distinct. The description also provides detailed failure modes and how to handle each, guiding correct usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_alternative: A
Read-only, Idempotent

Find alternative / second-source components for a given MPN. Returns parts ranked by how closely their specs, package, and category match the reference part, with a match_score and match_notes explaining each match.

When to use:

  • Original part is out of stock or end-of-life.

  • You need a cheaper equivalent with similar specs.

  • You need a second source for supply-chain resilience.

  • You need a drop-in replacement on an existing PCB (use constraints=['same_package']).

When NOT to use:

  • You're browsing a category from scratch — use search_parts.

  • You already have candidates and want a head-to-head comparison — use compare_parts directly.

Behavior:

  • Eval boards, dev kits, and breakout modules are filtered out — results are IC-level alternatives only.

  • 'same_package' uses fuzzy matching, so SOT-23 ≡ SOT23, SOIC-8 ≡ SOP-8, LQFP64 ≡ TQFP64, etc. Exact string match is not required.

  • If provider search returns few candidates, the tool falls back to semantic search across extracted datasheets to surface niche alternatives.

  • The response includes a 'hint' field pointing to compare_parts / read_datasheet for drilling into promising alternatives.

Example: find_alternative('TPS54302', constraints=['same_package', 'in_stock'])

DATASHEET STATUS VALUES:

  • 'ready' — extracted and indexed; call read_datasheet, search_datasheets, or analyze_image.

  • 'extracting' / 'in_progress' / 'queued' / 'pending' — extraction running or scheduled. Poll check_extraction_status every 5-10s until 'ready' or 'failed'. Typical time: 30s-2min.

  • 'not_extracted' — known part but datasheet hasn't been fetched yet. Trigger it via prefetch_datasheets (cheapest) or by calling read_datasheet (auto-triggers on first read).

  • 'no_source' — we couldn't find a public datasheet URL for this MPN. First, retry prefetch_datasheets in 10-30s (the URL resolver re-runs and often finds a source on the second pass). If still 'no_source', the agent can upload the PDF manually via request_datasheet_upload + confirm_datasheet_upload (see those tools). Org-uploaded datasheets are private to the org.

  • 'unsupported' — PDF exists but can't be extracted (scanned image-only, encrypted, or corrupted). Upload a clean text-based PDF via request_datasheet_upload to override.

  • 'failed' / 'error' — extraction errored. The response includes the error reason. Retry via prefetch_datasheets or escalate to support.

  • 'rejected' — input wasn't a real MPN (bare value like '100nF', description, or reference designator). Fix the input and re-call.

  • 'deduplicated' — another part in the family already has this datasheet; same content is returned under the primary MPN.

Parameters (JSON Schema):
  • limit (optional): Max alternatives to return (default 5, max 20).
  • constraints (optional): Filters to narrow alternatives. 'in_stock' = only parts with non-zero stock at any provider. 'same_package' = package footprint must match (fuzzy). 'jlcpcb' = must be available at JLCPCB (useful for JLCPCB-assembled boards — also restricts the provider search to JLCPCB).
  • part_number (required): The MPN to find alternatives for.
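
A sketch of the drill-down the response 'hint' field points at: find constrained alternatives, then compare the top matches head-to-head. `call_tool` is an assumed MCP-client helper, and the `alternatives`/`mpn` response fields are assumptions.

```python
def second_source_report(part: str) -> dict:
    alts = call_tool("find_alternative", {
        "part_number": part,
        "constraints": ["same_package", "in_stock"],  # drop-in and orderable today
        "limit": 5,
    })
    # Assumed shape: {"alternatives": [{"mpn", "match_score", "match_notes"}, ...]}
    top = [a["mpn"] for a in alts["alternatives"][:3]]
    return call_tool("compare_parts", {"part_numbers": [part] + top})

report = second_source_report("TPS54302")
```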
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover the safety profile (readOnly, non-destructive, idempotent) and open-world nature. The description adds valuable behavioral context not in annotations: it specifies the search logic ('similar specs, same package, or compatible pinout') and confirms the cross-provider/external search implied by openWorldHint.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tight sentences followed by a concrete example. Information is front-loaded (purpose → mechanism → usage context), with zero redundancy. Every sentence earns its place without verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple 3-parameter schema, strong annotations, and lack of output schema, the description provides sufficient context: clear purpose, usage scenarios, matching criteria, and an example call. A minor gap is lack of indication about return value structure, though this is acceptable for a read-only search tool of this complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents all three parameters including the constraints syntax. The description provides a concrete usage example, but this illustrates rather than adds semantic meaning beyond the comprehensive schema documentation. Baseline 3 is appropriate given the schema carries the burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Find alternative/substitute components') and resource (components for a given part number). It distinguishes from siblings like 'search_parts' (general discovery) and 'compare_parts' (side-by-side comparison) by emphasizing the substitute-finding use case and cross-provider search scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear 'when to use' guidance ('Useful when a part is out of stock, too expensive, or you need a second source'). However, it does not explicitly name sibling tools as alternatives (e.g., when to use 'compare_parts' vs this tool), which would provide complete guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_part_details: A
Read-only

Get full details for a specific electronic component by manufacturer part number (MPN) or LCSC number. Returns specs, pricing, and stock from all configured providers, plus the cached datasheet summary if available. Includes datasheet_status and available_sections when ready. Set prefetch_datasheet=true to trigger background extraction — no extra charge. Use after search_parts to drill into a specific result.

The part_number must be a specific manufacturer part number (e.g. 'TPS54302DDCR', 'STM32F446RCT6') or LCSC number (e.g. 'C2837938'). Do NOT pass bare component values ('100nF', '10K'), descriptions ('buck converter'), or reference designators ('R1', 'U3').

DATASHEET STATUS VALUES:

  • 'ready' — extracted and indexed; call read_datasheet, search_datasheets, or analyze_image.

  • 'extracting' / 'in_progress' / 'queued' / 'pending' — extraction running or scheduled. Poll check_extraction_status every 5-10s until 'ready' or 'failed'. Typical time: 30s-2min.

  • 'not_extracted' — known part but datasheet hasn't been fetched yet. Trigger it via prefetch_datasheets (cheapest) or by calling read_datasheet (auto-triggers on first read).

  • 'no_source' — we couldn't find a public datasheet URL for this MPN. First, retry prefetch_datasheets in 10-30s (the URL resolver re-runs and often finds a source on the second pass). If still 'no_source', the agent can upload the PDF manually via request_datasheet_upload + confirm_datasheet_upload (see those tools). Org-uploaded datasheets are private to the org.

  • 'unsupported' — PDF exists but can't be extracted (scanned image-only, encrypted, or corrupted). Upload a clean text-based PDF via request_datasheet_upload to override.

  • 'failed' / 'error' — extraction errored. The response includes the error reason. Retry via prefetch_datasheets or escalate to support.

  • 'rejected' — input wasn't a real MPN (bare value like '100nF', description, or reference designator). Fix the input and re-call.

  • 'deduplicated' — another part in the family already has this datasheet; same content is returned under the primary MPN.

Parameters (JSON Schema):
  • provider (optional; default 'all'): Which provider to query: 'all' (default), 'jlcpcb', 'mouser', or 'digikey'.
  • part_number (required): Specific manufacturer part number (MPN) or LCSC number (e.g. 'C2837938'). Not a value or description.
  • prefetch_datasheet (optional; default false): Trigger background datasheet extraction (no extra charge).
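
A sketch of the search-then-drill-in workflow named above, with the free datasheet prefetch enabled. search_parts is listed on this server but its schema is not shown in this excerpt, so its `query` argument and both response shapes are assumptions; `call_tool` is the usual assumed MCP-client helper.

```python
hits = call_tool("search_parts", {"query": "3A buck converter 4.5-28V input"})
mpn = hits["results"][0]["mpn"]  # assumed response shape

details = call_tool("get_part_details", {
    "part_number": mpn,
    "provider": "all",
    "prefetch_datasheet": True,  # kicks off background extraction at no extra charge
})
if details.get("datasheet_status") == "ready":
    print(details["available_sections"])  # section names usable with read_datasheet
```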
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only safety (readOnlyHint: true, destructiveHint: false). The description adds substantial behavioral context beyond these hints: specific return fields (specs, pricing, stock, datasheet_status with enum values 'ready'/'extracting'/'not_extracted', available_sections), and the side-effect behavior of prefetch_datasheet ('trigger background extraction', 'no extra charge').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two dense paragraphs with zero waste. The first sentence establishes purpose and return value; the second covers workflow context; the third paragraph provides critical input validation constraints. Every sentence conveys unique information not duplicated in the schema or annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description comprehensively details the return structure (specs, pricing, stock, datasheet summary, status fields) and behavioral states. Deducting one point only for lack of error handling or rate limit disclosure, which would be necessary for a complete 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds significant value by providing expanded positive examples ('TPS54302DDCR', 'STM32F446RCT6') and explicit negative constraints ('100nF', '10K', 'buck converter') for part_number that clarify the expected input format beyond the schema's basic example.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get full details'), resource ('electronic component'), and key inputs ('manufacturer part number or LCSC number'). It explicitly distinguishes from sibling tool 'search_parts' by stating 'Use after search_parts to drill into a specific result,' establishing the tool's position in the workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use ('Use after search_parts to drill into a specific result') and clear when-not-to-use guidance ('Do NOT pass bare component values... descriptions... or reference designators'). The negative examples ('100nF', 'R1') prevent common misuse patterns, effectively differentiating from the broad search capability of sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

prefetch_datasheets: A
Read-only

Trigger background datasheet extraction for multiple parts at once (up to 20). Non-blocking — returns immediately with the status of each part. Use this to warm up datasheets for a BOM before calling read_datasheet. Example: prefetch_datasheets(['TPS54302', 'ADS1115', 'LP5907'])

If a part comes back 'no_source' on the first call, retry prefetch for that MPN once after 10-30s — the URL resolver is retriable and often finds a source on the second pass. If still 'no_source', use request_datasheet_upload + confirm_datasheet_upload to attach your own PDF (org-private).

Part numbers must be specific MPNs (e.g. 'STM32F446RCT6', 'TPS54302DDCR') or LCSC numbers (e.g. 'C2837938'). Do NOT pass bare values ('100nF', '10K'), descriptions, BOM reference designators, test points, or board/module names — see the server instructions for the full rule set. When a BOM has values-only rows, use search_parts first to resolve each to an MPN.

DATASHEET STATUS VALUES:

  • 'ready' — extracted and indexed; call read_datasheet, search_datasheets, or analyze_image.

  • 'extracting' / 'in_progress' / 'queued' / 'pending' — extraction running or scheduled. Poll check_extraction_status every 5-10s until 'ready' or 'failed'. Typical time: 30s-2min.

  • 'not_extracted' — known part but datasheet hasn't been fetched yet. Trigger it via prefetch_datasheets (cheapest) or by calling read_datasheet (auto-triggers on first read).

  • 'no_source' — we couldn't find a public datasheet URL for this MPN. First, retry prefetch_datasheets in 10-30s (the URL resolver re-runs and often finds a source on the second pass). If still 'no_source', the agent can upload the PDF manually via request_datasheet_upload + confirm_datasheet_upload (see those tools). Org-uploaded datasheets are private to the org.

  • 'unsupported' — PDF exists but can't be extracted (scanned image-only, encrypted, or corrupted). Upload a clean text-based PDF via request_datasheet_upload to override.

  • 'failed' / 'error' — extraction errored. The response includes the error reason. Retry via prefetch_datasheets or escalate to support.

  • 'rejected' — input wasn't a real MPN (bare value like '100nF', description, or reference designator). Fix the input and re-call.

  • 'deduplicated' — another part in the family already has this datasheet; same content is returned under the primary MPN.

Parameters (JSON Schema):
  • part_numbers (required): List of MPNs to prefetch (max 20). Must be specific manufacturer part numbers, not values or descriptions.
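
A BOM warm-up sketch with the single retry recommended above for 'no_source'. `call_tool` is an assumed MCP-client helper, and the keyed-by-MPN response shape is an assumption.

```python
import time

def warm_bom(mpns: list[str]) -> dict:
    statuses = call_tool("prefetch_datasheets", {"part_numbers": mpns[:20]})
    # Assumed shape: {mpn: {"status": ...}, ...}
    retry = [m for m, s in statuses.items() if s["status"] == "no_source"]
    if retry:
        time.sleep(20)  # 10-30 s: the URL resolver often finds a source on pass two
        statuses.update(call_tool("prefetch_datasheets", {"part_numbers": retry}))
    return statuses  # anything still 'no_source' needs request_datasheet_upload
```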
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (readOnlyHint=true), description discloses non-blocking behavior ('returns immediately'), specific return statuses ('ready', 'queued', 'error'), and side effects ('extraction started'). Missing rate limits or retry behavior prevents a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with logical flow: purpose → return values → example → critical warnings. Bullet points efficiently convey validation rules. Slightly verbose but necessary given the complexity of MPN validation requirements.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with full schema coverage and no output schema, description is comprehensive: covers functionality, return semantics, workflow integration, input validation edge cases, and handling of passives without MPNs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage (baseline 3), description adds substantial semantic value with concrete examples of valid MPNs ('STM32F446RCT6') vs invalid inputs ('100nF', 'R1', 'LED red'), and workflow guidance to use search_parts first if MPNs are missing.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Trigger background datasheet extraction' with clear resource scope 'multiple parts at once (up to 20)' and distinguishes from sibling read_datasheet by explaining this is for warming up cache before reading.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('warm up datasheets for a BOM before calling read_datasheet') and names specific sibling alternatives (search_parts for finding MPNs, read_datasheet for actual reading). Detailed negative constraints specify when NOT to use (bare values, reference designators, descriptions).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

read_datasheet: A
Read-only

Read from a component's datasheet. Two modes:

Section mode (default): Returns a named section. Start with section='summary' to get an overview and a list of available_sections. Then request specific sections by name. Section names are dynamic — any heading in the actual datasheet works (e.g. 'register_map', 'i2c_interface', 'power_management'). If a section name isn't found, automatically falls back to search mode.

Search mode: Semantic search within the part's datasheet. Best for targeted questions (register bit fields, I2C config, specific specs). Use when you need to find specific information rather than a whole section.

First call for a new part triggers extraction (30s-2min). Subsequent calls are cached.

Datasheet vs Reference Manual: Manufacturer datasheets cover high-level specs, pinout, absolute maximum ratings, and package info. For microcontrollers (STM32, nRF52, RP2040), register-level programming details (I2C CR1/CR2, DMA config, interrupt bits) are in a separate Reference Manual, not the datasheet. The summary's available_sections will show what's actually present.

The part_number must be a specific manufacturer part number (e.g. 'TPS54302', 'STM32F446RCT6') or LCSC number (e.g. 'C2837938'). Do NOT pass bare component values ('100nF', '10K'), descriptions, or reference designators.

DATASHEET STATUS VALUES:

  • 'ready' — extracted and indexed; call read_datasheet, search_datasheets, or analyze_image.

  • 'extracting' / 'in_progress' / 'queued' / 'pending' — extraction running or scheduled. Poll check_extraction_status every 5-10s until 'ready' or 'failed'. Typical time: 30s-2min.

  • 'not_extracted' — known part but datasheet hasn't been fetched yet. Trigger it via prefetch_datasheets (cheapest) or by calling read_datasheet (auto-triggers on first read).

  • 'no_source' — we couldn't find a public datasheet URL for this MPN. First, retry prefetch_datasheets in 10-30s (the URL resolver re-runs and often finds a source on the second pass). If still 'no_source', the agent can upload the PDF manually via request_datasheet_upload + confirm_datasheet_upload (see those tools). Org-uploaded datasheets are private to the org.

  • 'unsupported' — PDF exists but can't be extracted (scanned image-only, encrypted, or corrupted). Upload a clean text-based PDF via request_datasheet_upload to override.

  • 'failed' / 'error' — extraction errored. The response includes the error reason. Retry via prefetch_datasheets or escalate to support.

  • 'rejected' — input wasn't a real MPN (bare value like '100nF', description, or reference designator). Fix the input and re-call.

  • 'deduplicated' — another part in the family already has this datasheet; same content is returned under the primary MPN.

Parameters (JSON Schema):
  • mode (optional; default 'section'): Reading mode: 'section' (default) returns a named section, 'search' does semantic search.
  • limit (optional): Max search results for search mode (default 10). Each result includes adjacent context chunks for continuity.
  • query (optional): Search query — REQUIRED when mode='search', ignored when mode='section'. Example: 'charge voltage register', 'I2C address'.
  • section (optional; default 'summary'): Section name for section mode. Start with 'summary' to discover available sections. Common: summary, pinout, electrical, abs_max, register_map, timing, package. Any heading in the datasheet works (slugified).
  • part_number (required): Specific manufacturer part number (MPN) or LCSC number. Not a value or description.
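
A sketch of the recommended workflow: summary first, then a named section, with search mode for targeted questions. `call_tool` is an assumed MCP-client helper, and the `available_sections` field name follows the description above.

```python
# 1. Overview plus the list of available sections (section='summary' is the default).
overview = call_tool("read_datasheet", {"part_number": "TPS54302"})
print(overview["available_sections"])  # assumed field, per the description

# 2. Pull a specific section by name; unknown names fall back to search mode.
electrical = call_tool("read_datasheet", {"part_number": "TPS54302", "section": "electrical"})

# 3. Targeted question: search mode requires `query`.
hits = call_tool("read_datasheet", {
    "part_number": "TPS54302",
    "mode": "search",
    "query": "soft-start capacitor selection",
    "limit": 5,
})
```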
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds crucial behavioral details beyond annotations: mentions extraction latency (30s-2min) and caching behavior, explains automatic fallback from section mode to search mode when section not found, and notes that section names are dynamic. Annotations only cover read-only/destructive hints; description supplies operational timing and fallback logic.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear visual hierarchy using bold headers for modes. Information-dense without redundancy. Front-loaded with core purpose, followed by mode explanations, workflow guidance, and input constraints. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a read operation with no output schema. Covers both modes thoroughly, explains the extraction/caching lifecycle, and documents expected latency. Could briefly mention return format (string vs object), but completeness is strong given the schema and annotations provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (baseline 3), but description adds significant value with concrete examples: section names ('register_map', 'i2c_interface'), query examples ('charge voltage register'), and valid/invalid part_number formats ('TPS54302' vs '100nF'). Also explains the workflow logic (start with summary, then specific sections).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Read') and resource ('component's datasheet'). Distinguishes two operational modes (section vs search) and implicitly differentiates from sibling tools like 'search_datasheets' (which likely searches across parts) by emphasizing this reads from a specific part's datasheet.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent explicit guidance: instructs to 'Start with section='summary' to get an overview', states when to use search mode ('Best for targeted questions...Use when you need to find specific information'), and provides explicit negative constraint ('Do NOT pass bare component values'). Includes latency expectations for first calls.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

request_datasheet_upload: A

Request a signed URL to upload a datasheet PDF for a component whose datasheet we don't have. Use this when search_parts / get_part_details / prefetch_datasheets return datasheet_status='no_source' (and a retry didn't help) or 'unsupported'. Free — the upload fee is only charged on confirm_datasheet_upload after we validate the file.

Flow (3 steps):

  1. Call request_datasheet_upload with the MPN, the file's SHA-256, and its byte size. You get back an upload_url, upload_method ('PUT'), upload_headers, and an opaque upload_token.

  2. Upload the PDF directly to the returned URL with curl: curl -X PUT -H 'Content-Type: application/pdf' --data-binary @file.pdf "$UPLOAD_URL" (add any headers from upload_headers).

  3. Call confirm_datasheet_upload with the upload_token. Server verifies the bytes, re-hashes, checks for the MPN on the first page, charges the upload fee (50¢), and queues extraction. Returns document_id + status='pending'.

Validation rules (checked at confirm time, refunded on failure):

  • File must be a valid PDF (magic bytes + parseable).

  • Actual SHA-256 must match expected_sha256.

  • Actual byte size must match size_bytes (±0).

  • MPN or its core stem must appear in the first page text (catches wrong-file uploads). Scanned image-only PDFs will fail this check — upload a text-based PDF.

  • Max 50MB per file. No dev-kit manuals / BOB schematics / app-notes as datasheets — use the matching MPN's actual datasheet.

Uploaded datasheets are scoped to your organization (private). They satisfy read_datasheet, search_datasheets, check_design_fit, and analyze_image for your org's tokens only.

Tokens expire after 15 minutes. If upload fails or times out, just call request_datasheet_upload again.

Parameters (JSON Schema):
  • size_bytes (required): Size of the PDF in bytes. Must be ≤ 50_000_000 (50MB) and > 1024.
  • part_number (required): Manufacturer part number the PDF belongs to. Must be a real MPN, not a value or description.
  • manufacturer (optional): Manufacturer name (e.g. 'Texas Instruments') if the MPN alone doesn't disambiguate.
  • expected_sha256 (required): Lowercase hex SHA-256 of the PDF bytes (64 chars). Used both as the storage key (content-addressed) and to detect tampering. Compute with `shasum -a 256 file.pdf`.
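
A sketch of the full 3-step flow described above: hash and size the file locally, request the signed URL, PUT the bytes (urllib standing in for the curl command shown), then confirm. `call_tool` is an assumed MCP-client helper; the response field names follow the description.

```python
import hashlib
import urllib.request
from pathlib import Path

pdf = Path("TPS54302.pdf").read_bytes()

# Step 1: request the signed URL with the file's hash and exact byte size.
req = call_tool("request_datasheet_upload", {
    "part_number": "TPS54302",
    "expected_sha256": hashlib.sha256(pdf).hexdigest(),  # lowercase hex, 64 chars
    "size_bytes": len(pdf),
})

# Step 2: upload the bytes directly to the signed URL.
put = urllib.request.Request(req["upload_url"], data=pdf, method=req["upload_method"])
put.add_header("Content-Type", "application/pdf")
for key, value in req.get("upload_headers", {}).items():
    put.add_header(key, value)
urllib.request.urlopen(put)

# Step 3: confirm. Validation runs here and the 50¢ fee is charged on success.
done = call_tool("confirm_datasheet_upload", {"upload_token": req["upload_token"]})
print(done["document_id"], done["status"])  # status='pending'; poll check_extraction_status
```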
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations (readOnlyHint: false, destructiveHint: false) imply mutation but not destruction. The description adds key behavioral details: tokens expire after 15 minutes, uploads are scoped to organization, validation rules are explained, and the fee structure is clarified. However, it could mention rate limits or token refresh specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with flow steps, bullet-like lists for validation rules, and front-loaded purpose. Every sentence provides value; no redundancy. Length is appropriate for complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the absence of an output schema, the description fully explains the return values (upload_url, upload_method, upload_headers, upload_token) and the subsequent flow. It also addresses prerequisites, constraints, and edge cases. Complete for this tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with descriptions for all parameters. The description adds value by giving the command used to compute the SHA-256 (`shasum -a 256 file.pdf`) and reinforcing constraints like max size and valid MPN format. Slight deduction because parameter semantics are mostly covered in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Request a signed URL to upload a datasheet PDF for a component whose datasheet we don't have.' It specifies the resource (datasheet PDF) and the action (request upload URL), and distinguishes from siblings like confirm_datasheet_upload.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Use this when search_parts / get_part_details / prefetch_datasheets return datasheet_status='no_source'...' It also explains the broader context as part of a 3-step flow, including what happens after and when to retry.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_datasheetsA
Read-only Idempotent
Inspect

Semantic search across all extracted datasheets. Finds components matching natural language queries about specifications, features, or capabilities. Best for broad spec-based discovery across all parts (e.g. 'low-noise LDO with PSRR above 70dB'). Only searches datasheets that have been previously extracted — not all parts that exist. For finding specific parts by number, use search_parts instead.
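
A minimal invocation sketch, using a hypothetical call_tool helper for whatever MCP client is driving the server; section_type is left at its default since the valid section types are not enumerated here:

    # Broad spec-based discovery in natural language.
    # call_tool: hypothetical MCP-client helper, not part of this server.
    hits = call_tool("search_datasheets", {
        "query": "low-noise LDO with PSRR above 70dB",
        "limit": 10,            # optional; default is 15
        # "section_type": ...,  # optional; defaults to 'all'
    })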

Parameters (JSON Schema)

  • limit (optional): Max results (default 15).

  • query (required): Natural language search query.

  • section_type (optional, default 'all'): Limit search to a specific section type.
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover the safety profile (readOnly/destructive), while the description adds critical behavioral context about data availability ('Only searches datasheets that have been previously extracted — not all parts that exist'). Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five sentences efficiently cover purpose, methodology, use case with example, scope limitations, and sibling alternatives. No redundant information; every sentence earns its place and content is well front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter search tool with full schema coverage and annotations present, the description adequately covers scope constraints, search methodology, sibling differentiation, and practical examples without requiring output schema documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. Description adds value by providing a concrete example query that illustrates the expected natural language format for technical specifications, enhancing understanding of the 'query' parameter semantics beyond the schema's generic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly identifies the action (semantic search), resource (extracted datasheets), and differentiates from siblings by contrasting with search_parts for specific part number lookups.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('Best for broad spec-based discovery'), provides a concrete example query ('low-noise LDO with PSRR above 70dB'), and explicitly names the alternative tool for different use cases ('use search_parts instead').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_partsA
Read-only Idempotent
Inspect

Search for electronic components by part number, description, or keyword. Start here — this is the best entry point for finding components. Queries all configured providers in parallel. Results are merged by MPN with indicative pricing and stock from each source. Each result includes datasheet_status so you know which parts have datasheets available for read_datasheet. Best with specific part numbers or keywords (e.g. 'STM32F103', 'buck converter 3A'). For spec-based discovery in natural language, use search_datasheets instead. A short invocation sketch follows the parameter list below.

When the calling org has a private parts library, matching org-uploaded parts are appended to the results with source='private_library' and any tags the team has applied — including private parts whose MPN, manufacturer, description, type, category, or tag matches the query.

DATASHEET STATUS VALUES:

  • 'ready' — extracted and indexed; call read_datasheet, search_datasheets, or analyze_image.

  • 'extracting' / 'in_progress' / 'queued' / 'pending' — extraction running or scheduled. Poll check_extraction_status every 5-10s until 'ready' or 'failed' (see the polling sketch after this list). Typical time: 30s-2min.

  • 'not_extracted' — known part but datasheet hasn't been fetched yet. Trigger it via prefetch_datasheets (cheapest) or by calling read_datasheet (auto-triggers on first read).

  • 'no_source' — we couldn't find a public datasheet URL for this MPN. First, retry prefetch_datasheets in 10-30s (the URL resolver re-runs and often finds a source on the second pass). If still 'no_source', the agent can upload the PDF manually via request_datasheet_upload + confirm_datasheet_upload (see those tools). Org-uploaded datasheets are private to the org.

  • 'unsupported' — PDF exists but can't be extracted (scanned image-only, encrypted, or corrupted). Upload a clean text-based PDF via request_datasheet_upload to override.

  • 'failed' / 'error' — extraction errored. The response includes the error reason. Retry via prefetch_datasheets or escalate to support.

  • 'rejected' — input wasn't a real MPN (bare value like '100nF', description, or reference designator). Fix the input and re-call.

  • 'deduplicated' — another part in the family already has this datasheet; same content is returned under the primary MPN.
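
To make the polling guidance above concrete, a minimal sketch (again with the hypothetical call_tool helper; check_extraction_status's exact parameter and response field names are assumptions):

    import time

    PENDING = {"extracting", "in_progress", "queued", "pending"}

    def wait_for_datasheet(part_number: str, timeout_s: float = 180.0) -> str:
        """Poll check_extraction_status every 5-10s until a terminal status."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            # Parameter and field names are assumed for illustration.
            status = call_tool("check_extraction_status", {"part_number": part_number})["status"]
            if status not in PENDING:
                return status  # e.g. 'ready', 'failed', 'no_source'
            time.sleep(7)      # inside the suggested 5-10s window
        raise TimeoutError(f"extraction for {part_number} still pending after {timeout_s}s")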

Parameters (JSON Schema)

  • limit (optional): Max results per provider (default 20). With providers='all', total results can be up to 3× this value (one set per provider).

  • query (required): Search query (part number or keyword).

  • providers (optional, default 'all'): Which providers to query: 'all', 'jlcpcb', 'mouser', or 'digikey'.
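
And the invocation sketch promised above, showing how datasheet_status gates the follow-up call (the per-result field names are assumptions based on the description; call_tool remains a hypothetical MCP-client helper):

    # Entry point: find candidates, then branch on each result's datasheet_status.
    results = call_tool("search_parts", {"query": "buck converter 3A", "limit": 5})
    for part in results:  # iteration shape assumed
        print(part["mpn"], part["datasheet_status"])
        if part["datasheet_status"] == "ready":
            pass  # safe to call read_datasheet, search_datasheets, or analyze_image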
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable execution context beyond annotations: explains parallel provider querying, result merging logic ('merged by MPN'), and output field semantics ('datasheet_status'). Annotations cover the safety profile (readOnlyHint, destructiveHint), so the description wisely focuses on operational behavior rather than repeating safety metadata.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Seven sentences with zero redundancy: purpose statement, entry-point claim, technical execution details, result structure, output field documentation, usage examples, and sibling differentiation. Information density is high with optimal front-loading.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description adequately compensates by documenting key result fields (datasheet_status, merged pricing/stock) and multi-provider behavior. Covers all necessary invocation context for a search tool with good input schema coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage (baseline 3), the description adds substantive value through concrete query examples ('STM32F103') and behavioral context ('Queries all configured providers in parallel') that clarifies the providers parameter's runtime behavior beyond the static enum values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb and resource ('Search for electronic components'), delineates searchable fields ('part number, description, or keyword'), and explicitly distinguishes itself from siblings like search_datasheets ('For spec-based discovery... use search_datasheets instead') and hints at integration with read_datasheet via datasheet_status field.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit entry-point guidance ('Start here — this is the best entry point'), clear alternative routing ('use search_datasheets instead'), and concrete query examples ('STM32F103', 'buck converter 3A') that clarify expected input patterns.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
