shyshlakov / pci-dss-mcp

scan_pan_data

Scan a Go project directory for credit card data (PAN, CVV) exposure compliant with PCI DSS v4.0.1. Filter by severity, rule, or exclude patterns. Supports paginated results and taint analysis.

Instructions

Default: returns response_shape "summary" with by_severity counts, a capped by_rule histogram (top 10 + more_rules), and top 3 per severity findings - plus a pagination.next_cursor for drill-down. Prefer this for mixed queries; min_severity / rule_filter drop to response_shape "flat" but still carry summary.by_severity + summary.by_rule for full-scan context. Follow the cursor for the full paginated list. Use include_tests / exclude_patterns for a filtered flat response. Maps findings to PCI DSS 3.3.1, 3.4.1, 3.5.1.
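The default summary shape described above can be pictured as follows. This is a hypothetical sketch: the field names (by_severity, by_rule, more_rules, pagination.next_cursor) are taken from the tool description, and the finding fields and exact nesting are assumptions that may differ from the real schema.

```python
# Hypothetical payload for the default response_shape "summary".
# Field names follow the tool description; finding structure is assumed.
summary_response = {
    "response_shape": "summary",
    "summary": {
        # Counts per severity across the whole scan
        "by_severity": {"CRITICAL": 2, "HIGH": 5, "MEDIUM": 11},
        # Capped histogram: top 10 rules plus an overflow count
        "by_rule": {"PAN-KEYWORD": 9, "PAN-TYPE": 6, "more_rules": 3},
        # Top 3 findings per severity (one shown here)
        "top_findings": {
            "CRITICAL": [
                {"rule": "PAN-KEYWORD", "file": "billing/card.go", "line": 42},
            ],
        },
    },
    # Follow this cursor to drill down into the full paginated list
    "pagination": {"next_cursor": "opaque-token"},
}
```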

Input Schema

| Name | Required | Default | Description |
|------|----------|---------|-------------|
| path | Yes | | Path to the Go project directory to scan for PAN/CVV data exposure. |
| exclude_patterns | No | `vendor/ generated/ *.pb.go testdata/ mocks/` | Optional glob patterns to exclude. Supports directory patterns (`vendor/`) and file globs (`*.pb.go`). |
| include_tests | No | `false` | Include `_test.go` files in scan results. The default excludes test files per industry SAST consensus. |
| include_untracked | No | `false` | Scan all files, including `.gitignore`d ones. The default scans only git-tracked files. |
| include_taint | No | `false` | Enable flow-based severity adjustment using `go/packages` type analysis. When true, PAN-KEYWORD/PAN-TYPE findings on transit-only struct fields are downgraded or suppressed. Adds 5-30 seconds; opt-in for accuracy vs. speed. |
| cursor | No | | Opaque cursor token from a prior `scan_pan_data` response. When set, resumes pagination from the stored session cache (10-minute TTL). Leave empty for a fresh scan. |
| limit | No | `0` | Maximum number of findings to return per call. The default yields a summary-first response with `next_cursor`. To fetch more findings than fit in one response, follow `next_cursor`; do NOT raise this value to fetch everything at once (the server caps it at the per-tool page size and rejects larger values with `LIMIT_EXCEEDS_PAGE_SIZE`). |
| min_severity | No | | Filter by minimum severity (CRITICAL/HIGH/MEDIUM/LOW/INFO). Setting this forces the flat response shape. |
| rule_filter | No | | Filter by rule ID: a comma-separated list or a `/regex/`. Setting this forces the flat response shape. |
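Given the cursor and limit semantics above, a client should page through findings rather than raise `limit`. A minimal sketch of that loop, where `call_scan` is a hypothetical stand-in for an MCP tool call to `scan_pan_data` and the `findings`/`pagination` field names are assumed from the description:

```python
# Client-side pagination sketch: follow next_cursor instead of raising
# `limit` past the server's page size (which would be rejected with
# LIMIT_EXCEEDS_PAGE_SIZE). `call_scan(args)` is hypothetical and stands
# in for invoking the scan_pan_data tool with an arguments dict.
def fetch_all_findings(call_scan, path):
    findings, cursor = [], None
    while True:
        args = {"path": path}
        if cursor:
            # Resume from the server's session cache (10-minute TTL)
            args["cursor"] = cursor
        page = call_scan(args)
        findings.extend(page.get("findings", []))
        cursor = page.get("pagination", {}).get("next_cursor")
        if not cursor:
            return findings
```

Because the cursor session expires after 10 minutes, a client should finish the loop promptly or be prepared to start a fresh scan if resumption fails.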

Output Schema


No output fields documented.

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses the two response shapes, the effect of parameters like min_severity/rule_filter, pagination mechanics (session cache TTL, cursor handling), performance implications of include_taint, and the server-enforced cap on limit. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a dense single paragraph that packs a lot of information. While efficient, it could benefit from bullet points or section breaks for easier scanning. Nonetheless, every sentence is meaningful and earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of 9 parameters and an output schema, the description covers all key aspects: default vs. flat response, filtering, pagination, performance trade-offs, and mapping to PCI DSS. It is complete for an AI agent to select and invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds significant value by explaining default behavior, conditional response shapes, pagination details, and performance costs. For example, it clarifies that include_taint adds 5-30 seconds and that limit exceeding the page size is rejected.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool scans for PAN/CVV data exposure and provides a detailed breakdown of the default response shape (summary with counts, histogram, top findings, and pagination cursor). It distinguishes itself from sibling tools by its specific focus and behavior.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives explicit guidance on when to use default vs. filtered modes, how to follow cursors for pagination, and how to use include_tests/exclude_patterns for flat responses. It also mentions mapping to PCI DSS, providing context for compliance use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
