Glama

lint_files

Inspect the specified files for hardcoded secrets. Optionally fix findings by replacing each secret with process.env.KEY and storing the value in the keyring. Read-only mode reports findings without making changes.

Instructions

[scan] Inspect a specific list of files for hardcoded secrets and, when fix is true, replace each finding with process.env.KEY while storing the extracted value into the keyring. Use to migrate a known set of files (e.g. just-changed files in a pre-commit hook) into q-ring; prefer scan_codebase_for_secrets for a whole-tree audit and import_dotenv to ingest an existing .env. With fix: false this is read-only. With fix: true this MUTATES the listed source files in place (review with git diff!) and writes one new secret per finding to the keyring. Returns a JSON array of { file, line, key, value, kind } findings, or 'No hardcoded secrets found in the specified files.'.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| files | Yes | Absolute or relative paths to lint. Non-existent paths surface as scan errors. | |
| fix | No | If true, rewrite the source files to read `process.env.KEY` and store the extracted value in the keyring. If false, only report findings. | false |
| scope | No | Where the secret lives: 'global' = user keyring, 'project' = scoped to projectPath, 'team' = team-shared (needs teamId), 'org' = org-shared (needs orgId). | 'global' (on reads) |
| projectPath | No | Absolute path to the project root for project-scoped secrets and policy resolution. | MCP server's current working directory |
| teamId | No | Team identifier for team-scoped secrets. Required only when scope='team'. Example: 'acme-platform'. | |
| orgId | No | Organization identifier for org-scoped secrets. Required only when scope='org'. Example: 'acme-corp'. | |
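Putting the parameters together, a hypothetical fix-mode invocation for team-scoped secrets might look like the following (the file paths and project path are illustrative; teamId is required here only because scope is 'team'):

```json
{
  "files": ["src/config.js", "src/db.js"],
  "fix": true,
  "scope": "team",
  "teamId": "acme-platform",
  "projectPath": "/home/dev/acme"
}
```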
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It clearly discloses that with fix:true the tool 'MUTATES the listed source files in place' and writes to keyring, and that with fix:false it is read-only. The return format is also described. Lacks mention of error handling or idempotency, but overall solid.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph but efficiently packs purpose, usage, behavior, and return info. It front-loads the core action and uses parentheses for alternatives. Could benefit from more structured formatting (e.g., bullet points), but not overly verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description details the return value (JSON array of findings or a string). It covers input semantics, behavioral effects, and usage context. Given the tool's complexity (6 params, mutation potential, keyring interaction), the description is remarkably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers all 6 parameters (100% coverage). The description adds valuable context: for 'files', it notes 'Non-existent paths surface as scan errors'; for 'scope', it explains defaults and requirements for team/org scopes. This enriches the schema without repeating it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description begins with '[scan] Inspect a specific list of files for hardcoded secrets...' which clearly states the action (inspect) and resource (files). It explicitly contrasts with siblings 'scan_codebase_for_secrets' and 'import_dotenv', providing differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers explicit usage guidance: 'Use to migrate a known set of files...; prefer scan_codebase_for_secrets for a whole-tree audit and import_dotenv to ingest an existing .env.' It also distinguishes read-only (fix:false) vs mutation (fix:true) behavior.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
