SAPLint

Check ABAP source code for issues using local abaplint rules, automatically selecting cloud or on-prem rules based on system type. Perform linting, auto-fix issues, or list available rules.

Instructions

Run local abaplint rules on ABAP source code. System-aware: auto-selects cloud or on-prem rules based on detected system type.

Actions:

  • "lint": Check ABAP source for issues. Returns errors and warnings.

  • "lint_and_fix": Lint + auto-fix all fixable issues (keyword case, obsolete statements, etc.). Returns fixed source.

  • "list_rules": List all available rules with current config. No source needed.

For server-side checks (ATC, syntax check, unit tests), use SAPDiagnose instead.
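
As a sketch of how an agent or client might invoke the "lint" action over MCP, the following uses the official Python MCP SDK's `tools/call` flow; the server launch command and the ABAP snippet are illustrative assumptions, not taken from this server's docs:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

ABAP_SOURCE = "REPORT zdemo.\nwrite 'Hello'."

async def main() -> None:
    # Assumed launch command; point this at your SAPLint-hosting MCP server.
    params = StdioServerParameters(command="npx", args=["-y", "your-mcp-server"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # "lint" checks the source and returns errors and warnings.
            result = await session.call_tool(
                "SAPLint",
                arguments={"action": "lint", "source": ABAP_SOURCE, "name": "zdemo"},
            )
            print(result.content)

asyncio.run(main())
```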

Input Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| action | Yes      | Check type  |         |
| source | No       | ABAP source code to lint (not needed for list_rules) | |
| name   | No       | Object name (used for filename detection) | |
| rules  | No       | Rule overrides: `{ "rule_name": false }` to disable, `{ "rule_name": { "severity": "Warning" } }` to configure. Overrides system defaults. | |
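
To illustrate the `rules` override syntax, here is a minimal sketch of a `lint_and_fix` argument object; the rule names are assumptions drawn from common abaplint rule identifiers, not confirmed defaults for this server:

```python
# Hedged example arguments for a "lint_and_fix" call.  The rule names
# (keyword_case, obsolete_statement) are illustrative, not verified here.
abap_source = "REPORT zdemo.\nwrite 'Hello'."

arguments = {
    "action": "lint_and_fix",
    "source": abap_source,   # required for lint/lint_and_fix, not for list_rules
    "name": "zdemo",         # object name, used for filename detection
    "rules": {
        "keyword_case": False,                          # disable a rule outright
        "obsolete_statement": {"severity": "Warning"},  # reconfigure a rule
    },
}
```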
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: system-aware rule selection (cloud vs. on-prem) and the three distinct actions with their outcomes (errors/warnings, fixed source, rule list), and it clarifies that 'list_rules' requires no source. However, it doesn't mention potential side effects like performance impacts or authentication needs, leaving some behavioral aspects uncovered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by a bulleted list of actions and a clear alternative tool mention. Every sentence adds value without redundancy, making it efficient and easy to parse for an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the tool (multiple actions, system-aware behavior, 4 parameters) and no output schema or annotations, the description does a good job covering purpose, usage, and basic behaviors. However, it lacks details on output formats (e.g., structure of errors/warnings or fixed source) and doesn't address potential limitations like error handling or performance, leaving some contextual gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters. The description adds some value by explaining that 'source' is not needed for 'list_rules' and by hinting that 'rules' overrides system defaults, but it doesn't provide significant additional semantics beyond what the schema offers. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Run local abaplint rules on ABAP source code') and distinguishes it from sibling tools by mentioning 'system-aware' behavior and explicitly naming SAPDiagnose as an alternative for server-side checks. It provides a detailed breakdown of three distinct actions, making the purpose highly specific and well-differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool (for local abaplint rules) versus when to use SAPDiagnose (for server-side checks like ATC, syntax check, unit tests). It also provides clear guidance on which actions require source code and which don't, making usage context and alternatives explicitly defined.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

```
curl -X GET 'https://glama.ai/api/mcp/v1/servers/marianfoo/arc-1'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.