SAPRead

Read SAP ABAP objects, including source code, metadata, and system information, to analyze and understand SAP system components.

Instructions

Read SAP ABAP objects. Types: PROG, CLAS, INTF, FUNC, FUGR (use expand_includes=true to get all include sources), INCL, DDLS, DDLX (CDS metadata extensions — UI annotations), BDEF, SRVD, SRVB (service bindings — returns structured binding info: OData version, publish status, service definition ref), TABL, VIEW, STRU (DDIC structures like BAPIRET2 — returns CDS-like source), DOMA (DDIC domains — returns type info, value table, fixed values), DTEL (data elements — returns domain, labels, search help), TRAN (transaction codes — returns description, program, package), TABLE_CONTENTS, DEVC, SOBJ (BOR business objects — returns method catalog or full implementation), SYSTEM, COMPONENTS, MESSAGES, TEXT_ELEMENTS, VARIANTS. For CLAS: omit include to get the full class source (definition + implementation combined). The include param is optional — use it only to read class-local sections: definitions (local types), implementations (local helper classes), macros, testclasses (ABAP Unit). For CLAS with method param: use method="*" to list all methods with signatures and visibility, or method="method_name" to read a single method implementation (95% fewer tokens than full source). For SOBJ: returns BOR method catalog; use method param to read a specific method implementation. BSP (deployed UI5/Fiori apps — list apps, browse files, read content; use name to browse app structure, include for subfolder or file), BSP_DEPLOY (query deployed UI5 apps via ABAP Repository OData Service — returns name, package, description). API_STATE (API release state — checks if an object is released for ABAP Cloud / S/4HANA Clean Core; returns contract states C0-C4, successor info; use objectType param for non-class objects). INACTIVE_OBJECTS (list all objects pending activation — no name param needed; use before SAPActivate batch_activate to see what needs activating).
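As an illustrative sketch (the argument dicts below are assumptions about how a client would pass the documented parameters; the tool itself is invoked through an MCP client), a few common read shapes might look like:

```python
# Hypothetical SAPRead argument dicts; only the parameter shapes are shown,
# since the actual call goes through an MCP client.

# Full class source (CLASS DEFINITION + IMPLEMENTATION): omit `include` entirely.
read_full_class = {"type": "CLAS", "name": "ZCL_ORDER"}

# Single method implementation (per the description, ~95% fewer tokens
# than reading the full class source).
read_one_method = {"type": "CLAS", "name": "ZCL_ORDER",
                   "method": "zif_order~process"}

# List all methods with signatures and visibility.
list_methods = {"type": "CLAS", "name": "ZCL_ORDER", "method": "*"}

# Function group with every INCLUDE statement expanded inline.
read_fugr = {"type": "FUGR", "name": "ZFG_ORDERS", "expand_includes": True}

# Omitting `include` is what selects the full class source.
assert "include" not in read_full_class
```

The class and function-group names here are placeholders; substitute real object names from your system.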

Input Schema

- type (required): Object type to read.
- name (optional): Object name (e.g., ZTEST_PROGRAM, ZCL_ORDER, MARA).
- include (optional): For CLAS: DO NOT use this to read the main class — omit include entirely to get the full class source (CLASS DEFINITION + CLASS IMPLEMENTATION). This parameter reads class-LOCAL auxiliary files only: definitions (local type definitions, NOT the main class definition), implementations (local helper class implementations), macros, testclasses (ABAP Unit). Comma-separated. Not all classes have these sections — missing ones return a note instead of an error. For DDLS: use include="elements" to get a structured field list extracted from the CDS DDL source — shows key fields, aliases, associations, and expression types (calculated, case, cast). Useful for understanding CDS entity structure without parsing raw DDL.
- group (optional): For FUNC type. The function group containing the function module. Auto-resolved via SAPSearch if omitted.
- method (optional): For CLAS: method name to read a single method implementation (e.g., "get_name", "zif_order~process"). Use "*" to list all methods with signatures and visibility. For SOBJ: BOR method name to read. If omitted, returns the full BOR method catalog. Not used with other types.
- expand_includes (optional): For FUGR type only. When true, expands all INCLUDE statements and returns the full source of each include inline.
- format (optional): Output format. "text" (default): raw source code. "structured" (CLAS only): JSON with metadata (description, language, category) + decomposed source (main, testclasses, definitions, implementations, macros). Useful when you need to understand class structure or separate test code from production code.
- maxRows (optional; default 100): For TABLE_CONTENTS: max rows to return.
- sqlFilter (optional): For TABLE_CONTENTS: SQL WHERE clause filter.
- objectType (optional): For API_STATE: SAP object type (CLAS, INTF, PROG, FUGR, TABL, DDLS, etc.) — auto-detected from name if omitted.
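A minimal sketch (the helper name is an assumption, not part of the tool) of how the TABLE_CONTENTS parameters compose, applying the documented maxRows default of 100:

```python
def build_table_read(name, sql_filter=None, max_rows=100):
    """Assemble SAPRead arguments for a TABLE_CONTENTS read.

    maxRows defaults to 100 per the schema; sqlFilter is a plain
    SQL WHERE clause and is only included when supplied.
    """
    args = {"type": "TABLE_CONTENTS", "name": name, "maxRows": max_rows}
    if sql_filter:
        args["sqlFilter"] = sql_filter
    return args

# Default read of MARA vs. a filtered, row-limited read.
print(build_table_read("MARA"))
print(build_table_read("MARA", sql_filter="MATNR LIKE 'Z%'", max_rows=50))
```

Keeping optional keys out of the dict when unset mirrors how the schema marks sqlFilter as optional with no default.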
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does an excellent job of disclosing behavioral traits: it explains token reduction strategies (95% fewer tokens for method reading), error handling (missing sections return notes instead of errors), optional parameter behaviors (auto-resolution for 'group'), and output format implications. The only minor gap is the absence of explicit rate-limit or permission requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is information-dense but poorly structured: it is a single paragraph mixing type listings with parameter usage notes. While every sentence adds value, clearer organization (e.g., separating the type overview from parameter guidance) would improve scannability for an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's high complexity (29 object types, 10 parameters) and the absence of both annotations and an output schema, the description does remarkably well at covering most essential context. It explains return behaviors, parameter dependencies, and practical considerations. The main gap is the lack of explicit output-format details beyond the 'format' parameter description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds significant value beyond the schema by explaining parameter interactions (e.g., how 'include' works differently for CLAS vs DDLS), practical usage examples (method='*' to list all methods), and context-specific behaviors (expand_includes for FUGR only). It doesn't cover all 10 parameters equally, preventing a perfect score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the verb 'Read' and specifies the resource 'SAP ABAP objects', listing all 29 supported object types. It clearly distinguishes this tool from siblings by focusing on reading operations rather than context setting, diagnosis, navigation, or search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use specific parameters (e.g., 'use expand_includes=true to get all include sources', 'omit include entirely to get the full class source'), but does not explicitly mention when to use this tool versus sibling alternatives like SAPSearch or SAPNavigate, which would be needed for a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
