
Amazon Ads API MCP

by KuudoAI

execute

Run sandboxed async Python code to call Amazon Ads API tools via call_tool, enabling dynamic campaign management and reporting.

Instructions

Run async Python in a sandboxed interpreter. The whole script runs as one turn; use return to produce the tool result.

Available in scope:

  • await call_tool(name: str, params: dict) -> Any — calls any backend tool. Failures raise RuntimeError(<envelope_json>), where the message body is the full v1 cross-server error envelope JSON. Inside the sandbox, catch RuntimeError, parse the message with json.loads(str(e)), and read env['error_kind'], env['hints'], env['error_code'], env['retryable'], and env.get('_meta', {}). Non-envelope failures fall back to RuntimeError("<OriginalType>: <message>"); both forms are catchable with try/except RuntimeError. The format matches Amazon SP MCP for cross-server symmetry.
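The envelope-handling pattern above can be sketched as follows. This is a minimal sketch: the call_tool stub, the tool name, and the envelope fields it raises are hypothetical stand-ins for the in-scope call_tool and a real backend failure.

```python
import asyncio
import json

# Hypothetical stub standing in for the sandbox's call_tool, which on
# failure raises RuntimeError whose message is the v1 error envelope JSON.
async def call_tool(name, params):
    raise RuntimeError(json.dumps({
        "error_kind": "upstream",
        "error_code": "CAMPAIGN_NOT_FOUND",
        "retryable": False,
        "hints": ["Check the campaignId"],
    }))

async def main():
    try:
        return await call_tool("sp/getCampaign", {"campaignId": "123"})
    except RuntimeError as e:
        try:
            env = json.loads(str(e))      # v1 envelope form
        except ValueError:
            return {"error": str(e)}      # non-envelope fallback form
        return {
            "kind": env["error_kind"],
            "code": env["error_code"],
            "retryable": env["retryable"],
            "hints": env.get("hints", []),
        }

result = asyncio.run(main())
```

Both failure forms funnel through the same except RuntimeError, so one handler covers them; only the json.loads step distinguishes an envelope from a plain message.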

Sandbox guardrails (Monty interpreter):

  • No network: urllib, requests, httpx, socket not importable. Use call_tool.

  • No filesystem writes: open() is missing from builtins; os and pathlib import successfully but most side-effecting methods are gated. Tool results larger than ~1 MB may be auto-stashed by the host client.

  • Stdlib (verified against pydantic_monty==0.0.11; allowlist is hardcoded in Monty's compiled extension and not configurable from Python — may differ on other CodeMode hosts running a different pydantic_monty):

    • Available: json, re, math, datetime, sys, typing, asyncio, os, pathlib.

    • Blocked: collections, itertools, functools, statistics, decimal, dataclasses, random, string, time, base64, hashlib, urllib.parse. Use built-ins and comprehensions for aggregation; for hashing/encoding/URL work, request a server-side tool via await call_tool(...).
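Since collections, itertools, and functools are blocked, aggregation has to lean on built-ins. A small sketch of group-and-sum and top-N without Counter or defaultdict; the campaign rows are hypothetical sample data.

```python
# Hypothetical report rows, as a reporting tool might return them.
rows = [
    {"campaign": "brand", "clicks": 10},
    {"campaign": "generic", "clicks": 4},
    {"campaign": "brand", "clicks": 7},
]

# Group-and-sum with dict.get instead of collections.defaultdict.
clicks_by_campaign = {}
for r in rows:
    key = r["campaign"]
    clicks_by_campaign[key] = clicks_by_campaign.get(key, 0) + r["clicks"]

# Top-N with sorted() instead of Counter.most_common.
top = sorted(clicks_by_campaign.items(), key=lambda kv: kv[1], reverse=True)
# top == [('brand', 17), ('generic', 4)]
```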

  • Builtins (verified missing / behavior differences):

    • hasattr(o, k) is unavailable. Use a unique sentinel with getattr: _MISSING = [] (or {}), then getattr(o, k, _MISSING) is not _MISSING. object() is unavailable in this sandbox.

    • callable(x) is unavailable, and getattr(x, '__call__', None) is not None is not reliable for built-in instances or functions in this sandbox. Prefer known-callable inputs, or guard the invocation with try/except TypeError when that is safe.

    • No reliable capability-probing on objects. The _MISSING sentinel pattern above works for module attributes (e.g. getattr(math, 'pi', _MISSING)) but not for methods on built-in instances or attributes on user-defined functions. In this sandbox, built-in methods are invokable via d.keys() syntax but are not reachable as attributes via d.keys (which itself raises AttributeError). Same for function attributes — fn.__name__ raises AttributeError. getattr(o, k, default) returns default for these cases not because getattr is misbehaving but because the attribute isn't there. Write code that knows the shape of its inputs rather than probing for capabilities.

    • setattr, dir, vars, globals, locals, open are all absent.
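The sentinel pattern described above can be written as a small helper. A sketch only: it works for module attributes (the documented case), not for methods on built-in instances or attributes on user-defined functions.

```python
import math

# hasattr() and object() are absent in the sandbox, so probe with a
# fresh mutable as a one-off identity sentinel and getattr's default.
_MISSING = []

def has_attr(obj, name):
    return getattr(obj, name, _MISSING) is not _MISSING

present = has_attr(math, "pi")      # True: module attribute exists
absent = has_attr(math, "tau2")     # False: getattr returned the sentinel
```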

  • Blocked-import error semantics: inside the sandbox, from collections import Counter raises a stock Python ImportError (NOT a RuntimeError-wrapped envelope). Catch with except ImportError: (or except Exception:). The v1 sandbox_runtime / SANDBOX_MODULE_BLOCKED envelope only fires when the import error is uncaught and propagates out of execute — at which point the calling tool layer sees the typed envelope.

  • Unsupported-syntax error semantics: class definitions, match statements, and other parser-not-yet-supported constructs raise a NotImplementedError that DOES surface as a v1 envelope with error_kind=sandbox_runtime / error_code=SANDBOX_RUNTIME_ERROR. Asymmetric with imports — known limitation.

  • print() output may be discarded depending on the client path; return data via the script's final expression instead.

  • asyncio.sleep is unavailable by design in this sandbox path. Don't sleep — chain await call_tool calls (e.g. poll a report-status tool) instead. For long-running reports (typically 1-20 minutes), do NOT rapid-poll inside a single execute block; return after one status check and let the user decide when to re-check.
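The one-check-then-return shape suggested above might look like this. The call_tool stub and the reports/* tool names are hypothetical; a real block would use the in-scope call_tool and the server's actual report tools.

```python
import asyncio

# Hypothetical stub: pretend the report is still being generated.
async def call_tool(name, params):
    return {"reportId": params["reportId"], "status": "PENDING"}

async def main():
    # One status check, no asyncio.sleep, no polling loop: return the
    # current state and let the caller decide when to re-run the block.
    status = await call_tool("reports/getReport", {"reportId": "r-1"})
    if status["status"] == "COMPLETED":
        return await call_tool("reports/download", {"reportId": "r-1"})
    return {"reportId": "r-1", "status": status["status"], "next": "re-check later"}

result = asyncio.run(main())
```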

  • try/except/finally work normally. To probe many candidates in one block, wrap each await call_tool(...) in its own try/except RuntimeError.
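Per-candidate try/except keeps one failure from aborting the whole block. A sketch with a stubbed call_tool in which only one of two hypothetical tool names succeeds:

```python
import asyncio
import json

# Hypothetical stub: only "profiles/list" exists in this sketch.
async def call_tool(name, params):
    if name != "profiles/list":
        raise RuntimeError(json.dumps({
            "error_kind": "not_found",
            "error_code": "TOOL_NOT_FOUND",
            "retryable": False,
        }))
    return [{"profileId": 1}]

async def main():
    results = {}
    for name in ("profiles/list", "accounts/list"):
        try:
            results[name] = await call_tool(name, {})
        except RuntimeError as e:
            # Record the failure and keep probing the remaining candidates.
            results[name] = {"failed": json.loads(str(e))["error_code"]}
    return results

result = asyncio.run(main())
```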

  • with works for pure-Python context managers. It does NOT work for open(...) because file I/O is blocked. (Note that decimal, whose localcontext() is a common example, is itself on the blocked-import list above.)

  • json.dumps(default=...) may trip on Pydantic models; call model_dump() first.

Auth, region, and active profile are managed by the server. Do not pass Amazon-Ads-AccountId, Amazon-Advertising-API-Scope, or bearer tokens in params — set them once via set_active_identity / set_region / set_active_profile and they ride every subsequent call_tool.

Session-scope contract:

  • Call get_session_state first; if state_scope == 'request', re-run set_active_identity / set_region / set_active_profile in each execute block. If state_scope == 'session', set them once via the corresponding tools and they ride subsequent call_tool calls in that session.

  • To detect the transport's scope, call get_session_state at the start of a block. It is a read-only probe with no side effects.

  • Rule: Re-establish context before the next tool call iff state_scope == 'request' or state_reason is not null.

  • Within a block the scope cannot change; one probe per block is sufficient.

  • state_reason enumerates: "no_mcp_session" (request-scoped transport), "token_swapped" (a different bearer/refresh token arrived mid-session and the previous tenant's state was cleared — state_scope stays 'session' but you must re-establish context for the new tenant), and "bridge_unavailable" (reserved; treat as 'request').
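The re-establish rule above reduces to one predicate over the probe result. The state dicts below are hypothetical examples of what get_session_state might return; the helper itself just encodes the stated rule.

```python
# Rule: re-establish identity/region/profile iff the transport is
# request-scoped OR state_reason is set (e.g. "token_swapped").
def must_reestablish(state):
    return state.get("state_scope") == "request" or state.get("state_reason") is not None

a = must_reestablish({"state_scope": "request", "state_reason": "no_mcp_session"})  # True
b = must_reestablish({"state_scope": "session", "state_reason": "token_swapped"})   # True
c = must_reestablish({"state_scope": "session", "state_reason": None})              # False
```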

Input Schema

Name | Required | Description | Default
code | Yes | Python async code to execute tool calls via call_tool(name, arguments) | —
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description bears full burden for behavioral disclosure. It details sandbox guardrails, available/blocked modules, built-in differences, error semantics, auth handling, and session scope—going far beyond a simple 'run code' and making the tool's behavior very transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is excessively long and includes many details that could be condensed or omitted. While well-structured with sections and bullet points, it is not concise; every sentence does not earn its place, and the verbosity detracts from quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a sandboxed Python interpreter and no output schema, the description covers nearly all aspects: code execution, available libraries, error handling, auth, session management, and workarounds. It is complete for an AI agent to understand how to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage for its single parameter (code), describing it as 'Python async code.' The tool description adds immense value by explaining how to use it (e.g., call_tool, return, sandbox constraints), greatly augmenting the schema's minimal description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Run async Python in a sandboxed interpreter.' This is a specific verb (Run) and resource (async Python in a sandboxed interpreter), and it distinguishes itself from sibling tools (get_schema, search, tags) which serve different functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides extensive usage context, including how to structure code (use `return`), sandbox limitations, auth management, and session scope contract. However, it does not explicitly compare to alternatives or state when not to use this tool, lacking explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
