by tarnover

aws_s3

Manage AWS S3 buckets and objects directly from Ansible. Perform actions like creating, deleting, listing buckets, and uploading or downloading objects with specified configurations.

Instructions

Manage AWS S3 buckets and objects

Input Schema

Name         Required  Description  Default
acl          No
action       Yes
bucket       No
contentType  No
localPath    No
metadata     No
objectKey    No
region       Yes
tags         No

Implementation Reference

  • Main handler function for the aws_s3 tool. Generates dynamic Ansible playbooks for S3 operations (list_buckets, create_bucket, delete_bucket, list_objects, upload, download) and executes them using executeAwsPlaybook.
    export async function s3Operations(args: S3Options): Promise<string> {
      await verifyAwsCredentials();
    
      const { action, region, bucket, objectKey, localPath, acl, tags, metadata, contentType } = args;
    
      let playbookContent = `---
    - name: AWS S3 ${action} operation
      hosts: localhost
      connection: local
      gather_facts: no
      tasks:`;
      
      switch (action) {
        case 'list_buckets':
          playbookContent += `
        - name: List S3 buckets
          amazon.aws.s3_bucket_info:
            region: "${region}"
          register: s3_buckets
        
        - name: Display buckets
          debug:
            var: s3_buckets.buckets`;
          break;
          
        case 'create_bucket':
          playbookContent += `
        - name: Create S3 bucket
          amazon.aws.s3_bucket:
            region: "${region}"
            name: "${bucket}"
            state: present
    ${formatYamlParams({ tags, acl })}
          register: s3_create
          
        - name: Display creation result
          debug:
            var: s3_create`;
          break;
          
        case 'delete_bucket':
          playbookContent += `
        - name: Delete S3 bucket
          amazon.aws.s3_bucket:
            region: "${region}"
            name: "${bucket}"
            state: absent
            force: true
          register: s3_delete
          
        - name: Display deletion result
          debug:
            var: s3_delete`;
          break;
          
        case 'list_objects':
          playbookContent += `
        - name: List S3 objects
          amazon.aws.s3_object:
            region: "${region}"
            bucket: "${bucket}"
            mode: list
          register: s3_objects
        
        - name: Display objects
          debug:
            var: s3_objects.keys`;
          break;
          
        case 'upload':
          playbookContent += `
        - name: Upload file to S3
          amazon.aws.s3_object:
            region: "${region}"
            bucket: "${bucket}"
            object: "${objectKey}"
            src: "${localPath}"
            mode: put
    ${formatYamlParams({ acl, tags, metadata, content_type: contentType })}
          register: s3_upload
          
        - name: Display upload result
          debug:
            var: s3_upload`;
          break;
          
        case 'download':
          playbookContent += `
        - name: Download file from S3
          amazon.aws.s3_object:
            region: "${region}"
            bucket: "${bucket}"
            object: "${objectKey}"
            dest: "${localPath}"
            mode: get
          register: s3_download
          
        - name: Display download result
          debug:
            var: s3_download`;
          break;
          
        default:
          throw new AnsibleError(`Unsupported S3 action: ${action}`);
      }
      
      // Execute the generated playbook
      return executeAwsPlaybook(`s3-${action}`, playbookContent);
    }
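The handler interpolates `formatYamlParams(...)` into the playbook, but that helper is not shown above. A minimal sketch of what it might look like, assuming it emits indented YAML key/value lines, expands object values (tags, metadata) as nested maps, and silently skips undefined parameters; the body here is inferred from how the handler uses it, not confirmed by the source:

```typescript
// Hypothetical sketch of the formatYamlParams helper referenced by s3Operations.
// Assumes scalar values become quoted YAML strings, object values become nested
// key/value maps, and undefined/null entries are skipped entirely.
function formatYamlParams(params: Record<string, unknown>, indent = "        "): string {
  const lines: string[] = [];
  for (const [key, value] of Object.entries(params)) {
    if (value === undefined || value === null) continue;
    if (typeof value === "object") {
      lines.push(`${indent}${key}:`);
      for (const [k, v] of Object.entries(value as Record<string, string>)) {
        lines.push(`${indent}  ${k}: "${v}"`);
      }
    } else {
      lines.push(`${indent}${key}: "${value}"`);
    }
  }
  return lines.join("\n");
}
```

Skipping undefined entries matters here: it keeps optional parameters like `acl` or `content_type` out of the generated playbook entirely rather than emitting empty YAML keys.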
  • Zod schema definition for aws_s3 tool input validation, including action enum and parameters like region, bucket, objectKey, etc.
    export const S3Schema = z.object({
      action: S3ActionEnum,
      region: z.string().min(1, 'AWS region is required'),
      bucket: z.string().optional(),
      objectKey: z.string().optional(),
      localPath: z.string().optional(),
      acl: z.string().optional(),
      tags: z.record(z.string()).optional(),
      metadata: z.record(z.string()).optional(),
      contentType: z.string().optional()
    });
    
    export type S3Options = z.infer<typeof S3Schema>;
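The schema marks bucket, objectKey, and localPath optional because different actions need different fields, so per-action requirements are not enforced by the schema itself. A hedged sketch of a guard that could sit in front of the handler; the requirements table is inferred from which parameters each playbook branch interpolates and is not part of the source:

```typescript
// Per-action required fields, inferred from the playbook branches in
// s3Operations. This guard is illustrative, not part of the source code.
const requiredFields: Record<string, string[]> = {
  list_buckets: [],
  create_bucket: ["bucket"],
  delete_bucket: ["bucket"],
  list_objects: ["bucket"],
  upload: ["bucket", "objectKey", "localPath"],
  download: ["bucket", "objectKey", "localPath"],
};

// Returns the names of fields the given action needs but the args lack.
function missingFields(args: { action: string; [key: string]: unknown }): string[] {
  return (requiredFields[args.action] ?? []).filter((field) => args[field] == null);
}
```

Without a guard like this, a missing bucket would only surface as an Ansible error after the playbook is generated and run.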
  • Registration of the aws_s3 tool in the toolDefinitions map, linking description, schema from aws.S3Schema, and handler aws.s3Operations.
    aws_s3: {
      description: 'Manage AWS S3 buckets and objects',
      schema: aws.S3Schema,
      handler: aws.s3Operations,
    },
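How this registry entry is consumed is not shown. A plausible dispatch sketch, assuming each entry pairs a schema-style validator (represented here by a bare `parse` function standing in for `schema.parse`) with a handler; the names `ToolDefinition` and `callTool` are hypothetical:

```typescript
// Hypothetical dispatcher over a toolDefinitions-style map. The parse/handler
// shapes are assumptions; the real server presumably calls schema.parse(args)
// before invoking the registered handler.
interface ToolDefinition<A> {
  description: string;
  parse: (input: unknown) => A; // stand-in for a zod schema's parse
  handler: (args: A) => Promise<string>;
}

const toolDefinitions: Record<string, ToolDefinition<any>> = {
  aws_s3: {
    description: "Manage AWS S3 buckets and objects",
    parse: (input) => {
      const args = input as { action?: string; region?: string };
      if (!args.action || !args.region) throw new Error("action and region are required");
      return args;
    },
    // Stub handler for illustration; the real one is s3Operations above.
    handler: async (args) => `would run s3-${args.action} in ${args.region}`,
  },
};

async function callTool(name: string, input: unknown): Promise<string> {
  const tool = toolDefinitions[name];
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.handler(tool.parse(input));
}
```

Validating before dispatch keeps malformed input from ever reaching the playbook generator.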
  • Enum definition for S3 actions used in the aws_s3 schema.
    export const S3ActionEnum = z.enum(['list_buckets', 'create_bucket', 'delete_bucket', 'list_objects', 'upload', 'download']);
    export type S3Action = z.infer<typeof S3ActionEnum>;
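The enum is what lets the handler's switch statement throw on unsupported actions while staying exhaustive at the type level. A small illustration, using a plain union type in place of the zod-inferred S3Action so the snippet has no dependencies; the `moduleForAction` helper is hypothetical, though its mapping matches the modules used in the playbook branches above:

```typescript
// The same six actions as S3ActionEnum, written as a plain union type.
type S3Action =
  | "list_buckets" | "create_bucket" | "delete_bucket"
  | "list_objects" | "upload" | "download";

// Maps each action to the amazon.aws module the playbook branch drives.
// Assigning to `never` in the default branch makes the compiler reject
// any enum member left unhandled.
function moduleForAction(action: S3Action): string {
  switch (action) {
    case "list_buckets":
      return "amazon.aws.s3_bucket_info";
    case "create_bucket":
    case "delete_bucket":
      return "amazon.aws.s3_bucket";
    case "list_objects":
    case "upload":
    case "download":
      return "amazon.aws.s3_object";
    default: {
      const unhandled: never = action;
      throw new Error(`Unsupported S3 action: ${unhandled}`);
    }
  }
}
```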
  • Helper function executeAwsPlaybook used by all AWS handlers including s3Operations to execute the generated Ansible playbook in a temp directory.
    async function executeAwsPlaybook(
      operationName: string, 
      playbookContent: string, 
      extraParams: string = '',
      tempFiles: { filename: string, content: string }[] = [] // For additional files like templates, policies
    ): Promise<string> {
      let tempDir: string | undefined;
      try {
        // Create a unique temporary directory
        tempDir = await createTempDirectory(`ansible-aws-${operationName}`);
        
        // Write the main playbook file
        const playbookPath = await writeTempFile(tempDir, 'playbook.yml', playbookContent);
        
        // Write any additional temporary files
        for (const file of tempFiles) {
          await writeTempFile(tempDir, file.filename, file.content);
        }
    
        // Build the command
        const command = `ansible-playbook ${playbookPath} ${extraParams}`;
        console.error(`Executing: ${command}`);
    
        // Execute the playbook asynchronously
        const { stdout, stderr } = await execAsync(command);
        
        // Return stdout, or a success message if stdout is empty
        return stdout || `${operationName} completed successfully (no output).`;
    
      } catch (error: any) {
        // Handle execution errors
        const errorMessage = error.stderr || error.message || 'Unknown error';
        throw new AnsibleExecutionError(`Ansible execution failed for ${operationName}: ${errorMessage}`, error.stderr);
      } finally {
        // Ensure cleanup happens even if errors occur
        if (tempDir) {
          await cleanupTempDirectory(tempDir);
        }
      }
    }
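The helpers createTempDirectory, writeTempFile, and cleanupTempDirectory are called here but not shown. A minimal sketch of what they might look like on top of node's fs/promises; the names match the source, but the bodies are assumptions:

```typescript
import { mkdtemp, writeFile, rm } from "node:fs/promises";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Create a unique scratch directory, e.g. /tmp/ansible-aws-s3-upload-XXXXXX.
async function createTempDirectory(prefix: string): Promise<string> {
  return mkdtemp(join(tmpdir(), `${prefix}-`));
}

// Write one file into the scratch directory and return its full path.
async function writeTempFile(dir: string, filename: string, content: string): Promise<string> {
  const filePath = join(dir, filename);
  await writeFile(filePath, content, "utf8");
  return filePath;
}

// Recursively delete the scratch directory; force ignores "already gone".
async function cleanupTempDirectory(dir: string): Promise<void> {
  await rm(dir, { recursive: true, force: true });
}
```

Using mkdtemp rather than a fixed path avoids collisions when several playbook runs overlap, which matters because executeAwsPlaybook always cleans up its directory in the finally block.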
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. 'Manage' implies both read and write operations, but the description doesn't name the destructive actions (e.g., delete_bucket, which force-deletes a bucket along with its contents), authentication requirements, rate limits, error handling, or response formats. For a multi-action tool with potential mutations, this lack of detail is inadequate and fails to inform the agent about critical behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with a single phrase, 'Manage AWS S3 buckets and objects', which is front-loaded and wastes no words. It efficiently states the domain and high-level purpose without unnecessary elaboration, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given high complexity (9 parameters, multiple actions including mutations, no annotations, no output schema), the description is severely incomplete. It doesn't cover behavioral aspects, parameter meanings, usage contexts, or return values. For a tool that handles diverse S3 operations, this minimal description fails to provide the necessary context for an agent to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning no parameters are documented in the schema. The description adds no parameter semantics beyond the tool name—it doesn't explain what 'action', 'bucket', 'objectKey', or other parameters mean, their relationships, or how they map to S3 operations. With 9 parameters and zero coverage, the description fails to compensate, leaving parameters largely unexplained.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Manage AWS S3 buckets and objects' states the general domain (AWS S3) and high-level purpose (manage), but it's vague about what specific actions are available. It doesn't specify verbs like list, create, delete, upload, or download, which are revealed in the action parameter enum. It distinguishes from siblings by mentioning S3 specifically, but lacks precision about the tool's functional scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., AWS credentials), differentiate from other AWS tools (e.g., aws_ec2 for compute), or specify contexts like file operations versus bucket management. Usage is implied only by the tool name, with no explicit when/when-not statements or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
