
Self-Hosted Supabase MCP Server

list_storage_objects

Lists files and objects in a Supabase storage bucket, with options to filter by path prefix and paginate results.

Instructions

Lists objects within a specific storage bucket, optionally filtering by prefix.

Input Schema

Name        Required  Default  Description
bucket_id   Yes       —        The ID of the bucket to list objects from.
limit       No        100      Max number of objects to return.
offset      No        0        Number of objects to skip.
prefix      No        —        Filter objects by a path prefix (e.g., 'public/').
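For example, a sketch of an input that pages through objects under a path prefix (the bucket name and values are illustrative, not from the source):

```typescript
// Illustrative input for list_storage_objects; field names match the schema above,
// but the bucket name and values are made up.
const input = {
    bucket_id: "avatars",   // required
    prefix: "public/",      // only objects whose names start with "public/"
    limit: 50,              // schema default is 100 when omitted
    offset: 50,             // skip the first 50 matches (i.e., page 2 at limit 50)
};
console.log(input.prefix);
```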

Implementation Reference

  • The handler function that performs the core logic: it validates input, verifies that a direct database connection is available, builds and executes a dynamic parameterized SQL query against the storage.objects table (matching bucket_id, optionally filtering names with LIKE 'prefix%', ordered by name, paginated with LIMIT/OFFSET), validates the response with handleSqlResponse, and logs the result count.
    execute: async (
        input: ListStorageObjectsInput,
        context: ToolContext
    ): Promise<ListStorageObjectsOutput> => {
        const client = context.selfhostedClient;
        const { bucket_id, limit, offset, prefix } = input;
    
        console.error(`Listing objects for bucket ${bucket_id} (Prefix: ${prefix || 'N/A'})...`);
    
        if (!client.isPgAvailable()) {
            context.log('Direct database connection (DATABASE_URL) is required to list storage objects.', 'error');
            throw new Error('Direct database connection (DATABASE_URL) is required to list storage objects.');
        }
    
        // Use a transaction to get access to the pg client for parameterized queries
        const objects = await client.executeTransactionWithPg(async (pgClient: PoolClient) => {
            // Build query with parameters
            let sql = `
                SELECT
                    id,
                    name,
                    bucket_id,
                    owner,
                    version,
                    metadata ->> 'mimetype' AS mimetype,
                    metadata ->> 'size' AS size, -- Extract size from metadata
                    metadata,
                    created_at::text,
                    updated_at::text,
                    last_accessed_at::text
                FROM storage.objects
                WHERE bucket_id = $1
            `;
            const params: (string | number)[] = [bucket_id];
            let paramIndex = 2;
    
            if (prefix) {
                sql += ` AND name LIKE $${paramIndex++}`;
                params.push(`${prefix}%`);
            }
    
            sql += ' ORDER BY name ASC NULLS FIRST';
            sql += ` LIMIT $${paramIndex++}`;
            params.push(limit);
            sql += ` OFFSET $${paramIndex++}`;
            params.push(offset);
            sql += ';';
    
            console.error('Executing parameterized SQL to list storage objects within transaction...');
            const result = await pgClient.query(sql, params); // Raw pg result
    
            // Explicitly pass result.rows, which matches the expected structure
            // of SqlSuccessResponse (unknown[]) for handleSqlResponse.
            return handleSqlResponse(result.rows as SqlSuccessResponse, ListStorageObjectsOutputSchema);
        });
    
        console.error(`Found ${objects.length} objects.`);
        context.log(`Found ${objects.length} objects.`);
        return objects;
    },
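    The dynamic placeholder indexing above ($1, $2, ...) is the crux of the handler. A minimal, self-contained sketch of the same query-building logic, with the function name and column list simplified for illustration:

    ```typescript
    // Sketch of the query builder from the handler above, extracted as a pure
    // function so the parameter indexing is easy to follow. Not the actual code.
    function buildListObjectsQuery(
        bucket_id: string,
        limit: number,
        offset: number,
        prefix?: string
    ): { sql: string; params: (string | number)[] } {
        let sql = "SELECT name FROM storage.objects WHERE bucket_id = $1";
        const params: (string | number)[] = [bucket_id];
        let i = 2; // next placeholder index

        if (prefix) {
            sql += ` AND name LIKE $${i++}`;
            params.push(`${prefix}%`); // prefix match, anchored at the start
        }

        sql += ` ORDER BY name ASC NULLS FIRST LIMIT $${i++}`;
        params.push(limit);
        sql += ` OFFSET $${i++}`;
        params.push(offset);
        return { sql, params };
    }

    const q = buildListObjectsQuery("avatars", 100, 0, "public/");
    console.log(q.params.join(","));
    ```

    Passing values as parameters rather than interpolating them into the SQL string is what keeps the query safe from injection, even for the user-supplied prefix.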
  • Zod schemas for input validation (bucket_id required, limit/offset/prefix optional) and output validation (array of StorageObject with fields like id, name, bucket_id, metadata-extracted mimetype/size, timestamps).
    const ListStorageObjectsInputSchema = z.object({
        bucket_id: z.string().describe('The ID of the bucket to list objects from.'),
        limit: z.number().int().positive().optional().default(100).describe('Max number of objects to return'),
        offset: z.number().int().nonnegative().optional().default(0).describe('Number of objects to skip'),
        prefix: z.string().optional().describe('Filter objects by a path prefix (e.g., \'public/\')'),
    });
    type ListStorageObjectsInput = z.infer<typeof ListStorageObjectsInputSchema>;
    
    // Output schema
    const StorageObjectSchema = z.object({
        id: z.string().uuid(),
        name: z.string().nullable(), // Name can be null according to schema
        bucket_id: z.string(),
        owner: z.string().uuid().nullable(),
        version: z.string().nullable(),
        // Get mimetype directly from SQL extraction
        mimetype: z.string().nullable(), 
        // size comes from metadata
        size: z.string().pipe(z.coerce.number().int()).nullable(),
        // Keep raw metadata as well
        metadata: z.record(z.any()).nullable(),
        created_at: z.string().nullable(),
        updated_at: z.string().nullable(),
        last_accessed_at: z.string().nullable(),
    });
    const ListStorageObjectsOutputSchema = z.array(StorageObjectSchema);
    type ListStorageObjectsOutput = z.infer<typeof ListStorageObjectsOutputSchema>;
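    Zod applies the .default() values during parsing, so the handler can rely on limit and offset always being present. A plain-TypeScript sketch of that defaulting behavior (the real code uses Zod; this stand-in only illustrates the effect):

    ```typescript
    // Stand-in for the Zod input schema's .default() handling; names mirror the
    // schema above, but this is an illustrative sketch, not the actual parser.
    interface RawInput {
        bucket_id: string;
        limit?: number;
        offset?: number;
        prefix?: string;
    }

    function withDefaults(raw: RawInput) {
        return {
            bucket_id: raw.bucket_id,
            limit: raw.limit ?? 100, // schema default
            offset: raw.offset ?? 0, // schema default
            prefix: raw.prefix,
        };
    }

    const parsed = withDefaults({ bucket_id: "avatars" });
    console.log(parsed.limit, parsed.offset);
    ```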
  • src/index.ts:98-121 (registration)
    The tool is imported (line 33) and registered in the availableTools object (line 119), which is used to populate MCP server capabilities and handle tool calls.
    const availableTools = {
        // Cast here assumes tools will implement AppTool structure
        [listTablesTool.name]: listTablesTool as AppTool,
        [listExtensionsTool.name]: listExtensionsTool as AppTool,
        [listMigrationsTool.name]: listMigrationsTool as AppTool,
        [applyMigrationTool.name]: applyMigrationTool as AppTool,
        [executeSqlTool.name]: executeSqlTool as AppTool,
        [getDatabaseConnectionsTool.name]: getDatabaseConnectionsTool as AppTool,
        [getDatabaseStatsTool.name]: getDatabaseStatsTool as AppTool,
        [getProjectUrlTool.name]: getProjectUrlTool as AppTool,
        [getAnonKeyTool.name]: getAnonKeyTool as AppTool,
        [getServiceKeyTool.name]: getServiceKeyTool as AppTool,
        [generateTypesTool.name]: generateTypesTool as AppTool,
        [rebuildHooksTool.name]: rebuildHooksTool as AppTool,
        [verifyJwtSecretTool.name]: verifyJwtSecretTool as AppTool,
        [listAuthUsersTool.name]: listAuthUsersTool as AppTool,
        [getAuthUserTool.name]: getAuthUserTool as AppTool,
        [deleteAuthUserTool.name]: deleteAuthUserTool as AppTool,
        [createAuthUserTool.name]: createAuthUserTool as AppTool,
        [updateAuthUserTool.name]: updateAuthUserTool as AppTool,
        [listStorageBucketsTool.name]: listStorageBucketsTool as AppTool,
        [listStorageObjectsTool.name]: listStorageObjectsTool as AppTool,
        [listRealtimePublicationsTool.name]: listRealtimePublicationsTool as AppTool,
    };
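    A registry keyed by tool name makes dispatch a simple lookup. A minimal sketch of the pattern (types and names here are illustrative, not the server's actual ones):

    ```typescript
    // Illustrative tool registry: register tools under their names, then
    // resolve an incoming call by name. Hypothetical types, not the real AppTool.
    type Tool = { name: string; execute: (input: unknown) => Promise<unknown> };

    const registry: Record<string, Tool> = {};

    function register(tool: Tool) {
        registry[tool.name] = tool;
    }

    register({ name: "list_storage_objects", execute: async () => [] });

    const names = Object.keys(registry); // used to advertise server capabilities
    console.log(names.includes("list_storage_objects"));
    ```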
  • TypeScript interface defining the StorageObject type used in the tool's output schema; the surrounding comments reference the tool.
    export interface StorageObject {
        id: string; // uuid
        name: string | null;
        bucket_id: string;
        owner: string | null; // uuid
        version: string | null;
        mimetype: string | null; // Extracted from metadata
        size: number | null;     // Extracted from metadata, parsed as number
        metadata: Record<string, unknown> | null; // Use unknown instead of any
        created_at: string | null; // Timestamps returned as text from DB
        updated_at: string | null;
        last_accessed_at: string | null;
    } 
  • Uses the shared handleSqlResponse utility to validate and type the SQL query results against the output schema.
    return handleSqlResponse(result.rows as SqlSuccessResponse, ListStorageObjectsOutputSchema);
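    The implementation of handleSqlResponse is not shown here, but the pattern is to validate raw SQL rows against a schema and either return the typed result or throw. A stand-in sketch using a plain type-guard in place of Zod:

    ```typescript
    // Hypothetical stand-in for the shared handleSqlResponse utility: validate
    // raw rows against a predicate and return them typed, or throw on mismatch.
    type Validator<T> = (value: unknown) => value is T;

    function handleSqlResponseSketch<T>(rows: unknown[], isValid: Validator<T[]>): T[] {
        if (!isValid(rows)) {
            throw new Error("SQL result did not match the expected schema");
        }
        return rows;
    }

    // Example guard for rows shaped like { name: string }.
    const isNameRows = (v: unknown): v is { name: string }[] =>
        Array.isArray(v) && v.every((r) => typeof (r as { name?: unknown })?.name === "string");

    const validated = handleSqlResponseSketch([{ name: "a.png" }], isNameRows);
    console.log(validated.length);
    ```

    Validating at this boundary means downstream code can trust the row shape instead of defensively re-checking every field.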
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions filtering by prefix but fails to describe critical behaviors like pagination mechanics (implied by limit/offset), authentication requirements, rate limits, error conditions, or what the output looks like. For a list operation with 4 parameters, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Lists objects within a specific storage bucket') and adds a useful qualifier ('optionally filtering by prefix'). There is no wasted verbiage or redundancy, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It doesn't explain the return format (e.g., list of objects with metadata), pagination behavior, error handling, or authentication needs. For a tool with 4 parameters and no structured output documentation, this leaves significant gaps for an agent to operate effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema fully documents all parameters. The description adds minimal value by mentioning prefix filtering but doesn't provide additional context beyond what's in the schema (e.g., examples of prefix usage, interaction between limit/offset). This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Lists') and resource ('objects within a specific storage bucket'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'list_storage_buckets' or 'list_tables', which would require a more specific scope statement to earn a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions optional filtering by prefix but doesn't address scenarios like when to use 'list_storage_buckets' instead or prerequisites for accessing buckets. This lack of contextual direction leaves the agent without usage boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
