llm_cloud_management

Manage cloud operations across AWS, Azure, and GCP using LLM-generated commands with research-driven best practices.

Instructions

LLM-managed cloud provider operations with research-driven approach

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| provider | Yes | Cloud provider to use | |
| action | Yes | Action to perform | |
| parameters | No | Action parameters | |
| llmInstructions | Yes | LLM instructions for command generation | |
| researchFirst | No | Research best approach first | |
| projectPath | No | Path to project directory | |
| adrDirectory | No | Directory containing ADR files | docs/adrs |
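To illustrate the schema, a tool call might look like the following (the action and parameter values here are hypothetical, chosen only to show the shape of the input):

```json
{
  "provider": "aws",
  "action": "create-s3-bucket",
  "parameters": { "bucketName": "example-logs", "region": "us-east-1" },
  "llmInstructions": "Create a private, versioned bucket for log archival",
  "researchFirst": true,
  "adrDirectory": "docs/adrs"
}
```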

Implementation Reference

  • The core handler function that implements LLM-driven cloud management operations. Orchestrates research, LLM command generation, execution (simulated), and returns formatted results with analysis.
    ```typescript
    export async function llmCloudManagement(
      args: {
        provider: 'aws' | 'azure' | 'gcp' | 'redhat' | 'ubuntu' | 'macos';
        action: string;
        parameters?: Record<string, any>;
        llmInstructions: string;
        researchFirst?: boolean;
        projectPath?: string;
        adrDirectory?: string;
      },
      context?: ToolContext
    ): Promise<any> {
      const {
        provider,
        action,
        parameters = {},
        llmInstructions,
        researchFirst = true,
        projectPath,
        adrDirectory,
      } = args;

      if (!provider || !action || !llmInstructions) {
        throw new McpAdrError(
          'Provider, action, and llmInstructions are required',
          'MISSING_REQUIRED_PARAMS'
        );
      }

      try {
        context?.info(`🔧 Initializing ${provider} cloud management for: ${action}`);
        context?.report_progress(0, 100);

        // Initialize research orchestrator
        const orchestrator = new ResearchOrchestrator(projectPath, adrDirectory);

        let researchResult = null;
        if (researchFirst) {
          context?.info('📚 Researching best practices and documentation...');
          context?.report_progress(20, 100);

          // Step 1: Research the best approach
          const researchQuery = `
            How to ${action} on ${provider} platform?
            Best practices for ${provider} ${action}
            ${provider} ${action} documentation and examples
            Security considerations for ${provider} ${action}
            ${llmInstructions}
          `;
          researchResult = await orchestrator.answerResearchQuestion(researchQuery);
        }

        // Step 2: Generate command using LLM
        context?.info('🤖 Generating commands with LLM guidance...');
        context?.report_progress(50, 100);

        const command = await generateCloudCommand({
          provider,
          action,
          parameters,
          research: researchResult,
          instructions: llmInstructions,
        });

        // Step 3: Execute the command (simulated for now)
        context?.info(`☁️ Executing ${provider} operation...`);
        context?.report_progress(80, 100);

        const executionResult = await executeCloudCommand(command);

        context?.info('✅ Cloud operation complete!');
        context?.report_progress(100, 100);

        return {
          content: [
            {
              type: 'text',
              text: `# LLM-Managed Cloud Operation

    ## Operation Details
    - **Provider**: ${provider}
    - **Action**: ${action}
    - **Parameters**: ${JSON.stringify(parameters, null, 2)}

    ## LLM Instructions
    ${llmInstructions}
    ${
      researchResult
        ? `
    ## Research Results
    - **Confidence**: ${(researchResult.confidence * 100).toFixed(1)}%
    - **Sources**: ${researchResult.metadata.sourcesQueried.join(', ')}
    - **Research Summary**: ${researchResult.answer}
    `
        : ''
    }
    ## Generated Command
    \`\`\`bash
    ${command.generated}
    \`\`\`

    ## Execution Result
    ${executionResult.success ? '✅ Success' : '❌ Failed'}
    ${executionResult.output ? `\n\`\`\`\n${executionResult.output}\n\`\`\`` : ''}

    ## LLM Analysis
    ${command.analysis || 'No analysis available'}

    ## Metadata
    - **Command Confidence**: ${(command.confidence * 100).toFixed(1)}%
    - **Timestamp**: ${new Date().toISOString()}
    - **Research-Driven**: ${researchFirst ? 'Yes' : 'No'}
    `,
            },
          ],
        };
      } catch (error) {
        throw new McpAdrError(
          `Cloud management operation failed: ${error instanceof Error ? error.message : String(error)}`,
          'CLOUD_MANAGEMENT_ERROR'
        );
      }
    }
    ```
  • Tool catalog registration providing metadata, input schema, and categorization for the llm_cloud_management tool.
    ```typescript
    TOOL_CATALOG.set('llm_cloud_management', {
      name: 'llm_cloud_management',
      shortDescription: 'Cloud management via LLM',
      fullDescription: 'Cloud resource management with LLM assistance.',
      category: 'research',
      complexity: 'complex',
      tokenCost: { min: 3000, max: 6000 },
      hasCEMCPDirective: true, // Phase 4.3: Complex tool - cloud management orchestration
      relatedTools: ['llm_web_search', 'llm_database_management'],
      keywords: ['cloud', 'management', 'llm', 'aws', 'gcp', 'azure'],
      requiresAI: true,
      inputSchema: {
        type: 'object',
        properties: {
          operation: { type: 'string' },
          provider: { type: 'string', enum: ['aws', 'gcp', 'azure'] },
        },
        required: ['operation'],
      },
    });
    ```
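A catalog entry like the one above can be consumed by simple lookups. A minimal sketch, assuming `TOOL_CATALOG` is a plain `Map` and using a simplified entry shape (the `findByKeyword` helper is illustrative, not part of the server):

```typescript
// Simplified catalog entry: only the fields needed for keyword lookup.
interface CatalogEntry {
  name: string;
  category: string;
  keywords: string[];
  requiresAI: boolean;
}

const TOOL_CATALOG = new Map<string, CatalogEntry>();
TOOL_CATALOG.set('llm_cloud_management', {
  name: 'llm_cloud_management',
  category: 'research',
  keywords: ['cloud', 'management', 'llm', 'aws', 'gcp', 'azure'],
  requiresAI: true,
});

// Return the names of all tools whose keyword list contains `keyword`.
function findByKeyword(keyword: string): string[] {
  return [...TOOL_CATALOG.values()]
    .filter(entry => entry.keywords.includes(keyword))
    .map(entry => entry.name);
}
```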
  • Helper function for generating cloud commands using LLM (currently placeholder implementation).
    ```typescript
    async function generateCloudCommand(context: {
      provider: string;
      action: string;
      parameters: Record<string, any>;
      research: any;
      instructions: string;
    }): Promise<{ generated: string; confidence: number; analysis: string }> {
      // const { loadAIConfig, getAIExecutor } = await import('../config/ai-config.js');
      // const aiConfig = loadAIConfig();
      // const executor = getAIExecutor();
      //
      // const prompt = `
      // Generate a ${context.provider} command for the following operation:
      //
      // Action: ${context.action}
      // Parameters: ${JSON.stringify(context.parameters, null, 2)}
      // Instructions: ${context.instructions}
      //
      // ${context.research ? `
      // Research Context:
      // - Confidence: ${(context.research.confidence * 100).toFixed(1)}%
      // - Sources: ${context.research.metadata.sourcesQueried.join(', ')}
      // - Key Findings: ${context.research.answer}
      // ` : ''}
      //
      // Provider Context:
      // ${getProviderContext(context.provider)}
      //
      // Generate the CLI command and provide analysis of the approach.
      // `;

      // TODO: Implement LLM command generation when AI executor is available
      // const result = await executor.executeStructuredPrompt(prompt, {
      //   type: 'object',
      //   properties: {
      //     command: { type: 'string' },
      //     confidence: { type: 'number' },
      //     analysis: { type: 'string' }
      //   }
      // });
      //
      // return {
      //   generated: result.data.command || 'echo "Command generation failed"',
      //   confidence: result.data.confidence || 0.5,
      //   analysis: result.data.analysis || 'No analysis available'
      // };

      // Placeholder implementation
      return {
        generated: `echo "LLM command generation for ${context.provider} ${context.action} not yet implemented"`,
        confidence: 0.3,
        analysis: 'LLM command generation is not yet available. This is a placeholder implementation.',
      };
    }
    ```
  • Helper function for executing generated cloud commands (simulated).
    ```typescript
    async function executeCloudCommand(command: {
      generated: string;
      confidence: number;
    }): Promise<{ success: boolean; output: string }> {
      // For now, simulate command execution
      // In a real implementation, this would execute the actual command
      return {
        success: command.confidence > 0.7,
        output: `Simulated execution of: ${command.generated}\n\nThis is a simulation. In production, this would execute the actual command.`,
      };
    }
    ```
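The simulation hints at what a real executor would need. A minimal sketch of one possible implementation, assuming Node's `child_process` and reusing the 0.7 confidence threshold from the simulation (the function name, refusal policy, and whitespace tokenization are illustrative, not part of the tool):

```typescript
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

// Hypothetical real executor: refuses low-confidence commands and runs the
// binary directly (no shell), which avoids shell-injection risks in
// LLM-generated strings. The naive whitespace split is a simplification;
// commands with quoted arguments would need proper tokenization.
async function executeCloudCommandForReal(command: {
  generated: string;
  confidence: number;
}): Promise<{ success: boolean; output: string }> {
  if (command.confidence <= 0.7) {
    return { success: false, output: 'Confidence below threshold; refusing to execute.' };
  }
  const [bin, ...cliArgs] = command.generated.trim().split(/\s+/);
  if (!bin) {
    return { success: false, output: 'Empty command.' };
  }
  try {
    const { stdout } = await execFileAsync(bin, cliArgs, { timeout: 60_000 });
    return { success: true, output: stdout };
  } catch (error) {
    return { success: false, output: error instanceof Error ? error.message : String(error) };
  }
}
```

Running the binary via `execFile` rather than `exec` means the generated string is never interpreted by a shell, which is the main safety concern when executing LLM output.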
  • Tool listing in server context generator with name and description.
    ```typescript
    { name: 'llm_cloud_management', description: 'Manage cloud resources and infrastructure' },
    { name: 'llm_database_management', description: 'Manage database operations and queries' },
    ```
