
generate_optimization_recommendations

Generate Clarity contract optimization recommendations based on SIP-012 improvements and best practices to address performance issues and achieve target throughput goals.

Instructions

Generate specific optimization recommendations for Clarity contracts based on SIP-012 improvements and best practices.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| contractPattern | Yes | Type of contract pattern | |
| currentIssues | Yes | Current performance issues identified | |
| targetThroughput | No | Target performance goals | |

Implementation Reference

  • The main execution handler for the 'generate_optimization_recommendations' tool. It takes contract pattern and issues, calls generatePatternOptimizations, and formats a comprehensive Markdown report with recommendations, roadmap, and SIP-012 specific advice.
    export const generateOptimizationRecommendationsTool: Tool<undefined, typeof OptimizationRecommendationScheme> = {
      name: "generate_optimization_recommendations",
      description: "Generate specific optimization recommendations for Clarity contracts based on SIP-012 improvements and best practices.",
      parameters: OptimizationRecommendationScheme,
      execute: async (args, context) => {
        try {
          await recordTelemetry({ action: "generate_optimization_recommendations" }, context);
          
          const recommendations = generatePatternOptimizations(args.contractPattern, args.currentIssues);
          
          return `# SIP-012 Optimization Recommendations
    
    ## Contract Pattern: ${args.contractPattern.toUpperCase()}
    
    ${args.targetThroughput ? `**Target Performance**: ${args.targetThroughput}` : ''}
    
    ## Current Issues Analysis
    ${args.currentIssues.map(issue => `- ${issue}`).join('\n')}
    
    ## Optimization Strategy
    
    ${recommendations.map((rec: any, index: number) => `
    ### ${index + 1}. ${rec.title}
    **Priority**: ${rec.priority.toUpperCase()}
    **Expected Impact**: ${rec.impact}
    **Implementation Effort**: ${rec.effort}
    
    **Problem**: ${rec.problem}
    
    **Solution**: ${rec.solution}
    
    **Implementation**:
    \`\`\`clarity
    ${rec.code}
    \`\`\`
    
    **Benefits**:
    ${rec.benefits.map((b: string) => `- ${b}`).join('\n')}
    
    **SIP-012 Advantages**: ${rec.sip012Benefits}
    `).join('\n')}
    
    ## Implementation Roadmap
    
    ### Phase 1: Critical Optimizations (Week 1-2)
    ${recommendations.filter((r: any) => r.priority === 'high').map((r: any) => `- ${r.title}: ${r.impact}`).join('\n')}
    
    ### Phase 2: Performance Improvements (Week 3-4)  
    ${recommendations.filter((r: any) => r.priority === 'medium').map((r: any) => `- ${r.title}: ${r.impact}`).join('\n')}
    
    ### Phase 3: Advanced Optimizations (Week 5-6)
    ${recommendations.filter((r: any) => r.priority === 'low').map((r: any) => `- ${r.title}: ${r.impact}`).join('\n')}
    
    ## SIP-012 Specific Optimizations
    
    ### Database Operations
    - **Leverage 2x Capacity**: Use increased read/write limits for complex operations
    - **Batch Processing**: Group operations to minimize overhead
    - **Efficient Indexing**: Optimize map key structures
    
    ### Storage Efficiency  
    - **Dynamic Lists**: Take advantage of actual-size pricing
    - **Data Packing**: Combine related data into tuples
    - **Lazy Loading**: Defer expensive computations
    
    ### Performance Monitoring
    \`\`\`clarity
    ;; Add performance monitoring to your contracts
    (define-read-only (get-operation-cost)
      {
        estimated-runtime: u50000,
        read-operations: u2,
        write-operations: u1
      }
    )
    \`\`\`
    
    ## Testing Strategy
    1. **Clarinet Performance Tests**: Measure actual costs
    2. **Load Testing**: Test with maximum data sizes
    3. **Benchmark Comparisons**: Before/after optimization
    4. **Production Monitoring**: Track real-world performance
    
    ## Success Metrics
    - [ ] Runtime costs reduced by ${recommendations.filter((r: any) => r.priority === 'high').length * 25}%
    - [ ] Database operations optimized for SIP-012 limits
    - [ ] Storage costs minimized with dynamic sizing
    - [ ] Transaction throughput improved
    - [ ] User experience enhanced with faster confirmations`;
          
        } catch (error) {
          return `❌ Failed to generate optimization recommendations: ${error}`;
        }
      },
    };
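The report header in the handler above is assembled inline inside a template literal. As a standalone sketch of that assembly, the `buildHeader` helper below is hypothetical (the real tool does not define it), but it mirrors the same conditional handling of `targetThroughput` and the bullet-list rendering of `currentIssues`:

```typescript
// Hypothetical sketch of the report-header assembly from the handler above.
// buildHeader is illustrative only; the actual tool inlines this logic.
interface HeaderArgs {
  contractPattern: string;
  currentIssues: string[];
  targetThroughput?: string;
}

function buildHeader(args: HeaderArgs): string {
  const lines = [
    "# SIP-012 Optimization Recommendations",
    "",
    `## Contract Pattern: ${args.contractPattern.toUpperCase()}`,
    "",
  ];
  // targetThroughput is optional, so the line is emitted conditionally.
  if (args.targetThroughput) {
    lines.push(`**Target Performance**: ${args.targetThroughput}`, "");
  }
  lines.push("## Current Issues Analysis");
  lines.push(...args.currentIssues.map((issue) => `- ${issue}`));
  return lines.join("\n");
}

const header = buildHeader({
  contractPattern: "token-contract",
  currentIssues: ["High transfer costs", "Redundant map reads"],
  targetThroughput: "500 tx/block",
});
console.log(header);
```

Omitting `targetThroughput` simply drops the **Target Performance** line, matching the ternary in the handler.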
  • Zod schema defining the input parameters for the tool: contractPattern (enum), currentIssues (array of strings), and optional targetThroughput.
    const OptimizationRecommendationScheme = z.object({
      contractPattern: z.enum([
        "token-contract", "nft-collection", "defi-pool", "dao-governance", 
        "marketplace", "staking-pool", "generic"
      ]).describe("Type of contract pattern"),
      currentIssues: z.array(z.string()).describe("Current performance issues identified"),
      targetThroughput: z.string().optional().describe("Target performance goals"),
    });
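A caller-side sketch of what this schema accepts, written in plain TypeScript so it carries no zod dependency: `validateArgs` below is a hypothetical stand-in for the zod parse, not part of the tool, but it enforces the same three constraints (enum membership, string array, optional string).

```typescript
// Plain-TypeScript stand-in for the zod validation above (illustrative only).
const CONTRACT_PATTERNS = [
  "token-contract", "nft-collection", "defi-pool", "dao-governance",
  "marketplace", "staking-pool", "generic",
] as const;

type ContractPattern = (typeof CONTRACT_PATTERNS)[number];

interface OptimizationArgs {
  contractPattern: ContractPattern;
  currentIssues: string[];
  targetThroughput?: string;
}

function validateArgs(input: unknown): input is OptimizationArgs {
  const obj = input as Record<string, unknown>;
  return (
    typeof obj === "object" && obj !== null &&
    CONTRACT_PATTERNS.includes(obj.contractPattern as ContractPattern) &&
    Array.isArray(obj.currentIssues) &&
    (obj.currentIssues as unknown[]).every((i) => typeof i === "string") &&
    (obj.targetThroughput === undefined ||
      typeof obj.targetThroughput === "string")
  );
}

// A sample argument object that satisfies the schema.
const sample = {
  contractPattern: "defi-pool",
  currentIssues: ["Multiple reads per swap"],
  targetThroughput: "100 swaps/block",
};
console.log(validateArgs(sample)); // → true
```

An unknown `contractPattern` value fails validation here, whereas the tool's own `generatePatternOptimizations` helper only falls back to `"generic"` after zod has already accepted one of the seven enum values.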
  • Registration of the tool in the FastMCP server within the registerTools function.
    server.addTool(generateOptimizationRecommendationsTool);
  • Core helper function that provides pattern-specific optimization recommendations (e.g., for token-contract, nft-collection) used in the tool's execute function.
    function generatePatternOptimizations(pattern: string, issues: string[]) {
      const baseOptimizations = {
        "token-contract": [
          {
            title: "Implement Batch Transfers",
            priority: "high",
            impact: "50-70% cost reduction for multiple transfers",
            effort: "Medium",
            problem: "Individual transfers are expensive",
            solution: "Batch multiple transfers in single transaction",
            code: "(define-public (batch-transfer (transfers (list 25 {recipient: principal, amount: uint})))\n  (fold execute-transfer transfers (ok u0)))",
            benefits: ["Reduced per-transfer overhead", "Better UX for airdrops", "Lower gas costs"],
            sip012Benefits: "Leverages increased write capacity for efficient batch processing"
          },
          {
            title: "Optimize Balance Storage",
            priority: "medium",
            impact: "20-30% storage cost reduction",
            effort: "Low",
            problem: "Separate maps for different user data",
            solution: "Consolidate user data into single map with tuple",
            code: "(define-map user-data principal {balance: uint, last-transfer: uint, flags: uint})",
            benefits: ["Fewer database operations", "Consolidated user data", "Easier maintenance"],
            sip012Benefits: "Reduced read/write operations with tuple-based storage"
          }
        ],
        
        "nft-collection": [
          {
            title: "Dynamic Metadata Storage",
            priority: "high",
            impact: "40-60% metadata storage savings",
            effort: "Medium",
            problem: "Fixed-size metadata allocations waste storage",
            solution: "Use SIP-012 dynamic list sizing for variable metadata",
            code: "(define-map token-metadata uint {name: (string-ascii 64), traits: (list 20 (string-ascii 32))})",
            benefits: ["Pay only for actual metadata size", "Flexible trait system", "Reduced costs"],
            sip012Benefits: "Dynamic list storage assessment based on actual content"
          }
        ],
        
        "defi-pool": [
          {
            title: "State Consolidation",
            priority: "high",
            impact: "30-50% operation cost reduction",
            effort: "High",
            problem: "Multiple state variables require separate reads/writes",
            solution: "Consolidate pool state into single tuple",
            code: "(define-data-var pool-state {reserve-a: uint, reserve-b: uint, k: uint, fees: uint} ...)",
            benefits: ["Atomic state updates", "Consistent data", "Fewer operations"],
            sip012Benefits: "Single read/write operations for complete state updates"
          }
        ],
        
        "generic": [
          {
            title: "Implement Caching Strategy",
            priority: "medium",
            impact: "Variable (20-80% for repeated operations)",
            effort: "Medium",
            problem: "Expensive computations repeated multiple times",
            solution: "Cache computation results in maps",
            code: "(define-map computation-cache uint uint)\n(define-private (cached-compute (input uint))\n  (match (map-get? computation-cache input)\n    cached cached\n    (let ((result (expensive-computation input)))\n      (map-set computation-cache input result)\n      result)))",
            benefits: ["Avoid redundant computations", "Predictable costs", "Better performance"],
            sip012Benefits: "Efficient caching with optimized map operations"
          }
        ]
      };
      
      return baseOptimizations[pattern as keyof typeof baseOptimizations] || baseOptimizations["generic"];
    }
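The last line of the helper gives every unmatched pattern the `"generic"` recommendations. That fallback can be illustrated with a reduced sketch; the two-entry table below is a stand-in for the full map above, not the real data:

```typescript
// Reduced sketch of the pattern-lookup fallback in generatePatternOptimizations.
// The two-entry table stands in for the full optimization map.
const baseOptimizations = {
  "token-contract": [{ title: "Implement Batch Transfers", priority: "high" }],
  "generic": [{ title: "Implement Caching Strategy", priority: "medium" }],
};

function lookupOptimizations(pattern: string) {
  // Unknown keys index to undefined, so || falls back to the generic list.
  return (
    baseOptimizations[pattern as keyof typeof baseOptimizations] ||
    baseOptimizations["generic"]
  );
}

console.log(lookupOptimizations("token-contract")[0].title); // known pattern
console.log(lookupOptimizations("dao-governance")[0].title); // falls back to generic
```

Note the schema accepts seven patterns but the map defines only four, so `dao-governance`, `marketplace`, and `staking-pool` all resolve to the generic recommendations.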
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states that the tool 'generates' recommendations, implying a read-only or advisory operation, but it doesn't clarify whether the tool modifies contracts, requires authentication, has rate limits, or produces structured versus textual output. For a tool with no annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary details. Every word earns its place by specifying the action, target, and context concisely, making it easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of contract optimization, lack of annotations, and no output schema, the description is insufficient. It doesn't explain what the recommendations entail (e.g., code snippets, configuration changes), how they're formatted, or any dependencies on other tools. For a generative tool with three parameters and no structured output, more context is needed for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all parameters well-documented in the schema itself (e.g., 'contractPattern' with enum values, 'currentIssues' as an array of strings, 'targetThroughput' as a string). The description adds no additional parameter semantics beyond what the schema provides, so it meets the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Generate specific optimization recommendations for Clarity contracts based on SIP-012 improvements and best practices.' It specifies the verb ('generate'), resource ('optimization recommendations'), and domain context ('Clarity contracts', 'SIP-012 improvements', 'best practices'). However, it doesn't explicitly differentiate from sibling tools like 'analyze_contract_performance' or 'stacks_clarity_best_practices_prompt', which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions 'SIP-012 improvements and best practices' as a basis, but doesn't specify scenarios, prerequisites, or exclusions. With many sibling tools in the Clarity/Stacks ecosystem (e.g., 'analyze_contract_performance', 'generate_clarity_contract'), the lack of comparative context leaves usage ambiguous.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
