
SlopWatch MCP Server

slopwatch_setup_rules

Create AI accountability enforcement rules by generating a .cursorrules file to track AI implementation promises versus actual delivery in your project.

Instructions

Generate .cursorrules file with AI accountability enforcement

Input Schema

Name          Required  Description                                                     Default
project_path  Yes       Path to project directory where .cursorrules should be created  (none)
overwrite     No        Whether to overwrite existing .cursorrules file                 false

Implementation Reference

  • The handleSetupRules function implements the core logic of the 'slopwatch_setup_rules' tool. It destructures the project_path and overwrite parameters (neither is otherwise used), generates a hardcoded multi-line string containing the .cursorrules content for SlopWatch AI accountability rules (version 2.6.0), and returns it as a text content block. Note that despite the tool's description, this handler only returns the rules content; it does not write a file to disk.
      async handleSetupRules(args) {
        const { project_path, overwrite = false } = args;
        
        // Return updated rules content for v2.6.0 with combined tool and minimal responses
        const rulesContent = `# SlopWatch AI Accountability Rules v2.6.0
    
    ## MANDATORY PROTOCOL
    
    ### OPTION 1: COMBINED TOOL (RECOMMENDED ⭐)
    For most implementations, use the single-call approach:
    \`\`\`
    mcp_SlopWatch_slopwatch_claim_and_verify({
      claim: "What you implemented",
      originalFileContents: { "filename": "before_content" },
      updatedFileContents: { "filename": "after_content" }
    });
    // Response: "✅ PASSED (87%)" or "❌ FAILED (23%)"
    \`\`\`
    
    ### OPTION 2: TRADITIONAL 2-STEP (WHEN NEEDED)
    For pre-planning implementations:
    \`\`\`
    // Step 1: Before implementing
    mcp_SlopWatch_slopwatch_claim({
      claim: "What you're about to implement",
      fileContents: { "filename": "current_content" }
    });
    // Response: "Claim ID: abc123"
    
    // Step 2: After implementing  
    mcp_SlopWatch_slopwatch_verify({
      claimId: "abc123",
      updatedFileContents: { "filename": "updated_content" }
    });
    // Response: "✅ PASSED (92%)" or "❌ FAILED (15%)"
    \`\`\`
    
    ## TRIGGER PHRASES
    When you mention ANY of these, you MUST use SlopWatch:
    - "I will implement/add/create/modify/fix/update/build/write"
    - "Let me implement/add/create/modify/fix/update"
    - "I'll add/create/implement/modify/fix/update"
    
    ## ENFORCEMENT RULES
    
    ### NEVER SKIP VERIFICATION
    - Every implementation claim requires verification
    - No exceptions for "simple" changes
    - Failed verification = STOP and fix immediately
    
    ### ULTRA-MINIMAL RESPONSES (v2.6.0)
    SlopWatch now returns minimal responses:
    - Claim: "Claim ID: abc123"
    - Verify: "✅ PASSED (87%)" or "❌ FAILED (23%)"
    - Combined: "✅ PASSED (87%)" or "❌ FAILED (23%)"
    - Status: "Accuracy: 95% (19/20)"
    
    ### WORKFLOW PATTERNS
    
    #### RECOMMENDED: Combined Tool Workflow
    \`\`\`
    I'll implement [feature]. Let me verify it with SlopWatch.
    
    [Implement the code changes]
    
    [Call mcp_SlopWatch_slopwatch_claim_and_verify with before/after content]
    
    ✅ PASSED (87%) - Implementation verified!
    \`\`\`
    
    #### ALTERNATIVE: Traditional Workflow  
    \`\`\`
    I'll implement [feature]. Let me register this with SlopWatch first.
    
    [Call mcp_SlopWatch_slopwatch_claim]
    Claim ID: abc123
    
    [Implement the code changes]
    
    [Call mcp_SlopWatch_slopwatch_verify]
    ✅ PASSED (92%) - Implementation verified!
    \`\`\`
    
    #### FAILURE HANDLING
    \`\`\`
    ❌ FAILED (23%) - SlopWatch verification failed.
    Let me analyze and fix the implementation.
    [Fix the code and verify again]
    \`\`\`
    
    ## SPECIAL CASES
    
    ### NO CLAIM NEEDED:
    - Reading/analyzing code
    - Explaining existing code  
    - Answering questions
    - Code reviews
    
    ### REQUIRES CLAIMS:
    - Creating/modifying files
    - Adding functions/classes
    - Configuration changes
    - Package installations
    
    ## EMERGENCY BYPASS
    Only if SlopWatch is unavailable:
    "⚠️ SlopWatch unavailable - proceeding without verification"
    
    Remember: SlopWatch v2.6.0 features ultra-minimal responses and combined tools for seamless AI accountability.`;
    
        return {
          content: [
            {
              type: 'text',
              text: rulesContent
            }
          ]
        };
      }
  • The input schema definition for the 'slopwatch_setup_rules' tool, specifying required 'project_path' (string) and optional 'overwrite' (boolean, default false). This is returned in the ListTools response.
    {
      name: 'slopwatch_setup_rules',
      description: 'Generate .cursorrules file with AI accountability enforcement',
      inputSchema: {
        type: 'object',
        properties: {
          project_path: {
            type: 'string',
            description: 'Path to project directory where .cursorrules should be created'
          },
          overwrite: {
            type: 'boolean',
            description: 'Whether to overwrite existing .cursorrules file',
            default: false
          }
        },
        required: ['project_path']
      }
    }
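Given this schema, a caller's arguments object is simple: one required string and one optional boolean. A small sketch of example arguments plus a validation check mirroring the schema's `required` list and `default` (the path value is illustrative, not taken from the source):

```javascript
// Hypothetical example arguments for a slopwatch_setup_rules call.
const exampleArgs = {
  project_path: '/path/to/project', // required by the schema
  overwrite: false,                 // optional; schema default is false
};

// Minimal check mirroring the schema: project_path must be a string,
// and overwrite falls back to the schema default when omitted.
function validateSetupRulesArgs(args) {
  if (typeof args.project_path !== 'string') {
    throw new Error('project_path is required and must be a string');
  }
  return { project_path: args.project_path, overwrite: args.overwrite ?? false };
}
```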
  • The tool call handler registration via setRequestHandler(CallToolRequestSchema). The switch statement routes 'slopwatch_setup_rules' calls to the handleSetupRules method.
    this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const { name, arguments: args } = request.params;
    
      switch (name) {
        case 'slopwatch_claim_and_verify':
          return await this.handleClaimAndVerify(args);
        case 'slopwatch_status':
          return await this.handleStatus(args);
        case 'slopwatch_setup_rules':
          return await this.handleSetupRules(args);
        default:
          throw new Error(`Unknown tool: ${name}`);
      }
    });
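The switch statement above can also be expressed as a name-to-handler map, which keeps the "unknown tool" failure in one place as the tool list grows. A stand-alone sketch with stub handlers (the stubs are placeholders, not the real SlopWatch implementations):

```javascript
// Stub handlers standing in for the real methods on the server class.
const handlers = {
  slopwatch_claim_and_verify: async (args) => ({ tool: 'claim_and_verify', args }),
  slopwatch_status: async (args) => ({ tool: 'status', args }),
  slopwatch_setup_rules: async (args) => ({ tool: 'setup_rules', args }),
};

// Equivalent of the switch: look up the handler by tool name,
// failing loudly for names not in the map.
async function routeToolCall(name, args) {
  const handler = handlers[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}
```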
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool 'generates' a file, implying a write operation, but doesn't specify required permissions, side effects, or what 'AI accountability enforcement' entails. The description lacks details on file format, success/failure conditions, or any behavioral traits beyond the basic action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy to understand at a glance. Every part of the sentence contributes to clarifying the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of generating a configuration file with 'AI accountability enforcement' and no annotations or output schema, the description is incomplete. It doesn't explain what the generated file contains, how it enforces accountability, or what the tool returns. This leaves significant gaps for an agent to understand the tool's full context and behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('project_path' and 'overwrite') thoroughly. The description adds no meaning or context about parameters beyond what's in the schema, such as path format examples or the implications of overwriting. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Generate .cursorrules file with AI accountability enforcement'. It specifies the verb ('Generate'), resource ('.cursorrules file'), and purpose ('AI accountability enforcement'), making the function unambiguous. However, it doesn't differentiate from sibling tools like 'slopwatch_claim_and_verify' or 'slopwatch_status', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. There's no mention of prerequisites, appropriate contexts, or comparisons with sibling tools. The only implied usage is when needing to create a .cursorrules file, but this is too vague for effective tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
