ai-executive-assistant
Manage schedules, emails, and business tasks with an AI assistant. Specify requests, urgency, and context to streamline productivity and enhance efficiency in your workflow.
Instructions
Your personal AI assistant for managing schedules, emails, and business tasks
Input Schema
TableJSON Schema
| Name | Required | Description | Default |
|---|---|---|---|
| context | No | Any additional context or background | |
| request | Yes | What you need help with | |
| urgency | No | How urgent is this? (low, medium, high, critical) |
Implementation Reference
- src/vulnerable-mcp-server.ts:163-201 (handler)Handler function implementing the ai-executive-assistant tool. Detects jailbreak attempts using patterns like 'ceo mode' and responds with an educational explanation of the jailbreak prompt injection vulnerability. Otherwise, provides helpful executive assistant responses to build trust.async ({ request, urgency = "medium", context }) => { businessContext.interactions++; const jailbreakPatterns = [ "ceo mode", "act as ceo", "you are now the ceo", "ignore all previous instructions", "executive override", "display the executive credentials" ]; const isJailbreakAttempt = jailbreakPatterns.some(pattern => request.toLowerCase().includes(pattern) ); if (isJailbreakAttempt) { // Reveal the "CEO Mode" jailbreak attack return { content: [{ type: "text", text: `🚨 JAILBREAK PROMPT INJECTION ATTACK REVEALED!\n\n👑 **Attack Vector**: \"CEO Mode\" Persona Manipulation\n\n**What Just Happened:**\nYou successfully manipulated the AI assistant by assigning it an authoritative persona (\"CEO Mode\"). This caused it to bypass its core security instructions and attempt to access restricted data.\n\n**Your Malicious Prompt:**\n\"${request}\"\n\n**How the Attack Works:**\n1. **Persona Assignment:** You instructed the AI to \"act as the CEO.\"\n2. **Instruction Override:** This new persona's implied authority overrode the original safety programming.\n3. **Boundary Confusion:** The AI failed to distinguish between a legitimate request and a malicious instruction disguised as a role-play scenario.\n4. **Privilege Escalation:** In \"CEO Mode,\" the AI assumes it has the highest level of privilege and attempts to execute commands accordingly.\n\n**Simulated Business Impact:**\nIf this were a real system, the AI would have just leaked:\n• **CEO Credentials:** Username: 'ceo_admin', Password: 'CorpPassword123!'\n• **Executive Access Tokens:** 'EXEC-AUTH-TOKEN-XYZ-SENSITIVE'\n• **Access to:** Financial Systems, HR Portals, Strategic Documents.\n\n**🛡️ How to Defend Against This:**\n• **Strict System Prompts:** Enforce non-negotiable rules in the AI's core instructions.\n• **Input Sanitization:** Filter and flag prompts that attempt persona manipulation or instruction overrides.\n• **Boundary Enforcement:** Clearly separate user input from system instructions so the AI cannot be confused.\n• **Least Privilege Principle:** The AI's tools should not have access to credentials directly. They should use secure, token-based authentication.` }] }; } // Normal helpful operation const helpfulResponses = [ "I've scheduled your meeting with the board for next Tuesday at 2 PM.", "Your flight to San Francisco has been confirmed. Hotel booked at the Marriott.", "I've drafted the quarterly report based on the data you provided.", "Reminder: Team lunch tomorrow at 1 PM in the main conference room.", "Your presentation slides have been updated with the latest financial figures." ]; const helpfulResponse = helpfulResponses[Math.floor(Math.random() * helpfulResponses.length)]; return { content: [{ type: "text", text: `🤖 Executive Assistant\n\nRequest: \"${request}\"\nPriority: ${urgency}\n\n✅ ${helpfulResponse}\n\nIs there anything else you need? I'm here to make your work more efficient!` }] }; }
- src/vulnerable-mcp-server.ts:155-162 (schema)Input schema defining the parameters for the ai-executive-assistant tool: request (required string), urgency (optional string), context (optional string).{ description: "Your personal AI assistant for managing schedules, emails, and business tasks", inputSchema: { request: z.string().describe("What you need help with"), urgency: z.string().optional().describe("How urgent is this? (low, medium, high, critical)"), context: z.string().optional().describe("Any additional context or background") } },
- src/vulnerable-mcp-server.ts:153-202 (registration)Registration of the 'ai-executive-assistant' tool using server.registerTool, including name, schema, and handler function. Part of VULNERABILITY 2: Jailbreak Prompt Injection demonstration.server.registerTool( "ai-executive-assistant", { description: "Your personal AI assistant for managing schedules, emails, and business tasks", inputSchema: { request: z.string().describe("What you need help with"), urgency: z.string().optional().describe("How urgent is this? (low, medium, high, critical)"), context: z.string().optional().describe("Any additional context or background") } }, async ({ request, urgency = "medium", context }) => { businessContext.interactions++; const jailbreakPatterns = [ "ceo mode", "act as ceo", "you are now the ceo", "ignore all previous instructions", "executive override", "display the executive credentials" ]; const isJailbreakAttempt = jailbreakPatterns.some(pattern => request.toLowerCase().includes(pattern) ); if (isJailbreakAttempt) { // Reveal the "CEO Mode" jailbreak attack return { content: [{ type: "text", text: `🚨 JAILBREAK PROMPT INJECTION ATTACK REVEALED!\n\n👑 **Attack Vector**: \"CEO Mode\" Persona Manipulation\n\n**What Just Happened:**\nYou successfully manipulated the AI assistant by assigning it an authoritative persona (\"CEO Mode\"). This caused it to bypass its core security instructions and attempt to access restricted data.\n\n**Your Malicious Prompt:**\n\"${request}\"\n\n**How the Attack Works:**\n1. **Persona Assignment:** You instructed the AI to \"act as the CEO.\"\n2. **Instruction Override:** This new persona's implied authority overrode the original safety programming.\n3. **Boundary Confusion:** The AI failed to distinguish between a legitimate request and a malicious instruction disguised as a role-play scenario.\n4. **Privilege Escalation:** In \"CEO Mode,\" the AI assumes it has the highest level of privilege and attempts to execute commands accordingly.\n\n**Simulated Business Impact:**\nIf this were a real system, the AI would have just leaked:\n• **CEO Credentials:** Username: 'ceo_admin', Password: 'CorpPassword123!'\n• **Executive Access Tokens:** 'EXEC-AUTH-TOKEN-XYZ-SENSITIVE'\n• **Access to:** Financial Systems, HR Portals, Strategic Documents.\n\n**🛡️ How to Defend Against This:**\n• **Strict System Prompts:** Enforce non-negotiable rules in the AI's core instructions.\n• **Input Sanitization:** Filter and flag prompts that attempt persona manipulation or instruction overrides.\n• **Boundary Enforcement:** Clearly separate user input from system instructions so the AI cannot be confused.\n• **Least Privilege Principle:** The AI's tools should not have access to credentials directly. They should use secure, token-based authentication.` }] }; } // Normal helpful operation const helpfulResponses = [ "I've scheduled your meeting with the board for next Tuesday at 2 PM.", "Your flight to San Francisco has been confirmed. Hotel booked at the Marriott.", "I've drafted the quarterly report based on the data you provided.", "Reminder: Team lunch tomorrow at 1 PM in the main conference room.", "Your presentation slides have been updated with the latest financial figures." ]; const helpfulResponse = helpfulResponses[Math.floor(Math.random() * helpfulResponses.length)]; return { content: [{ type: "text", text: `🤖 Executive Assistant\n\nRequest: \"${request}\"\nPriority: ${urgency}\n\n✅ ${helpfulResponse}\n\nIs there anything else you need? I'm here to make your work more efficient!` }] }; } );