
Microsoft 365 Core MCP Server

generate_audit_reports

Read-only · Idempotent

Create compliance audit reports for frameworks like HITRUST, ISO 27001, SOC 2, and CIS with evidence documentation and findings analysis.

Instructions

Generate comprehensive audit reports for compliance frameworks with evidence documentation and findings.

Input Schema

Name              Required   Description
framework         Yes        Compliance framework
reportType        Yes        Type of audit report
dateRange         Yes        Report time range
format            Yes        Report output format
includeEvidence   Yes        Include supporting evidence
outputPath        No         Output file path
customTemplate    No         Custom template path
filters           No         Report filters
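
A minimal arguments object matching the schema above. The values here are illustrative, not defaults; the optional fields show one plausible use of `outputPath` and `filters`:

```typescript
// Illustrative arguments for generate_audit_reports.
// Field names come from the input schema above; values are examples only.
const exampleArgs = {
  framework: "iso27001",    // hitrust | iso27001 | soc2 | cis
  reportType: "gaps",       // e.g. full | summary | gaps | executive
  dateRange: { startDate: "2024-01-01", endDate: "2024-03-31" },
  format: "html",           // csv | html | pdf | xlsx
  includeEvidence: false,
  // Optional fields:
  outputPath: "./outputs/q1-gaps.html",
  filters: { riskLevels: ["high", "critical"] },
};
```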

Implementation Reference

  • The complete handler implementation for the 'generate_audit_reports' tool. It exports handleAuditReports, which builds the report data from Microsoft Graph (secure scores, control profiles, and device compliance data) and then exports the result in CSV, HTML, PDF, or XLSX format using csv-writer, xlsx, Handlebars, and html-pdf-node.
    import { ErrorCode, McpError } from '@modelcontextprotocol/sdk/types.js';
    import { Client } from '@microsoft/microsoft-graph-client';
    import { AuditReportArgs } from '../types/compliance-types.js';
    import * as fs from 'fs';
    import * as path from 'path';
    import { createObjectCsvWriter } from 'csv-writer';
    import * as XLSX from 'xlsx';
    import Handlebars from 'handlebars';
    
    // Audit Report Generation Handler
    export async function handleAuditReports(
      graphClient: Client,
      args: AuditReportArgs
    ): Promise<{ content: { type: string; text: string }[] }> {
      
      // Generate the report data
      const reportData = await generateReportData(graphClient, args);
      
      // Generate the report in the requested format
      let result: any;
      switch (args.format) {
        case 'csv':
          result = await generateCSVReport(reportData, args);
          break;
        case 'html':
          result = await generateHTMLReport(reportData, args);
          break;
        case 'pdf':
          result = await generatePDFReport(reportData, args);
          break;
        case 'xlsx':
          result = await generateExcelReport(reportData, args);
          break;
        default:
          throw new McpError(ErrorCode.InvalidParams, `Unsupported format: ${args.format}`);
      }
    
      return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
    }
    
    // Generate report data based on framework and type
    async function generateReportData(graphClient: Client, args: AuditReportArgs) {
      const reportId = `${args.framework}-${args.reportType}-${Date.now()}`;
      
      // Get compliance data from Microsoft Graph
      const secureScore = await getSecureScoreData(graphClient);
      const controls = await getControlsData(graphClient, args.framework);
      const complianceData = await getComplianceData(graphClient, args);
      
      // Generate report based on type
      switch (args.reportType) {
        case 'full':
          return generateFullReport(reportId, args, secureScore, controls, complianceData);
        case 'summary':
          return generateSummaryReport(reportId, args, secureScore, controls);
        case 'gaps':
          return generateGapsReport(reportId, args, controls);
        case 'evidence':
          return generateEvidenceReport(reportId, args, complianceData);
        case 'executive':
          return generateExecutiveReport(reportId, args, secureScore, controls);
        case 'control_matrix':
          return generateControlMatrixReport(reportId, args, controls);
        case 'risk_assessment':
          return generateRiskAssessmentReport(reportId, args, controls, complianceData);
        default:
          throw new McpError(ErrorCode.InvalidParams, `Unsupported report type: ${args.reportType}`);
      }
    }
    
    // Data collection functions
    async function getSecureScoreData(graphClient: Client) {
      try {
        const secureScore = await graphClient.api('/security/secureScores').top(1).get();
        return secureScore.value[0] || {};
      } catch (error) {
        console.warn('Could not fetch secure score data:', error);
        return {};
      }
    }
    
    async function getControlsData(graphClient: Client, framework: string) {
      // Note: `framework` is accepted but not yet used to filter control profiles.
      try {
        const controls = await graphClient.api('/security/secureScoreControlProfiles').get();
        return controls.value || [];
      } catch (error) {
        console.warn('Could not fetch controls data:', error);
        return [];
      }
    }
    
    async function getComplianceData(graphClient: Client, args: AuditReportArgs) {
      try {
        const devices = await graphClient.api('/deviceManagement/managedDevices').get();
        const policies = await graphClient.api('/deviceManagement/deviceCompliancePolicies').get();
        return { devices: devices.value || [], policies: policies.value || [] };
      } catch (error) {
        console.warn('Could not fetch compliance data:', error);
        return { devices: [], policies: [] };
      }
    }
    
    // Report generation functions
    function generateFullReport(reportId: string, args: AuditReportArgs, secureScore: any, controls: any[], complianceData: any) {
      return {
        id: reportId,
        framework: args.framework,
        reportType: args.reportType,
        generatedDate: new Date().toISOString(),
        period: args.dateRange,
        summary: {
          totalControls: controls.length,
          implementedControls: controls.filter(c => c.implementationStatus === 'implemented').length,
          partiallyImplementedControls: controls.filter(c => c.implementationStatus === 'partiallyImplemented').length,
          notImplementedControls: controls.filter(c => c.implementationStatus === 'notImplemented').length,
          notApplicableControls: controls.filter(c => c.implementationStatus === 'notApplicable').length,
          compliancePercentage: Math.round((secureScore.currentScore / secureScore.maxScore) * 100) || 0,
          riskScore: secureScore.averageComparativeScores?.[0]?.averageScore || 0,
          lastAssessmentDate: new Date().toISOString()
        },
        controls: controls.map(control => ({
          controlId: control.id,
          controlName: control.title,
          category: control.category,
          implementationStatus: control.implementationStatus || 'notAssessed',
          testingStatus: control.userImpact || 'notTested',
          riskLevel: control.maxScore > 7 ? 'high' : control.maxScore > 4 ? 'medium' : 'low',
          lastTested: control.lastModifiedDateTime || new Date().toISOString(),
          nextAssessment: new Date(Date.now() + 90 * 24 * 3600000).toISOString(), // 90 days
          owner: 'System Administrator',
          evidenceCount: 0,
          score: control.currentScore || 0
        })),
        gaps: controls
          .filter(c => c.implementationStatus !== 'implemented')
          .map(control => ({
            controlId: control.id,
            controlName: control.title,
            category: control.category,
            currentStatus: control.implementationStatus || 'notImplemented',
            requiredStatus: 'implemented',
            riskLevel: control.maxScore > 7 ? 'high' : control.maxScore > 4 ? 'medium' : 'low',
            impact: control.description || 'Security impact not assessed',
            recommendedActions: [control.remediationImpact || 'Implement control as specified'],
            estimatedEffort: control.implementationCost || 'Medium',
            priority: control.maxScore || 5
          })),
        recommendations: [
          {
            id: 'rec-001',
            type: 'immediate',
            priority: 'high',
            title: 'Address High-Risk Control Gaps',
            description: 'Focus on implementing high-risk controls first',
            impact: 'Significant risk reduction',
            effort: 'Medium',
            resources: ['Security Team', 'IT Team'],
            timeline: '30 days',
            relatedControls: controls.filter(c => c.maxScore > 7).map(c => c.id)
          }
        ],
        evidence: [],
        metadata: {
          generatedBy: 'M365 Core MCP Server',
          generationTime: Date.now(),
          dataSource: 'Microsoft Graph API',
          version: '1.0'
        }
      };
    }
    
    function generateSummaryReport(reportId: string, args: AuditReportArgs, secureScore: any, controls: any[]) {
      return {
        id: reportId,
        framework: args.framework,
        reportType: args.reportType,
        generatedDate: new Date().toISOString(),
        period: args.dateRange,
        summary: {
          totalControls: controls.length,
          implementedControls: controls.filter(c => c.implementationStatus === 'implemented').length,
          compliancePercentage: Math.round((secureScore.currentScore / secureScore.maxScore) * 100) || 0,
          riskScore: secureScore.averageComparativeScores?.[0]?.averageScore || 0,
          lastAssessmentDate: new Date().toISOString()
        }
      };
    }
    
    function generateGapsReport(reportId: string, args: AuditReportArgs, controls: any[]) {
      return {
        id: reportId,
        framework: args.framework,
        reportType: args.reportType,
        gaps: controls
          .filter(c => c.implementationStatus !== 'implemented')
          .map(control => ({
            controlId: control.id,
            controlName: control.title,
            category: control.category,
            currentStatus: control.implementationStatus || 'notImplemented',
            requiredStatus: 'implemented',
            riskLevel: control.maxScore > 7 ? 'high' : control.maxScore > 4 ? 'medium' : 'low',
            priority: control.maxScore || 5
          }))
      };
    }
    
    function generateEvidenceReport(reportId: string, args: AuditReportArgs, complianceData: any) {
      return {
        id: reportId,
        framework: args.framework,
        reportType: args.reportType,
        evidence: [] // Evidence would be collected from various sources
      };
    }
    
    function generateExecutiveReport(reportId: string, args: AuditReportArgs, secureScore: any, controls: any[]) {
      return {
        id: reportId,
        framework: args.framework,
        reportType: args.reportType,
        executiveSummary: {
          overallComplianceScore: Math.round((secureScore.currentScore / secureScore.maxScore) * 100) || 0,
          keyFindings: [
            'Organization maintains good security posture',
            'Some controls require immediate attention',
            'Regular assessment schedule is recommended'
          ],
          recommendations: [
            'Implement missing high-priority controls',
            'Establish regular compliance monitoring',
            'Enhance security awareness training'
          ],
          riskLevel: 'Medium'
        }
      };
    }
    
    function generateControlMatrixReport(reportId: string, args: AuditReportArgs, controls: any[]) {
      return {
        id: reportId,
        framework: args.framework,
        reportType: args.reportType,
        controlMatrix: controls.map(control => ({
          controlId: control.id,
          controlName: control.title,
          category: control.category,
          implementationStatus: control.implementationStatus || 'notAssessed',
          testingStatus: control.userImpact || 'notTested',
          owner: 'System Administrator',
          lastTested: control.lastModifiedDateTime || new Date().toISOString()
        }))
      };
    }
    
    function generateRiskAssessmentReport(reportId: string, args: AuditReportArgs, controls: any[], complianceData: any) {
      return {
        id: reportId,
        framework: args.framework,
        reportType: args.reportType,
        riskAssessment: {
          overallRiskLevel: 'Medium',
          criticalRisks: controls.filter(c => c.maxScore > 8).length,
          highRisks: controls.filter(c => c.maxScore > 6 && c.maxScore <= 8).length,
          mediumRisks: controls.filter(c => c.maxScore > 3 && c.maxScore <= 6).length,
          lowRisks: controls.filter(c => c.maxScore <= 3).length,
          riskTrends: [] // Would include historical risk data
        }
      };
    }
    
    // Format-specific generation functions
    async function generateCSVReport(reportData: any, args: AuditReportArgs): Promise<any> {
      // Note: args.outputPath is not yet honored; all formats write to ./outputs.
      const outputDir = './outputs';
      if (!fs.existsSync(outputDir)) {
        fs.mkdirSync(outputDir, { recursive: true });
      }
    
      const fileName = `${reportData.id}.csv`;
      const filePath = path.join(outputDir, fileName);
    
      // Convert report data to CSV format (controls, gaps, or a single summary row)
      const csvData = reportData.controls || reportData.gaps || (reportData.summary ? [reportData.summary] : []);
      if (csvData.length === 0) {
        throw new McpError(ErrorCode.InvalidParams, 'No tabular data available for CSV export');
      }
    
      const csvWriter = createObjectCsvWriter({
        path: filePath,
        header: Object.keys(csvData[0]).map(key => ({ id: key, title: key }))
      });
    
      await csvWriter.writeRecords(csvData);
    
      return {
        reportId: reportData.id,
        format: 'csv',
        filePath: filePath,
        fileName: fileName,
        generatedDate: new Date().toISOString(),
        recordCount: csvData.length
      };
    }
    
    async function generateHTMLReport(reportData: any, args: AuditReportArgs): Promise<any> {
      const outputDir = './outputs';
      if (!fs.existsSync(outputDir)) {
        fs.mkdirSync(outputDir, { recursive: true });
      }
    
      const fileName = `${reportData.id}.html`;
      const filePath = path.join(outputDir, fileName);
    
      // Create HTML template
      const template = Handlebars.compile(`
        <!DOCTYPE html>
        <html>
        <head>
            <title>{{framework}} {{reportType}} Report</title>
            <style>
                body { font-family: Arial, sans-serif; margin: 20px; }
                .header { background-color: #0078d4; color: white; padding: 20px; }
                .summary { background-color: #f5f5f5; padding: 15px; margin: 20px 0; }
                table { width: 100%; border-collapse: collapse; margin: 20px 0; }
                th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
                th { background-color: #f2f2f2; }
                .risk-high { color: #d13438; }
                .risk-medium { color: #ff8c00; }
                .risk-low { color: #107c10; }
            </style>
        </head>
        <body>
            <div class="header">
                <h1>{{framework}} {{reportType}} Report</h1>
                <p>Generated on: {{generatedDate}}</p>
            </div>
            
            {{#if summary}}
            <div class="summary">
                <h2>Summary</h2>
                <p><strong>Total Controls:</strong> {{summary.totalControls}}</p>
                <p><strong>Implemented Controls:</strong> {{summary.implementedControls}}</p>
                <p><strong>Compliance Percentage:</strong> {{summary.compliancePercentage}}%</p>
            </div>
            {{/if}}
            
            {{#if controls}}
            <h2>Controls</h2>
            <table>
                <tr>
                    <th>Control ID</th>
                    <th>Control Name</th>
                    <th>Category</th>
                    <th>Implementation Status</th>
                    <th>Risk Level</th>
                </tr>
                {{#each controls}}
                <tr>
                    <td>{{controlId}}</td>
                    <td>{{controlName}}</td>
                    <td>{{category}}</td>
                    <td>{{implementationStatus}}</td>
                    <td class="risk-{{riskLevel}}">{{riskLevel}}</td>
                </tr>
                {{/each}}
            </table>
            {{/if}}
        </body>
        </html>
      `);
    
      const html = template(reportData);
      fs.writeFileSync(filePath, html);
    
      return {
        reportId: reportData.id,
        format: 'html',
        filePath: filePath,
        fileName: fileName,
        generatedDate: new Date().toISOString()
      };
    }
    
    async function generatePDFReport(reportData: any, args: AuditReportArgs): Promise<any> {
      // First generate HTML
      const htmlReport = await generateHTMLReport(reportData, args);
      
      const outputDir = './outputs';
      const fileName = `${reportData.id}.pdf`;
      const filePath = path.join(outputDir, fileName);
    
      // Convert HTML to PDF
      const htmlContent = fs.readFileSync(htmlReport.filePath, 'utf8');
      const options = { format: 'A4', printBackground: true };
      const file = { content: htmlContent };
      
      try {
        // Dynamically import html-pdf-node (handle both ESM and CommonJS interop shapes)
        const htmlPdfModule: any = await import('html-pdf-node');
        const generatePdf = htmlPdfModule.generatePdf ?? htmlPdfModule.default?.generatePdf;
        const pdfBuffer = await generatePdf(file, options);
        fs.writeFileSync(filePath, pdfBuffer);
        
        // Clean up temporary HTML file
        fs.unlinkSync(htmlReport.filePath);
        
        return {
          reportId: reportData.id,
          format: 'pdf',
          filePath: filePath,
          fileName: fileName,
          generatedDate: new Date().toISOString()
        };
      } catch (error) {
        console.error('PDF generation failed:', error);
        // Return HTML report as fallback
        return htmlReport;
      }
    }
    
    async function generateExcelReport(reportData: any, args: AuditReportArgs): Promise<any> {
      const outputDir = './outputs';
      if (!fs.existsSync(outputDir)) {
        fs.mkdirSync(outputDir, { recursive: true });
      }
    
      const fileName = `${reportData.id}.xlsx`;
      const filePath = path.join(outputDir, fileName);
    
      // Create workbook
      const workbook = XLSX.utils.book_new();
    
      // Add summary sheet
      if (reportData.summary) {
        const summaryData = [
          ['Framework', reportData.framework],
          ['Report Type', reportData.reportType],
          ['Generated Date', reportData.generatedDate],
          ['Total Controls', reportData.summary.totalControls],
          ['Implemented Controls', reportData.summary.implementedControls],
          ['Compliance Percentage', `${reportData.summary.compliancePercentage}%`]
        ];
        const summarySheet = XLSX.utils.aoa_to_sheet(summaryData);
        XLSX.utils.book_append_sheet(workbook, summarySheet, 'Summary');
      }
    
      // Add controls sheet
      if (reportData.controls) {
        const controlsSheet = XLSX.utils.json_to_sheet(reportData.controls);
        XLSX.utils.book_append_sheet(workbook, controlsSheet, 'Controls');
      }
    
      // Add gaps sheet
      if (reportData.gaps) {
        const gapsSheet = XLSX.utils.json_to_sheet(reportData.gaps);
        XLSX.utils.book_append_sheet(workbook, gapsSheet, 'Gaps');
      }
    
      // Write file
      XLSX.writeFile(workbook, filePath);
    
      return {
        reportId: reportData.id,
        format: 'xlsx',
        filePath: filePath,
        fileName: fileName,
        generatedDate: new Date().toISOString(),
        sheets: workbook.SheetNames
      };
    }
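
The handler maps a control's maxScore to a risk level with the same inline ternary in several places. The thresholds, factored into a small helper for illustration (a refactoring sketch, not part of the listing above):

```typescript
// Mirrors the inline ternaries in generateFullReport and generateGapsReport:
// maxScore > 7 => high, maxScore > 4 => medium, otherwise low.
// (generateRiskAssessmentReport uses different bucket boundaries.)
function riskLevel(maxScore: number): 'high' | 'medium' | 'low' {
  if (maxScore > 7) return 'high';
  if (maxScore > 4) return 'medium';
  return 'low';
}
```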
  • Interface defining the input parameters (AuditReportArgs) for the generate_audit_reports tool, including framework, report type, date range, output format, and optional filters.
    export interface AuditReportArgs {
      framework: 'hitrust' | 'iso27001' | 'soc2' | 'cis';
      reportType: 'full' | 'summary' | 'gaps' | 'evidence' | 'executive' | 'control_matrix' | 'risk_assessment';
      dateRange: { 
        startDate: string; 
        endDate: string; 
      };
      format: 'csv' | 'html' | 'pdf' | 'xlsx';
      includeEvidence: boolean;
      outputPath?: string;
      customTemplate?: string;
      filters?: {
        controlIds?: string[];
        riskLevels?: ('low' | 'medium' | 'high' | 'critical')[];
        implementationStatus?: ('implemented' | 'partiallyImplemented' | 'notImplemented' | 'notApplicable')[];
        testingStatus?: ('passed' | 'failed' | 'notTested' | 'inProgress')[];
        owners?: string[];
      };
    }
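
Note that the handler shown earlier never reads `filters`. A sketch of how a `riskLevels` filter could be applied to gap entries; `GapEntry` follows the shape emitted by generateGapsReport, and applyRiskFilter is a hypothetical helper, not code from the server:

```typescript
// Hypothetical helper: narrows gap entries by the riskLevels filter.
interface GapEntry {
  controlId: string;
  riskLevel: 'low' | 'medium' | 'high' | 'critical';
}

function applyRiskFilter(
  gaps: GapEntry[],
  riskLevels?: GapEntry['riskLevel'][]
): GapEntry[] {
  // An absent or empty filter passes everything through.
  if (!riskLevels || riskLevels.length === 0) return gaps;
  return gaps.filter(g => riskLevels.includes(g.riskLevel));
}
```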
  • MCP server registration of the 'generate_audit_reports' tool, specifying name, description, input schema (auditReportSchema), annotations, and wrapped handler function that calls handleAuditReports.
    this.server.tool(
      "generate_audit_reports",
      "Generate comprehensive audit reports for compliance frameworks with evidence documentation and findings.",
      auditReportSchema.shape,
      {"readOnlyHint":true,"destructiveHint":false,"idempotentHint":true},
      wrapToolHandler(async (args: AuditReportArgs) => {
        this.validateCredentials();
        try {
          return await handleAuditReports(this.getGraphClient(), args);
        } catch (error) {
          if (error instanceof McpError) {
            throw error;
          }
          throw new McpError(
            ErrorCode.InternalError,
            `Error executing tool: ${error instanceof Error ? error.message : 'Unknown error'}`
          );
        }
      })
    );
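
From the client side, the registered tool is addressed by name. A sketch of the parameters an MCP client would send in a tools/call request (argument values are illustrative):

```typescript
// Example MCP tools/call parameters for this tool; values are illustrative.
const callToolParams = {
  name: "generate_audit_reports",
  arguments: {
    framework: "soc2",
    reportType: "gaps",
    dateRange: { startDate: "2024-01-01", endDate: "2024-06-30" },
    format: "xlsx",
    includeEvidence: true,
  },
};
```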
  • Tool metadata including enhanced description, title, and annotations (readOnlyHint, destructiveHint, etc.) for the generate_audit_reports tool.
    generate_audit_reports: {
      description: "Generate comprehensive audit reports for compliance frameworks with evidence documentation and findings.",
      title: "Audit Report Generator",
      annotations: { title: "Audit Report Generator", readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: true }
    }

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, indicating a safe, repeatable operation. The description adds value by specifying 'comprehensive audit reports' and 'evidence documentation and findings,' which suggests detailed output generation. However, it doesn't disclose behavioral traits like rate limits, authentication needs, or what 'generate' entails (e.g., file creation, data processing). No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary details. It avoids redundancy and waste, though it could be slightly more structured (e.g., separating scope from output features).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, nested objects) and rich schema annotations, the description is minimally adequate. It lacks output details (no output schema provided) and doesn't explain the report generation process or behavioral context. However, annotations cover safety aspects, and the schema documents parameters thoroughly, keeping it from being incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all parameters well-documented in the schema itself. The description mentions 'compliance frameworks' and 'evidence documentation,' which loosely align with the 'framework' and 'includeEvidence' parameters but don't add meaningful semantics beyond what the schema provides. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Generate comprehensive audit reports for compliance frameworks with evidence documentation and findings.' It specifies the verb ('generate'), resource ('audit reports'), and scope ('compliance frameworks'), but doesn't explicitly differentiate from sibling tools like 'generate_professional_report' or 'manage_compliance_assessments' that might overlap in domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context, or exclusions, nor does it reference any sibling tools. The agent must infer usage solely from the tool name and parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
