
PagerDuty MCP Server

by wpfleger96
incidents_parsed.json (5.08 kB)
[ { "id": "123", "incident_number": 123, "title": "Incident 123", "status": "resolved", "urgency": "high", "created_at": "2025-02-17T23:19:05Z", "updated_at": "2025-02-17T23:27:00Z", "resolved_at": "2025-02-17T23:27:00Z", "service": { "id": "123" }, "teams": [ { "id": "PX4X5QR", "summary": "Team 123" } ], "alert_counts": { "all": 1, "resolved": 1, "triggered": 0 }, "summary": "REDACTED", "description": "REDACTED", "escalation_policy": { "id": "Escalation policy 123", "summary": "REDACTED" }, "last_status_change_at": "2025-02-17T23:27:00Z", "last_status_change_by": { "id": "123", "summary": "Service 123" }, "body_details": { "body": "The number of 200 responses is anomalous compared to the same 30-minute window a week ago for the ServiceName API.\nOncall engineer should first look into datadog: https://example.datadoghq.com/dashboard/abc-123/service-metrics?fromUser=false&fullscreen_end_ts=1720629828993&fullscreen_paused=false&fullscreen_refresh_mode=sliding&fullscreen_section=overview&fullscreen_start_ts=1720543428993&fullscreen_widget=5166521952505604&refresh_mode=sliding&tile_focus=5166521952505604&view=spans&from_ts=1720543383592&to_ts=1720629783592&live=true\nto check there isn't a spike occurring from something like an enumeration attack or a sudden drop due to an outage. As a potential follow-up, you can consult https://example.com/docs/troubleshooting for relevant logs \n\n\n @pagerduty-team-service-terraform\n \n\n\n\n\n\n\n\n\n\nPercent anomalous: 100.0%", "event_id": "1234567890123456789", "event_type": "query_alert_monitor", "monitor_state": "Triggered", "org": "Example", "priority": "normal", "query": "min(last_30m):\nanomalies(sum:framework.actions.service_api.clients.response_codes{service:service AND (env:production) AND region IN (*) AND api:service AND code:200s} by {env}, 'agile', 2, direction='below', alert_window='last_30m', interval=60, count_default_zero='true', seasonality='weekly')\n> 1", "tags": "api:service, code:200s, env:production, integration:custom, monitor, org:example, service:service, source:https://github.com/example/tf-mod-datadog-monitors/tree/main/modules/anomaly, team:team" } }, { "id": "456", "incident_number": 456, "title": "Incident 456", "status": "triggered", "urgency": "high", "created_at": "2025-02-18T00:31:12Z", "updated_at": "2025-02-18T00:31:12Z", "assignments": [ { "assignee": { "id": "456", "summary": "User 456" }, "at": "2025-02-18T00:31:12Z" } ], "service": { "id": "456" }, "teams": [ { "id": "456", "summary": "Team 456" } ], "alert_counts": { "all": 1, "resolved": 0, "triggered": 1 }, "summary": "REDACTED", "description": "REDACTED", "escalation_policy": { "id": "456", "summary": "Escalation Policy 456" }, "last_status_change_at": "2025-02-18T00:31:12Z", "last_status_change_by": { "id": "456", "summary": "Service 456" } }, { "id": "789", "incident_number": 789, "title": "Incident 789", "status": "acknowledged", "urgency": "high", "created_at": "2025-02-18T00:24:36Z", "updated_at": "2025-02-18T00:54:52Z", "assignments": [ { "assignee": { "id": "789", "summary": "User 789" }, "at": "2025-02-18T00:39:37Z" } ], "acknowledgements": [ { "acknowledger": { "id": "789", "summary": "User 789" }, "at": "2025-02-18T00:54:51Z" } ], "service": { "id": "789" }, "teams": [ { "id": "789", "summary": "Team 789" } ], "alert_counts": { "all": 1, "resolved": 0, "triggered": 1 }, "summary": "REDACTED", "description": "REDACTED", "escalation_policy": { "id": "789", "summary": "Escalation Policy 789" }, "last_status_change_at": "2025-02-18T00:54:51Z", 
"last_status_change_by": { "id": "789", "summary": "User 789" } } ]

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/wpfleger96/pagerduty-mcp-server'
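
The same endpoint can also be called from a script. A minimal sketch using Python's requests library; the shape of the JSON response is not documented here, so the payload is simply printed as returned:

import requests

# Fetch this server's directory entry from the Glama MCP API (unauthenticated GET).
url = "https://glama.ai/api/mcp/v1/servers/wpfleger96/pagerduty-mcp-server"
response = requests.get(url, timeout=10)
response.raise_for_status()
print(response.json())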

If you have feedback or need assistance with the MCP directory API, please join our Discord server.