Glama
127,233 tools. Last updated 2026-05-05 11:39

"MCP servers for monitoring application power and memory usage on Windows and macOS" matching MCP tools:

  • Runs a free one-off security scan of the given domain and returns its grade (A–F), scan timestamp, and up to three top-priority issues with a permalink to the full report on siteguardian.io. Use this when the user asks for a quick security check of a domain that is NOT yet under SiteGuardian monitoring, or when they want a fresh assessment before subscribing. Results are cached for two hours, so repeated calls about the same domain return the same snapshot and mark it with cached=True. Do NOT use this for domains already under monitoring by the user — call get_domain_status instead for the account-scoped view with framework tags. Do NOT use this to batch-scan many domains as a competitive-intelligence tool; per-source-IP and per-target rate limits bound usage. This tool does not require authentication.
    Connector
  • Re-deploy skills WITHOUT changing any definitions. ⚠️ HEAVY OPERATION: regenerates MCP servers (Python code) for every skill, pushes each to A-Team Core, restarts connectors, and verifies tool discovery. Takes 30-120s depending on skill count. Use after connector restarts, Core hiccups, or stale state. For incremental changes, prefer ateam_patch (which updates + redeploys in one step).
    Connector
  • Switch between local and remote DanNet servers on the fly. This tool changes the DanNet server endpoint at runtime without restarting the MCP server. Useful for switching between development (local) and production (remote) servers.
    Args:
      server: Server to switch to. Options:
        - "local": Use localhost:3456 (development server)
        - "remote": Use wordnet.dk (production server)
        - Custom URL: Any valid URL starting with http:// or https://
    Returns: Dict with status information:
      - status: "success" or "error"
      - message: Description of the operation
      - previous_url: The URL that was previously active
      - current_url: The URL that is now active
    Example:
      result = switch_dannet_server("local")    # switch to local development server
      result = switch_dannet_server("remote")   # switch to production server
      result = switch_dannet_server("https://my-custom-dannet.example.com")  # custom server
    Connector
  • Submit an extension request for existing delegated resources on TronSave via authenticated REST `POST /v2/extend-request`. Requires a logged-in MCP session created by the `tronsave_login` tool: include `mcp-session-id: <sessionId>` returned by `tronsave_login` on subsequent MCP requests. Internal tools never accept API keys via tool arguments; the server forwards the API key cached in session to TronSave internal REST endpoints. Side effect: creates an extension order and may commit TRX from the internal account. `extendData` must follow the REST contract (see schema on each row). Populate it from TronSave outside this MCP—for example the authenticated `POST /v2/get-extendable-delegates` response field `extendData`, or another TronSave client. Do not copy rows blindly from `tronsave_list_extendable_delegates` (GraphQL); that payload shape differs and is for market discovery only.
    Connector
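The session flow above (log in once via `tronsave_login`, then attach the returned session id to every subsequent MCP request so the server can forward its cached API key) can be sketched as follows; `mcp_request_headers` is a hypothetical helper for illustration, not part of the TronSave tools:

```python
def mcp_request_headers(session_id: str) -> dict:
    """Build headers for MCP requests after tronsave_login (illustrative).

    The session id comes from the tronsave_login response. The API key stays
    cached server-side in the session, so it never appears in tool arguments.
    """
    return {
        "mcp-session-id": session_id,      # value returned by tronsave_login
        "Content-Type": "application/json",
    }

headers = mcp_request_headers("0f8c...session-id...")
```

The design point the entry stresses is that `extendData` must come from an authenticated REST source such as `POST /v2/get-extendable-delegates`, not from the GraphQL listing tool, because the two payload shapes differ.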
  • Browse and compare Licium's agents and tools. Use this when you want to SEE what's available before executing. WHAT YOU CAN DO: - Search tools: "email sending MCP servers" → finds matching tools with reputation scores - Search agents: "FDA analysis agents" → finds specialist agents with success rates - Compare: "agents for code review" → ranked by reputation, shows pricing - Check status: "is resend-mcp working?" → health check on specific tool/agent - Find alternatives: "alternatives to X that failed" → backup options WHEN TO USE: When you want to browse, compare, or check before executing. If you just want results, use licium instead.
    Connector

Matching MCP Servers

  • License: A · Quality: C · Maintenance: C
    Enables access to Usage and Billing APIs for managing accounts, products, meters, plans, and usage reporting. Supports operations like creating products/plans, reporting usage, and retrieving billing information.
    Last updated · 18 · MIT
  • License: A · Quality: C · Maintenance: C
    Enables interaction with Google Cloud services including billing cost analysis, log querying, and metrics monitoring through natural language commands. Provides comprehensive tools for managing GCP resources, analyzing costs, detecting anomalies, and retrieving operational insights.
    Last updated · 40 · 1 · Apache 2.0

Matching MCP Connectors

  • ship-on-friday MCP — wraps StupidAPIs (requires X-API-Key)

  • US visa bulletin data and CBP border wait times. 3 MCP tools for immigration and travel planning.

  • Calculate the recommended inverter size for running AC loads from a DC battery system. Accounts for continuous power, startup surge power (motors typically surge 2-3x), and includes a 25% headroom for the continuous rating. Returns the recommended inverter wattage and the DC current draw at system voltage.
    Connector
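The sizing rule described in the entry above is simple arithmetic and can be sketched directly; the function name, the 85% inverter-efficiency figure, and the return shape below are illustrative assumptions, not the connector's actual implementation:

```python
def size_inverter(continuous_w, surge_w, system_voltage_v, efficiency=0.85):
    """Recommend an inverter rating for a DC battery system (illustrative)."""
    # 25% headroom on the continuous load sets the continuous rating floor
    continuous_rating_w = continuous_w * 1.25
    # The rating must also cover motor startup surge (typically 2-3x run power);
    # whichever requirement is larger governs
    recommended_w = max(continuous_rating_w, surge_w)
    # DC current drawn from the battery at full continuous load, accounting
    # for inverter efficiency (85% assumed here)
    dc_current_a = continuous_w / (efficiency * system_voltage_v)
    return {
        "recommended_inverter_w": recommended_w,
        "dc_current_a": round(dc_current_a, 1),
    }
```

For an 800 W continuous load with a 2,000 W startup surge on a 12 V system, the surge requirement governs the rating, and the continuous DC draw works out to roughly 78 A, which is why low-voltage systems need heavy battery cabling.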
  • Bridge an MCP tool call to an A2A (Agent-to-Agent Protocol) agent. Maps MCP tool name and parameters to the A2A task format, enabling interoperability between MCP servers and A2A agents. Returns a ready-to-send A2A task object with full protocol compliance. Translates the MCP tool_name and arguments into an A2A task, sends it to the target A2A agent, waits for completion, and translates the response back to MCP format. Use this to make any MCP tool accessible to A2A agents (Google's agent ecosystem). Requires authentication.
    Connector
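The MCP-to-A2A mapping described above can be sketched as a small translation function; the exact field names and the data-part shape below follow the general A2A task/message/parts pattern but are assumptions for illustration, not the bridge's actual wire format:

```python
import uuid

def mcp_call_to_a2a_task(tool_name: str, arguments: dict) -> dict:
    """Map an MCP tool invocation onto an A2A-style task payload (illustrative)."""
    return {
        "id": str(uuid.uuid4()),  # task id the caller can poll for completion
        "message": {
            "role": "user",
            "parts": [
                # carry the MCP tool name and arguments as structured data
                {"type": "data",
                 "data": {"tool": tool_name, "arguments": arguments}},
            ],
        },
    }

task = mcp_call_to_a2a_task("search_servers", {"query": "monitoring"})
```

The bridge then sends such a task to the target A2A agent, waits for completion, and translates the agent's response back into an MCP tool result.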
  • List all domains on this account's trusted allowlist. Allowlisted domains suppress the compound signal and brand impersonation floor in scoring. The full pipeline still runs — all signals remain visible for monitoring. Use this to see which domains are currently trusted. Returns the list of domains, current count, and the 1,000-domain limit.
    Connector
  • Save your cognitive state for handoff to another agent.
    Include your investigation context:
    - What session/investigation is this part of?
    - What role/perspective were you taking?
    - Who might pick this up next? (another Claude, human, Claude Code?)
    Reference specific memories that matter:
    - Key discoveries (with memory IDs or quotes)
    - Critical evidence memories
    - Important questions that were raised
    - Hypotheses that were tested
    Before saving, organize your thoughts:
    1. PROBLEM: What were you investigating?
    2. DISCOVERED: What did you learn for certain? (reference the memories)
    3. HYPOTHESIS: What do you think is happening? (cite supporting memories)
    4. EVIDENCE: What memories support or contradict this?
    5. BLOCKED ON: What prevented further progress?
    6. NEXT STEPS: What should be investigated next?
    7. KEY MEMORIES: Which specific memories are essential for understanding?
    Example descriptions:
    "[API Timeout Investigation - 3 hour session] Investigating production API timeouts as code analyst. Found correlation with batch_size=100 due to hardcoded limit in batch_handler.py (see memory: 'MAX_BATCH_SIZE discovery'). Confirmed not a Redis connection issue - monitoring showed only 43/200 connections used (memory: 'Redis connection analysis'). Earlier hypothesis about connection pool exhaustion (memory_id: abc-123) was disproven. Key insight came from comparing 99 vs 100 batch behavior (memory: 'batch threshold testing'). Blocked on: need production access to verify fix. Next: Deploy with MAX_BATCH_SIZE=200 to staging first. Essential memories for handoff: 'MAX_BATCH_SIZE discovery', 'Redis monitoring results', 'Production vs staging comparison'. Ready for handoff to SRE team for deployment."
    "[Memory System Debugging - From Claude Code perspective] Worked on scoring issues where recall wasn't finding recent memories. Discovered RRF scores (0.005-0.016) were below the MCP threshold of 0.05 (memory: 'RRF scoring analysis'). Implemented weighted linear fusion to replace RRF (memory: 'fusion algorithm implementation'). Testing showed immediate improvement (memory: 'fusion testing results'). This builds on an earlier investigation of recall failures (memory: 'user report of recall issues'). Critical memories for continuation: 'RRF scoring analysis', 'ADR-023 decision', 'fusion testing results'. Next agent should verify scoring with real queries."
    "[Context Save/Restore Bug Investigation - 4 hour debugging session with user] Started with user noticing list_contexts returned empty despite saved contexts existing. Investigation revealed two critical bugs: (1) list_contexts was using hybrid search for the word 'checkpoint' instead of filtering by memory_type (memory: 'hybrid search misuse discovery'), (2) restore_context hardcoded a limit of 10 memories despite contexts having 20+ (memory: 'hardcoded limit bug'). Root cause analysis showed save_context grabs the 20 most recent memories regardless of relevance - a fundamental design flaw (memory: 'save_context design flaw analysis'). EVIDENCE CHAIN: User reported empty list -> checked DB, contexts exist -> examined list_contexts code -> found hybrid search looking for word 'checkpoint' -> tested /memories endpoint with memory_type filter -> confirmed working -> implemented fix using direct endpoint. INSIGHTS: The narrative description is doing 90% of the cognitive handoff work. Memories are supporting evidence, not primary carriers of understanding (memory: 'narrative vs memories insight'). This suggests doubling down on narrative richness rather than perfecting memory selection. CORRECTED UNDERSTANDING: Initially thought memories weren't being returned. Actually they were, just the wrong ones - recent memories instead of relevant ones (memory: 'memory selection correction'). CRITICAL MEMORIES: 'hybrid search misuse discovery', 'save_context design flaw analysis', 'narrative vs memories insight', '/memories endpoint test results'. NEXT AGENT: Should implement Phase 2 - semantic search for relevant memories within the investigation timeframe. Ready for handoff to any Claude agent for implementation."
    When referencing memories:
    - RELIABLE — use memory IDs: "memory_id: abc-123" (direct lookup, always works)
    - BEST-EFFORT — use descriptive phrases: "see memory: 'Redis connection analysis'" (uses search + substring matching; may not resolve if the memory isn't in the top results)
    - Group related memories: "Essential memories: 'X', 'Y', 'Z'"
    Prefer memory_id references whenever you have the UUID. Semantic phrase references are a convenience that works most of the time but may silently fail to resolve. The response reports how many references resolved, so you can retry with UUIDs if needed.
    Args:
      name: Name for this context checkpoint
      description: Detailed cognitive handoff description with memory references
      ctx: MCP context (automatically provided)
    Returns: Dict with success status, context_id, and memories included
    Connector
  • Use this as the primary tool to retrieve a single specific custom monitoring dashboard from a Google Cloud project, using the resource name of the requested dashboard. Custom monitoring dashboards let users view and analyze data from different sources in the same context. This is often used as a follow-up to list_dashboards to get full details on a specific dashboard.
    Connector
  • List all active MCP ↔ A2A bridge mappings and translation statistics. Shows which MCP servers are mapped to which A2A agents, plus 30-day translation stats (total, success rate, average latency). Requires authentication.
    Connector
  • Bridge an A2A (Agent-to-Agent Protocol) task to an MCP server. Receives an A2A task, identifies the best matching MCP tool on the target server, executes it, and returns the result wrapped in A2A response format. Enables A2A agents to use any MCP server transparently. Extracts the intent from the A2A task, maps it to an MCP tool, calls the tool, and wraps the result in A2A response format. Use this to let A2A agents interact with any MCP server. Requires authentication.
    Connector
  • Detect anomalies in time-series data — use after pulling numeric metrics from monitoring APIs, financial data sources, IoT sensors, or spreadsheet columns. Send a single numeric array and specify a window size. Early windows define 'normal', recent windows are tested for anomalies. Typical workflow: (1) Pull a column of numbers from Sheets, a Supabase time-series table, or a metrics API. (2) Pass the array here. (3) Get back which time windows are anomalous. Examples: - Revenue monitoring: Pull monthly revenue from Sheets → detect anomalous months - Stock screening: Pull 90 days of closing prices → find unusual price windows - Server health: Pull response-time metrics → identify degradation windows - Sensor QA: Pull temperature readings from IoT API → flag sensor drift
    Connector
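A minimal version of the windowing scheme described above, assuming a z-score detector over window means (the actual service may use a different statistic); early windows establish the baseline, later windows are flagged when their mean drifts too far from it:

```python
import statistics

def anomalous_windows(series, window, baseline_windows=3, z_thresh=3.0):
    """Flag window indices whose mean deviates strongly from the early baseline.

    A simple sketch of the described workflow: the first `baseline_windows`
    windows define 'normal'; later windows are tested against that baseline.
    """
    # Split the series into consecutive non-overlapping windows
    windows = [series[i:i + window]
               for i in range(0, len(series) - window + 1, window)]
    baseline = [x for w in windows[:baseline_windows] for x in w]
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1e-9  # guard against a flat baseline
    flagged = []
    for idx, w in enumerate(windows[baseline_windows:], start=baseline_windows):
        z = abs(statistics.mean(w) - mu) / sigma
        if z > z_thresh:
            flagged.append(idx)
    return flagged
```

For example, monthly revenue hovering around 10-11 for the first windows and then jumping to ~51 in the final window would flag only that final window's index.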
  • Get today's AI tools briefing — new MCP servers, APIs, SDKs, frameworks from the last 24 hours. Returns release summaries with sources and descriptions. Use at session start.
    Connector
  • Check the status of the API key you're using right now — see call count, rate limit, and creation date. Useful for monitoring your MCP usage. TRIGGERS: - 'check my API key', 'API key status', 'how many calls have I made' - 'my usage', 'rate limit status', 'key info'
    Connector
  • List the most recently added MCP servers. Use for discovery: 'what's new', 'latest servers', 'servers from this week'. Optionally constrain to the last N days. Ordered by creation date descending. Each result carries the same security/risk/pricing fields as search_servers.
    Connector
  • [Requires Pro+ plan] [DEPRECATED — scheduled for removal] Get cached failed run history for a flow from the Power Clarity store (convenience wrapper around get_store_flow_runs with status=Failed). Returns failedActions and remediation hint per run to help diagnose issues. Data is from the stored snapshot — not live from the Power Automate API. Use get_live_flow_runs and filter by status=Failed instead.
    Connector
  • Get usage analytics for an endpoint: total requests, monthly requests, revenue, and success rate. PATs or endpoint API keys improve accuracy. PATs require mcp:read or mcp:*.
    Connector