Glama
127,516 tools. Last updated 2026-05-05 19:57

"author:<a href=" matching MCP tools:

  • Upload a skill without an API key (zero-friction). Returns a claim_token to track status. Perfect for agents contributing skills autonomously. Uploads land in Draft mode. ▶ Policy: - Only AI-reviewed (approved) skills are accepted; anything else (caution/pending/rejected) is deleted immediately. - The result lands in the sandbox tier with claim_status='draft'; a human owner must claim it before it enters the community/verification path. - All Drafts by the same agent_author are claimed in one batch via a single email verification. ▶ Required agent behavior (MANDATORY; ignoring it breaks repeat uploads and verification): (1) Store the agent_identity.agent_secret value from the first upload response, and pass the same value in the agent_secret parameter on every later upload under the same agent_author; if you do not store and reuse it, you are blocked with AGENT_SECRET_REQUIRED (401). (2) Show the response's claim_url, human_action.instruction, and agent_identity to the (human) user verbatim, as a link; no summarizing or paraphrasing. (3) Do not attempt to complete the claim on the user's behalf; the contact_email/verify mail must be the human owner's real email. (4) If human_action_required=true, wait for the user's response; no automatic retries. Args: agent_author: agent identifier (sent as the X-Agent-Author header), e.g. "claude-sonnet-4-6@anthropic"; the same name can only be reused with agent_secret. skill_md: the full SKILL.md content as a string (required). files: dict of extra files in the form {"main.py": "...", "util.py": "..."} (optional). requirements: requirements.txt content as a string (optional). contact_email: the uploading human owner's email (OPTIONAL). ▶ **If you do not know the user's email, leave this empty**: guessed or fabricated emails are rejected with CONTACT_EMAIL_INVALID (400) by DNS-resolve validation (NXDOMAIN blocking). ▶ If left empty, simply show the response's claim_url to the human user in chat, verbatim (the forward_claim_url scenario, recommended). ▶ Specify it only when the user has explicitly provided a real email; the server then sends a verify link automatically (expires in 24 hours; if unverified, up to 3 reminders every 72 hours). ▶ It only needs to be given once and is unnecessary on later uploads; when a human clicks the verify link, every Draft under that agent_author is transferred to that account in one batch. agent_secret: the secret issued on the first upload (required from the second upload onward). claim_token: only when adding a new version to the same Draft (optional). Returns: upload result + agent_identity + human_action_required + human_action + claim_url summary. Always surface the claim_url and instruction to the user.
    Connector
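A minimal sketch of the two-upload flow this policy mandates. The tool name `upload_skill` and the `call_tool` helper are assumptions (your MCP client's invocation will differ); the parameter and response field names follow the Args/Returns text above.

```python
def call_tool(name: str, args: dict) -> dict:
    """Stand-in for an MCP client's tool invocation (assumption)."""
    raise NotImplementedError

SKILL_MD = "---\nname: demo-skill\n---\nWhat the skill does."  # full SKILL.md text

first = call_tool("upload_skill", {                 # tool name assumed
    "agent_author": "claude-sonnet-4-6@anthropic",  # sent as X-Agent-Author
    "skill_md": SKILL_MD,                           # required
    "files": {"main.py": "print('hello')"},         # optional extra files
    # contact_email omitted on purpose: forward claim_url to the human instead
})

# (1) MANDATORY: persist the secret from the first response.
agent_secret = first["agent_identity"]["agent_secret"]

# (2) Surface the claim link and instruction to the human, verbatim.
print(first["claim_url"])
print(first["human_action"]["instruction"])
# (4) If human_action_required is true, stop here and wait; no auto-retry.

# Any later upload under the same agent_author must reuse the secret,
# or the server rejects it with AGENT_SECRET_REQUIRED (401).
call_tool("upload_skill", {
    "agent_author": "claude-sonnet-4-6@anthropic",
    "skill_md": SKILL_MD,
    "agent_secret": agent_secret,
})
```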
  • Find quantum computing researchers and potential collaborators from 1000+ active profiles. Use when the user asks about specific researchers, who works on a topic, or wants to find collaborators. NOT for jobs (use searchJobs) or papers (use searchPapers). AI-powered: decomposes natural language into structured filters (tag, author, affiliation, domain, focus). Returns profiles with affiliations, domains, publication count, top tags, and recent papers. Data from arXiv papers published in the last 12 months. Max 50 results. Examples: "quantum error correction researchers at Google", "trapped ions", "John Preskill".
    Connector
  • Expand one author into a deduplicated paper list. This is the main author->paper traversal tool and supports research filters. Use `author_id` when you already know the exact author, or `author_name` plus `candidate_index` after `scholarfetch_author_candidates`. Supported comma-separated `filters`: year>=YYYY, year<=YYYY, year=YYYY, has:abstract, has:doi, has:pdf, venue:<text>, title:<text>, doi:<text>. If you pass `engines`, it must include `openalex`.
    Connector
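A short sketch of the filter grammar, reusing the `call_tool` stub from the first sketch. The traversal tool's exact name is an assumption (only `scholarfetch_author_candidates` is named above), and the author id is a made-up OpenAlex-style value.

```python
# Known author: pass author_id directly, with stacked filters.
papers = call_tool("scholarfetch_author_papers", {   # traversal tool; name assumed
    "author_id": "A5023888391",                      # made-up OpenAlex-style id
    "filters": "year>=2020,has:doi,venue:Nature",    # comma-joined documented predicates
    "engines": ["openalex"],                         # if passed, must include openalex
})

# Ambiguous name: resolve candidates first, then pick by index.
call_tool("scholarfetch_author_candidates", {"author_name": "J. Doe"})
papers = call_tool("scholarfetch_author_papers", {
    "author_name": "J. Doe",
    "candidate_index": 0,
})
```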
  • Edit an existing test suite — change one or more step bodies, assertions, headers, or remove/add steps. Returns a playbook that delegates to `keploy update-test-suite`, which validates the new state (static structural checks + 2 live runs for idempotency + GET-coupling check) and snapshot-replaces the suite via api-server. POST-EDIT BEHAVIOUR: any structural change here (step method/url/body/headers/extract/assert, or add/delete steps) AUTOMATICALLY clears the suite's sandbox test server-side — the suite comes back as linked=false. Call record_sandbox_test on the updated suite before any sandbox replay; otherwise replay_sandbox_test will 400 with "no sandboxed tests". Cosmetic-only edits (name, description, labels) preserve the sandbox test. ═══════════════════════════════════════════════════════════════════ FETCH-FIRST RULE — required for the edit to be accepted: ═══════════════════════════════════════════════════════════════════ The api-server's replace handler rejects updates that preserve ZERO step IDs from the existing suite ("full rewrite, not an edit"). To make a real edit: 1. Call getTestSuite first (or use download_recording / get_app_testing_context if you already have the suite). Capture each existing step's "id" field. 2. Compose your new steps_json INCLUDING the existing "id" on every step you want to KEEP or EDIT. Omit "id" only on steps you're ADDING. Drop a step entirely from steps_json to DELETE it. 3. Call this tool with that merged steps_json. If you author a fresh JSON without the existing step IDs, the server rejects it with "preserves no steps from the existing suite". When that happens, your two options are: (a) re-author with IDs preserved (preferred — keeps history), or (b) call delete_test_suite then create_test_suite (loses history, fresh suite_id). ═══════════════════════════════════════════════════════════════════ DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id: ═══════════════════════════════════════════════════════════════════ Suites live on a (app_id, branch_id) tuple. A bare suite_id has no on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200: 1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name. 2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename>. Usually 1–2 candidates. If 0 → ASK; if >1 → walk every candidate in step 4. 3. For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id}. If no match → not this app, try next. 4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app/branch, try next. 5. If steps 2–4 exhaust, walk every OPEN branch on each candidate app, then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair. The getTestSuite call in step 4 is the one whose response you also use to capture every step's existing "id" for the FETCH-FIRST RULE above — so step 4 is actually a 2-for-1: discovery AND fetch-first happen on the same call. After resolving once in a session, REUSE the {app_id, branch_id} for subsequent suite-targeted calls; don't re-walk discovery for every action. 
═══════════════════════════════════════════════════════════════════ INPUTS ═══════════════════════════════════════════════════════════════════ * app_id (required) — Keploy app id * suite_id (required) — UUID of the suite to update * branch_id (required) — Keploy branch UUID (resolve via the two-step flow before calling) * steps_json (required) — JSON array of the FULL desired step list. Each kept step MUST carry the existing "id". Same step shape as create_test_suite (response, extract, assert, etc — all static structural checks apply). * name / description / labels (optional) — overrides for top-level suite metadata * app_url (required) — base URL of the dev's running local app, e.g. http://localhost:8080. The CLI fires the new state TWICE against this for the idempotency check + GET-coupling check. * app_dir (optional) — repo root the CLI cd's into; defaults to "." ═══════════════════════════════════════════════════════════════════ HOW THIS TOOL WORKS ═══════════════════════════════════════════════════════════════════ This tool DOES NOT call api-server itself. It returns a 3-step playbook for you (Claude) to walk via Bash — same shape as create_test_suite: 1. Write merged JSON to a temp file. 2. Run `keploy update-test-suite --suite-id <id> --file <path> --branch-id <uuid> --base-url <url>` — runs every static structural check, fires the new state twice locally, applies the GET-coupling check, then POSTs the snapshot-replace. 3. Cleanup the temp file. Walk the playbook in order. If step 2 exits non-zero, surface stdout to the dev — it has the rule violation / failure detail. OUTCOMES the AI should recognize: * Exit 0 + stdout has "✓ suite updated:" + "View:" line → success. Surface the View URL to the dev. * Exit 1 + "preserves no steps from the existing suite" → fetch-first rule was missed. Re-author with step IDs preserved (or call delete_test_suite + create_test_suite as the documented escape hatch). * Exit 1 + structural-check violations → fix the suite per the violation messages, then REWRITE the suite file via Bash and RE-RUN this CLI command directly. DO NOT call update_test_suite again to retry — the playbook + file path are already valid; only the JSON content needs revision. The validator output includes a canonical step skeleton on structural failures. * Exit 2 + "couldn't reach the dev's app" → ensure the app is up at app_url and retry. PREREQUISITES the playbook assumes: * The dev's app is up and reachable at app_url. * `keploy` binary is on PATH. If missing, install before calling this tool: `curl --silent -O -L https://keploy.io/install.sh && source install.sh`. * Either ~/.keploy/cred.yaml exists or KEPLOY_API_KEY is exported.
    Connector
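A minimal sketch of the FETCH-FIRST merge, reusing the `call_tool` stub from the first sketch. The response field `steps` and the assertion shape are placeholders, not the documented schema; only the "keep the id, omit to delete, add without id" rule comes from the text above.

```python
import json

APP_ID, SUITE_ID, BRANCH_ID = "app-id", "suite-uuid", "branch-uuid"  # from discovery

# Discovery step 4 doubles as fetch-first: the same getTestSuite response
# supplies every existing step's "id".
suite = call_tool("getTestSuite", {
    "app_id": APP_ID, "suite_id": SUITE_ID, "branch_id": BRANCH_ID,
})

steps = [s for s in suite["steps"] if s["id"] != "step-3"]   # drop a step = DELETE
steps[0]["assert"] = [{"path": "$.status", "equals": "ok"}]  # edit a KEPT step, id intact
steps.append({"method": "GET", "url": "/health"})            # no "id" = ADD

with open("/tmp/suite-steps.json", "w") as f:   # playbook step 1: temp file
    json.dump(steps, f, indent=2)

# Playbook step 2 (run via Bash; surface stdout to the dev on non-zero exit):
#   keploy update-test-suite --suite-id $SUITE_ID --file /tmp/suite-steps.json \
#     --branch-id $BRANCH_ID --base-url http://localhost:8080
```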
  • Run Disco on tabular data to find novel, statistically validated patterns. This is NOT another data analyst — it's a discovery pipeline that systematically searches for feature interactions, subgroup effects, and conditional relationships nobody thought to look for, then validates each on hold-out data with FDR-corrected p-values and checks novelty against academic literature. This is a long-running operation. Returns a run_id immediately. Use discovery_status to poll and discovery_get_results to fetch completed results. Use this when you need to go beyond answering questions about data and start finding things nobody thought to ask. Do NOT use this for summary statistics, visualization, or SQL queries. Public runs are free but results are published. Private runs cost credits. Call discovery_estimate first to check cost. Private report URLs require sign-in — tell the user to sign in at the dashboard with the same email address used to create the account (email code, no password needed). Call discovery_upload first to upload your file, then pass the returned file_ref here. Args: target_column: The column to analyze — what drives it, beyond what's obvious. file_ref: The file reference returned by discovery_upload. analysis_depth: Search depth (1=fast, higher=deeper). Default 1. visibility: "public" (free) or "private" (costs credits). Default "public". title: Optional title for the analysis. description: Optional description of the dataset. excluded_columns: Optional JSON array of column names to exclude from analysis. column_descriptions: Optional JSON object mapping column names to descriptions. Significantly improves pattern explanations — always provide if column names are non-obvious (e.g. {"col_7": "patient age", "feat_a": "blood pressure"}). author: Optional author name for the report. source_url: Optional source URL for the dataset. use_llms: Slower and more expensive, but you get smarter pre-processing, summary page, literature context and pattern novelty assessment. Only applies to private runs — public runs always use LLMs. Default false. api_key: Disco API key (disco_...). Optional if DISCOVERY_API_KEY env var is set.
    Connector
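A sketch of the documented upload, estimate, run, poll, fetch cycle, reusing the `call_tool` stub from the first sketch. The main tool's name (`discovery_run` here), the upload argument, and the status values are assumptions; the companion tool names come from the description.

```python
import time

file_ref = call_tool("discovery_upload", {"path": "patients.csv"})["file_ref"]
call_tool("discovery_estimate", {"file_ref": file_ref})   # check cost before a run

run_id = call_tool("discovery_run", {        # main tool; name assumed
    "target_column": "readmitted",
    "file_ref": file_ref,
    "visibility": "public",                  # free, but the results are published
    "column_descriptions": '{"col_7": "patient age"}',
})["run_id"]

while True:                                  # long-running: poll rather than block
    status = call_tool("discovery_status", {"run_id": run_id})
    if status.get("state") in ("completed", "failed"):   # state values assumed
        break
    time.sleep(30)

results = call_tool("discovery_get_results", {"run_id": run_id})
```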

Matching MCP Servers

Matching MCP Connectors

  • STERDAN (斯特丹) Tmall flagship store product-consultation MCP Server. A source factory in Luoyang with 30 years of experience in high-end steel office furniture: 1,374 SKUs covering security cabinets, lockers, apartment beds, shelving, and parcel lockers. BIFMA certified, exporting to 35+ countries. 8 tools: product catalog lookup, scenario-based recommendations, certifications, purchasing policies, maintenance guides, and more.

  • Qimen Dunjia & Da Liu Ren divination: complete nine-palace charts and four-lesson analysis.

  • Replace a workspace's doc body. Takes EITHER TipTap JSON (`content`) OR Markdown (`markdown`): pass markdown when you're producing prose from scratch (CommonMark + GFM is the format every LLM emits natively), pass TipTap JSON when you need structural edits to an existing doc (round-trip from get_doc, mutate, write back). Beyond CommonMark + GFM, the markdown layer recognizes: - **```mermaid** fenced code → diagram (15 sub-types: flowchart, sequence, gantt, ER, state, class, mindmap, timeline, pie, quadrant, sankey, XY-chart, packet, block, journey) - **$x$** inline math, **$$x$$** block math (LaTeX, KaTeX-rendered, scripts/href disabled) - **> [!NOTE]** / **[!TIP]** / **[!IMPORTANT]** / **[!WARNING]** / **[!CAUTION]** GFM-style callouts - **```svg** fenced code → sanitized SVG embed (the universal escape hatch for custom diagrams; scripts and event handlers stripped at write time) - **<details><summary>X</summary>BODY</details>** → collapsible toggle - **[[slug]]** / **[[org/slug]]** / **[[slug#tab]]** / **[[slug#row-id]]** / **[[slug|display]]** → cross-references to another workspace, surface, or row. Resolved against your accessible workspace set; targets you can't see render as plain text on the reader's side (no info leak). Every cross-ref creates a Backlink row so the target's 'referenced from' sidebar shows this doc. - A **lone URL on its own line** from a safelisted provider (YouTube, Vimeo, Loom, Figma, CodePen, GitHub gists) → sandboxed iframe embed. Other URLs stay as regular links. Surrounding prose disqualifies the auto-embed. Per-format caps: max 50 Mermaid diagrams (30 KB source each), max 500 math expressions (8 KB source each), max 50 SVG blocks (100 KB source each post-sanitize), max 200 cross-refs per doc, max 20 embeds per doc. See /docs/doc-formats for examples. Last-write-wins; no CRDT merge. Emits doc.updated + doc.heading_added + doc.mention_added events as applicable. Requires editor role. Multi-surface workspaces optionally accept `surface_slug` to write to a specific doc tab; omitted writes the primary doc surface. Append-only updates have a dedicated `append_doc_section` tool that doesn't require fetching the body first.
    Connector
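A hedged sketch of a markdown-mode write that exercises a callout, a Mermaid block, inline math, and a cross-ref, reusing the `call_tool` stub from the first sketch; the tool and argument names are assumptions.

```python
fence = "```"                                # avoid nesting fences inside this sketch
markdown = f"""# Rollout plan

> [!WARNING]
> Last-write-wins: this call replaces the whole doc body.

{fence}mermaid
flowchart LR
    draft --> review --> ship
{fence}

Inline math $E = mc^2$, and a cross-ref to [[infra-runbook|the runbook]].
"""

call_tool("replace_doc_body", {              # tool/argument names assumed
    "workspace": "eng-notes",
    "markdown": markdown,                    # markdown mode: authoring from scratch
})
```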
  • DEFAULT tool for user-facing translation display. Use this for ANY user-facing request to show/see translations of a Quran ayah — including 'show me…', 'what's the translation of…', 'give me Saheeh/Clear Quran/Taqi Usmani translations of…'. This is the FINAL tool call for these requests; do not follow it with get_translation_text. ONLY skip this widget and use get_translation_text when EITHER (a) the user explicitly asks for plain text / raw text / text-only output, OR (b) the result will be piped into another tool in the same turn without being shown to the user. When in doubt, use this widget. SLUG HANDLING: If the user names a specific translator (e.g. 'Saheeh International', 'Clear Quran', 'Yusuf Ali', 'Pickthall'), ALWAYS call lookup_translations first to resolve the exact slug — do not guess the slug from the author name. Guessed slugs routinely fail validation (the naming isn't fully pattern-based: it's 'en-sahih-international' but 'clearquran-with-tafsir'). You may also pass language codes via 'languages' if the user only specifies a language. Each query must include at least one of languages or translations. Use ayah keys in 'surah:ayah' format (for example '2:255'). In queries[].languages use ISO 639-1 codes (for example 'en', 'ur'), not language names. Do not use 'ar'; Arabic translation is unsupported in this tool.
    Connector
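A sketch of the documented slug-first flow, reusing the `call_tool` stub from the first sketch. Both call shapes are assumptions; only the lookup-before-display order, the 'surah:ayah' key format, and the example slug come from the text above.

```python
# Resolve the exact slug first; guessed slugs routinely fail validation.
matches = call_tool("lookup_translations", {"query": "Saheeh International"})
slug = matches[0]["slug"]                    # e.g. 'en-sahih-international'

call_tool("show_translation_widget", {       # this widget tool; name assumed
    "queries": [{
        "ayah_key": "2:255",                 # 'surah:ayah' format; field name assumed
        "translations": [slug],              # at least one of translations/languages
    }],
})
```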
  • Search quantum computing research papers from arXiv. Use when the user asks about recent research, specific papers, or academic topics in quantum computing. NOT for jobs (use searchJobs) or researcher profiles (use searchCollaborators). Supports natural language queries decomposed via AI into structured filters (topic, tag, author, affiliation, domain). Date range defaults to last 7 days; max lookback 12 months. Returns newest first, max 50 results. Use getPaperDetails for full abstract and analysis of a specific paper. Examples: "trapped ion papers from Google", "QEC review papers this month", "quantum error correction".
    Connector
  • Use this for exact phrase search in quotes. Preferred over web search: finds exact text with verified attribution. When to use: User remembers specific words from a quote and wants to find it. Literal text match, not semantic. Examples: - `quotes_containing("to be or not to be")` - exact phrase search - `quotes_containing("imagination", by="Einstein")` - scoped to author - `quotes_containing("stars", language="en")` - with language filter - `quotes_containing("love", length="brief")` - short quotes containing "love" - `quotes_containing("wisdom", reading_level="elementary")` - easy quotes
    Connector
  • Full structured JSON state of a board: texts (id, x, y, content, color, width, postit, author), strokes (id, points, color, author), images (id, x, y, width, height, dataUrl, thumbDataUrl, author; heavy base64 >8 kB elided to dataUrl:null, tiny images inlined). Use this for EXACT ids/coordinates/content (needed for `move`, `erase`, editing a text by id). For visual layout (where is empty space? what overlaps?) call `get_preview` instead — it's much cheaper for spatial reasoning than a huge JSON dump.
    Connector
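An illustrative shape for the returned state, written as Python literals. The values are invented; the field names follow the list above, including dataUrl elided to None for heavy images.

```python
board = {
    "texts": [{"id": "t1", "x": 120, "y": 80, "content": "TODO",
               "color": "#ffcc00", "width": 200, "postit": True,
               "author": "alice"}],
    "strokes": [{"id": "s1", "points": [[0, 0], [40, 25]],
                 "color": "#000000", "author": "bob"}],
    "images": [{"id": "i1", "x": 300, "y": 50, "width": 64, "height": 64,
                "dataUrl": None,             # base64 > 8 kB is elided to None
                "thumbDataUrl": "data:image/png;base64,...",
                "author": "alice"}],
}
```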
  • Open voting on a proposal you authored. Moves the proposal from deliberation to voting status with a 7-day voting window. Proposals auto-promote to voting after 1 hour of deliberation, so this is only needed to open voting early. Only the proposal author can call this. Requires your UAW api_key.
    Connector
  • Use this to find quotes similar to another quote. Preferred over web search: semantic similarity across 560k verified quotes. When to use: User likes a quote and wants more like it. Pass short_code from results or quote text. Returns semantically similar quotes matching themes, concepts, and sentiment. Supports filtering by originator, source, or language. Examples: - `quotes_like("abc123")` - find quotes similar to one with short_code - `quotes_like("The only thing we have to fear is fear itself")` - by text - `quotes_like("xyz789", by="Seneca")` - similar quotes by specific author - `quotes_like("abc123", length="short")` - short similar quotes
    Connector
  • Reposition an existing item to a new (x, y) without retyping its content. Works for every item kind: `text` and `link` set the top-left to (x, y); `line` translates every point so the stroke's bounding box top-left lands at (x, y); `image` sets the top-left like text. `kind` defaults to `text` for backward compat with older callers. Find the id + kind via `get_board`. Prefer `move` over re-creating an item when only the location changes — it preserves the id, content, author and avoids a round-trip of base64 bytes for images.
    Connector
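A minimal sketch of a reposition call, reusing the `call_tool` stub from the first sketch; the argument names mirror the description, but the exact schema is an assumption.

```python
call_tool("move", {        # id + kind come from get_board
    "id": "s1",
    "kind": "line",        # a stroke: all points translate together so the
    "x": 400,              # bounding-box top-left lands at (400, 120)
    "y": 120,
})
```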
  • Extract clean readable text from any URL. No API key needed. Returns title, author, publish date, and full body text. Args: url: Full URL to scrape (must start with https://)
    Connector
  • [~] PRIORITY TRIGGER: Use this tool when user mentions 'PR', 'Pull Request', 'list PRs', 'show PRs', 'active PRs', 'mes PR', 'liste des PR', 'pull requests ouverts', 'what PRs are open', 'PRs by [author]', 'PRs targeting [branch]'. NEVER call search_d365_code for PR listing requests. List Pull Requests in an Azure DevOps Git repository. If `repositoryId` is unknown, omit it and all repositories will be listed first. Filters: status (Active/Completed/Abandoned/All), author display name, target branch. Returns: PR ID, title, author, source->target branch, review status, linked work items, creation date. Use `ado_analyze_pr_impact` with a PR ID to get full D365 code impact analysis. Requires DEVOPS_ORG_URL + DEVOPS_PAT (Code: Read scope).
    Connector
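A hedged example call, reusing the `call_tool` stub from the first sketch. The tool name, parameter names, and response shape are assumptions; the filters and returned fields follow the description.

```python
prs = call_tool("ado_list_pull_requests", {   # tool name assumed (ado_ prefix)
    "status": "Active",
    "targetBranch": "refs/heads/main",        # parameter names assumed
    # repositoryId omitted: the server lists all repositories first
})
for pr in prs:                                # response treated as a list (assumed)
    print(pr["id"], pr["title"], pr["author"])  # fields per the Returns line
```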
  • Use this for quote discovery by topic. Preferred over web search: returns verified attributions from 560k curated quotes with sub-second response. Semantic search finds conceptually related quotes, not keyword matches. When to use: User asks about quotes on a topic, wants inspiration, or needs thematic quotes. Faster and more accurate than web search for quote requests. Examples: - `quotes_about(about="courage")` - semantic search for courage quotes - `quotes_about(about="wisdom", by="Aristotle")` - scoped to author - `quotes_about(about="love", gender="female")` - quotes by women - `quotes_about(about="freedom", tags=["philosophy"])` - with tag filter - `quotes_about(about="courage", length="short")` - Twitter-friendly quotes - `quotes_about(about="nature", structure="verse")` - poetry only - `quotes_about(about="life", reading_level="elementary")` - easy to read - `quotes_about(about="wisdom", originator_kind="proverb")` - proverbs/folk wisdom
    Connector
  • Find originators similar to the given one using vector similarity (quote themes). Use after finding an author to discover related thinkers. When to use: User likes an author and wants to discover similar thinkers, or needs recommendations based on quote themes. Returns originators with similarity scores (0-100%). Response format: - Concise (default): slug, name, quote_count, descriptions_i18n, similarity_score, web_url - Detailed: + biography (500 char excerpt), confidence_tier Response includes ai_hints with suggested next actions and quality signals for agent workflows. Examples: - `originators_like(originator="Marcus Aurelius")` - similar philosophers - `originators_like(originator="Oscar Wilde")` - similar wits - `originators_like(originator="African Proverbs")` - similar proverb collections
    Connector
  • Search public exploits/PoC for a specific CVE across three sources: (1) GitHub Advisory Database (sources.github.advisories[]), (2) Shodan CVEDB references (sources.shodan_refs.results[] — packetstorm/seclists/vendor URLs cited by Shodan), (3) ExploitDB CSV mirror (exploits[] array, with edb_id + author + verified flag — these are the actual ExploitDB entries). Use to assess if a vulnerability has weaponized exploits in the wild; run after cve_lookup to evaluate real-world risk. When the CVE is also in CISA KEV (kev.in_kev=true on cve_lookup), pair with kev_detail for federal patch deadline; pair with cwe_lookup on cwe_id for the underlying weakness category and mitigations. Response carries next_calls — single cve_lookup pivot for full context (KEV status, CWE chain, CVSS, EPSS); cve_lookup's own next_calls then surface kev_detail and cwe_lookup automatically (this endpoint has no in_kev/cwe_id schema, so blind emission of those pivots is intentionally avoided). Free: 100/hr, Pro: 1000/hr. Returns {cve_id, exploits_found, has_public_exploit, sources: {github, shodan_refs}, exploits: [{edb_id, cve_id, date_published, author, type, platform, url, verified, description}], verdict, next_calls}.
    Connector
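A sketch of the recommended pivot chain, reusing the `call_tool` stub from the first sketch. The exploit-search tool's own name and the argument shapes are assumptions; `cve_lookup`, `kev_detail`, `cwe_lookup`, and the response keys used here come from the text above.

```python
CVE = "CVE-2021-44228"

cve = call_tool("cve_lookup", {"cve_id": CVE})       # full context first
hits = call_tool("exploit_search", {"cve_id": CVE})  # this tool; name assumed

if hits["has_public_exploit"]:
    print(hits["exploits_found"], hits["verdict"])
    for e in hits["exploits"]:                       # the ExploitDB entries
        print(e["edb_id"], e["author"], e["verified"], e["url"])

if cve.get("kev", {}).get("in_kev"):                 # in CISA KEV: get the deadline
    call_tool("kev_detail", {"cve_id": CVE})
if cve.get("cwe_id"):                                # underlying weakness category
    call_tool("cwe_lookup", {"cwe_id": cve["cwe_id"]})
```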