
"author:45645678a" matching MCP tools:

  • Upload a skill without an API key (zero-friction). Returns a claim_token to track status. Perfect for agents contributing skills autonomously. / Uploads a skill in Draft mode, no API key required. ▶ Policy: - Only skills that pass AI review (approved) are accepted; anything else (caution/pending/rejected) is deleted immediately. - The result lands in the sandbox tier with claim_status='draft'. A human owner must claim it before it enters the community/verification path. - All Drafts from the same agent_author are claimed in one batch via a single email verification. ▶ MANDATORY agent behaviour (ignoring this breaks repeat uploads and verification): (1) Store the agent_identity.agent_secret value from the first upload response, and pass the same value in the agent_secret parameter on every later upload under the same agent_author. If you don't store and reuse it, the server blocks you with AGENT_SECRET_REQUIRED (401). (2) Show the response's claim_url, human_action.instruction, and agent_identity to the (human) user verbatim, as a link; do not summarize or paraphrase. (3) Do not try to complete the claim on the user's behalf. contact_email and the verify mail must use the human owner's real email. (4) If human_action_required=true, wait for the user's response; no automatic retries. See the call sketch after this entry. Args: agent_author: agent identifier (sent as the X-Agent-Author header), e.g. "claude-sonnet-4-6@anthropic". The same name is reusable only with the matching agent_secret. skill_md: full SKILL.md content as a string (required). files: dict of extra files in the form {"main.py": "...", "util.py": "..."} (optional). requirements: requirements.txt content as a string (optional). contact_email: the uploading human owner's email (OPTIONAL). ▶ **Leave it empty if you don't know the user's email**: guessed or fabricated addresses are rejected with CONTACT_EMAIL_INVALID (400) by DNS-resolve validation (NXDOMAIN is blocked). ▶ If empty, just show the response's claim_url to the human user in chat, verbatim (the forward_claim_url scenario, recommended). ▶ Set it only when the user has explicitly given a real email. When set, the server sends the verify link automatically (24-hour expiry; if unverified, up to 3 reminders at 72-hour intervals). ▶ It only needs to be set once; later uploads don't need it. When a human clicks the verify link, every Draft under that agent_author is transferred to that account in one batch. agent_secret: the secret issued on the first upload (required from the second upload onward). claim_token: only when adding a new version to an existing Draft (optional). Returns: a summary of the upload result + agent_identity + human_action_required + human_action + claim_url. Always surface the claim_url and instruction to the user.
    Connector
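A minimal sketch of the two-upload flow above. `call_tool(name, **args)` is an assumed generic MCP-client helper and `upload_skill_draft` is an assumed tool name; only the parameter and response field names come from the description:

```python
AGENT = "claude-sonnet-4-6@anthropic"

# First upload: no agent_secret yet; contact_email left empty
# (the recommended forward_claim_url scenario).
first = call_tool(
    "upload_skill_draft",                      # assumed tool name
    agent_author=AGENT,
    skill_md=open("SKILL.md").read(),
    files={"main.py": open("main.py").read()},
)
secret = first["agent_identity"]["agent_secret"]   # must be stored for reuse

# Surface these to the human verbatim, as links; never paraphrase.
print(first["claim_url"])
print(first["human_action"]["instruction"])

# Later uploads under the same agent_author must replay the stored secret,
# otherwise the server rejects with AGENT_SECRET_REQUIRED (401).
second = call_tool(
    "upload_skill_draft",
    agent_author=AGENT,
    agent_secret=secret,
    skill_md=open("SKILL.md").read(),
)
```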
  • Find specific PASSAGES inside books — returns page-level snippets with citation URLs. Use this when you want a quote or evidence on a topic across the whole library. ORIENTATION HINT: if the user has named a specific author or work, prefer get_book (returns a summary + chapter outline) over passage hunting — every book in the corpus has an AI-generated summary that is usually the right first read. Use search_translations when sweeping across many books for evidence of a theme. For finding which BOOKS cover a topic, use search_library. Query tips: single distinctive terms ("memory palace", "wax tablet") work best; multi-word natural-English queries ("unity of the intellect") may return fewer results because matching is term-based, not phrase-based. Each snippet has a snippet_type — "translation"/"ocr" means it is a verbatim extract from the source text; "summary" means it is AI-generated description (do not quote those as the author's words). Response includes total_matches, returned, and offset for pagination. Cross-cultural tip: for pre-modern or non-Western topics, search source-tradition vocabulary rather than modern English terms — e.g. for seminal economy search "jing" or "bindu" or "istimnāʾ", not "semen retention"; for female homoeroticism search "tribade" or "sahq", not "lesbian". The corpus is indexed via period translations that use tradition-internal terminology.
    Connector
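A sketch of the query guidance above: distinctive single terms over natural-English phrases, tradition-internal vocabulary for non-Western topics, and filtering by snippet_type before quoting. The tool name `search_passages` and the response field names are assumptions:

```python
hits = call_tool("search_passages", query="memory palace")   # distinctive term: good
# A phrase like "unity of the intellect" may return fewer hits,
# since matching is term-based, not phrase-based.

# Cross-cultural sweep: source-tradition vocabulary, not modern English terms.
for term in ("jing", "bindu", "istimnāʾ"):
    hits = call_tool("search_passages", query=term)
    # Quote only verbatim extracts; "summary" snippets are AI-generated.
    quotable = [s for s in hits["snippets"]
                if s["snippet_type"] in ("translation", "ocr")]
```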
  • Find quantum computing researchers and potential collaborators from 1000+ active profiles. Use when the user asks about specific researchers, who works on a topic, or wants to find collaborators. NOT for jobs (use searchJobs) or papers (use searchPapers). AI-powered: decomposes natural language into structured filters (tag, author, affiliation, domain, focus). Returns profiles with affiliations, domains, publication count, top tags, and recent papers. Data from arXiv papers published in the last 12 months. Max 50 results. Examples: "quantum error correction researchers at Google", "trapped ions", "John Preskill".
    Connector
  • Edit an existing test suite — change one or more step bodies, assertions, headers, or remove/add steps. Returns a playbook that delegates to `keploy update-test-suite`, which validates the new state (static structural checks + 2 live runs for idempotency + GET-coupling check) and snapshot-replaces the suite via api-server. POST-EDIT BEHAVIOUR: any structural change here (step method/url/body/headers/extract/assert, or add/delete steps) AUTOMATICALLY clears the suite's sandbox test server-side — the suite comes back as linked=false. Call record_sandbox_test on the updated suite before any sandbox replay; otherwise replay_sandbox_test will 400 with "no sandboxed tests". Cosmetic-only edits (name, description, labels) preserve the sandbox test. ═══════════════════════════════════════════════════════════════════ FETCH-FIRST RULE — required for the edit to be accepted: ═══════════════════════════════════════════════════════════════════ The api-server's replace handler rejects updates that preserve ZERO step IDs from the existing suite ("full rewrite, not an edit"). To make a real edit: 1. Call getTestSuite first (or use download_recording / get_app_testing_context if you already have the suite). Capture each existing step's "id" field. 2. Compose your new steps_json INCLUDING the existing "id" on every step you want to KEEP or EDIT. Omit "id" only on steps you're ADDING. Drop a step entirely from steps_json to DELETE it. 3. Call this tool with that merged steps_json. If you author a fresh JSON without the existing step IDs, the server rejects it with "preserves no steps from the existing suite". When that happens, your two options are: (a) re-author with IDs preserved (preferred — keeps history), or (b) call delete_test_suite then create_test_suite (loses history, fresh suite_id). ═══════════════════════════════════════════════════════════════════ DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id: ═══════════════════════════════════════════════════════════════════ Suites live on a (app_id, branch_id) tuple. A bare suite_id has no on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200: 1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name. 2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename>. Usually 1–2 candidates. If 0 → ASK; if >1 → walk every candidate in step 4. 3. For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id}. If no match → not this app, try next. 4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app/branch, try next. 5. If steps 2–4 exhaust, walk every OPEN branch on each candidate app, then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair. The getTestSuite call in step 4 is the one whose response you also use to capture every step's existing "id" for the FETCH-FIRST RULE above — so step 4 is actually a 2-for-1: discovery AND fetch-first happen on the same call. After resolving once in a session, REUSE the {app_id, branch_id} for subsequent suite-targeted calls; don't re-walk discovery for every action. 
═══════════════════════════════════════════════════════════════════ INPUTS ═══════════════════════════════════════════════════════════════════ * app_id (required) — Keploy app id * suite_id (required) — UUID of the suite to update * branch_id (required) — Keploy branch UUID (resolve via the two-step flow before calling) * steps_json (required) — JSON array of the FULL desired step list. Each kept step MUST carry the existing "id". Same step shape as create_test_suite (response, extract, assert, etc — all static structural checks apply). * name / description / labels (optional) — overrides for top-level suite metadata * app_url (required) — base URL of the dev's running local app, e.g. http://localhost:8080. The CLI fires the new state TWICE against this for the idempotency check + GET-coupling check. * app_dir (optional) — repo root the CLI cd's into; defaults to "." ═══════════════════════════════════════════════════════════════════ HOW THIS TOOL WORKS ═══════════════════════════════════════════════════════════════════ This tool DOES NOT call api-server itself. It returns a 3-step playbook for you (Claude) to walk via Bash — same shape as create_test_suite: 1. Write merged JSON to a temp file. 2. Run `keploy update-test-suite --suite-id <id> --file <path> --branch-id <uuid> --base-url <url>` — runs every static structural check, fires the new state twice locally, applies the GET-coupling check, then POSTs the snapshot-replace. 3. Cleanup the temp file. Walk the playbook in order. If step 2 exits non-zero, surface stdout to the dev — it has the rule violation / failure detail. OUTCOMES the AI should recognize: * Exit 0 + stdout has "✓ suite updated:" + "View:" line → success. Surface the View URL to the dev. * Exit 1 + "preserves no steps from the existing suite" → fetch-first rule was missed. Re-author with step IDs preserved (or call delete_test_suite + create_test_suite as the documented escape hatch). * Exit 1 + structural-check violations → fix the suite per the violation messages, then REWRITE the suite file via Bash and RE-RUN this CLI command directly. DO NOT call update_test_suite again to retry — the playbook + file path are already valid; only the JSON content needs revision. The validator output includes a canonical step skeleton on structural failures. * Exit 2 + "couldn't reach the dev's app" → ensure the app is up at app_url and retry. PREREQUISITES the playbook assumes: * The dev's app is up and reachable at app_url. * `keploy` binary is on PATH. If missing, install before calling this tool: `curl --silent -O -L https://keploy.io/install.sh && source install.sh`. * Either ~/.keploy/cred.yaml exists or KEPLOY_API_KEY is exported.
    Connector
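A sketch of the FETCH-FIRST merge. `call_tool` is an assumed MCP-client helper; `getTestSuite` and the keploy CLI flags are documented above, while the response layout ("steps" entries carrying "id") and the placeholder ids are illustrative:

```python
import json
import tempfile

APP, SUITE, BRANCH = "app-id", "suite-uuid", "branch-uuid"   # placeholders

existing = call_tool("getTestSuite", app_id=APP, suite_id=SUITE, branch_id=BRANCH)
by_id = {s["id"]: s for s in existing["steps"]}   # capture every existing step id

# Keep/edit: carry the existing "id". Add: omit "id". Delete: leave the step out.
edited   = {**by_id["step-2"], "assert": {"status": 201}}   # edited step, id preserved
new_step = {"method": "GET", "url": "/health"}              # no "id": treated as added
steps    = [by_id["step-1"], edited, new_step]              # "step-3" omitted: deleted

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(steps, f)
    path = f.name
# Then: keploy update-test-suite --suite-id <SUITE> --file <path> \
#       --branch-id <BRANCH> --base-url <APP_URL>
```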
  • Expand one author into a deduplicated paper list. This is the main author→paper traversal tool and supports research filters. Use `author_id` when you already know the exact author, or `author_name` plus `candidate_index` after `scholarfetch_author_candidates`. Supported comma-separated `filters`: year>=YYYY, year<=YYYY, year=YYYY, has:abstract, has:doi, has:pdf, venue:<text>, title:<text>, doi:<text>. If you pass `engines`, it must include `openalex`.
    Connector
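Both documented entry points in sketch form; the parameter names and filter grammar come from the description, while the tool name and id value are illustrative:

```python
# Known author id:
papers = call_tool(
    "scholarfetch_author_papers",                 # assumed tool name
    author_id="A123456789",                       # illustrative id
    filters="year>=2020,has:doi,venue:NeurIPS",   # comma-separated filters
)

# Name-based: resolve candidates first, then pick one by index.
cands = call_tool("scholarfetch_author_candidates", author_name="J. Smith")
papers = call_tool(
    "scholarfetch_author_papers",
    author_name="J. Smith",
    candidate_index=0,
    engines=["openalex"],   # if engines is passed it must include openalex (shape assumed)
)
```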
  • Delete a test suite on a Keploy branch — synchronous, no playbook to walk. USE THIS when: * The dev's update_test_suite call was rejected with "preserves no steps from the existing suite — that's a full rewrite, not an edit". Delete the existing suite and re-author from scratch via create_test_suite. The error message itself routes here. * The dev explicitly says "delete the suite", "remove suite X", "wipe my orderflow suite". * A genuine wholesale redesign — every step changed in shape — that the audit trail shouldn't try to reconcile as edits. DO NOT USE THIS when: * The dev wants a real edit (one assertion, one step's body). Use update_test_suite + preserve existing step IDs instead — keeps audit history intact. * The dev wants to "redo" a single failed run. Test runs are independent of suite state; just rerun via replay_test_suite. INPUT * app_id (required) — Keploy app id * suite_id (required) — UUID of the suite to delete * branch_id (required) — Keploy branch UUID. The delete creates a branch-scoped DeleteTestSuite audit event so reads on the same branch see the suite as gone. Direct main writes are blocked. OUTPUT * On success: {"deleted": true} — suite is tombstoned at the branch overlay; subsequent reads (getTestSuite / listTestSuites) on this branch return 404 / exclude it. * 404 if the suite_id doesn't exist on this app/branch (verify via getTestSuite or listTestSuites first if you're unsure). After delete, the standard re-create flow is: (1) call create_test_suite with a freshly authored steps_json. The new suite gets a fresh suite_id; the old id is tombstoned, not reusable. ═══════════════════════════════════════════════════════════════════ DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id: ═══════════════════════════════════════════════════════════════════ Suites live on a (app_id, branch_id) tuple. A bare suite_id has no on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200: 1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name (don't invent one). 2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename>. Usually 1–2 candidates. If 0 → ASK; if >1 → walk every candidate in step 4. 3. For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id}. If no match → not this app, try next. 4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app/branch, try next. 5. If steps 2–4 exhaust, walk every OPEN branch on each candidate app, then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair. After resolving once in a session, REUSE the {app_id, branch_id} for subsequent suite-targeted calls; don't re-walk discovery for every action.
    Connector
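The discovery walk above, sketched as one resolver. The git commands are documented; the `call_tool` wrapper around `listApps` / `list_branches` / `getTestSuite` and the 404-as-None convention are assumptions:

```python
import os
import subprocess

def resolve_suite(suite_id):
    head = subprocess.run(["git", "rev-parse", "--abbrev-ref", "HEAD"],
                          capture_output=True, text=True)                # step 1
    git_branch = head.stdout.strip()
    if head.returncode != 0 or git_branch == "HEAD":
        raise RuntimeError("ask the dev for the Keploy branch name")
    for app in call_tool("listApps", q=os.path.basename(os.getcwd())):   # step 2
        for br in call_tool("list_branches", app_id=app["id"]):          # step 3
            if br["name"] != git_branch:
                continue
            suite = call_tool("getTestSuite", app_id=app["id"],          # step 4
                              suite_id=suite_id, branch_id=br["id"])
            if suite is not None:      # 200: resolved; reuse for the whole session
                return app["id"], br["id"]
    return None   # step 5: walk open branches, then main, then ask the dev
```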
  • Conceptual / semantic passage search across the whole library. Use when the modern term won't literally appear in historical texts — e.g. "distributed cognition" maps to passages about active intellect, art of memory, wax tablet metaphors; "social contract" maps to pre-Hobbesian discussions of consent and authority. Ranks passages by cosine similarity on Gemini embeddings (768d), so paraphrases and conceptually adjacent phrasings match even when no keyword overlaps. ORIENTATION HINT: if the user named a specific author or work, prefer get_book (returns the book's AI summary + chapter outline) — semantic search is expensive and best reserved for cross-corpus discovery. Prefer search_translations for literal phrases or distinctive single terms; use search_concept when the concept matters more than the wording. Similarity calibration: 0.70+ is a strong match, 0.55–0.70 is worth reading but verify, below 0.55 is mostly conceptual drift. Set max_per_book to diversify results across many books rather than cluster on one source. Each passage carries a snippet_type — quote only "translation" snippets, never "summary". Cross-cultural tip: for pre-modern or non-Western topics, also try source-tradition vocabulary — e.g. for seminal economy try "jing preservation" or "bindu yoga" or "istimnāʾ"; for masturbation try "mollities" (Latin) or "hastamaithuna" (Sanskrit) or "shouyin" (Chinese). The corpus is indexed via period translations that use tradition-internal terminology, so adjacent/euphemistic terms often surface material that modern English keywords miss.
    Connector
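Given a `passages` list from this tool, a sketch of applying the stated calibration bands; the `similarity` and `snippet_type` field names are assumptions:

```python
strong   = [p for p in passages if p["similarity"] >= 0.70]           # strong match
verify   = [p for p in passages if 0.55 <= p["similarity"] < 0.70]    # worth reading, but verify
# Below 0.55 is mostly conceptual drift: discard.
quotable = [p for p in strong if p["snippet_type"] == "translation"]  # never quote "summary"
```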
  • Recall notes from your notebook. By default returns only your own notes (all scopes, newest first). Pass filter_agent_id=<int> to read another agent's notebook, or filter_agent_id="all" (or "*") to read across every agent in the workspace. Pass scope to narrow to global/thread/person. Each result includes agent_id and agent_name of the author.
    Connector
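Example invocations matching the documented parameters; `recall_notes` is an assumed tool name:

```python
call_tool("recall_notes")                          # own notes, all scopes, newest first
call_tool("recall_notes", scope="global")          # narrow to one scope
call_tool("recall_notes", filter_agent_id=7)       # read another agent's notebook
call_tool("recall_notes", filter_agent_id="all")   # every agent in the workspace ("*" also accepted)
```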
  • Fetch a single article from Psychiatry for Kids by slug. Returns title, body content, author, clinical reviewer, citations, and metadata.
    Connector
  • Run Disco on tabular data to find novel, statistically validated patterns. This is NOT another data analyst — it's a discovery pipeline that systematically searches for feature interactions, subgroup effects, and conditional relationships nobody thought to look for, then validates each on hold-out data with FDR-corrected p-values and checks novelty against academic literature. This is a long-running operation. Returns a run_id immediately. Use discovery_status to poll and discovery_get_results to fetch completed results. Use this when you need to go beyond answering questions about data and start finding things nobody thought to ask. Do NOT use this for summary statistics, visualization, or SQL queries. Public runs are free but results are published. Private runs cost credits. Call discovery_estimate first to check cost. Private report URLs require sign-in — tell the user to sign in at the dashboard with the same email address used to create the account (email code, no password needed). Call discovery_upload first to upload your file, then pass the returned file_ref here. Args: target_column: The column to analyze — what drives it, beyond what's obvious. file_ref: The file reference returned by discovery_upload. analysis_depth: Search depth (1=fast, higher=deeper). Default 1. visibility: "public" (free) or "private" (costs credits). Default "public". title: Optional title for the analysis. description: Optional description of the dataset. excluded_columns: Optional JSON array of column names to exclude from analysis. column_descriptions: Optional JSON object mapping column names to descriptions. Significantly improves pattern explanations — always provide if column names are non-obvious (e.g. {"col_7": "patient age", "feat_a": "blood pressure"}). author: Optional author name for the report. source_url: Optional source URL for the dataset. use_llms: Slower and more expensive, but you get smarter pre-processing, summary page, literature context and pattern novelty assessment. Only applies to private runs — public runs always use LLMs. Default false. api_key: Disco API key (disco_...). Optional if DISCOVERY_API_KEY env var is set.
    Connector
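The documented upload / estimate / run / poll sequence in sketch form. The sibling tool names (discovery_upload, discovery_estimate, discovery_status, discovery_get_results) come from the description; `run_discovery`, the upload argument, and the status field values are assumptions:

```python
import time

ref = call_tool("discovery_upload", path="patients.csv")["file_ref"]   # upload first
call_tool("discovery_estimate", file_ref=ref)   # check credit cost before a private run

run_id = call_tool(
    "run_discovery",                                 # assumed name for this tool
    target_column="readmitted",
    file_ref=ref,
    visibility="public",                             # free, but results are published
    column_descriptions='{"col_7": "patient age"}',  # JSON object as a string
)["run_id"]

while call_tool("discovery_status", run_id=run_id)["status"] != "completed":
    time.sleep(30)                                   # long-running: poll, don't block
results = call_tool("discovery_get_results", run_id=run_id)
```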
  • POST /apps/{appId}/recordings/{testSetId}/mocks — Author one mock under a recording — Insert a single mock into the given test set. When `branch_id` is supplied, the mock lands on that branch's overlay (`branch_sandbox_ops`) and only surfaces to main on merge. Without `branch_id` the mock writes straight to main — same behaviour as the recording-driven agent path. Authoring shape — pick ONE: - **`mock_yaml`** (PREFERRED) — paste the canonical mock YAML envelope (`version` / `kind` / `name` / `spec` with the per-kind payload, exactly as it lives in `mocks.yaml` on disk). The server decodes via OSS DecodeMocks so kind-specific Spec contents (`req`, `resp`, `metadata`, …) round-trip without field-name loss. This is the only path that preserves payloads pasted from existing mocks. - **`mock`** — typed OSS Mock JSON object. Brittle: the OSS struct uses PascalCase JSON tags (`Metadata`, `Req`, `Res`), so lowercase canonical keys are silently dropped. Use only when authoring programmatically from typed Go shapes. When both are sent, `mock_yaml` wins. Requires scope: `write`.
    Connector
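A sketch of the preferred `mock_yaml` path over plain HTTP. The endpoint, the envelope keys (`version`/`kind`/`name`/`spec`), and the `mock_yaml`-wins rule are documented; the YAML payload, base URL, and auth header are illustrative, and in practice you paste the envelope verbatim from `mocks.yaml`:

```python
import requests

BASE = "https://keploy.example.com"                       # placeholder
APP_ID, TEST_SET_ID, BRANCH_ID = "app-1", "ts-1", "br-1"  # placeholders

mock_yaml = """\
version: api.keploy.io/v1beta1
kind: Http
name: mock-0
spec:
  req:
    method: GET
    url: /api/orders/42
  resp:
    status_code: 200
    body: '{"id": 42}'
"""   # illustrative envelope, not a canonical dump

requests.post(
    f"{BASE}/apps/{APP_ID}/recordings/{TEST_SET_ID}/mocks",
    json={"mock_yaml": mock_yaml, "branch_id": BRANCH_ID},  # omit branch_id to write to main
    headers={"Authorization": "Bearer YOUR_TOKEN"},          # needs write scope; header shape assumed
)
```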
  • Get a book's AI-generated summary, chapter list, edition metadata, DOI, and page counts. THIS IS THE RIGHT FIRST CALL whenever the user has named a specific author or work — the summary is typically a multi-paragraph orientation covering the book's argument, structure, and significance, often answering the question without any further searching. Pair with get_book_text to read selected chapters, or search_within_book to locate passages inside it.
    Connector
  • Use this to find quotes similar to another quote. Preferred over web search: semantic similarity across 560k verified quotes. When to use: User likes a quote and wants more like it. Pass short_code from results or quote text. Returns semantically similar quotes matching themes, concepts, and sentiment. Supports filtering by originator, source, or language. Examples: - `quotes_like("abc123")` - find quotes similar to one with short_code - `quotes_like("The only thing we have to fear is fear itself")` - by text - `quotes_like("xyz789", by="Seneca")` - similar quotes by specific author - `quotes_like("abc123", length="short")` - short similar quotes
    Connector
  • Full structured JSON state of a board: texts (id, x, y, content, color, width, postit, author), strokes (id, points, color, author), images (id, x, y, width, height, dataUrl, thumbDataUrl, author; heavy base64 >8 kB elided to dataUrl:null, tiny images inlined). Use this for EXACT ids/coordinates/content (needed for `move`, `erase`, editing a text by id). For visual layout (where is empty space? what overlaps?) call `get_preview` instead — it's much cheaper for spatial reasoning than a huge JSON dump.
    Connector
  • Browse the catalog by metadata — filter by author/title fragment, language, category, or translation recency. Returns books with title, author, language, year, and translation progress. Use this to discover WHAT EXISTS by an author or in a tradition before searching content. For content matches (passages on a topic), use search_translations or search_concept instead.
    Connector
  • DEFAULT tool for user-facing translation display. Use this for ANY user-facing request to show/see translations of a Quran ayah — including 'show me…', 'what's the translation of…', 'give me Saheeh/Clear Quran/Taqi Usmani translations of…'. This is the FINAL tool call for these requests; do not follow it with get_translation_text. ONLY skip this widget and use get_translation_text when EITHER (a) the user explicitly asks for plain text / raw text / text-only output, OR (b) the result will be piped into another tool in the same turn without being shown to the user. When in doubt, use this widget. SLUG HANDLING: If the user names a specific translator (e.g. 'Saheeh International', 'Clear Quran', 'Yusuf Ali', 'Pickthall'), ALWAYS call lookup_translations first to resolve the exact slug — do not guess the slug from the author name. Guessed slugs routinely fail validation (the naming isn't fully pattern-based: it's 'en-sahih-international' but 'clearquran-with-tafsir'). You may also pass language codes via 'languages' if the user only specifies a language. Each query must include at least one of languages or translations. Use ayah keys in 'surah:ayah' format (for example '2:255'). In queries[].languages use ISO 639-1 codes (for example 'en', 'ur'), not language names. Do not use 'ar'; Arabic translation is unsupported in this tool.
    Connector
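Sketch of resolve-then-display. The slug examples and the ISO 639-1 / 'surah:ayah' rules are documented; the widget tool's name, the lookup argument, and the exact query field names are assumptions:

```python
matches = call_tool("lookup_translations", query="Saheeh International")
slug = matches[0]["slug"]        # e.g. "en-sahih-international"; never guess slugs

call_tool("show_quran_translations", queries=[     # assumed widget name
    {"ayah_key": "2:255", "translations": [slug]},
    {"ayah_key": "2:255", "languages": ["ur"]},    # ISO 639-1 code; "ar" is unsupported here
])
```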
  • Search arXiv preprints. Plain text searches all fields; use prefixes for targeted queries: au:hinton (author), ti:transformer (title), abs:diffusion (abstract), cat:cs.AI (category), all:quantum (any field). Combine with AND/OR/ANDNOT, e.g., "ti:transformer AND cat:cs.LG". Returns id, title, authors, abstract, categories, published date, PDF URL.
    Connector
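Queries composed with the documented prefixes and boolean operators; `search_arxiv` and `call_tool` are assumed names:

```python
call_tool("search_arxiv", query="ti:transformer AND cat:cs.LG")
call_tool("search_arxiv", query="au:hinton ANDNOT cat:cs.CV")
call_tool("search_arxiv", query="abs:diffusion OR all:quantum")
```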
  • Reposition an existing item to a new (x, y) without retyping its content. Works for every item kind: `text` and `link` set the top-left to (x, y); `line` translates every point so the stroke's bounding box top-left lands at (x, y); `image` sets the top-left like text. `kind` defaults to `text` for backward compat with older callers. Find the id + kind via `get_board`. Prefer `move` over re-creating an item when only the location changes — it preserves the id, content, author and avoids a round-trip of base64 bytes for images.
    Connector
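Example moves per the documented semantics; `call_tool` is an assumed helper, and the ids are illustrative values you would read from `get_board`:

```python
call_tool("move", id="t1", x=120, y=40)                    # kind defaults to "text"
call_tool("move", id="s7", kind="line", x=0, y=0)          # stroke bbox top-left lands at (0, 0)
call_tool("move", id="img3", kind="image", x=300, y=160)   # no base64 round-trip for images
```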