133,917 tools. Last updated 2026-05-15 18:18
"How to access and read a local SQL database" matching MCP tools:
- FOR CLAUDE DESKTOP ONLY (with filesystem access). For Claude.ai/web: use create_upload_session instead - it provides a browser upload link. Uploads local media to cloud storage, returning a public HTTPS URL.
  WHEN TO USE: • Instagram, LinkedIn, Threads, X: REQUIRED for local files before calling publish_content • TikTok: NOT NEEDED - pass the local path directly to publish_content.
  SUPPORTED FORMATS: • Images: jpg, png, gif, webp (max 10MB) • Videos: mp4, mov, webm (max 100MB).
  Returns { url: 'https://...' } for use in the publish_content mediaUrl parameter.
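  A minimal Python sketch of the upload-then-publish flow this entry describes. `call_tool` is an assumed MCP-client helper, and `upload_media` is a placeholder (the entry does not state the upload tool's identifier); only `publish_content` and the `mediaUrl` parameter come from the entry:
  ```
  # call_tool is an assumed MCP-client helper: (tool_name, args) -> result dict.
  # "upload_media" is a placeholder for this upload tool's real name.
  def publish_local_image(call_tool, image_path: str, platform: str = "instagram") -> dict:
      upload = call_tool("upload_media", {"path": image_path})  # returns { "url": "https://..." }
      return call_tool("publish_content", {
          "platform": platform,
          "mediaUrl": upload["url"],  # TikTok instead takes the local path directly
      })
  ```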
- Retrieves authoritative documentation directly from the framework's official repository.
  ## When to Use
  **Called during i18n_checklist Steps 1-13.** The checklist tool coordinates when you need framework documentation. Each step will tell you whether you need to fetch docs and which sections to read. If you're implementing i18n, let the checklist guide you; don't call this independently.
  ## Why This Matters
  Your training data is a snapshot. Framework APIs evolve. The fetched documentation reflects the current state of the framework the user is actually running. Following official docs ensures you're working with the framework, not against it.
  ## How to Use
  **Two-Phase Workflow:**
  1. **Discovery** - Call with action="index" to see available sections
  2. **Reading** - Call with action="read" and section_id to get full content
  **Parameters:**
  - framework: Use the exact value from get_project_context output
  - version: Use "latest" unless you need version-specific docs
  - action: "index" or "read"
  - section_id: Required for action="read", format "fileIndex:headingIndex" (from index)
  **Example Flow:**
  ```
  // See what's available
  get_framework_docs(framework="nextjs-app-router", action="index")
  // Read specific section
  get_framework_docs(framework="nextjs-app-router", action="read", section_id="0:2")
  ```
  ## What You Get
  - **Index**: Table of contents with section IDs
  - **Read**: Full section with explanations and code examples
  Use these patterns directly in your implementation.
- Switch between local and remote DanNet servers on the fly. This tool allows you to change the DanNet server endpoint during runtime without restarting the MCP server. Useful for switching between development (local) and production (remote) servers.
  Args:
    server: Server to switch to. Options:
      - "local": Use localhost:3456 (development server)
      - "remote": Use wordnet.dk (production server)
      - Custom URL: Any valid URL starting with http:// or https://
  Returns: Dict with status information:
    - status: "success" or "error"
    - message: Description of the operation
    - previous_url: The URL that was previously active
    - current_url: The URL that is now active
  Example:
    # Switch to local development server
    result = switch_dannet_server("local")
    # Switch to production server
    result = switch_dannet_server("remote")
    # Switch to custom server
    result = switch_dannet_server("https://my-custom-dannet.example.com")
- Read-only. Use to find workflows in a project by name, description, or trigger type before inspection or editing. Trigger filters include database, auth email, repeating, broadcast, and no-trigger workflows. Returns paginated workflow summaries, published/sandbox state, trigger type, workflow URLs, totalCount, hasMore, and nextOffset. Do not treat these summaries as the final source of truth before editing; call get_workflow_and_preview_url for the full structure.
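  A Python sketch of consuming the pagination fields. The tool name `search_workflows` and the `query`/`offset`/`workflows` keys are assumptions; only `hasMore` and `nextOffset` come from the entry:
  ```
  # Placeholder tool name and request/response keys, except hasMore/nextOffset (from the entry).
  def all_workflow_summaries(call_tool, query: str) -> list:
      summaries, offset = [], 0
      while True:
          page = call_tool("search_workflows", {"query": query, "offset": offset})
          summaries.extend(page["workflows"])
          if not page["hasMore"]:
              return summaries
          offset = page["nextOffset"]
  ```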
- Create a new sncro session. Returns a session key and secret.
  Args:
    project_key: The project key from CLAUDE.md (registered at sncro.net)
    git_user: The current git username (for guest access control). If omitted or empty, the call is treated as a guest session — allowed only when the project owner has "Allow guest access" enabled.
    brief: If True, skip the first-run briefing (tool list, tips, mobile notes) and return a compact response. Pass this on the second and subsequent create_session calls in the same conversation, once you already know how to use the tools.
  After calling this, tell the user to paste the enable_url in their browser. Then use the returned session_key and session_secret with all other sncro tools.
  If no project key is available, tell the user to go to https://www.sncro.net/projects to register their project and get a key. It takes 30 seconds: sign in with GitHub, click "+ Add project", enter the domain, and copy the project key into CLAUDE.md.
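  A minimal Python sketch of the bootstrap sequence; `call_tool` is an assumed MCP-client helper, while the tool name and argument names come from the entry:
  ```
  # Argument names follow the Args list above; call_tool is an assumed helper.
  def start_sncro_session(call_tool, project_key: str, git_user: str, first_call: bool = True) -> dict:
      session = call_tool("create_session", {
          "project_key": project_key,
          "git_user": git_user,     # empty/omitted means a guest session
          "brief": not first_call,  # skip the briefing on repeat calls
      })
      # The user must open session["enable_url"] in a browser before other tools work.
      return session
  ```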
- REQUIRED before stock_data_query: 18 SQL patterns that prevent timeouts and wrong results. Must be called once per session, immediately after get_database_schema. Contains query patterns for time-series selection, return calculations, screening joins, window functions, backtesting, and performance optimization. Time-series queries will time out or return wrong results without these patterns. After this tool returns, call stock_data_query to execute SQL.
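  The required call order, sketched in Python. `call_tool` is an assumed helper and `get_query_patterns` is a placeholder for this tool's real name; the other two tool names come from the entry:
  ```
  # call_tool is an assumed MCP-client helper: (tool_name, args) -> result dict.
  def run_stock_query(call_tool, sql: str):
      call_tool("get_database_schema", {})  # 1. schema first
      call_tool("get_query_patterns", {})   # 2. this tool (placeholder name), once per session
      return call_tool("stock_data_query", {"sql": sql})  # 3. then execute SQL

  # Table/column names below are illustrative only:
  # run_stock_query(call_tool, "SELECT date, close FROM prices WHERE symbol = 'AAPL' LIMIT 30")
  ```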
Matching MCP Servers
- (license quality: A, maintenance: D, MIT license) Enables AI interaction with Microsoft Access and SQLite databases to perform SQL queries, updates, and data management. It supports importing and exporting data via CSV and Excel files and includes tools for managing human-readable notes about database files.
- (license quality: A, maintenance: C, MIT license) Self-hosted credential store and API proxy for AI agents. One Bearer token, all your services. Handles OAuth refresh, encrypted storage, audit logging, and per-agent permissioning.
Matching MCP Connectors
Transform any blog post or article URL into ready-to-post social media content for Twitter/X threads, LinkedIn posts, Instagram captions, Facebook posts, and email newsletters. Pay-per-event: $0.07 for all 5 platforms, $0.03 for single platform.
Access comprehensive company data including financial records, ownership structures, and contact information. Search for businesses using domains, registration numbers, or LinkedIn profiles to streamline due diligence and lead generation. Retrieve historical financial performance and complex corporate group structures to support informed business analysis.
- Execute any valid read-only SQL statement on a Cloud SQL instance.
  To support the `execute_sql_readonly` tool, a Cloud SQL instance must meet the following requirements:
  * The value of `data_api_access` must be set to `ALLOW_DATA_API`.
  * For a MySQL instance, the database flag `cloudsql_iam_authentication` must be set to `on`. For a PostgreSQL instance, the database flag `cloudsql.iam_authentication` must be set to `on`.
  * An IAM user account or IAM service account (`CLOUD_IAM_USER` or `CLOUD_IAM_SERVICE_ACCOUNT`) is required to call the `execute_sql_readonly` tool. The tool executes the SQL statements using the privileges of the database user logged in with IAM database authentication. After you use the `create_instance` tool to create an instance, you can use the `create_user` tool to create an IAM user account for the user currently logged in to the project.
  The `execute_sql_readonly` tool has the following limitations:
  * If a SQL statement returns a response larger than 10 MB, the response is truncated.
  * The tool has a default timeout of 30 seconds. If a query runs longer than 30 seconds, the tool returns a `DEADLINE_EXCEEDED` error.
  * The tool isn't supported for SQL Server.
  If you receive errors similar to "IAM authentication is not enabled for the instance", use the `get_instance` tool to check the value of the IAM database authentication flag for the instance. If you receive errors like "The instance doesn't allow using executeSql to access this instance", use the `get_instance` tool to check the `data_api_access` setting.
  When you receive authentication errors:
  1. Check whether the currently logged-in user account exists as an IAM user on the instance using the `list_users` tool.
  2. If the IAM user account doesn't exist, use the `create_user` tool to create the IAM user account for the logged-in user.
  3. If the currently logged-in user doesn't have the proper database user roles, use the `update_user` tool to grant database roles to the user. For example, the `cloudsqlsuperuser` role can provide an IAM user with many required permissions.
  4. Check whether the currently logged-in user has the correct IAM permissions assigned for the project. You can use the `gcloud projects get-iam-policy [PROJECT_ID]` command to check whether the user has the proper IAM roles or permissions assigned for the project.
  * The user must have the `cloudsql.instances.login` permission to do automatic IAM database authentication.
  * The user must have the `cloudsql.instances.executeSql` permission to execute SQL statements using the `execute_sql_readonly` tool or the `executeSql` API.
  * Common IAM roles that contain the required permissions: Cloud SQL Instance User (`roles/cloudsql.instanceUser`) or Cloud SQL Admin (`roles/cloudsql.admin`).
  When receiving an `ExecuteSqlResponse`, always check the `message` and `status` fields within the response body. A successful HTTP status code doesn't guarantee full success of all SQL statements. The `message` and `status` fields will indicate whether there were any partial errors or warnings during SQL statement execution.
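  A Python sketch of the response check the last paragraph calls for, assuming the `ExecuteSqlResponse` body has been parsed into a dict; the exact success value of `status` is an assumption:
  ```
  # Illustrative only: treats the ExecuteSqlResponse body as a parsed dict.
  def check_execute_sql_response(response: dict) -> None:
      status = response.get("status")   # per the entry, never trust HTTP status alone
      message = response.get("message")
      if status and status != "OK":     # "OK" as the success value is an assumption
          raise RuntimeError(f"executeSql reported {status}: {message}")
      if message:
          print(f"executeSql warning: {message}")
  ```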
- Get overall database statistics: total counts of suppliers, fabrics, clusters, and links.
  USE WHEN user asks:
  - "how big is your database" / "what's the coverage" / "data overview"
  - "how many suppliers / fabrics / clusters do you have"
  - "database size / scale / freshness"
  - "is the data up to date"
  - "live counts for MRC data"
  - "first-time onboarding: 'what can MRC data do for me'"
  - "how big is the database / how much data is there / how many suppliers are covered" (Chinese triggers, translated)
  - "your data scale / data volume / freshness" (Chinese triggers, translated)
  WORKFLOW: Standalone discovery tool — call this first when a user asks about data scale or freshness. Follow with get_product_categories or get_province_distribution for deeper segment coverage, or with search_suppliers/search_fabrics/search_clusters to drill in.
  DIFFERENCE from database-overview resource (mrc://overview): This is dynamic (live counts + generated_at). The resource is static (geographic scope, top provinces, data standards).
  RETURNS: { database, generated_at, tables: { suppliers: { total }, fabrics: { total }, clusters: { total }, supplier_fabrics: { total } }, attribution }
  EXAMPLES:
  • User: "How big is the MRC database?" → get_stats({})
  • User: "Give me the latest data scale numbers" → get_stats({})
  • User: "How many suppliers and fabrics does the MRC database have?" (Chinese, translated) → get_stats({})
  ERRORS & SELF-CORRECTION:
  • All counts 0 → database query failed or D1 binding lost. Retry once after 5 seconds. If still 0, surface a transport error to the user.
  • Rate limit 429 → wait 60 seconds; do not retry immediately.
  AVOID: Do not call this before every tool — only when the user explicitly asks about scale. Do not call it to get per-category counts — use get_product_categories. Do not call it to get geographic scope metadata — use the database-overview resource (mrc://overview), which is static.
  NOTE: Only verified and partially_verified records are reported. Unverified reserve data is excluded from counts. Source: MRC Data (meacheal.ai).
  (Chinese summary, translated): Get overall database statistics (total suppliers, fabrics, industry clusters, and link records). Dynamic snapshot with a generation timestamp.
- List all Argo campaigns the current grant token has access to, including the access level ("read" or "read+write") for each. Call this first when the user has not provided a campaign ID — it returns all campaign IDs and names you can then use with other tools. In responses, refer to campaigns by campaignName — never expose the id field to the user.
- Create a new API key with specified scopes. Cannot create keys with higher scopes than the current key. Site-scoped keys restrict access to a single site. Requires: API key with write scope.
  Args:
    name: Human-readable name for the key (1-100 chars)
    scopes: Comma-separated scopes. Options: "read", "read,write", "read,write,admin". Default: "read"
    site_slug: Optional — restrict the key to a single site. Omit for account-wide access.
  Returns: {"api_key": "bh_...", "key_id": "uuid", "prefix": "bh_...", "name": "My Key", "scopes": ["read", "write"], "message": "Store this API key securely — it will not be shown again."}
  Errors:
    VALIDATION_ERROR: Invalid name or scopes, or the limit of 25 active keys has been reached
    FORBIDDEN: Cannot create keys with higher scopes than the current key
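  A Python sketch of creating a site-scoped read key. The tool name `create_api_key` is a placeholder; argument and return fields follow the entry:
  ```
  # Placeholder tool name; call_tool is an assumed MCP-client helper.
  def create_site_read_key(call_tool, site_slug: str) -> str:
      result = call_tool("create_api_key", {
          "name": f"CI read-only key for {site_slug}",
          "scopes": "read",        # comma-separated string, not a list
          "site_slug": site_slug,  # omit for an account-wide key
      })
      # The full key is returned exactly once; the caller must persist it immediately.
      return result["api_key"]
  ```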
- Execute arbitrary JS in the project's isolate runtime. The SDK is pre-imported into local scope — `db`, `auth`, `email`, `storage`, `ai`, `agent`, `cache`, `knowledge`, `memory`, `tasks`, `scheduler`, `browser`, `run`, `approval` are ready to use without import. `process.env` and global `fetch` also work. `return` to produce the `result` field. Top-level `import` and dynamic `import('hatchable')` are NOT supported in this REPL — the bindings above are how you reach the SDK. Use this as a REPL: probe the database, verify a computation, test an API shape before committing it to a file. Nothing is persisted — the snippet runs once and disappears. Caps: 5s default timeout (max 30s), 256 KB max source length.
  Example:
    run_code({ project_id, code: `
      const { rows } = await db.query("SELECT count(*) FROM users");
      return rows[0];
    `})
- Return the full list of currently unreliable Czech VAT payers from ADIS. WARNING: response can be 50–100 MB (tens of thousands of entries). Intended for daily mirroring into a local database, not for ad-hoc inspection. For "is this specific company unreliable?" use check_dph_payer instead.
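  Given this section's topic, a Python sketch of the daily mirror into a local SQLite database. The per-entry field names (`dic`, `status`) are assumptions about the ADIS payload shape:
  ```
  import sqlite3

  # Field names ("dic", "status") are assumptions about the payload shape.
  def mirror_unreliable_payers(payers: list[dict], db_path: str = "vat_mirror.db") -> None:
      con = sqlite3.connect(db_path)
      con.execute("CREATE TABLE IF NOT EXISTS unreliable_payers (dic TEXT PRIMARY KEY, status TEXT)")
      con.execute("DELETE FROM unreliable_payers")  # full refresh: the tool returns the whole list
      con.executemany(
          "INSERT INTO unreliable_payers (dic, status) VALUES (:dic, :status)",
          payers,
      )
      con.commit()
      con.close()
  ```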
- Import data into a Cloud SQL instance. If the file doesn't start with `gs://`, the assumption is that the file is stored locally. If the file is local, it must be uploaded to Cloud Storage before you can make the actual `import_data` call. To upload the file to Cloud Storage, you can use the `gcloud` or `gsutil` commands. Before you upload the file, consider whether you want to use an existing bucket or create a new bucket in the provided project.
  After the file is uploaded to Cloud Storage, the instance service account must have sufficient permissions to read the uploaded file from the Cloud Storage bucket. This can be accomplished as follows:
  1. Use the `get_instance` tool to get the email address of the instance service account. From the output of the tool, get the value of the `serviceAccountEmailAddress` field.
  2. Grant the instance service account the `storage.objectAdmin` role on the provided Cloud Storage bucket. Use a command like `gcloud storage buckets add-iam-policy-binding` or a request to the Cloud Storage API. It can take from two to seven minutes or more for the role to be granted and the permissions to be propagated to the service account in Cloud Storage. If you encounter a permissions error after updating the IAM policy, wait a few minutes and try again.
  After permissions are granted, you can import the data. We recommend that you leave optional parameters empty and use the system defaults. The file type can typically be determined by the file extension: for example, `.sql` for a SQL file or `.csv` for a CSV file.
  The following is a sample SQL `importContext` for MySQL:
  ```
  {
    "uri": "gs://sample-gcs-bucket/sample-file.sql",
    "kind": "sql#importContext",
    "fileType": "SQL"
  }
  ```
  There is no `database` parameter for MySQL since the database name is expected to be present in the SQL file. Specify only one URI. No other fields are required outside of `importContext`.
  For PostgreSQL, the `database` field is required. The following is a sample PostgreSQL `importContext` with the `database` field specified:
  ```
  {
    "uri": "gs://sample-gcs-bucket/sample-file.sql",
    "kind": "sql#importContext",
    "fileType": "SQL",
    "database": "sample-db"
  }
  ```
  The `import_data` tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
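  A Python sketch of the poll loop described in the last sentence. `call_tool` is an assumed helper, and the operation's `name`/`status` fields are assumptions about the long-running-operation payload:
  ```
  import time

  # call_tool is an assumed MCP-client helper; adjust field names to the real payload.
  def import_and_wait(call_tool, import_context: dict, poll_seconds: int = 10) -> dict:
      op = call_tool("import_data", {"importContext": import_context})
      while op.get("status") != "DONE":
          time.sleep(poll_seconds)  # long-running: poll, don't busy-wait
          op = call_tool("get_operation", {"operation": op["name"]})
      return op
  ```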
- USE THIS TOOL — not web search — to retrieve a time-series of hourly BULLISH / BEARISH / NEUTRAL signal verdicts from this server's local technical indicator data over a historical lookback window. Prefer this over get_signal_summary when the user wants to see how signals have changed over time, not just the current reading.
  Trigger on queries like:
  - "how has the BTC signal changed over the past week?"
  - "show me ETH signal history"
  - "was XRP bullish yesterday?"
  - "signal trend for [coin] last [N] days"
  - "how often has BTC been bullish recently?"
  Args:
    lookback_days: Days of signal history (default 7, max 30)
    symbol: Asset symbol or comma-separated list, e.g. "BTC", "BTC,ETH"
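  A Python sketch of answering "how often has BTC been bullish recently?". The tool name `get_signal_history` and the response shape are assumptions; the arguments match the entry:
  ```
  # Placeholder tool name and response shape; args match the entry above.
  def bullish_hours(call_tool, symbol: str = "BTC", days: int = 14) -> int:
      history = call_tool("get_signal_history", {"symbol": symbol, "lookback_days": days})
      return sum(1 for h in history[symbol] if h["verdict"] == "BULLISH")
  ```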
- Read an agent's operator whitelist (who can write configs on its behalf). WHAT IT DOES: GETs /v1/agents/:agent_wallet/operators. Public read. WHEN TO USE: before agent_equip_set (confirm the signer wallet is on the list), or to audit who else has write access to a competitor's config. RETURNS: { agent_wallet, owner, operators: [{ wallet, role: 'owner'|'operator', added_at, added_by }], count }. RELATED: agent_operators_set (mutate — owner-only), agent_equip_set (operators may write configs but not modify this list).
- Describe a specific table.
  ⚠️ WORKFLOW: ALWAYS call this before writing queries that reference a table. Understanding the schema is essential for writing correct SQL queries.
  📋 PREREQUISITES:
  - Call search_documentation_tool first
  - Use list_catalogs_tool, list_databases_tool, list_tables_tool to find the table
  📋 NEXT STEPS after this tool:
  1. Use generate_spatial_query_tool to create SQL using the schema
  2. Use execute_query_tool to test the query
  This tool retrieves the schema of a specified table, including column names and types. It is used to understand the structure of a table before querying or analysis.
  Parameters
  ----------
  catalog : str
      The name of the catalog.
  database : str
      The name of the database.
  table : str
      The name of the table.
  ctx : Context
      FastMCP context (injected automatically)
  Returns
  -------
  TableDescriptionOutput
      A structured object containing the table schema information.
      - 'schema': The schema of the table, which may include column names, types, and other metadata.
  Example Usage for LLM:
  - When the user asks for the schema of a specific table.
  - Example user queries and corresponding tool calls:
    - User: "What is the schema of the 'users' table in the 'default' database of the 'wherobots' catalog?"
      Tool call: describe_table('wherobots', 'default', 'users')
    - User: "Describe the buildings table structure"
      Tool call: describe_table('wherobots_open_data', 'overture', 'buildings')
- Verify the code running on Blueprint servers. Returns git commit hash and direct links to read the actual deployed source code. Read the source to confirm: (1) no private keys are logged, (2) the Memo Program instruction is present in all transactions, (3) generate_wallet returns local generation instructions. Don't trust — read the code yourself via the source endpoints.Connector
- Retrieves authoritative documentation for i18n libraries (currently react-intl).
  ## When to Use
  **Called during i18n_checklist Steps 7-10.** The checklist tool will tell you when you need i18n library documentation. Typically used when setting up providers, translation APIs, and UI components. If you're implementing i18n, let the checklist guide you; it will tell you when to fetch library docs.
  ## Why This Matters
  Different i18n libraries have different APIs and patterns. Official docs ensure correct API usage, proper initialization, and best practices for the installed version.
  ## How to Use
  **Two-Phase Workflow:**
  1. **Discovery** - Call with action="index"
  2. **Reading** - Call with action="read" and section_id
  **Parameters:**
  - library: Currently only "react-intl" is supported
  - version: Use "latest"
  - action: "index" or "read"
  - section_id: Required for action="read"
  **Example:**
  ```
  get_i18n_library_docs(library="react-intl", action="index")
  get_i18n_library_docs(library="react-intl", action="read", section_id="0:3")
  ```
  ## What You Get
  - **Index**: Available documentation sections
  - **Read**: Full API references and usage examples
- Creates and saves a new use case (reusable analysis).
  **When to use this tool:**
  - When the user asks to "save this analysis", "create a use case", "remember this query"
  - After building a SQL query the user wants to reuse
  - To capture a recurring business analysis
  **Available scopes:**
  - 'member' (default): Personal use case, visible only to you
  - 'project': Shared with the entire project team (requires project_id)
  **Best practices:**
  - Slug: technical identifier in snake_case (e.g., weekly_campaign_performance)
  - Name: human-readable name (e.g., "Weekly Campaign Performance")
  - Description: explain the business context and when to use this analysis
  - SQL template: include the SQL query if it's generic and reusable
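  A Python sketch of saving a project-scoped use case. The tool name `create_use_case` and the exact field keys are assumptions; the values follow the best-practices list above:
  ```
  # Placeholder tool name and field keys; call_tool is an assumed MCP-client helper.
  def save_weekly_campaign_use_case(call_tool, project_id: str) -> dict:
      return call_tool("create_use_case", {
          "slug": "weekly_campaign_performance",
          "name": "Weekly Campaign Performance",
          "description": "Weekly spend review for the growth team; run every Monday.",
          "sql_template": "SELECT campaign, SUM(spend) AS spend FROM ads GROUP BY campaign",
          "scope": "project",        # 'member' (default) would keep it personal
          "project_id": project_id,  # required for 'project' scope
      })
  ```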
- Execute a read-only SQL query against the target connection. ONLY SELECT / WITH / EXPLAIN permitted. Write dialect-appropriate SQL for the connection's engine — use PostgreSQL syntax for postgres connections (`SELECT NOW()`, `LIMIT`, `ILIKE`), T-SQL for mssql (`SELECT GETDATE()`, `TOP N`, `LIKE`), MySQL for mysql (`SELECT NOW()`, `LIMIT`). Response meta includes `connection` + `dialect` so you know which syntax worked; reuse that dialect in follow-up calls. Default LIMIT 100 unless the user asks for all rows.
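  A Python sketch of reusing the reported dialect across follow-up calls. The tool name `execute_query` and the exact meta layout are assumptions; the dialect-specific SQL comes from the entry:
  ```
  # Placeholder tool name; per the entry, response meta echoes connection + dialect.
  def current_db_time(call_tool):
      probe = call_tool("execute_query", {"sql": "SELECT 1"})
      dialect = probe["meta"]["dialect"]
      sql = {
          "postgres": "SELECT NOW()",
          "mysql": "SELECT NOW()",
          "mssql": "SELECT GETDATE()",  # T-SQL
      }[dialect]
      return call_tool("execute_query", {"sql": sql})
  ```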