Open-Source MCP Tools

Production-ready and experimental MCP tools that extend AI capabilities through file access, database connections, API integrations, and other contextual services.

5,632 tools. Last updated 2025-03-25 04:04
  • # Instructions 1. Query Axiom datasets using Axiom Processing Language (APL). The query must be a valid APL query string. 2. ALWAYS get the schema of the dataset before running queries rather than guessing. You can do this by getting a single event and projecting all fields. 3. Keep in mind that there's a maximum row limit of 65000 rows per query. 4. Prefer aggregations over non-aggregating queries when possible to reduce the amount of data returned. 5. Be selective in what you project in each query (unless otherwise needed, like for discovering the schema). It's expensive to project all fields. 6. ALWAYS restrict the time range of the query to the smallest possible range that meets your needs. This will reduce the amount of data scanned and improve query performance. 7. NEVER guess the schema of the dataset. If you don't know where something is, use search first to find in which fields it appears. # Examples Basic: - Filter: ['logs'] | where ['severity'] == "error" or ['duration'] > 500ms - Time range: ['logs'] | where ['_time'] > ago(2h) and ['_time'] < now() - Project rename: ['logs'] | project-rename responseTime=['duration'], path=['url'] Aggregations: - Count by: ['logs'] | summarize count() by bin(['_time'], 5m), ['status'] - Multiple aggs: ['logs'] | summarize count(), avg(['duration']), max(['duration']), p95=percentile(['duration'], 95) by ['endpoint'] - Dimensional: ['logs'] | summarize dimensional_analysis(['isError'], pack_array(['endpoint'], ['status'])) - Histograms: ['logs'] | summarize histogram(['responseTime'], 100) by ['endpoint'] - Distinct: ['logs'] | summarize dcount(['userId']) by bin_auto(['_time']) Search & Parse: - Search all: search "error" or "exception" - Parse logs: ['logs'] | parse-kv ['message'] as (duration:long, error:string) with (pair_delimiter=",") - Regex extract: ['logs'] | extend errorCode = extract("error code ([0-9]+)", 1, ['message']) - Contains ops: ['logs'] | where ['message'] contains_cs "ERROR" or ['message'] startswith "FATAL" Data Shaping: - Extend & Calculate: ['logs'] | extend duration_s = ['duration']/1000, success = ['status'] < 400 - Dynamic: ['logs'] | extend props = parse_json(['properties']) | where ['props.level'] == "error" - Pack/Unpack: ['logs'] | extend fields = pack("status", ['status'], "duration", ['duration']) - Arrays: ['logs'] | where ['url'] in ("login", "logout", "home") | where array_length(['tags']) > 0 Advanced: - Make series: ['metrics'] | make-series avg(['cpu']) default=0 on ['_time'] step 1m by ['host'] - Join: ['errors'] | join kind=inner (['users'] | project ['userId'], ['email']) on ['userId'] - Union: union ['logs-app*'] | where ['severity'] == "error" - Fork: ['logs'] | fork (where ['status'] >= 500 | as errors) (where ['status'] < 300 | as success) - Case: ['logs'] | extend level = case(['status'] >= 500, "error", ['status'] >= 400, "warn", "info") Time Operations: - Bin & Range: ['logs'] | where ['_time'] between(datetime(2024-01-01)..now()) - Multiple time bins: ['logs'] | summarize count() by bin(['_time'], 1h), bin(['_time'], 1d) - Time shifts: ['logs'] | extend prev_hour = ['_time'] - 1h String Operations: - String funcs: ['logs'] | extend domain = tolower(extract("://([^/]+)", 1, ['url'])) - Concat: ['logs'] | extend full_msg = strcat(['level'], ": ", ['message']) - Replace: ['logs'] | extend clean_msg = replace_regex("(password=)[^&]*", "\1***", ['message']) Common Patterns: - Error analysis: ['logs'] | where ['severity'] == "error" | summarize error_count=count() by ['error_code'], ['service'] - 
Status codes: ['logs'] | summarize requests=count() by ['status'], bin_auto(['_time']) | where ['status'] >= 500 - Latency tracking: ['logs'] | summarize p50=percentile(['duration'], 50), p90=percentile(['duration'], 90) by ['endpoint'] - User activity: ['logs'] | summarize user_actions=count() by ['userId'], ['action'], bin(['_time'], 1h)
    JavaScript
    MIT License
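A minimal sketch of how a client might follow the Axiom query guidelines above: probe the schema with a single time-bounded event first, then run a narrow aggregation instead of pulling raw rows. The dataset name `logs`, the `limit 1` probe, and the `run_apl_query` helper are illustrative assumptions, not part of the tool's documented interface.

```python
def run_apl_query(apl: str) -> None:
    """Placeholder for whatever mechanism actually submits the APL string to the tool."""
    print(apl)

# 1. Discover the schema by fetching one recent event (assumed probe, not from the tool docs).
schema_probe = "['logs'] | where ['_time'] > ago(1h) | limit 1"

# 2. Aggregate over a small time range using only operators shown in the examples above.
error_breakdown = (
    "['logs'] "
    "| where ['_time'] > ago(30m) and ['severity'] == \"error\" "
    "| summarize count() by bin(['_time'], 5m), ['service']"
)

run_apl_query(schema_probe)
run_apl_query(error_breakdown)
```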
  • A detailed tool for dynamic and reflective problem-solving through thoughts. This tool helps analyze problems through a flexible thinking process that can adapt and evolve. Each thought can build on, question, or revise previous insights as understanding deepens. IMPORTANT: When initializing this tool, you must pass all available tools that you want the sequential thinking process to be able to use. The tool will analyze these tools and provide recommendations for their use. When to use this tool: - Breaking down complex problems into steps - Planning and design with room for revision - Analysis that might need course correction - Problems where the full scope might not be clear initially - Problems that require a multi-step solution - Tasks that need to maintain context over multiple steps - Situations where irrelevant information needs to be filtered out - When you need guidance on which tools to use and in what order Key features: - You can adjust total_thoughts up or down as you progress - You can question or revise previous thoughts - You can add more thoughts even after reaching what seemed like the end - You can express uncertainty and explore alternative approaches - Not every thought needs to build linearly - you can branch or backtrack - Generates a solution hypothesis - Verifies the hypothesis based on the Chain of Thought steps - Recommends appropriate tools for each step - Provides rationale for tool recommendations - Suggests tool execution order and parameters - Tracks previous recommendations and remaining steps Parameters explained: - thought: Your current thinking step, which can include: * Regular analytical steps * Revisions of previous thoughts * Questions about previous decisions * Realizations about needing more analysis * Changes in approach * Hypothesis generation * Hypothesis verification * Tool recommendations and rationale - next_thought_needed: True if you need more thinking, even if at what seemed like the end - thought_number: Current number in sequence (can go beyond initial total if needed) - total_thoughts: Current estimate of thoughts needed (can be adjusted up/down) - is_revision: A boolean indicating if this thought revises previous thinking - revises_thought: If is_revision is true, which thought number is being reconsidered - branch_from_thought: If branching, which thought number is the branching point - branch_id: Identifier for the current branch (if any) - needs_more_thoughts: If reaching end but realizing more thoughts needed - current_step: Current step recommendation, including: * step_description: What needs to be done * recommended_tools: Tools recommended for this step * expected_outcome: What to expect from this step * next_step_conditions: Conditions to consider for the next step - previous_steps: Steps already recommended - remaining_steps: High-level descriptions of upcoming steps You should: 1. Start with an initial estimate of needed thoughts, but be ready to adjust 2. Feel free to question or revise previous thoughts 3. Don't hesitate to add more thoughts if needed, even at the "end" 4. Express uncertainty when present 5. Mark thoughts that revise previous thinking or branch into new paths 6. Ignore information that is irrelevant to the current step 7. Generate a solution hypothesis when appropriate 8. Verify the hypothesis based on the Chain of Thought steps 9. Consider available tools that could help with the current step 10. Provide clear rationale for tool recommendations 11. 
Suggest specific tool parameters when appropriate 12. Consider alternative tools for each step 13. Track progress through the recommended steps 14. Provide a single, ideally correct answer as the final output 15. Only set next_thought_needed to false when truly done and a satisfactory answer is reached
    TypeScript
    MIT License
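As a rough illustration of the sequential-thinking parameters above, here are two hypothetical argument payloads: an opening thought and a later revision. Field names follow the description; the thought contents and counts are invented.

```python
# First call: open the thinking sequence with an initial estimate of total thoughts.
first_thought = {
    "thought": "Break the bug report into reproduction, root cause, and fix steps.",
    "thought_number": 1,
    "total_thoughts": 4,          # initial estimate; can be adjusted later
    "next_thought_needed": True,
}

# Later call: revise thought 2 after realizing its assumption was wrong.
revision = {
    "thought": "Step 2 assumed the cache was cold; revise the root-cause analysis.",
    "thought_number": 3,
    "total_thoughts": 5,          # estimate raised mid-analysis
    "next_thought_needed": True,
    "is_revision": True,
    "revises_thought": 2,
}
```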
  • A detailed tool for dynamic and reflective problem-solving through Gemini AI. This tool helps analyze problems through a flexible thinking process powered by Google's Gemini model. Each thought can build on, question, or revise previous insights as understanding deepens. When to use this tool: - Breaking down complex problems into steps - Planning and design with room for revision - Analysis that might need course correction - Problems where the full scope might not be clear initially - Problems that require a multi-step solution - Tasks that need to maintain context over multiple steps - Situations where irrelevant information needs to be filtered out Key features: - Leverages Gemini AI for deep analytical thinking - Provides meta-commentary on the reasoning process - Indicates confidence levels for generated thoughts - Suggests alternative approaches when relevant - You can adjust total_thoughts up or down as you progress - You can question or revise previous thoughts - You can add more thoughts even after reaching what seemed like the end - You can express uncertainty and explore alternative approaches - Not every thought needs to build linearly - you can branch or backtrack - Session persistence: save and resume your analysis sessions Parameters explained: - query: The question or problem to be analyzed - context: Additional context information (e.g., code snippets, background) - approach: Suggested approach to the problem (optional) - previousThoughts: Array of previous thoughts for context - thought: The current thinking step (if empty, will be generated by Gemini) - next_thought_needed: True if you need more thinking, even if at what seemed like the end - thought_number: Current number in sequence (can go beyond initial total if needed) - total_thoughts: Current estimate of thoughts needed (can be adjusted up/down) - is_revision: A boolean indicating if this thought revises previous thinking - revises_thought: If is_revision is true, which thought number is being reconsidered - branch_from_thought: If branching, which thought number is the branching point - branch_id: Identifier for the current branch (if any) - needs_more_thoughts: If reaching end but realizing more thoughts needed - metaComments: Meta-commentary from Gemini about its reasoning process - confidenceLevel: Gemini's confidence in the generated thought (0-1) - alternativePaths: Alternative approaches suggested by Gemini Session commands: - sessionCommand: Command to manage sessions ('save', 'load', 'getState') - sessionPath: Path to save or load the session file (required for 'save' and 'load' commands) You should: 1. Start with a clear query and any relevant context 2. Let Gemini generate thoughts by not providing the 'thought' parameter 3. Review the generated thoughts and meta-commentary 4. Feel free to revise or branch thoughts as needed 5. Consider alternative paths suggested by Gemini 6. Only set next_thought_needed to false when truly done 7. Use session commands to save your progress and resume later
    JavaScript
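A hedged sketch of calling the Gemini-backed thinking tool above: leaving `thought` unset so Gemini generates it, then saving the session. The query text and the session path are placeholders.

```python
# Analysis step: Gemini generates the thought because "thought" is omitted.
analysis_step = {
    "query": "Why does the import pipeline drop rows with non-UTF-8 names?",
    "context": "Python 3.11 service, CSV ingest, strict decoding enabled.",
    "thought_number": 1,
    "total_thoughts": 3,
    "next_thought_needed": True,
}

# Persist the session so the analysis can be resumed later (path is illustrative).
save_session = {
    "sessionCommand": "save",
    "sessionPath": "/tmp/import-pipeline-analysis.json",
}
```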
  • Update an existing workflow. Args: workflow_id: ID of the workflow to update workflow_config: A Typed Dictionary containing required fields (destination_id, name, source_id, workflow_type) and non-required fields (schedule, and workflow_nodes) Returns: String containing the updated workflow information Custom workflow DAG nodes - If WorkflowType is set to custom, you must also specify the settings for the workflow’s directed acyclic graph (DAG) nodes. These nodes’ settings are specified in the workflow_nodes array. - A Source node is automatically created when you specify the source_id value outside of the workflow_nodes array. - A Destination node is automatically created when you specify the destination_id value outside of the workflow_nodes array. - You can specify Partitioner, Chunker, Prompter, and Embedder nodes. - The order of the nodes in the workflow_nodes array will be the same order that these nodes appear in the DAG, with the first node in the array added directly after the Source node. The Destination node follows the last node in the array. - Be sure to specify nodes in the allowed order. The following DAG placements are all allowed: - Source -> Partitioner -> Destination, - Source -> Partitioner -> Chunker -> Destination, - Source -> Partitioner -> Chunker -> Embedder -> Destination, - Source -> Partitioner -> Prompter -> Chunker -> Destination, - Source -> Partitioner -> Prompter -> Chunker -> Embedder -> Destination Partitioner node A Partitioner node has a type of partition and a subtype of auto, vlm, hi_res, or fast. Examples: - auto strategy: { "name": "Partitioner", "type": "partition", "subtype": "vlm", "settings": { "provider": "anthropic", (required) "model": "claude-3-5-sonnet-20241022", (required) "output_format": "text/html", "user_prompt": null, "format_html": true, "unique_element_ids": true, "is_dynamic": true, "allow_fast": true } } - vlm strategy: Allowed values are provider and model. Below are examples: - "provider": "anthropic" "model": "claude-3-5-sonnet-20241022", - "provider": "openai" "model": "gpt-4o" - hi_res strategy: { "name": "Partitioner", "type": "partition", "subtype": "unstructured_api", "settings": { "strategy": "hi_res", "include_page_breaks": <true|false>, "pdf_infer_table_structure": <true|false>, "exclude_elements": [ "<element-name>", "<element-name>" ], "xml_keep_tags": <true|false>, "encoding": "<encoding>", "ocr_languages": [ "<language>", "<language>" ], "extract_image_block_types": [ "image", "table" ], "infer_table_structure": <true|false> } } - fast strategy { "name": "Partitioner", "type": "partition", "subtype": "unstructured_api", "settings": { "strategy": "fast", "include_page_breaks": <true|false>, "pdf_infer_table_structure": <true|false>, "exclude_elements": [ "<element-name>", "<element-name>" ], "xml_keep_tags": <true|false>, "encoding": "<encoding>", "ocr_languages": [ "<language-code>", "<language-code>" ], "extract_image_block_types": [ "image", "table" ], "infer_table_structure": <true|false> } } Chunker node A Chunker node has a type of chunk and subtype of chunk_by_character or chunk_by_title. 
- chunk_by_character { "name": "Chunker", "type": "chunk", "subtype": "chunk_by_character", "settings": { "include_orig_elements": <true|false>, "new_after_n_chars": <new-after-n-chars>, (required, if not provided set same as max_characters) "max_characters": <max-characters>, (required) "overlap": <overlap>, (required, if not provided set default to 0) "overlap_all": <true|false>, "contextual_chunking_strategy": "v1" } } - chunk_by_title { "name": "Chunker", "type": "chunk", "subtype": "chunk_by_title", "settings": { "multipage_sections": <true|false>, "combine_text_under_n_chars": <combine-text-under-n-chars>, "include_orig_elements": <true|false>, "new_after_n_chars": <new-after-n-chars>, (required, if not provided set same as max_characters) "max_characters": <max-characters>, (required) "overlap": <overlap>, (required, if not provided set default to 0) "overlap_all": <true|false>, "contextual_chunking_strategy": "v1" } } Prompter node A Prompter node has a type of prompter and subtype of: - openai_image_description, - anthropic_image_description, - bedrock_image_description, - vertexai_image_description, - openai_table_description, - anthropic_table_description, - bedrock_table_description, - vertexai_table_description, - openai_table2html, - openai_ner Example: { "name": "Prompter", "type": "prompter", "subtype": "<subtype>", "settings": {} } Embedder node An Embedder node has a type of embed. Allowed values for subtype and model_name include: - "subtype": "azure_openai" - "model_name": "text-embedding-3-small" - "model_name": "text-embedding-3-large" - "model_name": "text-embedding-ada-002" - "subtype": "bedrock" - "model_name": "amazon.titan-embed-text-v2:0" - "model_name": "amazon.titan-embed-text-v1" - "model_name": "amazon.titan-embed-image-v1" - "model_name": "cohere.embed-english-v3" - "model_name": "cohere.embed-multilingual-v3" - "subtype": "togetherai" - "model_name": "togethercomputer/m2-bert-80M-2k-retrieval" - "model_name": "togethercomputer/m2-bert-80M-8k-retrieval" - "model_name": "togethercomputer/m2-bert-80M-32k-retrieval" Example: { "name": "Embedder", "type": "embed", "subtype": "<subtype>", "settings": { "model_name": "<model-name>" } }
    Python
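A sketch of the `workflow_config` argument for the update tool above, assembled from the node examples in the description and following the allowed Source -> Partitioner -> Chunker -> Embedder -> Destination layout. The workflow name and the UUID placeholders are invented.

```python
workflow_config = {
    "name": "contracts-ingest",
    "source_id": "<source-uuid>",            # placeholder; must be a real source UUID
    "destination_id": "<destination-uuid>",  # placeholder; must be a real destination UUID
    "workflow_type": "custom",
    "workflow_nodes": [
        {   # Partitioner sits directly after the implicit Source node
            "name": "Partitioner",
            "type": "partition",
            "subtype": "vlm",
            "settings": {"provider": "anthropic", "model": "claude-3-5-sonnet-20241022"},
        },
        {   # Chunker follows the Partitioner
            "name": "Chunker",
            "type": "chunk",
            "subtype": "chunk_by_title",
            "settings": {"max_characters": 1500, "new_after_n_chars": 1500, "overlap": 0},
        },
        {   # Embedder is last; the implicit Destination node follows it
            "name": "Embedder",
            "type": "embed",
            "subtype": "azure_openai",
            "settings": {"model_name": "text-embedding-3-small"},
        },
    ],
}
```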
  • Create a new workflow. Args: workflow_config: A Typed Dictionary containing required fields (destination_id - should be a valid UUID, name, source_id - should be a valid UUID, workflow_type) and non-required fields (schedule, and workflow_nodes). Note workflow_nodes is only enabled when workflow_type is `custom` and is a list of WorkflowNodeTypedDict: partition, prompter,chunk, embed Below is an example of a partition workflow node: { "name": "vlm-partition", "type": "partition", "sub_type": "vlm", "settings": { "provider": "your favorite provider", "model": "your favorite model" } } Returns: String containing the created workflow information Custom workflow DAG nodes - If WorkflowType is set to custom, you must also specify the settings for the workflow’s directed acyclic graph (DAG) nodes. These nodes’ settings are specified in the workflow_nodes array. - A Source node is automatically created when you specify the source_id value outside of the workflow_nodes array. - A Destination node is automatically created when you specify the destination_id value outside of the workflow_nodes array. - You can specify Partitioner, Chunker, Prompter, and Embedder nodes. - The order of the nodes in the workflow_nodes array will be the same order that these nodes appear in the DAG, with the first node in the array added directly after the Source node. The Destination node follows the last node in the array. - Be sure to specify nodes in the allowed order. The following DAG placements are all allowed: - Source -> Partitioner -> Destination, - Source -> Partitioner -> Chunker -> Destination, - Source -> Partitioner -> Chunker -> Embedder -> Destination, - Source -> Partitioner -> Prompter -> Chunker -> Destination, - Source -> Partitioner -> Prompter -> Chunker -> Embedder -> Destination Partitioner node A Partitioner node has a type of partition and a subtype of auto, vlm, hi_res, or fast. Examples: - auto strategy: { "name": "Partitioner", "type": "partition", "subtype": "vlm", "settings": { "provider": "anthropic", (required) "model": "claude-3-5-sonnet-20241022", (required) "output_format": "text/html", "user_prompt": null, "format_html": true, "unique_element_ids": true, "is_dynamic": true, "allow_fast": true } } - vlm strategy: Allowed values are provider and model. Below are examples: - "provider": "anthropic" "model": "claude-3-5-sonnet-20241022", - "provider": "openai" "model": "gpt-4o" - hi_res strategy: { "name": "Partitioner", "type": "partition", "subtype": "unstructured_api", "settings": { "strategy": "hi_res", "include_page_breaks": <true|false>, "pdf_infer_table_structure": <true|false>, "exclude_elements": [ "<element-name>", "<element-name>" ], "xml_keep_tags": <true|false>, "encoding": "<encoding>", "ocr_languages": [ "<language>", "<language>" ], "extract_image_block_types": [ "image", "table" ], "infer_table_structure": <true|false> } } - fast strategy { "name": "Partitioner", "type": "partition", "subtype": "unstructured_api", "settings": { "strategy": "fast", "include_page_breaks": <true|false>, "pdf_infer_table_structure": <true|false>, "exclude_elements": [ "<element-name>", "<element-name>" ], "xml_keep_tags": <true|false>, "encoding": "<encoding>", "ocr_languages": [ "<language-code>", "<language-code>" ], "extract_image_block_types": [ "image", "table" ], "infer_table_structure": <true|false> } } Chunker node A Chunker node has a type of chunk and subtype of chunk_by_character or chunk_by_title. 
- chunk_by_character { "name": "Chunker", "type": "chunk", "subtype": "chunk_by_character", "settings": { "include_orig_elements": <true|false>, "new_after_n_chars": <new-after-n-chars>, (required, if not provided set same as max_characters) "max_characters": <max-characters>, (required) "overlap": <overlap>, (required, if not provided set default to 0) "overlap_all": <true|false>, "contextual_chunking_strategy": "v1" } } - chunk_by_title { "name": "Chunker", "type": "chunk", "subtype": "chunk_by_title", "settings": { "multipage_sections": <true|false>, "combine_text_under_n_chars": <combine-text-under-n-chars>, "include_orig_elements": <true|false>, "new_after_n_chars": <new-after-n-chars>, (required, if not provided set same as max_characters) "max_characters": <max-characters>, (required) "overlap": <overlap>, (required, if not provided set default to 0) "overlap_all": <true|false>, "contextual_chunking_strategy": "v1" } } Prompter node A Prompter node has a type of prompter and subtype of: - openai_image_description, - anthropic_image_description, - bedrock_image_description, - vertexai_image_description, - openai_table_description, - anthropic_table_description, - bedrock_table_description, - vertexai_table_description, - openai_table2html, - openai_ner Example: { "name": "Prompter", "type": "prompter", "subtype": "<subtype>", "settings": {} } Embedder node An Embedder node has a type of embed. Allowed values for subtype and model_name include: - "subtype": "azure_openai" - "model_name": "text-embedding-3-small" - "model_name": "text-embedding-3-large" - "model_name": "text-embedding-ada-002" - "subtype": "bedrock" - "model_name": "amazon.titan-embed-text-v2:0" - "model_name": "amazon.titan-embed-text-v1" - "model_name": "amazon.titan-embed-image-v1" - "model_name": "cohere.embed-english-v3" - "model_name": "cohere.embed-multilingual-v3" - "subtype": "togetherai" - "model_name": "togethercomputer/m2-bert-80M-2k-retrieval" - "model_name": "togethercomputer/m2-bert-80M-8k-retrieval" - "model_name": "togethercomputer/m2-bert-80M-32k-retrieval" Example: { "name": "Embedder", "type": "embed", "subtype": "<subtype>", "settings": { "model_name": "<model-name>" } }
    Python
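A minimal create-workflow payload sketched from the description above: a custom workflow with a single Partitioner node, matching the allowed Source -> Partitioner -> Destination layout. The UUIDs and workflow name are placeholders, and `subtype` is used for the node key, following the detailed node examples (the short example in the description also shows a `sub_type` spelling).

```python
workflow_config = {
    "name": "quick-partition-only",
    "source_id": "00000000-0000-0000-0000-000000000001",        # placeholder UUID
    "destination_id": "00000000-0000-0000-0000-000000000002",   # placeholder UUID
    "workflow_type": "custom",
    "workflow_nodes": [
        {
            "name": "vlm-partition",
            "type": "partition",
            "subtype": "vlm",
            "settings": {"provider": "openai", "model": "gpt-4o"},
        }
    ],
}
```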

  • # Chain of Draft (CoD): Systematic Reasoning Tool ⚠️ REQUIRED PARAMETERS - ALL MUST BE PROVIDED: 1. reasoning_chain: string[] - At least one reasoning step 2. next_step_needed: boolean - Whether another iteration is needed 3. draft_number: number - Current draft number (≥ 1) 4. total_drafts: number - Total planned drafts (≥ draft_number) Optional parameters only required based on context: - is_critique?: boolean - If true, critique_focus is required - critique_focus?: string - Required when is_critique=true - revision_instructions?: string - Recommended for revision steps - step_to_review?: number - Specific step index to review - is_final_draft?: boolean - Marks final iteration ## Purpose: Enhances problem-solving through structured, iterative critique and revision. Chain of Draft is an advanced reasoning tool that enhances problem-solving through structured, iterative critique and revision. Unlike traditional reasoning approaches, CoD mimics the human drafting process to improve clarity, accuracy, and robustness of conclusions. ## When to Use This Tool: - **Complex Problem-Solving:** Tasks requiring detailed, multi-step analysis with high accuracy demands - **Critical Reasoning:** Problems where logical flow and consistency are essential - **Error-Prone Scenarios:** Questions where initial reasoning might contain mistakes or oversight - **Multi-Perspective Analysis:** Cases benefiting from examining a problem from different angles - **Self-Correction Needs:** When validation and refinement of initial thoughts are crucial - **Detailed Solutions:** Tasks requiring comprehensive explanations with supporting evidence - **Mathematical or Logical Puzzles:** Problems with potential for calculation errors or logical gaps - **Nuanced Analysis:** Situations with subtle distinctions that might be missed in a single pass ## Key Capabilities: - **Iterative Improvement:** Systematically refines reasoning through multiple drafts - **Self-Critique:** Critically examines previous reasoning to identify flaws and opportunities - **Focused Revision:** Targets specific aspects of reasoning in each iteration - **Perspective Flexibility:** Can adopt different analytical viewpoints during critique - **Progressive Refinement:** Builds toward optimal solutions through controlled iterations - **Context Preservation:** Maintains understanding across multiple drafts and revisions - **Adaptable Depth:** Adjusts the number of iterations based on problem complexity - **Targeted Improvements:** Addresses specific weaknesses in each revision cycle ## Parameters Explained: - **reasoning_chain:** Array of strings representing your current reasoning steps. Each element should contain a clear, complete thought that contributes to the overall analysis. - **next_step_needed:** Boolean flag indicating whether additional critique or revision is required. Set to true until the final, refined reasoning chain is complete. - **draft_number:** Integer tracking the current iteration (starting from 1). Increments with each critique or revision. - **total_drafts:** Estimated number of drafts needed for completion. This can be adjusted as the solution evolves. 
- **is_critique:** Boolean indicating the current mode: * true = Evaluating previous reasoning * false = Implementing revisions - **critique_focus:** (Required when is_critique=true) Specific aspect being evaluated, such as: * "logical_consistency": Checking for contradictions or flaws in reasoning * "factual_accuracy": Verifying correctness of facts and calculations * "completeness": Ensuring all relevant aspects are considered * "clarity": Evaluating how understandable the reasoning is * "relevance": Assessing if reasoning directly addresses the problem - **revision_instructions:** (Required when is_critique=false) Detailed guidance for improving the reasoning based on the preceding critique. - **step_to_review:** (Optional) Zero-based index of the specific reasoning step being critiqued or revised. When omitted, applies to the entire chain. - **is_final_draft:** (Optional) Boolean indicating whether this is the final iteration of reasoning. ## Best Practice Workflow: 1. **Start with Initial Draft:** Begin with your first-pass reasoning and set a reasonable total_drafts (typically 3-5). 2. **Alternate Critique and Revision:** Use is_critique=true to evaluate reasoning, then is_critique=false to implement improvements. 3. **Focus Each Critique:** Choose a specific critique_focus for each evaluation cycle rather than attempting to address everything at once. 4. **Provide Detailed Revision Guidance:** Include specific, actionable revision_instructions based on each critique. 5. **Target Specific Steps When Needed:** Use step_to_review to focus on particular reasoning steps that need improvement. 6. **Adjust Total Drafts As Needed:** Modify total_drafts based on problem complexity and progress. 7. **Mark Completion Appropriately:** Set next_step_needed=false only when the reasoning chain is complete and satisfactory. 8. **Aim for Progressive Improvement:** Each iteration should measurably improve the reasoning quality. ## Example Application: - **Initial Draft:** First-pass reasoning about a complex problem - **Critique #1:** Focus on logical consistency and identify contradictions - **Revision #1:** Address logical flaws found in the critique - **Critique #2:** Focus on completeness and identify missing considerations - **Revision #2:** Incorporate overlooked aspects and strengthen reasoning - **Final Critique:** Holistic review of clarity and relevance - **Final Revision:** Refine presentation and ensure direct addressing of the problem Chain of Draft is particularly effective when complex reasoning must be broken down into clear steps, analyzed from multiple perspectives, and refined through systematic critique. By mimicking the human drafting process, it produces more robust and accurate reasoning than single-pass approaches.
    TypeScript
    MIT License
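Two hypothetical parameter sets for a Chain of Draft run, using the required and critique fields listed above; the reasoning content and the arithmetic scenario are invented.

```python
# Draft 1: initial reasoning chain with all four required parameters.
initial_draft = {
    "reasoning_chain": [
        "The train covers 120 km in 1.5 h, so its average speed is 80 km/h.",
        "At 80 km/h it needs 2.5 h for the remaining 200 km.",
    ],
    "next_step_needed": True,
    "draft_number": 1,
    "total_drafts": 3,
}

# Draft 2: critique pass focused on one aspect and one specific step.
critique = {
    "reasoning_chain": initial_draft["reasoning_chain"],
    "next_step_needed": True,
    "draft_number": 2,
    "total_drafts": 3,
    "is_critique": True,
    "critique_focus": "factual_accuracy",   # check the arithmetic in the first step
    "step_to_review": 0,                    # zero-based index of the step under review
}
```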
  • This is a multipurpose tool that supports the following subcommands: ## ReadFile file_path offset? limit? Reads a file from the local filesystem. The file_path parameter must be an absolute path, not a relative path. By default, it reads up to ${MAX_LINES_TO_READ} lines starting from the beginning of the file. You can optionally specify a line offset and limit (especially handy for long files), but it's recommended to read the whole file by not providing these parameters. Any lines longer than ${MAX_LINE_LENGTH} characters will be truncated. For image files, the tool will display the image for you. ## WriteFile file_path content Write a file to the local filesystem. Overwrites the existing file if there is one. Before using this tool: 1. Use the ReadFile tool to understand the file's contents and context 2. Directory Verification (only applicable when creating new files): - Use the LS tool to verify the parent directory exists and is the correct location ## EditFile file_path old_string new_string This is a tool for editing files. For larger edits, use the Write tool to overwrite files. Before using this tool: 1. Use the View tool to understand the file's contents and context 2. Verify the directory path is correct (only applicable when creating new files): - Use the LS tool to verify the parent directory exists and is the correct location To make a file edit, provide the following: 1. file_path: The absolute path to the file to modify (must be absolute, not relative) 2. old_string: The text to replace (must be unique within the file, and must match the file contents exactly, including all whitespace and indentation) 3. new_string: The edited text to replace the old_string The tool will replace ONE occurrence of old_string with new_string in the specified file. CRITICAL REQUIREMENTS FOR USING THIS TOOL: 1. UNIQUENESS: The old_string MUST uniquely identify the specific instance you want to change. This means: - Include AT LEAST 3-5 lines of context BEFORE the change point - Include AT LEAST 3-5 lines of context AFTER the change point - Include all whitespace, indentation, and surrounding code exactly as it appears in the file 2. SINGLE INSTANCE: This tool can only change ONE instance at a time. If you need to change multiple instances: - Make separate calls to this tool for each instance - Each call must uniquely identify its specific instance using extensive context 3. VERIFICATION: Before using this tool: - Check how many instances of the target text exist in the file - If multiple instances exist, gather enough context to uniquely identify each one - Plan separate tool calls for each instance WARNING: If you do not follow these requirements: - The tool will fail if old_string matches multiple locations - The tool will fail if old_string doesn't match exactly (including whitespace) - You may change the wrong instance if you don't include enough context When making edits: - Ensure the edit results in idiomatic, correct code - Do not leave the code in a broken state - Always use absolute file paths (starting with /) If you want to create a new file, use: - A new file path, including dir name if needed - An empty old_string - The new file's contents as new_string Remember: when making multiple file edits in a row to the same file, you should prefer to send all edits in a single message with multiple calls to this tool, rather than multiple messages with a single call each. ## LS directory_path Lists files and directories in a given path. 
The path parameter must be an absolute path, not a relative path. You should generally prefer the Glob and Grep tools, if you know which directories to search. Args: ctx: The MCP context command: The subcommand to execute (ReadFile, WriteFile, EditFile, LS) file_path: The path to the file or directory to operate on content: Content for WriteFile command old_string: String to replace for EditFile command new_string: Replacement string for EditFile command offset: Line offset for ReadFile command limit: Line limit for ReadFile command
    Python
    Apache 2.0
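An illustrative EditFile call for the multipurpose file tool above. The file path and code fragment are invented, and the context is abbreviated; a real call should carry several unchanged lines before and after the change point so `old_string` matches exactly one location.

```python
edit_file_args = {
    "command": "EditFile",
    "file_path": "/home/dev/app/config.py",   # must be an absolute path
    "old_string": (
        "import os\n"
        "\n"
        "DEBUG = True\n"
        "ALLOWED_HOSTS = []\n"
        "DATABASE_URL = \"sqlite:///dev.db\"\n"
    ),
    "new_string": (
        "import os\n"
        "\n"
        "DEBUG = False\n"
        "ALLOWED_HOSTS = [\"example.com\"]\n"
        "DATABASE_URL = \"sqlite:///dev.db\"\n"
    ),
}
```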
  • Atom of Thoughts (AoT) is a tool for solving complex problems by decomposing them into independent, reusable atomic units of thought. Unlike traditional sequential thinking, this tool enables more powerful problem solving by allowing atomic units of thought to form dependencies with each other. When to use: - Solving problems requiring complex reasoning - Generating hypotheses that need verification from multiple perspectives - Deriving high-confidence conclusions in scenarios where accuracy is crucial - Minimizing logical errors in critical tasks - Decision-making requiring multiple verification steps Atom types: - premise: Basic assumptions or given information for problem solving - reasoning: Logical reasoning process based on other atoms - hypothesis: Proposed solutions or intermediate conclusions - verification: Process to evaluate the validity of other atoms (especially hypotheses) - conclusion: Verified hypotheses or final problem solutions Parameter descriptions: - atomId: Unique identifier for the atom (e.g., 'A1', 'H2') - content: Actual content of the atom - atomType: Type of atom (one of: premise, reasoning, hypothesis, verification, conclusion) - dependencies: List of IDs of other atoms this atom depends on - confidence: Confidence level of this atom (value between 0-1) - isVerified: Whether this atom has been verified - depth: Depth level of this atom (in the decomposition-contraction process) Additional features: 1. Decomposition-Contraction mechanism: - Decompose atoms into smaller sub-atoms and contract back after verification - startDecomposition(atomId): Start atom decomposition - addToDecomposition(decompositionId, atomId): Add sub-atom to decomposition - completeDecomposition(decompositionId): Complete decomposition process 2. Automatic termination mechanism: - Automatically terminate when reaching maximum depth or finding high-confidence conclusion - getTerminationStatus(): Return termination status and reason - getBestConclusion(): Return highest confidence conclusion Usage method: 1. Understand the problem and define necessary premise atoms 2. Create reasoning atoms based on premises 3. Create hypothesis atoms based on reasoning 4. Create verification atoms to verify hypotheses 5. Derive conclusion atoms based on verified hypotheses 6. Use atom decomposition to explore deeper when necessary 7. Present the high-confidence conclusion atom as the final answer
    JavaScript
    MIT License
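A small Atom of Thoughts graph sketched as plain dicts, using the atom types and fields described above. The IDs, contents, and confidence values are invented, but the dependency chain follows the premise -> reasoning -> hypothesis -> verification -> conclusion order the tool expects.

```python
atoms = [
    {"atomId": "P1", "atomType": "premise",
     "content": "API latency doubled after the Tuesday deploy.",
     "dependencies": [], "confidence": 0.95},
    {"atomId": "R1", "atomType": "reasoning",
     "content": "The deploy only changed the serializer, so it is the main suspect.",
     "dependencies": ["P1"], "confidence": 0.7},
    {"atomId": "H1", "atomType": "hypothesis",
     "content": "The new serializer performs a redundant deep copy per request.",
     "dependencies": ["R1"], "confidence": 0.6},
    {"atomId": "V1", "atomType": "verification",
     "content": "Profiling shows 40% of request time spent in deep copies.",
     "dependencies": ["H1"], "confidence": 0.85, "isVerified": True},
    {"atomId": "C1", "atomType": "conclusion",
     "content": "Remove the deep copy in the serializer to restore latency.",
     "dependencies": ["H1", "V1"], "confidence": 0.9},
]
```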
  • Generate DecentSampler <groups> XML for drum kits. This tool supports two configuration types: BasicDrumKitConfig: - For simple presets with minimal features - No UI controls, effects, or routing - Only supports basic sample mapping and optional velocity layers - Recommended for straightforward drum kits AdvancedDrumKitConfig: - For complex setups combining multiple features - Supports UI controls, effects, and routing - Integrates with other tools (configure_drum_controls, configure_mic_routing, etc.) - Use when you need advanced features like round robin or multi-mic setups Best Practices: - IMPORTANT: Always use absolute paths (e.g., 'C:/Users/username/Documents/Samples/kick.wav') - Group all samples for a drum piece into a single group - When using multiple mic positions, include them all in the same group - Use velocity layers within a group to control dynamics Error Handling: - Validates all sample paths exist - Checks for valid MIDI note numbers - Ensures velocity layers don't overlap - Verifies muting group configurations - Returns specific errors for any invalid settings Example Configurations: 1. Basic Configuration (simple drum kit): { "globalSettings": { "velocityLayers": [ { "low": 1, "high": 42, "name": "soft" }, { "low": 43, "high": 85, "name": "medium" }, { "low": 86, "high": 127, "name": "hard" } ] }, "drumPieces": [{ "name": "Kick", "rootNote": 36, "samples": [ {"path": "C:/Samples/Kick_Soft.wav"}, {"path": "C:/Samples/Kick_Medium.wav"}, {"path": "C:/Samples/Kick_Hard.wav"} ] }] } 2. Advanced Configuration (multi-mic kit with controls): { "globalSettings": { "velocityLayers": [ { "low": 1, "high": 127, "name": "full" } ], "drumControls": { "kick": { "pitch": { "default": 0, "min": -12, "max": 12 }, "envelope": { "attack": 0.001, "decay": 0.5, "sustain": 0, "release": 0.1 } } }, "micBuses": [ { "name": "Close Mic", "outputTarget": "MAIN_OUTPUT", "volume": { "default": 0, "midiCC": 20 } } ] }, "drumPieces": [{ "name": "Kick", "rootNote": 36, "samples": [ { "path": "C:/Samples/Kick_Close.wav", "micConfig": { "position": "close", "busIndex": 0 } } ], "muting": { "tags": ["kick"], "silencedByTags": [] } }] } Success Response: Returns complete XML structure with: - Organized sample groups - Velocity layer mappings - Muting group configurations - All sample references and settings - Advanced features when using AdvancedDrumKitConfig
    TypeScript
    MIT License
  • Execute a Supabase Management API request. This tool allows you to make direct calls to the Supabase Management API, which provides programmatic access to manage your Supabase project settings, resources, and configurations. REQUEST FORMATTING: - Use paths exactly as defined in the API specification - The {ref} parameter will be automatically injected from settings - Format request bodies according to the API specification PARAMETERS: - method: HTTP method (GET, POST, PUT, PATCH, DELETE) - path: API path (e.g. /v1/projects/{ref}/functions) - path_params: Path parameters as dict (e.g. {"function_slug": "my-function"}) - use empty dict {} if not needed - request_params: Query parameters as dict (e.g. {"key": "value"}) - use empty dict {} if not needed - request_body: Request body as dict (e.g. {"name": "test"}) - use empty dict {} if not needed PATH PARAMETERS HANDLING: - The {ref} placeholder (project reference) is automatically injected - you don't need to provide it - All other path placeholders must be provided in the path_params dictionary - Common placeholders include: * {function_slug}: For Edge Functions operations * {id}: For operations on specific resources (API keys, auth providers, etc.) * {slug}: For organization operations * {branch_id}: For database branch operations * {provider_id}: For SSO provider operations * {tpa_id}: For third-party auth operations EXAMPLES: 1. GET request with path and query parameters: method: "GET" path: "/v1/projects/{ref}/functions/{function_slug}" path_params: {"function_slug": "my-function"} request_params: {"version": "1"} request_body: {} 2. POST request with body: method: "POST" path: "/v1/projects/{ref}/functions" path_params: {} request_params: {} request_body: {"name": "test-function", "slug": "test-function"} SAFETY SYSTEM: API operations are categorized by risk level: - LOW RISK: Read operations (GET) - allowed in SAFE mode - MEDIUM/HIGH RISK: Write operations (POST, PUT, PATCH, DELETE) - require UNSAFE mode - EXTREME RISK: Destructive operations - require UNSAFE mode and confirmation - BLOCKED: Some operations are completely blocked for safety reasons SAFETY CONSIDERATIONS: - By default, the API client starts in SAFE mode, allowing only read operations - To perform write operations, first use live_dangerously(service="api", enable=True) - High-risk operations will be rejected with a confirmation ID - Use confirm_destructive_operation with the provided ID after reviewing risks - Some operations may be completely blocked for safety reasons For a complete list of available API endpoints and their parameters, use the get_management_api_spec tool. For details on safety rules, use the get_management_api_safety_rules tool.
    Python
    Apache 2.0
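Two sketched payloads for the Supabase Management API tool above: a low-risk read that works in SAFE mode, and a write that requires switching to UNSAFE mode first. The function slug and request body values are placeholders.

```python
# LOW RISK read: allowed in SAFE mode. {ref} is injected automatically by the server.
list_function = {
    "method": "GET",
    "path": "/v1/projects/{ref}/functions/{function_slug}",
    "path_params": {"function_slug": "my-function"},
    "request_params": {},
    "request_body": {},
}

# Write operation: per the description, call live_dangerously(service="api", enable=True) first.
create_function = {
    "method": "POST",
    "path": "/v1/projects/{ref}/functions",
    "path_params": {},
    "request_params": {},
    "request_body": {"name": "nightly-report", "slug": "nightly-report"},
}
```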
  • Update a blog post with new data. Args: post_id: The ID of the post to update update_data: Dictionary containing the updated data and updated_at timestamp. Note: 'updated_at' is required. If 'lexical' is provided, it must be a valid JSON string. The lexical content must be a properly escaped JSON string in this format: { "root": { "children": [ { "children": [ { "detail": 0, "format": 0, "mode": "normal", "style": "", "text": "Your content here", "type": "text", "version": 1 } ], "direction": "ltr", "format": "", "indent": 0, "type": "paragraph", "version": 1 } ], "direction": "ltr", "format": "", "indent": 0, "type": "root", "version": 1 } } Example usage: update_data = { "post_id": "67abcffb7f82ac000179d76f", "update_data": { "updated_at": "2025-02-11T22:54:40.000Z", "lexical": "{\"root\":{\"children\":[{\"children\":[{\"detail\":0,\"format\":0,\"mode\":\"normal\",\"style\":\"\",\"text\":\"Hello World\",\"type\":\"text\",\"version\":1}],\"direction\":\"ltr\",\"format\":\"\",\"indent\":0,\"type\":\"paragraph\",\"version\":1}],\"direction\":\"ltr\",\"format\":\"\",\"indent\":0,\"type\":\"root\",\"version\":1}}" } } Updatable fields for a blog post: - slug: Unique URL slug for the post. - id: Identifier of the post. - uuid: Universally unique identifier for the post. - title: The title of the post. - lexical: JSON string representing the post content in lexical format. - html: HTML version of the post content. - comment_id: Identifier for the comment thread. - feature_image: URL to the post's feature image. - feature_image_alt: Alternate text for the feature image. - feature_image_caption: Caption for the feature image. - featured: Boolean flag indicating if the post is featured. - status: The publication status (e.g., published, draft). - visibility: Visibility setting (e.g., public, private). - created_at: Timestamp when the post was created. - updated_at: Timestamp when the post was last updated. - published_at: Timestamp when the post was published. - custom_excerpt: Custom excerpt text for the post. - codeinjection_head: Code to be injected into the head section. - codeinjection_foot: Code to be injected into the footer section. - custom_template: Custom template assigned to the post. - canonical_url: The canonical URL for SEO purposes. - tags: List of tag objects associated with the post. - authors: List of author objects for the post. - primary_author: The primary author object. - primary_tag: The primary tag object. - url: Direct URL link to the post. - excerpt: Short excerpt or summary of the post. - og_image: Open Graph image URL for social sharing. - og_title: Open Graph title for social sharing. - og_description: Open Graph description for social sharing. - twitter_image: Twitter-specific image URL. - twitter_title: Twitter-specific title. - twitter_description: Twitter-specific description. - meta_title: Meta title for SEO. - meta_description: Meta description for SEO. - email_only: Boolean flag indicating if the post is for email distribution only. - newsletter: Dictionary containing newsletter configuration details. - email: Dictionary containing email details related to the post. ctx: Optional context for logging Returns: Formatted string containing the updated post details Raises: GhostError: If there is an error accessing the Ghost API or missing required fields
    Python
    MIT License
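A sketch of building the update payload for the Ghost post tool above. Serializing the lexical document with `json.dumps` produces the escaped JSON string the description requires; the post ID is taken from the entry's own example and the text is a placeholder.

```python
import json

# Nested lexical document built as a normal Python structure.
lexical_doc = {
    "root": {
        "children": [{
            "children": [{
                "detail": 0, "format": 0, "mode": "normal", "style": "",
                "text": "Hello World", "type": "text", "version": 1,
            }],
            "direction": "ltr", "format": "", "indent": 0,
            "type": "paragraph", "version": 1,
        }],
        "direction": "ltr", "format": "", "indent": 0,
        "type": "root", "version": 1,
    }
}

update_args = {
    "post_id": "67abcffb7f82ac000179d76f",
    "update_data": {
        "updated_at": "2025-02-11T22:54:40.000Z",   # required on every update
        "lexical": json.dumps(lexical_doc),          # properly escaped JSON string
    },
}
```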
  • ⚠️ WARNING: This tool returns a limited subset of results (default: 5 items) to protect the LLM's context window. DO NOT increase this limit unless explicitly confirmed by the user. Query data from an Azure Storage Table with optional filters. Supported OData Filter Examples: 1. Simple equality: filter: "PartitionKey eq 'COURSE'" filter: "email eq 'user@example.com'" 2. Compound conditions: filter: "PartitionKey eq 'USER' and email eq 'user@example.com'" filter: "PartitionKey eq 'COURSE' and title eq 'GDPR Training'" 3. Numeric comparisons: filter: "age gt 25" filter: "costPrice le 100" 4. Date comparisons (ISO 8601 format): filter: "createdDate gt datetime'2023-01-01T00:00:00Z'" filter: "timestamp lt datetime'2024-12-31T23:59:59Z'" Supported Operators: - eq: Equal - ne: Not equal - gt: Greater than - ge: Greater than or equal - lt: Less than - le: Less than or equal - and: Logical and - or: Logical or - not: Logical not
    JavaScript
    MIT License
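A hypothetical query for the Azure Storage Table tool above, combining the documented OData operators. The table name, the `tableName` parameter key, and the field names are assumptions for illustration only.

```python
query_args = {
    "tableName": "enrollments",   # assumed parameter name; not taken from the tool docs
    "filter": (
        "PartitionKey eq 'COURSE' "
        "and createdDate gt datetime'2024-01-01T00:00:00Z' "
        "and costPrice le 100"
    ),
    # No limit override: the default of 5 items protects the context window,
    # as the warning above advises.
}
```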
  • Register a new user request and plan its associated tasks. You must provide 'originalRequest' and 'tasks', and optionally 'splitDetails'. This tool initiates a new workflow for handling a user's request. The workflow is as follows: 1. Use 'request_planning' to register a request and its tasks. 2. After adding tasks, you MUST use 'get_next_task' to retrieve the first task. A progress table will be displayed. 3. Use 'get_next_task' to retrieve the next uncompleted task. 4. **IMPORTANT:** After marking a task as done, the assistant MUST NOT proceed to another task without the user's approval. The user must explicitly approve the completed task using 'approve_task_completion'. A progress table will be displayed before each approval request. 5. Once a task is approved, you can proceed to 'get_next_task' again to fetch the next pending task. 6. Repeat this cycle until all tasks are done. 7. After all tasks are completed (and approved), 'get_next_task' will indicate that all tasks are done and that the request awaits approval for full completion. 8. The user must then approve the entire request's completion using 'approve_request_completion'. If the user does not approve and wants more tasks, you can again use 'request_planning' to add new tasks and continue the cycle. The critical point is to always wait for user approval after completing each task and after all tasks are done, wait for request completion approval. Do not proceed automatically.
    JavaScript
    MIT License
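An invented example of the arguments a request-planning call might take, following the workflow above. The exact task object shape is an assumption; only `originalRequest` and `tasks` are documented as required.

```python
plan_request = {
    "originalRequest": "Migrate the blog from WordPress to Ghost.",
    "tasks": [  # task shape assumed for illustration
        {"title": "Export WordPress content", "description": "Dump posts, pages, and media."},
        {"title": "Convert content", "description": "Map posts to Ghost's lexical format."},
        {"title": "Import and verify", "description": "Import into Ghost and spot-check posts."},
    ],
    # splitDetails is optional and omitted here.
}
```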
  • Update multiple blog posts that match the filter criteria. Args: filter_criteria: Dictionary containing fields to filter posts by, example: { "status": "draft", "tag": "news", "featured": True } Supported filter fields: - status: Post status (draft, published, etc) - tag: Filter by tag name - author: Filter by author name - featured: Boolean to filter featured posts - visibility: Post visibility (public, members, paid) update_data: Dictionary containing the fields to update. The updated_at field is required. All fields supported by the Ghost API can be updated: - slug: Unique URL slug for the post - title: The title of the post - lexical: JSON string representing the post content in lexical format - html: HTML version of the post content - comment_id: Identifier for the comment thread - feature_image: URL to the post's feature image - feature_image_alt: Alternate text for the feature image - feature_image_caption: Caption for the feature image - featured: Boolean flag indicating if the post is featured - status: The publication status (e.g., published, draft) - visibility: Visibility setting (e.g., public, private) - created_at: Timestamp when the post was created - updated_at: Timestamp when the post was last updated (REQUIRED) - published_at: Timestamp when the post was published - custom_excerpt: Custom excerpt text for the post - codeinjection_head: Code to be injected into the head section - codeinjection_foot: Code to be injected into the footer section - custom_template: Custom template assigned to the post - canonical_url: The canonical URL for SEO purposes - tags: List of tag objects associated with the post - authors: List of author objects for the post - primary_author: The primary author object - primary_tag: The primary tag object - og_image: Open Graph image URL for social sharing - og_title: Open Graph title for social sharing - og_description: Open Graph description for social sharing - twitter_image: Twitter-specific image URL - twitter_title: Twitter-specific title - twitter_description: Twitter-specific description - meta_title: Meta title for SEO - meta_description: Meta description for SEO - email_only: Boolean flag indicating if the post is for email distribution only - newsletter: Dictionary containing newsletter configuration details - email: Dictionary containing email details related to the post Example: { "updated_at": "2025-02-11T22:54:40.000Z", "status": "published", "featured": True, "tags": [{"name": "news"}, {"name": "featured"}], "meta_title": "My Updated Title", "og_description": "New social sharing description" } ctx: Optional context for logging Returns: Formatted string containing summary of updated posts Raises: GhostError: If there is an error accessing the Ghost API or missing required fields
    Python
    MIT License
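A rough sketch of how the two arguments described above might be combined, reusing the entry's own example values (only 'filter_criteria' and 'update_data' are named in the docstring; the surrounding call is up to the MCP client):

```python
# Select all featured draft posts tagged "news" ...
filter_criteria = {
    "status": "draft",
    "tag": "news",
    "featured": True,
}

# ... and publish them with refreshed metadata. updated_at is required.
update_data = {
    "updated_at": "2025-02-11T22:54:40.000Z",
    "status": "published",
    "tags": [{"name": "news"}, {"name": "featured"}],
    "meta_title": "My Updated Title",
    "og_description": "New social sharing description",
}
```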
  • A problem-solving tool inspired by Claude Shannon's systematic and iterative approach to complex problems. This tool helps break down problems using Shannon's methodology of problem definition, mathematical modeling, validation, and practical implementation. When to use this tool: - Complex system analysis - Information processing problems - Engineering design challenges - Problems requiring theoretical frameworks - Optimization problems - Systems requiring practical implementation - Problems that need iterative refinement - Cases where experimental validation complements theory Key features: - Systematic progression through problem definition → constraints → modeling → validation → implementation - Support for revising earlier steps as understanding evolves - Ability to mark steps for re-examination with new information - Experimental validation alongside formal proofs - Explicit tracking of assumptions and dependencies - Confidence levels for each step - Rich feedback and validation results Parameters explained: - thoughtType: Type of thinking step (PROBLEM_DEFINITION, CONSTRAINTS, MODEL, PROOF, IMPLEMENTATION) - uncertainty: Confidence level in the current thought (0-1) - dependencies: Which previous thoughts this builds upon - assumptions: Explicit listing of assumptions made - isRevision: Whether this revises an earlier thought - revisesThought: Which thought is being revised - recheckStep: For marking steps that need re-examination - proofElements: For formal validation steps - experimentalElements: For empirical validation - implementationNotes: For practical application steps The tool supports an iterative approach: 1. Define the problem's fundamental elements (revisable as understanding grows) 2. Identify system constraints and limitations (can be rechecked with new information) 3. Develop mathematical/theoretical models 4. Validate through proofs and/or experimental testing 5. Design and test practical implementations Each thought can build on, revise, or re-examine previous steps, creating a flexible yet rigorous problem-solving framework.
    TypeScript
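A hypothetical payload for a single step, using the parameter names listed above (exact casing and nesting may differ in the actual tool):

```python
# One PROBLEM_DEFINITION step in a Shannon-style chain, with explicit
# assumptions and an uncertainty value in [0, 1].
thought_step = {
    "thought": "Model the ingestion pipeline as a noisy channel and bound its throughput.",
    "thoughtType": "PROBLEM_DEFINITION",
    "uncertainty": 0.3,
    "dependencies": [],
    "assumptions": ["Input arrival rate is roughly stationary over a one-hour window"],
    "isRevision": False,
}
```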
  • Use this to check for files before deciding you don't have access to a file or image or resource. It pulls in a list of all of the user's available Resources (i.e., image files and their URIs) so we can reference pre-existing images to manipulate or upload to Stability AI.
    TypeScript
    MIT License
  • Execute PostgreSQL statements against your Supabase database. IMPORTANT: All SQL statements must end with a semicolon (;). OPERATION TYPES AND REQUIREMENTS: 1. READ Operations (SELECT, EXPLAIN, etc.): - Can be executed directly without special requirements - Example: SELECT * FROM public.users LIMIT 10; 2. WRITE Operations (INSERT, UPDATE, DELETE): - Require UNSAFE mode (use live_dangerously('database', True) first) - Example: INSERT INTO public.users (email) VALUES ('user@example.com'); 3. SCHEMA Operations (CREATE, ALTER, DROP): - Require UNSAFE mode (use live_dangerously('database', True) first) - Destructive operations (DROP, TRUNCATE) require additional confirmation - Example: CREATE TABLE public.test_table (id SERIAL PRIMARY KEY, name TEXT); MIGRATION HANDLING: All queries that modify the database will be automatically version controlled by the server. You can provide an optional migration name if you want to name the migration. - Respect the following format: verb_noun_detail. Be descriptive and concise. - Examples: - create_users_table - add_email_to_profiles - enable_rls_on_users - If you don't provide a migration name, the server will generate one based on the SQL statement - The system will sanitize your provided name to ensure compatibility with database systems - Migration names are prefixed with a timestamp in the format YYYYMMDDHHMMSS SAFETY SYSTEM: Operations are categorized by risk level: - LOW RISK: Read operations (SELECT, EXPLAIN) - allowed in SAFE mode - MEDIUM RISK: Write operations (INSERT, UPDATE, DELETE) - require UNSAFE mode - HIGH RISK: Schema operations (CREATE, ALTER) - require UNSAFE mode - EXTREME RISK: Destructive operations (DROP, TRUNCATE) - require UNSAFE mode and confirmation TRANSACTION HANDLING: - DO NOT use transaction control statements (BEGIN, COMMIT, ROLLBACK) - The database client automatically wraps queries in transactions - The SQL validator will reject queries containing transaction control statements - This ensures atomicity and provides rollback capability for data modifications MULTIPLE STATEMENTS: - You can send multiple SQL statements in a single query - Each statement will be executed in order within the same transaction - Example: CREATE TABLE public.test_table (id SERIAL PRIMARY KEY, name TEXT); INSERT INTO public.test_table (name) VALUES ('test'); CONFIRMATION FLOW FOR HIGH-RISK OPERATIONS: - High-risk operations (DROP TABLE, TRUNCATE, etc.) will be rejected with a confirmation ID - The error message will explain what happened and provide a confirmation ID - Review the risks with the user before proceeding - Use the confirm_destructive_operation tool with the provided ID to execute the operation IMPORTANT GUIDELINES: - The database client starts in SAFE mode by default for safety - Only enable UNSAFE mode when you need to modify data or schema - Never mix READ and WRITE operations in the same transaction - For destructive operations, be prepared to confirm with the confirm_destructive_operation tool WHEN TO USE OTHER TOOLS INSTEAD: - For Auth operations (users, authentication, etc.): Use call_auth_admin_method instead of direct SQL The Auth Admin SDK provides safer, validated methods for user management - For project configuration, functions, storage, etc.: Use send_management_api_request The Management API handles Supabase platform features that aren't directly in the database Note: This tool operates on the PostgreSQL database only. API operations use separate safety controls. A hypothetical sketch of the SAFE/UNSAFE flow follows this entry.
    Python
    Apache 2.0
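A minimal sketch of the SAFE/UNSAFE flow described above. `run_tool` is a stand-in for however your MCP client dispatches a call, and the "execute_sql" name and `migration_name` argument are assumptions; `live_dangerously` is the mode switch named in the entry:

```python
def run_tool(name, **kwargs):
    """Hypothetical dispatcher: a real client would forward this to the MCP server."""
    print(f"-> {name}({kwargs})")

# 1. Reads work directly in SAFE mode (note the trailing semicolon).
run_tool("execute_sql", query="SELECT id, email FROM public.users LIMIT 10;")

# 2. Writes and schema changes require switching to UNSAFE mode first.
run_tool("live_dangerously", service="database", enabled=True)
run_tool(
    "execute_sql",
    query=(
        "CREATE TABLE public.test_table (id SERIAL PRIMARY KEY, name TEXT); "
        "INSERT INTO public.test_table (name) VALUES ('test');"
    ),
    migration_name="create_test_table",  # optional; verb_noun_detail format
)
```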
  • Get the complete Supabase Management API specification. Returns the full OpenAPI specification for the Supabase Management API, including: - All available endpoints and operations - Required and optional parameters for each operation - Request and response schemas - Authentication requirements - Safety information for each operation This tool can be used in four different ways: 1. Without parameters: Returns all domains (default) 2. With path and method: Returns the full specification for a specific API endpoint 3. With domain only: Returns all paths and methods within that domain 4. With all_paths=True: Returns all paths and methods Parameters: - params: Dictionary containing optional parameters: - path: Optional API path (e.g., "/v1/projects/{ref}/functions") - method: Optional HTTP method (e.g., "GET", "POST") - domain: Optional domain/tag name (e.g., "Auth", "Storage") - all_paths: Optional boolean, if True returns all paths and methods Available domains: - Analytics: Analytics-related endpoints - Auth: Authentication and authorization endpoints - Database: Database management endpoints - Domains: Custom domain configuration endpoints - Edge Functions: Serverless function management endpoints - Environments: Environment configuration endpoints - OAuth: OAuth integration endpoints - Organizations: Organization management endpoints - Projects: Project management endpoints - Rest: RESTful API endpoints - Secrets: Secret management endpoints - Storage: Storage management endpoints This specification is useful for understanding: - What operations are available through the Management API - How to properly format requests for each endpoint - Which operations require unsafe mode - What data structures to expect in responses SAFETY: This is a low-risk read operation that can be executed in SAFE mode.
    Python
    Apache 2.0
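The four usage modes above map onto four parameter dictionaries; these use the parameter names given in the entry, with illustrative values:

```python
# 1. No parameters: list all available domains (default).
all_domains = {}

# 2. One specific endpoint.
one_endpoint = {"path": "/v1/projects/{ref}/functions", "method": "GET"}

# 3. Every path and method within a single domain.
edge_functions = {"domain": "Edge Functions"}

# 4. The complete path/method listing.
everything = {"all_paths": True}
```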
  • Get a specific node from a Figma file Args: file_key (str): The file key found in the shared Figma URL, e.g. if the URL is https://www.figma.com/proto/do4pJqHwNwH1nBrrscu6Ld/Untitled?page-id=0%3A1&node-id=0-3&viewport=361%2C361%2C0.08&t=9SVttILbgMlPWuL0-1&scaling=min-zoom&content-scaling=fixed&starting-point-node-id=0%3A3, then the file key is do4pJqHwNwH1nBrrscu6Ld node_id (str): The ID of the node to retrieve; it must be in the format x:x, e.g. the URL contains 0-3, but it should be passed as 0:3 Returns: dict: The node data if found, empty dict if not found. A small, runnable helper for deriving both values follows this entry.
    Python
    MIT License
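Both arguments above can be derived from a share URL; a small, runnable helper, assuming the usual https://www.figma.com/<kind>/<file_key>/<name>?node-id=x-y URL shape:

```python
from urllib.parse import urlparse, parse_qs

def figma_args_from_url(url: str) -> tuple[str, str]:
    """Return (file_key, node_id) with the node id normalised from x-y to x:y."""
    parsed = urlparse(url)
    file_key = parsed.path.split("/")[2]            # /proto/<file_key>/<name>
    node_id = parse_qs(parsed.query)["node-id"][0].replace("-", ":")
    return file_key, node_id

url = ("https://www.figma.com/proto/do4pJqHwNwH1nBrrscu6Ld/Untitled"
       "?page-id=0%3A1&node-id=0-3&scaling=min-zoom")
print(figma_args_from_url(url))  # ('do4pJqHwNwH1nBrrscu6Ld', '0:3')
```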
  • Converts content between different formats. Transforms input content from any supported format into the specified output format. 🚨 CRITICAL REQUIREMENTS - PLEASE READ: 1. PDF Conversion: * You MUST install TeX Live BEFORE attempting PDF conversion: * Ubuntu/Debian: `sudo apt-get install texlive-xetex` * macOS: `brew install texlive` * Windows: Install MiKTeX or TeX Live from https://miktex.org/ or https://tug.org/texlive/ * PDF conversion will FAIL without this installation 2. File Paths - EXPLICIT REQUIREMENTS: * When asked to save or convert to a file, you MUST provide: - Complete directory path - Filename - File extension * Example request: 'Write a story and save as PDF' * You MUST specify: '/path/to/story.pdf' or 'C:\Documents\story.pdf' * The tool will NOT automatically generate filenames or extensions 3. File Location After Conversion: * After successful conversion, the tool will display the exact path where the file is saved * Look for message: 'Content successfully converted and saved to: [file_path]' * You can find your converted file at the specified location * If no path is specified, files may be saved in the system temp directory (/tmp/ on Unix systems) * For better control, always provide explicit output file paths Supported formats: - Basic formats: txt, html, markdown - Advanced formats (REQUIRE complete file paths): pdf, docx, rst, latex, epub ✅ CORRECT Usage Examples: 1. 'Convert this text to HTML' (basic conversion) - Tool will show converted content 2. 'Save this text as PDF at /documents/story.pdf' - Correct: specifies path + filename + extension - Tool will show: 'Content successfully converted and saved to: /documents/story.pdf' ❌ INCORRECT Usage Examples: 1. 'Save this as PDF in /documents/' - Missing filename and extension 2. 'Convert to PDF' - Missing complete file path When requesting conversion, ALWAYS specify: 1. The content or input file 2. The desired output format 3. For advanced formats: complete output path + filename + extension Example: 'Convert this markdown to PDF and save as /path/to/output.pdf' Note: After conversion, always check the success message for the exact file location. A hypothetical sketch of a valid versus invalid request follows this entry.
    Python
    MIT License
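A hypothetical request shape contrasting a valid and an invalid PDF conversion; the parameter names ("content", "output_format", "output_file") are assumptions for illustration, while the path rules come from the entry above:

```python
# Valid: complete directory path + filename + extension for an advanced format.
ok_request = {
    "content": "# Trip report\n\nWe left at dawn...",
    "output_format": "pdf",
    "output_file": "/documents/story.pdf",
}

# Invalid: directory only, no filename or extension, so the conversion is rejected.
bad_request = {
    "content": "# Trip report\n\nWe left at dawn...",
    "output_format": "pdf",
    "output_file": "/documents/",
}
```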
  • Use SequenceMatcher to return list of the best "good enough" matches. word is a sequence for which close matches are desired (typically a string). possibilities is a list of sequences against which to match word (typically a list of strings). Optional arg n (default 3) is the maximum number of close matches to return. n must be > 0. Optional arg cutoff (default 0.6) is a float in [0, 1]. Possibilities that don't score at least that similar to word are ignored. The best (no more than n) matches among the possibilities are returned in a list, sorted by similarity score, most similar first. >>> get_close_matches("appel", ["ape", "apple", "peach", "puppy"]) ['apple', 'ape'] >>> import keyword as _keyword >>> get_close_matches("wheel", _keyword.kwlist) ['while'] >>> get_close_matches("Apple", _keyword.kwlist) [] >>> get_close_matches("accept", _keyword.kwlist) ['except']
    Python
    MIT License
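This is the standard library's difflib.get_close_matches; a short, runnable example showing the default behaviour and a stricter cutoff:

```python
from difflib import get_close_matches

commands = ["status", "stash", "checkout", "cherry-pick", "commit"]

# Default: up to 3 matches with similarity >= 0.6, best first.
print(get_close_matches("stats", commands))                    # ['status', 'stash']

# Stricter: at most one suggestion, and only if it is a very close match.
print(get_close_matches("comit", commands, n=1, cutoff=0.8))   # ['commit']
```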
  • Given a 'requestId', return the next pending task (not done yet). If all tasks are completed, it will indicate that no more tasks are left and that you must wait for the request completion approval. A progress table showing the current status of all tasks will be displayed with each response. If the same task is returned again or if no new task is provided after a task was marked as done but not yet approved, you MUST NOT proceed. In such a scenario, you must prompt the user for approval via 'approve_task_completion' before calling 'get_next_task' again. Do not skip the user's approval step. In other words: - After calling 'mark_task_done', do not call 'get_next_task' again until 'approve_task_completion' is called by the user. - If 'get_next_task' returns 'all_tasks_done', it means all tasks have been completed. At this point, you must not start a new request or do anything else until the user decides to 'approve_request_completion' or possibly add more tasks via 'request_planning'.
    JavaScript
    MIT License
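A sketch of the approval-gated loop the planning tools above describe. `run_tool` and `user_approves` are stand-ins for the MCP client call and the human approval step, and the returned task shape ("status", "id") is assumed for illustration:

```python
def run_tool(name, **kwargs):
    """Hypothetical dispatcher; a real client would forward this to the server."""
    ...

def user_approves(prompt: str) -> bool:
    """Stand-in for the explicit human approval required after every task."""
    ...

def work_through(request_id: str) -> None:
    while True:
        task = run_tool("get_next_task", requestId=request_id)
        if task is None or task.get("status") == "all_tasks_done":
            break  # now wait for the user's approve_request_completion
        # ... perform the work for `task` here ...
        run_tool("mark_task_done", requestId=request_id, taskId=task["id"])
        # Never call get_next_task again until the user has explicitly approved.
        if not user_approves(f"Approve completion of task {task['id']}?"):
            break
        run_tool("approve_task_completion", requestId=request_id, taskId=task["id"])
```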
  • List all notes that contain a given keyword. The result does not include entire note bodies as they are truncated to 200 characters. You have to retrieve the full note content by calling `read-note`. Here are tips to specify keywords effectively: ## Use special qualifiers to narrow down results You can use special qualifiers to get more accurate results. See the qualifiers and their usage examples: - **book** `book:Blog`: Searches for notes in the 'Blog' notebook. - **tag** `tag:JavaScript`: Searches for all notes having the 'JavaScript' tag. Read more about [tags](https://docs.inkdrop.app/manual/write-notes#tag-notes). - **status** `status:onHold`: Searches for all notes with the 'On hold' status. Read more about [statuses](/reference/note-statuses). - **title** `title:"JavaScript setTimeout"`: Searches for the note with the specified title. - **body** `body:KEYWORD`: Searches for a specific word in all notes. Equivalent to a [global search](#search-for-notes-across-all-notebooks). ### Combine qualifiers You can combine the filter qualifiers to refine data even more. **Find notes that contain the word 'Hello' and have the 'Issue' tag.** ```text Hello tag:Issue ``` **Find notes that contain the word 'Typescript,' have the 'Contribution' tag, and the 'Completed' status** ```text Typescript tag:Contribution status:Completed ``` ## Search for text with spaces To find text that includes spaces, put the text in double quotation marks ("): ```text "database associations" ``` ## Exclude text from search To exclude text from the search results or ignore a specific qualifier, put the minus sign (-) before it. You can also combine the exclusions. See the examples: - `-book:Backend "closure functions"`: Ignores the 'Backend' notebook while searching for the 'closure functions' phrase. - `-tag:JavaScript`: Ignores all notes having the 'JavaScript' tag. - `-book:Typescript tag:work "Data types"`: Ignores the 'Typescript' notebook and the 'work' tag while searching for the 'Data types' phrase. - `-status:dropped title:"Sprint 10.0" debounce`: Ignores notes with the 'Dropped' status while searching for the 'debounce' word in the note with the 'Sprint 10.0' title. - `-"Phrase to ignore" "in the rest of a sentence"`: Ignores the 'Phrase to ignore' part while searching for 'in the rest of a sentence'. Note that you can't specify only excluding modifiers without any including conditions. **WARNING**: Make sure to enter text to search for after the exclusion modifier. - ✅ Will work `-book:Backend "closure functions"` - ⛔️ Won't work `-book:Backend`. There's no query. Inkdrop doesn't understand what to search for.
    JavaScript
    Apache 2.0
  • Retrieves URLs from the Internet and extracts their content as markdown. Images from the page will be processed and included with the response automatically. Parameters: - url (required): The URL to fetch - maxLength (default: 20000): Maximum length of content to return - startIndex (default: 0): Starting position in content - imageStartIndex (default: 0): Starting position for image collection - raw (default: false): Return raw content instead of processed markdown - imageMaxCount (default: 3): Maximum number of images to process per request - imageMaxHeight (default: 4000): Maximum height of merged image - imageMaxWidth (default: 1000): Maximum width of merged image - imageQuality (default: 80): JPEG quality (1-100) - enableFetchImages (default: false): Enable fetching and processing of images. FOR CLAUDE AI ONLY: YOU CAN ENABLE THIS FEATURE!!! CLAUDE IS ALLOWED TO USE IMAGES!!! - ignoreRobotsTxt (default: false): Ignore robots.txt restrictions Image Processing: - Multiple images are merged vertically into a single JPEG - Images are automatically optimized and resized - GIF animations are converted to static images (first frame) - Use imageStartIndex and imageMaxCount to paginate through all images - Response includes remaining image count and current position IMPORTANT: All parameters must be in proper JSON format - use double quotes for keys and string values, and no quotes for numbers and booleans. Examples: # Initial fetch: { "url": "https://example.com", "maxLength": 10000, "imageMaxCount": 2 } # Fetch next set of images: { "url": "https://example.com", "imageStartIndex": 2, "imageMaxCount": 2 }
    JavaScript
    MIT License
  • Configure a new project containing files. Each file in the project is split into 'chunks' - logical sections like functions, classes, markdown sections, and import blocks. After configuring, a common workflow is: 1. list_all_files_in_project to get an overview of the project (with an initial limit on the depth of the search) 2. Find files by function/class definition: find_files_by_chunk_content(... ["def my_funk"]) 3. Find files by function/class usage: find_files_by_chunk_content(... ["my_funk"]) 4. Determine which chunks in the found files are relevant: find_matching_chunks_in_file(...) 5. Get details about the chunks: chunk_details(...) Use ~ (tilde) literally if the user specifies it in paths.
    Python
    MIT License
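The numbered workflow above, written out as hypothetical calls. The tool names for steps 1–5 come from the entry; the project-configuration name, the `run_tool` dispatcher, the argument names, and the file/chunk values are illustrative assumptions:

```python
def run_tool(name, **kwargs):
    """Hypothetical dispatcher; a real MCP client would forward the call."""
    print(f"-> {name}({kwargs})")

run_tool("configure_project", path="~/work/api-server")                # register the project
run_tool("list_all_files_in_project", max_depth=2)                     # 1. overview
run_tool("find_files_by_chunk_content", patterns=["def my_funk"])      # 2. where it's defined
run_tool("find_files_by_chunk_content", patterns=["my_funk"])          # 3. where it's used
run_tool("find_matching_chunks_in_file", path="src/app.py",
         patterns=["my_funk"])                                         # 4. relevant chunks
run_tool("chunk_details", chunk_id="chunk3_part1")                     # 5. full chunk text
```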
  • Execute a destructive database or API operation after confirmation. Use this only after reviewing the risks with the user. HOW IT WORKS: - This tool executes a previously rejected high-risk operation using its confirmation ID - The operation will be exactly the same as the one that generated the ID - No need to retype the query or api request params - the system remembers it STEPS: 1. Explain the risks to the user and get their approval 2. Use this tool with the confirmation ID from the error message 3. The original query will be executed as-is PARAMETERS: - operation_type: Type of operation ("api" or "database") - confirmation_id: The ID provided in the error message (required) - user_confirmation: Set to true to confirm execution (default: false) NOTE: Confirmation IDs expire after 5 minutes for security
    Python
    Apache 2.0
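The confirmation round-trip takes just three parameters (names from the entry above; the ID value is illustrative):

```python
# Re-run the previously rejected operation by its confirmation ID.
# IDs expire after 5 minutes, so confirm promptly once the user agrees.
confirm_args = {
    "operation_type": "database",
    "confirmation_id": "conf_20250211_a1b2c3",  # taken from the rejection error message
    "user_confirmation": True,
}
```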
  • Add passengers to a program in LumbreTravel. The passengers must already exist in LumbreTravel; if they do not, the create_passengers tool can be used to create them. If they do exist, the get_passengers_by_fullname or get_passengers_by_email tools can be used to obtain each passenger's id.
    TypeScript
  • Step 2: Find the actual matching chunks in a specific file. Required after find_files_by_chunk_content or list_all_files_in_project to see matches, as those tools only show files, not their contents. This can be used for things like: - Finding all chunks in a file that make reference to a specific function (e.g. find_matching_chunks_in_file(..., ["my_funk"])) - Finding a chunk where a specific function is defined (e.g. find_matching_chunks_in_file(..., ["def my_funk"])) Some chunks are split into multiple parts, because they are too large. This will look like 'chunkx_part1', 'chunkx_part2', ...
    Python
    MIT License
  • Retrieve a list of all of a user's migrations from Supabase. Returns a list of migrations with the following information: - Version (timestamp) - Name - SQL statements (if requested) - Statement count - Version type (named or numbered) Parameters: - limit: Maximum number of migrations to return (default: 50, max: 100) - offset: Number of migrations to skip for pagination (default: 0) - name_pattern: Optional pattern to filter migrations by name. Uses SQL ILIKE pattern matching (case-insensitive). The pattern is automatically wrapped with '%' wildcards, so "users" will match "create_users_table", "add_email_to_users", etc. To search for an exact match, use the complete name. - include_full_queries: Whether to include the full SQL statements in the result (default: false) SAFETY: This is a low-risk read operation that can be executed in SAFE mode. An example parameter set follows this entry.
    Python
    Apache 2.0
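An example parameter set using the names above; the pattern "users" would match migrations such as create_users_table or add_email_to_users:

```python
migration_query = {
    "limit": 20,                    # max 100
    "offset": 0,
    "name_pattern": "users",        # ILIKE match, auto-wrapped with % wildcards
    "include_full_queries": False,
}
```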
  • A context-aware reasoning system that orchestrates structured thought processes through dynamic trajectories. Core Capabilities: - Maintains adaptive thought chains with branching and revision capabilities - Implements iterative hypothesis generation and validation cycles - Preserves context coherence across non-linear reasoning paths - Supports dynamic scope adjustment and trajectory refinement Reasoning Patterns: - Sequential analysis with backtracking capability - Parallel exploration through managed branch contexts - Recursive refinement via structured revision cycles - Hypothesis validation through multi-step verification Parameters: thought: Structured reasoning step that supports: • Primary analysis chains • Hypothesis formulation/validation • Branch exploration paths • Revision proposals • Context preservation markers • Verification checkpoints next_thought_needed: Signal for continuation of reasoning chain thought_number: Position in current reasoning trajectory total_thoughts: Dynamic scope indicator (adjustable) is_revision: Marks recursive refinement steps revises_thought: References target of refinement branch_from_thought: Indicates parallel exploration paths branch_id: Context identifier for parallel chains needs_more_thoughts: Signals scope expansion requirement Execution Protocol: 1. Initialize with scope estimation 2. Generate structured reasoning steps 3. Validate hypotheses through verification cycles 4. Maintain context coherence across branches 5. Implement revisions through recursive refinement 6. Signal completion on validation success The system maintains solution integrity through continuous validation cycles while supporting dynamic scope adjustment and non-linear exploration paths.
    JavaScript
    MIT License
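A hypothetical payload for a single step, using the parameter names listed above with illustrative values:

```python
# A branching step that explores a hypothesis in a parallel context.
reasoning_step = {
    "thought": "Hypothesis: the latency spike correlates with cache eviction, not GC pauses.",
    "thought_number": 4,
    "total_thoughts": 7,              # dynamic scope; can grow if needed
    "next_thought_needed": True,
    "is_revision": False,
    "branch_from_thought": 2,         # fork from the earlier observation step
    "branch_id": "cache-eviction",
}
```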
  • Call an Auth Admin method from Supabase Python SDK. This tool provides a safe, validated interface to the Supabase Auth Admin SDK, allowing you to: - Manage users (create, update, delete) - List and search users - Generate authentication links - Manage multi-factor authentication - And more IMPORTANT NOTES: - Request bodies must adhere to the Python SDK specification - Some methods may have nested parameter structures - The tool validates all parameters against Pydantic models - Extra fields not defined in the models will be rejected AVAILABLE METHODS: - get_user_by_id: Retrieve a user by their ID - list_users: List all users with pagination - create_user: Create a new user - delete_user: Delete a user by their ID - invite_user_by_email: Send an invite link to a user's email - generate_link: Generate an email link for various authentication purposes - update_user_by_id: Update user attributes by ID - delete_factor: Delete a factor on a user EXAMPLES: 1. Get user by ID: method: "get_user_by_id" params: {"uid": "user-uuid-here"} 2. Create user: method: "create_user" params: { "email": "user@example.com", "password": "secure-password" } 3. Update user by ID: method: "update_user_by_id" params: { "uid": "user-uuid-here", "attributes": { "email": "new@email.com" } } For complete documentation of all methods and their parameters, use the get_auth_admin_methods_spec tool.
    Python
    Apache 2.0