Server Details
Connect your AI to 500+ apps like Gmail, Slack, GitHub, and Notion with streamable HTTP transport.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: ComposioHQ/Rube
- GitHub Stars: 48
Available Tools
11 tools
RUBE_CREATE_UPDATE_RECIPE
Convert executed workflow into a reusable notebook. Only use when workflow is complete or user explicitly requests.
--- DESCRIPTION FORMAT (MARKDOWN) - MUST BE NEUTRAL ---
Description is for ANY user of this recipe, not just the creator. Keep it generic:
- NO PII (no real emails, names, channel names, repo names)
- NO user-specific defaults (defaults go in defaults_for_required_parameters only)
- Use placeholder examples only
Generate rich markdown with these sections:
- Overview: [2-3 sentences: what it does, what problem it solves]
- How It Works: [End-to-end flow in plain language]
- Key Features: [Feature 1], [Feature 2]
- Step-by-Step Flow: [Step]: [What happens], repeated for each step
- Apps & Integrations: a table with columns | App | Purpose |, e.g. | [App] | [Usage] |
- Inputs Required: a table with columns | Input | Description | Format |, e.g. | channel_name | Slack channel to post to | WITHOUT # prefix | (no default values here - just format guidance)
- Output: [What the recipe produces]
- Notes & Limitations: [Edge cases, rate limits, caveats]
--- CODE STRUCTURE ---
Code has 2 parts:
1. DOCSTRING HEADER (comments) - context, learnings, version history
2. EXECUTABLE CODE - clean Python that runs
DOCSTRING HEADER (preserve all history when updating):
"""
RECIPE: [Name]
FLOW: [App1] → [App2] → [Output]
VERSION HISTORY:
v2 (current): [What changed] - [Why]
v1: Initial version
API LEARNINGS:
- [API_NAME]: [Quirk, e.g., Response nested at data.data]
KNOWN ISSUES:
- [Issue and fix]
"""
Then EXECUTABLE CODE follows (keep code clean, learnings stay in docstring).
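A minimal sketch of this layout, assuming a hypothetical Slack-summary recipe (the recipe name, learnings, and environment variable are illustrative placeholders, not taken from a real recipe):

```python
"""
RECIPE: Summarize Slack Channel
FLOW: Slack → invoke_llm → Output

VERSION HISTORY:
v2 (current): Switched to a relative time window - absolute dates broke scheduled runs
v1: Initial version

API LEARNINGS:
- SLACK: Response nested at data.data; message list may be empty for quiet channels

KNOWN ISSUES:
- Private channels require the bot to be invited first
"""
import os
from datetime import datetime

channel_name = os.environ.get("channel_name")
if not channel_name:
    raise ValueError("channel_name is required")

print(f"[{datetime.utcnow().isoformat()}] Summarizing #{channel_name}")
# ... tool executions via run_composio_tool(), content generation via invoke_llm() ...
output = {"summary_posted": True}
output
```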
--- INPUT SCHEMA (USER-FRIENDLY) ---
Ask for: channel_name, repo_name, sheet_url, email_address
Never ask for: channel_id, spreadsheet_id, user_id (resolve in code)
Never ask for large inputs: use invoke_llm to generate content in code
GOOD DESCRIPTIONS (explicit format, generic examples - no PII):
- channel_name: Slack channel WITHOUT # prefix
- repo_name: Repository name only, NOT owner/repo
- google_sheet_url: Full URL from browser
- gmail_label: Label as shown in Gmail sidebar
REQUIRED vs OPTIONAL:
Required: things that change every run (channel name, date range, search terms)
Optional: generic settings with sensible defaults (sheet tab, row limits)
--- DEFAULTS FOR REQUIRED PARAMETERS ---
Provide a value in defaults_for_required_parameters for every required input
Use values from workflow context
Use empty string if no value available - never hallucinate
Match types: string param needs string default, number needs number
Defaults are private to creator, not shared when recipe is published
SCHEDULE-FRIENDLY DEFAULTS:
Use RELATIVE time references unless the user asks otherwise, not absolute dates:
✓ "last_24_hours", "past_week", "7" (days back)
✗ "2025-01-15", "December 18, 2025"
Never include timezone as an input parameter unless specifically asked
Test: "Will this default work if recipe runs tomorrow?"
--- CODING RULES ---
SINGLE EXECUTION: Generate a complete notebook that runs in one invocation.
CODE CORRECTNESS: Must be syntactically and semantically correct and executable.
ENVIRONMENT VARIABLES: All inputs via os.environ.get(). Code is shared - no PII.
TIMEOUT: 4 min hard limit. Use ThreadPoolExecutor for bulk operations.
SCHEMA SAFETY: Never assume API response schema. Use invoke_llm to parse unknown responses.
NESTED DATA: APIs often double-nest. Always extract properly before using.
ID RESOLUTION: Convert names to IDs in code using FIND/SEARCH tools.
FAIL LOUDLY: Raise an Exception if expected data is empty. Never silently continue.
CONTENT GENERATION: Never hardcode text. Use invoke_llm() for generated content.
DEBUGGING: Timestamp all print statements.
NO META LOOPS: Never call RUBE_* meta tools via run_composio_tool.
OUTPUT: End with just the output variable (no print).
--- HELPERS ---
Available in the notebook (don't import). See RUBE_REMOTE_WORKBENCH for details:
- run_composio_tool(slug, args) returns (result, error)
- invoke_llm(prompt, reasoning_effort="low") returns (response, error)
  - reasoning_effort: "low" (bulk classification), "medium" (summarization), "high" (creative/complex content)
  - Always specify based on the task - low by default, medium for analysis, high for creative generation
- proxy_execute(method, endpoint, toolkit, ...) returns (result, error)
- upload_local_file(*paths) returns (result, error)
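A short sketch of the error-first pattern with these helpers, assuming the recipe notebook environment where they are pre-initialized (the GITHUB_LIST_CONTRIBUTORS call mirrors the workflow_code example below; the exact response shape is not guaranteed):

```python
# Runs inside the recipe notebook where run_composio_tool and invoke_llm are pre-initialized.
result, error = run_composio_tool(
    "GITHUB_LIST_CONTRIBUTORS", {"owner": "octocat", "repo": "hello-world"}
)
if error:
    raise Exception(f"Failed to fetch contributors: {error}")

data = result.get("data", {})
if isinstance(data, dict) and "data" in data:  # APIs often double-nest
    data = data["data"]
if not data:
    raise Exception("No contributors returned")  # fail loudly, never continue silently

summary, llm_error = invoke_llm(
    f"Summarize these contributors in one sentence: {data}", reasoning_effort="low"
)
if llm_error:
    raise Exception(f"invoke_llm failed: {llm_error}")
```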
--- CHECKLIST ---
Description: Neutral, no PII, no defaults - for any user
Docstring header: Version history, API learnings (preserve on update)
Input schema: Human-friendly names, format guidance, no large inputs
Defaults: In defaults_for_required_parameters, type-matched, from context
Code: Single execution, os.environ.get(), no PII, fail loudly
Output: Ends with just output
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Name for the notebook / recipe. Please keep it short (ideally less than five words) Examples: "Get Github Contributors" "Send Weekly Gmail Report" "Analyze Slack Messages" | |
| recipe_id | No | Recipe id to update (optional). If not provided, will create a new recipe Example: "rcp_rBvLjfof_THF" | |
| description | Yes | Description for the notebook / recipe Examples: "Get contributors from Github repository and save to Google Sheet" "Send weekly Gmail report to all users by sending email to each user" "Analyze Slack messages from a particular channel and send summary to all users" | |
| input_schema | Yes | Expected input json schema for the Notebook / Recipe. Please keep the schema simple, avoid nested objects and arrays. Types of all input fields should be string only. Each key of this schema will be a single environment variable input to your Notebook Example: {"properties":{"repo_owner":{"description":"GitHub repository owner username","name":"repo_owner","required":true,"type":"string"},"repo_name":{"description":"GitHub repository name","name":"repo_name","required":true,"type":"string"},"google_sheet_url":{"description":"Google Sheet URL (e.g., https://docs.google.com/spreadsheets/d/SHEET_ID/edit)","name":"google_sheet_url","required":true,"type":"string"},"sheet_tab":{"description":"Sheet tab name to write data to","name":"sheet_tab","required":false,"type":"string"}},"type":"object"} | |
| output_schema | Yes | Expected output json schema of the Notebook / Recipe. If the schema has array, please ensure it has "items" in it, so we know what kind of array it is. If the schema has object, please ensure it has "properties" in it, so we know what kind of object it is Example: {"properties":{"contributors_count":{"description":"Count of contributors to Github repository","name":"contributors_count","type":"number"},"sheet_id":{"description":"ID of the sheet","name":"sheet_id","type":"string"},"sheet_updated":{"description":"Is the sheet updated?","name":"sheet_updated","type":"boolean"},"contributor_profiles":{"name":"contributor_profiles","type":"array","items":{"type":"object"},"description":"Profiles of top 10 contributors"}},"type":"object"} | |
| workflow_code | Yes | The Python code that implements the workflow, generated by the LLM based on the executed workflow. Should include all necessary imports, tool executions (via run_composio_tool), and proper error handling. Notebook should always end with output cell (not print) Example: "import os\nimport re\nfrom datetime import datetime\n\nprint(f\"[{datetime.utcnow().isoformat()}] Starting workflow\")\n\nrepo_owner = os.environ.get(\"repo_owner\")\nrepo_name = os.environ.get(\"repo_name\")\ngoogle_sheet_url = os.environ.get(\"google_sheet_url\")\n\nif not repo_owner or not repo_name or not google_sheet_url:\n raise ValueError(\"repo_owner, repo_name, and google_sheet_url are required\")\n\n# Extract spreadsheet ID from URL\nif \"docs.google.com\" in google_sheet_url:\n match = re.search(r'/d/([a-zA-Z0-9-_]+)', google_sheet_url)\n spreadsheet_id = match.group(1) if match else google_sheet_url\nelse:\n spreadsheet_id = google_sheet_url\n\nprint(f\"[{datetime.utcnow().isoformat()}] Fetching contributors from {repo_owner}/{repo_name}\")\ngithub_result, error = run_composio_tool(\n \"GITHUB_LIST_CONTRIBUTORS\",\n {\"owner\": repo_owner, \"repo\": repo_name}\n)\n\nif error:\n raise Exception(f\"Failed to fetch contributors: {error}\")\n\n# Handle nested data\ndata = github_result.get(\"data\", {})\nif \"data\" in data:\n data = data[\"data\"]\n\ncontributors = data.get(\"contributors\") or data if isinstance(data, list) else []\n\nif len(contributors) == 0:\n raise Exception(f\"No contributors found for {repo_owner}/{repo_name}\")\n\nprint(f\"[{datetime.utcnow().isoformat()}] Found {len(contributors)} contributors\")\n\n# Process data\nrows = []\nfor contributor in contributors:\n rows.append([\n contributor.get(\"login\"),\n contributor.get(\"email\", \"\"),\n contributor.get(\"location\", \"\"),\n contributor.get(\"contributions\", 0)\n ])\n\nprint(f\"[{datetime.utcnow().isoformat()}] Adding {len(rows)} rows to sheet\")\nsheets_result, sheets_error = run_composio_tool(\n \"GOOGLESHEETS_APPEND_DATA\",\n {\n \"spreadsheet_id\": spreadsheet_id,\n \"range\": \"A1\",\n \"values\": rows\n }\n)\n\nif sheets_error:\n raise Exception(f\"Failed to update sheet: {sheets_error}\")\n\nprint(f\"[{datetime.utcnow().isoformat()}] Workflow completed\")\n\noutput = {\n \"contributors_count\": len(contributors),\n \"sheet_updated\": True,\n \"spreadsheet_id\": spreadsheet_id\n}\noutput" | |
| defaults_for_required_parameters | No | Defaults for required parameters of the notebook / recipe. PII-related values are stored separately after encryption. Please ensure that the parameters you provide match the input schema for the recipe and that all required inputs are covered. Fine to ignore optional parameters Example: {"repo_owner":"composiohq","repo_name":"composio","sheet_id":"1234567890"} |
RUBE_EXECUTE_RECIPE
Executes a Recipe
| Name | Required | Description | Default |
|---|---|---|---|
| recipe_id | Yes | Recipe id to execute Example: "rcp_rBvLjfof_THF" | |
| input_data | Yes | Input object to pass to the Recipe |
RUBE_FIND_RECIPE
Find recipes using natural language search. Use this tool when:
User refers to a recipe by partial name, description, or keywords (e.g., "run my GitHub PR recipe", "the slack notification one")
User wants to find a recipe but doesn't know the exact name or ID
You need to find a recipe_id before executing it with RUBE_EXECUTE_RECIPE
The tool uses semantic matching to find the most relevant recipes based on the user's query.
Input:
query (required): Natural language search query (e.g., "GitHub PRs to Slack", "daily email summary")
limit (optional, default: 5): Maximum number of recipes to return (1-20)
include_details (optional, default: false): Include full details like description, toolkits, tools, and default params
Output:
successful: Whether the search completed successfully
recipes: Array of matching recipes sorted by relevance score, each containing:
recipe_id: Use this with RUBE_EXECUTE_RECIPE
name: Recipe name
description: What the recipe does
relevance_score: 0-100 match score
match_reason: Why this recipe matched
toolkits: Apps used (e.g., github, slack)
recipe_url: Link to view/edit
default_params: Default input parameters
total_recipes_searched: How many recipes were searched
query_interpretation: How the search query was understood
error: Error message if search failed
Example flow: User: "Run my recipe that sends GitHub PRs to Slack"
1. Call RUBE_FIND_RECIPE with query: "GitHub PRs to Slack"
2. Get the matching recipe with its recipe_id
3. Call RUBE_EXECUTE_RECIPE with that recipe_id
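Expressed as tool arguments, that flow might look like this (the recipe id and input values are placeholders):

```python
# Step 1: arguments for RUBE_FIND_RECIPE
find_args = {"query": "GitHub PRs to Slack", "limit": 5, "include_details": True}

# Step 2: arguments for RUBE_EXECUTE_RECIPE, using the recipe_id from the best match
execute_args = {
    "recipe_id": "rcp_XXXXXXXXXXXX",            # placeholder - use the id returned by the search
    "input_data": {"channel_name": "general"},  # must match the recipe's input schema
}
```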
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of recipes to return | |
| query | Yes | Natural language query to find recipes | |
| include_details | No | Include full details (description, toolkits, tools, default params) |
RUBE_GET_RECIPE_DETAILS
Get the details of the existing recipe for a given recipe id.
| Name | Required | Description | Default |
|---|---|---|---|
| recipe_id | Yes | Recipe id to get details for Example: "rcp_rBvLjfof_THF" |
RUBE_GET_TOOL_SCHEMAS
Retrieve input schemas for tools by slug. Returns complete parameter definitions required to execute each tool. Make sure to call this tool whenever the response of RUBE_SEARCH_TOOLS does not provide a complete schema for a tool - you must never invent or guess any input parameters.
| Name | Required | Description | Default |
|---|---|---|---|
| session_id | No | ALWAYS pass the session_id that was provided in the SEARCH_TOOLS response. | |
| tool_slugs | Yes | List of tool slugs to retrieve schemas for. Each slug MUST be a valid tool slug previously returned by COMPOSIO_SEARCH_TOOLS. Examples: ["GMAIL_SEND_EMAIL"] ["GMAIL_SEND_EMAIL","SLACK_SEND_MESSAGE"] |
RUBE_MANAGE_CONNECTIONS
Create or manage connections to user's apps. Returns a branded authentication link that works for OAuth, API keys, and all other auth types.
Call policy:
First call RUBE_SEARCH_TOOLS for the user's query.
If RUBE_SEARCH_TOOLS indicates there is no active connection for a toolkit, call RUBE_MANAGE_CONNECTIONS with the exact toolkit name(s) returned.
Do not call RUBE_MANAGE_CONNECTIONS if RUBE_SEARCH_TOOLS returns no main tools and no related tools.
Toolkit names in toolkits must exactly match toolkit identifiers returned by RUBE_SEARCH_TOOLS; never invent names.
NEVER execute any toolkit tool without an ACTIVE connection.
Tool Behavior:
If a connection is Active, the tool returns the connection details. Always use this to verify connection status and fetch metadata.
If a connection is not Active, returns an authentication link (redirect_url) to create a new connection.
If reinitiate_all is true, the tool forces reconnections for all toolkits, even if they already have active connections.
Workflow after initiating connection:
Always show the returned redirect_url as a FORMATTED MARKDOWN LINK to the user, and ask them to click on the link to finish authentication.
Begin executing tools only after the connection for that toolkit is confirmed Active.
| Name | Required | Description | Default |
|---|---|---|---|
| toolkits | Yes | List of toolkits to check or connect. Should be a valid toolkit returned by SEARCH_TOOLS (never invent one). If a toolkit is not connected, will initiate connection. Example: ['gmail', 'exa', 'github', 'outlook', 'reddit', 'googlesheets', 'one_drive'] | |
| session_id | No | ALWAYS pass the session_id that was provided in the SEARCH_TOOLS response. | |
| reinitiate_all | No | Force reconnection for ALL toolkits in the toolkits list, even if they already have Active connections. WHEN TO USE: - You suspect existing connections are stale or broken. - You want to refresh all connections with new credentials or settings. - You're troubleshooting connection issues across multiple toolkits. BEHAVIOR: - Overrides any existing active connections for all specified toolkits and initiates new link-based authentication flows. DEFAULT: false (preserve existing active connections) |
RUBE_MANAGE_RECIPE_SCHEDULE
Manage scheduled recurring runs for recipes. Each recipe can have one schedule that runs indefinitely. Only recurring schedules are supported. Schedules can be paused and resumed anytime.
Use this tool when user wants to:
Schedule a recipe to run periodically
Pause or resume a recipe schedule
Update schedule timing or parameters
Delete a recipe schedule
Check current schedule status
If vibeApiId is already in context, use it directly. Otherwise, use RUBE_FIND_RECIPE first.
Behavior:
If no schedule exists for the recipe, one is created
If schedule exists, it is updated
delete=true takes priority over all other actions
schedule and params can be updated independently
Cron format: "minute hour day month weekday" Examples:
"every weekday at 9am" → "0 9 * * 1-5"
"every Monday at 8am" → "0 8 * * 1"
"daily at midnight" → "0 0 * * *"
"every hour" → "0 * * * *"
"1st of every month at 9am" → "0 9 1 * *"
| Name | Required | Description | Default |
|---|---|---|---|
| cron | No | Cron expression. Examples: "0 9 * * 1-5" (weekdays 9am), "0 0 * * *" (daily midnight) | |
| delete | No | Set true to delete schedule. Takes priority over other actions. | |
| params | No | Parameters for scheduled runs (e.g., email, channel_name, repo). Overrides recipe defaults. | |
| vibeApiId | Yes | Recipe identifier, starts with "rcp_". Example: "rcp_rBvLjfof_THF" | |
| targetStatus | No | Indicates the target state of the recipe schedule. If not specified, use "no_update". | no_update |
RUBE_MULTI_EXECUTE_TOOL
Fast and parallel tool executor for tools and recipes discovered through RUBE_SEARCH_TOOLS. Use this tool to execute up to 50 tools in parallel across apps. Response contains structured outputs ready for immediate analysis - avoid reprocessing them via remote bash/workbench tools.
Prerequisites:
Always use valid tool slugs and their arguments discovered through RUBE_SEARCH_TOOLS. NEVER invent tool slugs or argument fields. ALWAYS pass STRICTLY schema-compliant arguments with each tool execution.
Ensure an ACTIVE connection exists for the toolkits that are going to be executed. If none exists, MUST initiate one via RUBE_MANAGE_CONNECTIONS before execution.
Only batch tools that are logically independent - no required ordering or dependencies between tools or their outputs. DO NOT pass dummy or placeholder values; always resolve required inputs using appropriate tools first.
Usage guidelines:
Use this whenever a tool is discovered and has to be called, either as part of a multi-step workflow or as a standalone tool.
If RUBE_SEARCH_TOOLS returns a tool that can perform the task, prefer calling it via this executor. Do not write custom API calls or ad-hoc scripts for tasks that can be completed by available Composio tools.
Prefer parallel execution: group independent tools into a single multi-execute call where possible.
Predictively set sync_response_to_workbench=true if the response may be large or needed for later scripting. It still shows response inline; if the actual response data turns out small and easy to handle, keep everything inline and SKIP workbench usage.
Responses contain structured outputs for each tool. RULE: Small data - process yourself inline; large data - process in the workbench.
ALWAYS include inline references/links to sources in MARKDOWN format directly next to the relevant text, e.g., provide Slack thread links alongside the summary, and render document links instead of raw IDs.
Restrictions: Some tools or toolkits may be disabled in this environment. If the response indicates a restriction, inform the user and STOP execution immediately. Do NOT attempt workarounds or speculative actions.
CRITICAL: You MUST always include the 'memory' parameter - never omit it. Even if you think there's nothing to remember, include an empty object {} for memory.
Memory Storage:
CRITICAL FORMAT: Memory must be a dictionary where keys are app names (strings) and values are arrays of strings. NEVER pass nested objects or dictionaries as values.
CORRECT format: {"slack": ["Channel general has ID C1234567"], "gmail": ["John's email is john@example.com"]}
Write memory entries in natural, descriptive language - NOT as key-value pairs. Use full sentences that clearly describe the relationship or information.
ONLY store information that will be valuable for future tool executions - focus on persistent data that saves API calls.
STORE: ID mappings, entity relationships, configs, stable identifiers.
DO NOT STORE: Action descriptions, temporary status updates, logs, or "sent/fetched" confirmations.
Examples of GOOD memory (store these):
"The important channel in Slack has ID C1234567 and is called #general"
"The team's main repository is owned by user 'teamlead' with ID 98765"
"The user prefers markdown docs with professional writing, no emojis" (user_preference)
Examples of BAD memory (DON'T store these):
"Successfully sent email to john@example.com with message hi"
"Fetching emails from last day (Sep 6, 2025) for analysis"
Do not repeat the memories stored or found previously.
| Name | Required | Description | Default |
|---|---|---|---|
| tools | Yes | List of tools to execute in parallel. | |
| memory | No | CRITICAL: Memory must be a dictionary with app names as keys and string arrays as values. NEVER use nested objects. Format: {"app_name": ["string1", "string2"]}. Store durable facts - stable IDs, mappings, roles, preferences. Exclude ephemeral data like message IDs or temp links. Use full sentences describing relationships. Always include this parameter. | |
| thought | No | One-sentence, concise, high-level rationale (no step-by-step). | |
| session_id | No | ALWAYS pass the session_id that was provided in the SEARCH_TOOLS response. | |
| current_step | No | Short enum for current step of the workflow execution. Eg FETCHING_EMAILS, GENERATING_REPLIES. Always include to keep execution aligned with the workflow. | |
| current_step_metric | No | Progress metrics for the current step - use to track how far execution has advanced. Format as a string "done/total units" - example "10/100 emails", "0/n messages", "3/10 pages". | |
| sync_response_to_workbench | Yes | Syncs the response to the remote workbench (for later scripting/processing) while still viewable inline. Predictively set true if the output may be large or need scripting; if it turns out small/manageable, skip workbench and use inline only. Default: false |
RUBE_REMOTE_BASH_TOOL
Execute bash commands in a REMOTE sandbox for file operations, data processing, and system tasks. Essential for handling large tool responses saved to remote files. PRIMARY USE CASES:
Process large tool responses saved by RUBE_MULTI_EXECUTE_TOOL to remote sandbox
File system operations, extract specific information from JSON with shell tools like jq, awk, sed, grep, etc.
Commands run from /home/user directory by default
| Name | Required | Description | Default |
|---|---|---|---|
| command | Yes | The bash command to execute | |
| session_id | No | ALWAYS pass the session_id that was provided in the SEARCH_TOOLS response. |
RUBE_REMOTE_WORKBENCH
Process REMOTE FILES or script BULK TOOL EXECUTIONS using Python code IN A REMOTE SANDBOX. If you can see the data in chat, DON'T USE THIS TOOL. ONLY use this when processing data stored in a remote file or when scripting bulk tool executions.
DO NOT USE
When the complete response is already inline/in-memory, or you only need quick parsing, summarization, or basic math.
USE IF
To parse/analyze tool outputs saved to a remote file in the sandbox or to script multi-tool chains there.
For bulk or repeated executions of known Composio tools (e.g., add a label to 100 emails).
To call APIs via proxy_execute when no Composio tool exists for that API.
OUTPUTS
Returns a compact result or, if too long, artifacts under /home/user/.code_out.
IMPORTANT CODING RULES:
1. Stepwise Execution: Split work into small steps. Save intermediate outputs in variables or temporary files in /tmp/. Call RUBE_REMOTE_WORKBENCH again for the next step. This improves composability and avoids timeouts.
2. Notebook Persistence: This is a persistent Jupyter notebook cell: variables, functions, imports, and in-memory state from previous code executions are preserved in the notebook's history and available for reuse. A few helper functions are also available.
3. Parallelism & Timeout (CRITICAL): There is a hard timeout of 4 minutes, so complete the code within that. Prioritize PARALLEL execution using ThreadPoolExecutor with suitable concurrency for bulk operations - e.g., call run_composio_tool or invoke_llm in parallel across rows to maximize efficiency. If the data is large, split it into smaller batches and call the workbench multiple times to avoid timeouts.
4. Checkpoints: Implement checkpoints (in memory or files) so that long runs can be resumed from the last completed step.
5. Schema Safety: Never assume the response schema for run_composio_tool if it is not already known from previous tools. To inspect a schema, either run a simple request outside the workbench via RUBE_MULTI_EXECUTE_TOOL or use the invoke_llm helper.
6. LLM Helpers: Always use the invoke_llm helper for summary, analysis, or field extraction on results. This is a smart LLM that will give much better results than any ad-hoc filtering.
7. Avoid Meta Loops: Do not use run_composio_tool to call RUBE_MULTI_EXECUTE_TOOL or other COMPOSIO_* meta tools, to avoid cycles. Only use it for app tools.
8. Pagination: Use when data spans multiple pages. Continue fetching pages with the returned next_page_token or cursor until none remains. Parallelize page fetching if the tool supports page_number.
9. No Hardcoding: Never hardcode data in code. Always load it from files or tool responses, iterating to construct intermediate or final inputs/outputs.
10. If the final output is in a workbench file, use upload_local_file to download it - never expose the raw workbench file path to the user. Prefer to download useful artifacts after the task is complete.
ENV & HELPERS:
Home directory:
/home/user.NOTE: Helper functions already initialized in the workbench - DO NOT import or redeclare them:
run_composio_tool(tool_slug: str, arguments: dict) -> tuple[Dict[str, Any], str]: Execute a known Composio app tool (from RUBE_SEARCH_TOOLS). Do not invent names; match the tool's input schema. Suited for loops/parallel/bulk over datasets.
i) run_composio_tool returns JSON with top-level "data". Parse carefully—structure may be nested.
invoke_llm(query: str) -> tuple[str, str]: Invoke an LLM for semantic tasks. Pass MAX 200k characters in input.
i) NOTE Prompting guidance: When building prompts for invoke_llm, prefer f-strings (or concatenation) so literal braces stay intact. If using str.format, escape braces by doubling them ({{ }}).
ii) Define the exact JSON schema you want and batch items into smaller groups to stay within token limit.
All helper functions return a tuple (result, error). Always check error before using result.
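A sketch of that prompting pattern, assuming a messages list already fetched in an earlier cell (the label set, batch size, and variable names are illustrative):

```python
import json

# Assumes `messages` already exists in the notebook from an earlier run_composio_tool call.
batch = messages[:50]  # batch items into smaller groups to stay within the input limit
schema = '{"results": [{"id": "string", "label": "string"}]}'  # exact JSON schema we want back
prompt = (
    'Classify each message as "question", "update", or "other".\n'
    f"Return ONLY JSON matching this schema: {schema}\n"
    f"Messages: {json.dumps(batch)}"
)
response, error = invoke_llm(prompt)
if error:
    raise Exception(f"invoke_llm failed: {error}")
labels = json.loads(response)  # assumes the model returned bare JSON; guard this in real code
```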
Python Helper Functions for LLM Scripting
run_composio_tool(tool_slug, arguments)
Executes a known Composio tool via backend API. Do NOT call COMPOSIO_* meta tools to avoid cyclic calls.
invoke_llm(query)
Calls LLM for reasoning, analysis, and semantic tasks. Pass MAX 200k characters input.
upload_local_file(*file_paths)
Uploads sandbox files to Composio S3/R2 storage. Single files upload directly, multiple files are auto-zipped. Use this when you need to upload/download any generated artifacts from the sandbox.
proxy_execute(method, endpoint, toolkit, query_params=None, body=None, headers=None)
Direct API call to a connected toolkit service.
web_search(query)
Searches the web via Exa AI.
Best Practices
- Error-first pattern and defensive parsing (print keys while narrowing)
- Parallelize (4-min sandbox timeout): adjust concurrency so all tasks finish within 4 minutes.
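An illustrative sketch of the parallel, error-first pattern (the Gmail tool slug, its arguments, and the email_ids list are assumptions; confirm real slugs and schemas via RUBE_SEARCH_TOOLS):

```python
from concurrent.futures import ThreadPoolExecutor

# Assumes email_ids exists from an earlier step; the tool slug and arguments are illustrative.
def add_label(email_id):
    result, error = run_composio_tool(
        "GMAIL_ADD_LABEL_TO_EMAIL",
        {"message_id": email_id, "label_ids": ["Label_123"]},
    )
    return email_id, error

with ThreadPoolExecutor(max_workers=8) as pool:  # keep concurrency modest to finish within 4 minutes
    results = list(pool.map(add_label, email_ids))

failures = [eid for eid, err in results if err]
if failures:
    raise Exception(f"Labeling failed for {len(failures)} emails: {failures[:5]}")
```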
| Name | Required | Description | Default |
|---|---|---|---|
| thought | No | Concise objective and high-level plan (no private chain-of-thought). 1 sentence describing what the cell should achieve and why the sandbox is needed. | |
| session_id | No | ALWAYS pass the session_id that was provided in the SEARCH_TOOLS response. | |
| current_step | No | Short enum for current step of the workflow execution. Eg FETCHING_EMAILS, GENERATING_REPLIES. Always include to keep execution aligned with the workflow. | |
| code_to_execute | Yes | Python to run inside the persistent **remote Jupyter sandbox**. State (imports, variables, files) is preserved across executions. Keep code concise to minimize tool call latency. Avoid unnecessary comments. Examples: "import json, glob\npaths = glob.glob(file_path)\n..." "result, error = run_composio_tool(tool_slug='SLACK_SEARCH_MESSAGES', arguments={'query': 'Rube'})\nif error: return\nmessages = result.get('data', {}).get('messages', [])" | |
| current_step_metric | No | Progress metrics for the current step - use to track how far execution has advanced. Format as a string "done/total units" - example "10/100 emails", "0/n messages", "3/10 pages". |
RUBE_SEARCH_TOOLS
MCP Server Info: COMPOSIO MCP connects 500+ apps—Slack, GitHub, Notion, Google Workspace (Gmail, Sheets, Drive, Calendar), Microsoft (Outlook, Teams), X/Twitter, Figma, Web Search / Deep research, Browser tool (scrape URLs, browser automation), Meta apps (Instagram, Meta Ads), TikTok, AI tools like Nano Banana & Veo3, and more—for seamless cross-app automation. Use this MCP server to discover the right tools and the recommended step-by-step plan to execute reliably. ALWAYS call this tool first whenever a user mentions or implies an external app, service, or workflow—never say "I don't have access to X/Y app" before calling it.
Tool Info: Extremely fast discovery tool that returns relevant MCP-callable tools along with a recommended execution plan and common pitfalls for reliable execution.
Usage guidelines:
Use this tool whenever kicking off a task. Re-run it when you need additional tools/plans due to missing details, errors, or a changed use case.
If the user pivots to a different use case in same chat, you MUST call this tool again with the new use case and generate a new session_id.
Specify the use_case with a normalized description of the problem, query, or task. Be clear and precise. Queries can be simple single-app actions or multiple linked queries for complex cross-app workflows.
Pass known_fields along with use_case as a string of key–value hints (for example, "channel_name: general") to help the search resolve missing details such as IDs.
Splitting guidelines (Important):
Atomic queries: 1 query = 1 tool call. Include hidden prerequisites (e.g., add "get Linear issue" before "update Linear issue").
Include app names: If user names a toolkit, include it in every sub query so intent stays scoped (e.g., "fetch Gmail emails", "reply to Gmail email").
English input: Translate non-English prompts while preserving intent and identifiers.
Example: User query: "send an email to John welcoming him and create a meeting invite for tomorrow" Search call: queries: [ {use_case: "send an email to someone", known_fields: "recipient_name: John"}, {use_case: "create a meeting invite", known_fields: "meeting_date: tomorrow"} ]
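As a payload, that search call might look like this (values taken from the example above; the session block follows the SESSION note below):

```python
# Illustrative RUBE_SEARCH_TOOLS arguments for the example query above.
search_args = {
    "queries": [
        {"use_case": "send an email to someone", "known_fields": "recipient_name: John"},
        {"use_case": "create a meeting invite", "known_fields": "meeting_date: tomorrow"},
    ],
    "session": {"generate_id": True},  # new workflow; reuse the returned session_id in later meta tool calls
}
```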
Plan review checklist (Important):
The response includes a detailed execution plan and common pitfalls. You MUST review this plan carefully, adapt it to your current context, and generate your own final step-by-step plan before execution. Execute the steps in order to ensure reliable and accurate execution. Skipping or ignoring required steps can lead to unexpected failures.
Check the plan and pitfalls for input parameter nuances (required fields, IDs, formats, limits). Before executing any tool, you MUST review its COMPLETE input schema and provide STRICTLY schema-compliant arguments to avoid invalid-input errors.
Determine whether pagination is needed; if a response returns a pagination token and completeness is implied, paginate until exhaustion and do not return partial results.
Response:
Tools & Input Schemas: The response lists toolkits (apps) and tools suitable for the task, along with their tool_slug, description, input schema / schemaRef, and related tools for prerequisites, alternatives, or next steps.
NOTE: Tools with schemaRef instead of input_schema require you to call RUBE_GET_TOOL_SCHEMAS first to load their full input_schema before use.
Connection Info: If a toolkit has an active connection, the response includes it along with any available current user information. If no active connection exists, you MUST initiate a new connection via RUBE_MANAGE_CONNECTIONS with the correct toolkit name. DO NOT execute any toolkit tool without an ACTIVE connection.
Time Info: The response includes the current UTC time for reference. You can reference UTC time from the response if needed.
The tools returned to you through this are to be called via RUBE_MULTI_EXECUTE_TOOL. Ensure each tool execution specifies the correct tool_slug and arguments exactly as defined by the tool's input schema.
The response includes a memory parameter containing relevant information about the use case and the known fields that can be used to determine the flow of execution. Any user preferences in memory must be adhered to.
SESSION: ALWAYS set this parameter, first for any workflow. Pass session: {generate_id: true} for new workflows OR session: {id: "EXISTING_ID"} to continue. ALWAYS use the returned session_id in ALL subsequent meta tool calls.
| Name | Required | Description | Default |
|---|---|---|---|
| model | No | Client LLM model name (recommended). Used to optimize planning/search behavior. Ignored if omitted or invalid. Examples: "gpt-5.2" "claude-4.5-sonnet" | |
| queries | Yes | List of structured search queries (in English) to process in parallel. Each query represents a specific use case or task. For multi-app or complex workflows, split them into smaller single-app, API-level actions for best accuracy, including implicit prerequisites (e.g., fetch the resource before updating it). Each query returns 5-10 tools. | |
| session | No | Session context for correlating meta tool calls within a workflow. Always pass this parameter. Use {generate_id: true} for new workflows or {id: "EXISTING_ID"} to continue existing workflows. |
FAQ
How do I claim this server?
To claim this server, publish a /.well-known/glama.json file on your server's domain. The email address in that file must match the email associated with your Glama account. Once verified, the server will appear as claimed by you.
What are the benefits of claiming a server?
- Control your server's listing on Glama, including description and metadata
- Receive usage reports showing how your server is being used
- Get monitoring and health status updates for your server