consult_gemini

Send queries to Google's Gemini AI via CLI to get AI-generated responses with configurable model, directory, and timeout settings.

Instructions

Send a query directly to the Gemini CLI.

Args:
    query: Prompt text forwarded verbatim to the CLI.
    directory: Working directory used for command execution.
    model: Optional model alias (``flash``, ``pro``) or full Gemini model id.
    timeout_seconds: Optional per-call timeout override in seconds.

Returns:
    Gemini's response text or an explanatory error string.

Input Schema

Name             Required  Description                                            Default
query            Yes       Prompt text forwarded verbatim to the CLI.             -
directory        Yes       Working directory used for command execution.          -
model            No        Model alias ("flash", "pro") or full Gemini model id.  gemini-2.5-flash
timeout_seconds  No        Per-call timeout override in seconds.                  -
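To make the schema concrete, here is a hypothetical arguments payload for a consult_gemini call; the paths and values are illustrative, and only `query` and `directory` are required.

```python
# Hypothetical arguments for a consult_gemini tool call.
arguments = {
    "query": "Summarize the README in this repo.",
    "directory": "/path/to/project",   # must be an existing directory
    "model": "pro",                    # optional alias, normalized to gemini-2.5-pro
    "timeout_seconds": 120,            # optional per-call override
}

# The two required keys must always be present.
print({"query", "directory"} <= set(arguments))
```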

Implementation Reference

  • The main handler function for the 'consult_gemini' tool. Registered with @mcp.tool(), defines input schema via type annotations and docstring, and implements logic by delegating to the execute_gemini_simple helper.
    @mcp.tool()
    def consult_gemini(
        query: str,
        directory: str,
        model: str | None = None,
        timeout_seconds: int | None = None,
    ) -> str:
        """Send a query directly to the Gemini CLI.

        Args:
            query: Prompt text forwarded verbatim to the CLI.
            directory: Working directory used for command execution.
            model: Optional model alias (``flash``, ``pro``) or full Gemini model id.
            timeout_seconds: Optional per-call timeout override in seconds.

        Returns:
            Gemini's response text or an explanatory error string.
        """
        return execute_gemini_simple(query, directory, model, timeout_seconds)
  • Supporting helper function that performs the actual execution of the Gemini CLI subprocess for simple queries, including validation, model normalization, timeout handling, and error management.
    def execute_gemini_simple(
        query: str,
        directory: str = ".",
        model: Optional[str] = None,
        timeout_seconds: Optional[int] = None,
    ) -> str:
        """
        Execute gemini CLI command for simple queries without file attachments.

        Args:
            query: The prompt to send to Gemini
            directory: Working directory for the command
            model: Optional model name (flash, pro, etc.)
            timeout_seconds: Optional per-call timeout override in seconds

        Returns:
            CLI output or error message
        """
        # Check if gemini CLI is available
        if not shutil.which("gemini"):
            return "Error: Gemini CLI not found. Install with: npm install -g @google/gemini-cli"

        # Validate directory
        if not os.path.isdir(directory):
            return f"Error: Directory does not exist: {directory}"

        # Build command - use stdin for input to avoid hanging
        selected_model = _normalize_model_name(model)
        cmd = ["gemini", "-m", selected_model]

        # Execute CLI command - simple timeout, no retries
        timeout = _coerce_timeout(timeout_seconds)
        try:
            result = subprocess.run(
                cmd,
                cwd=directory,
                capture_output=True,
                text=True,
                timeout=timeout,
                input=query,
            )
            if result.returncode == 0:
                return result.stdout.strip() if result.stdout.strip() else "No output from Gemini CLI"
            else:
                return f"Gemini CLI Error: {result.stderr.strip()}"
        except subprocess.TimeoutExpired:
            return f"Error: Gemini CLI command timed out after {timeout} seconds"
        except Exception as e:
            return f"Error executing Gemini CLI: {str(e)}"
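The key detail in the helper above is passing the prompt via `input=` rather than a command-line argument, so the child process reads stdin and never blocks waiting for an interactive terminal. A minimal, self-contained sketch of that pattern, using `cat` as a harmless stand-in for the `gemini` binary (an assumption for demonstration only):

```python
import subprocess

def run_via_stdin(cmd, prompt, cwd=".", timeout=30):
    """Run a CLI command, feeding the prompt on stdin so the child
    process never hangs waiting for interactive input."""
    try:
        result = subprocess.run(
            cmd,
            cwd=cwd,
            capture_output=True,
            text=True,
            timeout=timeout,
            input=prompt,
        )
    except subprocess.TimeoutExpired:
        return f"Error: command timed out after {timeout} seconds"
    if result.returncode == 0:
        return result.stdout.strip() or "No output"
    return f"Error: {result.stderr.strip()}"

# `cat` simply echoes stdin back, mimicking a CLI that reads its prompt from stdin.
print(run_via_stdin(["cat"], "hello"))
```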
  • Utility helper for coercing and validating timeout values, used by execute_gemini_simple.
    def _coerce_timeout(timeout_seconds: Optional[int]) -> int:
        """Return a positive timeout, preferring explicit overrides."""
        if timeout_seconds is None:
            return _get_timeout()
        try:
            timeout = int(timeout_seconds)
        except (TypeError, ValueError):
            logging.warning(
                "Invalid timeout override '%s' (must be integer). Using default.",
                timeout_seconds,
            )
            return _get_timeout()
        if timeout <= 0:
            logging.warning(
                "Invalid timeout override '%s' (must be positive). Using default.",
                timeout_seconds,
            )
            return _get_timeout()
        return timeout
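The coercion rules can be exercised in isolation. This standalone sketch mirrors `_coerce_timeout`, substituting a hypothetical constant of 60 seconds for the real `_get_timeout()` (the actual default is configured elsewhere in the server):

```python
import logging
from typing import Optional

DEFAULT_TIMEOUT = 60  # stand-in for _get_timeout(); the real default is configurable

def coerce_timeout(timeout_seconds: Optional[int], default: int = DEFAULT_TIMEOUT) -> int:
    """Fall back to the default for None, non-integer, or non-positive overrides."""
    if timeout_seconds is None:
        return default
    try:
        timeout = int(timeout_seconds)
    except (TypeError, ValueError):
        logging.warning("Invalid timeout override %r (must be integer).", timeout_seconds)
        return default
    if timeout <= 0:
        logging.warning("Invalid timeout override %r (must be positive).", timeout_seconds)
        return default
    return timeout

print(coerce_timeout(None))    # no override -> default
print(coerce_timeout(120))     # valid override wins
print(coerce_timeout("oops"))  # non-integer -> default
print(coerce_timeout(-5))      # non-positive -> default
```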
  • Utility helper for normalizing model names to standard Gemini CLI formats, called within execute_gemini_simple.
    def _normalize_model_name(model: Optional[str]) -> str:
        """
        Normalize user-provided model identifiers to canonical Gemini CLI model names.
        Defaults to gemini-2.5-flash when not provided or unrecognized.

        Accepted forms:
        - "flash", "2.5-flash", "gemini-2.5-flash" -> gemini-2.5-flash
        - "pro", "2.5-pro", "gemini-2.5-pro" -> gemini-2.5-pro
        - "3-pro", "gemini-3-pro", "gemini-3-pro-preview" -> gemini-3-pro-preview
        - "3-flash", "gemini-3-flash", "gemini-3-flash-preview" -> gemini-3-flash-preview
        - "auto" -> auto (model router, lets CLI choose optimal model)
        """
        if not model:
            return "gemini-2.5-flash"
        value = model.strip().lower()
        # Gemini 2.5 aliases
        if value in {"flash", "2.5-flash", "gemini-2.5-flash"}:
            return "gemini-2.5-flash"
        if value in {"pro", "2.5-pro", "gemini-2.5-pro"}:
            return "gemini-2.5-pro"
        # Gemini 3 aliases (preview models)
        if value in {"3-pro", "gemini-3-pro", "gemini-3-pro-preview"}:
            return "gemini-3-pro-preview"
        if value in {"3-flash", "gemini-3-flash", "gemini-3-flash-preview"}:
            return "gemini-3-flash-preview"
        # Model router (let CLI choose best model)
        if value == "auto":
            return "auto"
        # Pass through any other gemini-* model name
        if value.startswith("gemini-"):
            return value
        # Fallback to flash for anything else
        return "gemini-2.5-flash"
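A compact, behavior-equivalent sketch of the same normalization, expressed as an alias table rather than chained conditionals; the dict-based form is an alternative structuring, not the server's actual implementation:

```python
from typing import Optional

# Alias table mirroring the branches of _normalize_model_name.
_MODEL_ALIASES = {
    "flash": "gemini-2.5-flash",
    "2.5-flash": "gemini-2.5-flash",
    "pro": "gemini-2.5-pro",
    "2.5-pro": "gemini-2.5-pro",
    "3-pro": "gemini-3-pro-preview",
    "gemini-3-pro": "gemini-3-pro-preview",
    "3-flash": "gemini-3-flash-preview",
    "gemini-3-flash": "gemini-3-flash-preview",
}

def normalize_model_name(model: Optional[str]) -> str:
    """Map aliases to canonical names; pass through "auto" and gemini-* ids."""
    if not model:
        return "gemini-2.5-flash"
    value = model.strip().lower()
    if value in _MODEL_ALIASES:
        return _MODEL_ALIASES[value]
    if value == "auto" or value.startswith("gemini-"):
        return value  # router keyword and full model ids pass through unchanged
    return "gemini-2.5-flash"  # unrecognized input falls back to flash

print(normalize_model_name("PRO"))  # case-insensitive alias lookup
```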

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/eLyiN/gemini-bridge'
