Glama / alilxxey / openobserve-community-mcp

search_around

Fetch records surrounding a specific log entry using its timestamp. Supports Unix timestamps in seconds, milliseconds, microseconds, or nanoseconds. Returns nearby rows with configurable output format and record profile.

Instructions

Fetch records around a specific log entry. key accepts Unix timestamps in seconds, milliseconds, microseconds, or nanoseconds for convenience, but the best input is the exact _timestamp returned by search_logs; otherwise OpenObserve may return no nearby rows. output_format can be 'records' or 'columns' for a more token-efficient table shape. record_profile can be 'generic' or 'kubernetes_compact'.
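For instance, a call pinned to an exact `_timestamp` taken from a prior search_logs result might pass arguments like the following sketch (the stream name and timestamp value are illustrative, not from this server):

```python
# Illustrative arguments for a search_around tool call. The key is the
# exact _timestamp (microseconds) copied from a search_logs hit.
arguments = {
    "stream_name": "app_logs",        # hypothetical stream
    "key": 1700000000123456,          # microsecond-precision _timestamp
    "size": 10,                       # up to 10 surrounding rows
    "output_format": "columns",       # token-efficient table shape
    "record_profile": "kubernetes_compact",
}
```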

Input Schema

Name            Required  Description  Default
stream_name     Yes
key             Yes
size            No
regions         No
timeout         No
output_format   No                     records
record_profile  No                     generic
include_raw     No

Output Schema

No output fields are declared.

Implementation Reference

  • The MCP tool 'search_around' is registered via @server.tool() decorator on the search_around function in create_server(). It accepts parameters like stream_name, key, size, regions, timeout, output_format, record_profile, and include_raw.
    @server.tool()
    def search_around(
        stream_name: str,
        key: int,
        size: int = 20,
        regions: str | None = None,
        timeout: int | None = None,
        output_format: str = "records",
        record_profile: str = "generic",
        include_raw: bool = False,
    ) -> dict[str, Any]:
        """Fetch records around a specific log entry. key accepts Unix timestamps in seconds, milliseconds, microseconds, or nanoseconds for convenience, but the best input is the exact `_timestamp` returned by search_logs; otherwise OpenObserve may return no nearby rows. output_format can be 'records' or 'columns' for a more token-efficient table shape. record_profile can be 'generic' or 'kubernetes_compact'."""
        client = client_provider.get()
        key = _normalize_unix_timestamp(key, field_name="key")
        raw = client.search_around(
            stream_name=stream_name,
            key=key,
            size=size,
            regions=regions,
            timeout=timeout,
        )
        return build_search_around_result(
            org_id=client.resolve_org_id(),
            stream_name=stream_name,
            size=size,
            raw=raw,
            output_format=output_format,
            record_profile=record_profile,
            include_raw=include_raw,
        )
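The `_normalize_unix_timestamp` helper called above is not shown on this page. A plausible sketch infers the unit from the value's magnitude and converts everything to OpenObserve's native microsecond precision; the thresholds here are assumptions, not the library's actual cutoffs:

```python
def normalize_unix_timestamp(value: int, *, field_name: str = "key") -> int:
    """Best-effort unit detection by magnitude; returns microseconds.

    Sketch only: treats ~10-digit values as seconds, ~13-digit as
    milliseconds, ~16-digit as microseconds, ~19-digit as nanoseconds.
    """
    if value < 0:
        raise ValueError(f"{field_name} must be a non-negative Unix timestamp")
    if value < 10**11:       # seconds (covers dates well past the year 5000)
        return value * 1_000_000
    if value < 10**14:       # milliseconds
        return value * 1_000
    if value < 10**17:       # microseconds (OpenObserve's native unit)
        return value
    return value // 1_000    # nanoseconds -> microseconds
```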
  • The handler function that executes the tool logic: normalizes the key timestamp, calls client.search_around(), and returns build_search_around_result().
  • Client method search_around() that makes an HTTP GET request to the OpenObserve API endpoint /api/{org_id}/{stream_name}/_around with query parameters key, size, regions, and timeout.
    def search_around(
        self,
        *,
        stream_name: str,
        key: int,
        size: int = 20,
        regions: str | None = None,
        timeout: int | None = None,
    ) -> Any:
        query: dict[str, str | int | float | bool] = {
            "key": key,
            "size": size,
        }
        if regions:
            query["regions"] = regions
        if timeout is not None:
            query["timeout"] = timeout
    
        return self.request_json(
            "GET",
            self._org_path("/api/{org_id}/{stream_name}/_around", stream_name=stream_name),
            query=query,
        )
  • Result builder function build_search_around_result() that formats the API response into a structured dict with org_id, stream_name, requested_size, hit_count, output_format, record_profile, and record/columnar payload.
    def build_search_around_result(
        *,
        org_id: str,
        stream_name: str,
        size: int,
        raw: Any,
        output_format: str,
        record_profile: str,
        include_raw: bool,
    ) -> dict[str, Any]:
        hits = raw.get("hits", []) if isinstance(raw, dict) else []
        records = [_apply_record_profile(summarize_search_record(hit), record_profile=record_profile) for hit in hits if isinstance(hit, dict)]
        result: dict[str, Any] = {
            "org_id": org_id,
            "stream_name": stream_name,
            "requested_size": size,
            "hit_count": len(hits),
            "output_format": _normalize_output_format(output_format),
            "record_profile": _normalize_record_profile(record_profile),
        }
        _attach_record_payload(result, records, output_format=output_format)
        return maybe_include_raw(result, raw, include_raw)
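`_attach_record_payload` is referenced but not shown here. The 'columns' output format presumably splits records into a shared header plus value rows; a sketch of that transformation, under that assumption:

```python
from typing import Any


def attach_record_payload(
    result: dict[str, Any],
    records: list[dict[str, Any]],
    *,
    output_format: str = "records",
) -> dict[str, Any]:
    """Attach records as-is, or in a columnar (header + rows) shape (sketch)."""
    if output_format == "columns":
        # Union of field names across all records, preserving first-seen order.
        names: list[str] = []
        for rec in records:
            for name in rec:
                if name not in names:
                    names.append(name)
        result["columns"] = names
        result["rows"] = [[rec.get(n) for n in names] for rec in records]
    else:
        result["records"] = records
    return result
```

The columnar shape states each field name once instead of repeating it per record, which is where the token savings mentioned in the description would come from.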
  • Input parameters defined as function arguments to the search_around tool: stream_name (str), key (int), size (int=20), regions (str|None), timeout (int|None), output_format (str='records'), record_profile (str='generic'), include_raw (bool=False).
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses that input timestamps can be in multiple units and mentions output format options. However, it does not state whether the operation is read-only, what side effects it may have, or what authentication it requires, leaving significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each adding essential information without redundancy. The purpose is stated first, followed by important parameter details. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Although an output schema section exists, it declares no fields, and the description leaves many parameters undocumented and does not clarify when to use this tool versus its sibling search_logs. Given 8 parameters and only 3 explained, the description is insufficient for complete understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has zero description coverage. The description explains three parameters (key, output_format, record_profile) with practical details, but the remaining five parameters (stream_name, size, regions, timeout, include_raw) are unexplained, so the description only partially compensates.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool fetches records around a specific log entry, which is a specific verb and resource. It distinguishes from sibling tools like search_logs by focusing on nearby records rather than general log search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on using the `key` parameter, including acceptable formats and the recommendation to use the exact `_timestamp` from search_logs. However, it does not explicitly state when to prefer this tool over alternatives like search_logs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
