Glama

dynamic_lisa

Analyze directional spatial patterns in geographic data to identify clusters and outliers with statistical significance testing.

Instructions

Run dynamic LISA (directional LISA) with giddy.directional.Rose.

Returns sector counts, angles, vector lengths, and (optionally) permutation p-values.
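As a quick illustration, an agent might call the tool with arguments like these (the shapefile path and column names are hypothetical placeholders; only the two required fields must be supplied, the rest fall back to the handler's defaults):

```python
# Hypothetical invocation arguments for the dynamic_lisa tool.
args = {
    "shapefile_path": "data/regions.shp",       # placeholder path
    "value_columns": ["gdp_2000", "gdp_2010"],  # exactly two: [t0, t1]
    "weights_method": "queen",                  # 'queen' | 'rook' | 'distance'
    "k": 8,                                     # number of rose sectors
    "permutations": 99,                         # 0 would skip permutation inference
}
```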

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| shapefile_path | Yes | Path to the input shapefile | |
| value_columns | Yes | Exactly two columns: [start_time, end_time] | |
| target_crs | No | CRS the data are reprojected to | EPSG:4326 |
| weights_method | No | 'queen', 'rook', or 'distance' | queen |
| distance_threshold | No | Distance band in meters (converted to degrees under EPSG:4326) | 100000 |
| k | No | Number of rose sectors | 8 |
| permutations | No | Permutation count for inference; 0 skips it | 99 |
| alternative | No | 'two.sided', 'positive', or 'negative' | two.sided |
| relative | No | Divide each column by its mean | true |
| drop_na | No | Drop features with NA in either column | true |

Output Schema


No arguments

Implementation Reference

  • The handler function for the 'dynamic_lisa' tool, implementing dynamic Local Indicators of Spatial Association (LISA) using the giddy.directional.Rose class. It computes directional statistics between two time periods in shapefile data.
    # Module-level imports assumed by this handler (not shown in the excerpt):
    import os
    from typing import Any, Dict, List, Union

    import geopandas as gpd
    import numpy as np

    @gis_mcp.tool()
    def dynamic_lisa(
        shapefile_path: str,
        value_columns: Union[str, List[str]],   # exactly two columns: [t0, t1]
        target_crs: str = "EPSG:4326",
        weights_method: str = "queen",          # 'queen'|'rook'|'distance'
        distance_threshold: float = 100000,     # meters (converted to degrees if 4326)
        k: int = 8,                             # number of rose sectors
        permutations: int = 99,                 # 0 = skip inference
        alternative: str = "two.sided",         # 'two.sided'|'positive'|'negative'
        relative: bool = True,                  # divide each column by its mean
        drop_na: bool = True                    # drop features with NA in either column
    ) -> Dict[str, Any]:
        """Run dynamic LISA (directional LISA) with giddy.directional.Rose.
    
        Returns sector counts, angles, vector lengths, and (optionally) permutation p-values.
        """
        try:
            # --- sanitize: strip stray backticks from string arguments ---
            # (assigning through locals() has no effect inside a function,
            # so each argument is reassigned explicitly)
            shapefile_path = shapefile_path.replace("`", "")
            target_crs = target_crs.replace("`", "")
            weights_method = weights_method.replace("`", "")
            alternative = alternative.replace("`", "")
    
            if isinstance(value_columns, str):
                cols = [c.strip() for c in value_columns.replace("`","").split(",") if c.strip()]
            else:
                cols = list(value_columns)
    
            if not os.path.exists(shapefile_path):
                return {"status":"error","message":f"Shapefile not found: {shapefile_path}"}
            if len(cols) != 2:
                return {"status":"error","message":"value_columns must be exactly two columns: [start_time, end_time]."}
    
            # --- load + project ---
            gdf = gpd.read_file(shapefile_path)
            missing = [c for c in cols if c not in gdf.columns]
            if missing:
                return {"status":"error","message":f"Columns not found: {missing}"}
            gdf = gdf.to_crs(target_crs)
    
            # --- prepare Y (n x 2) ---
            Y = gdf[cols].astype(float).to_numpy(copy=True)
            if drop_na:
                mask = ~np.any(np.isnan(Y), axis=1)
                gdf = gdf.loc[mask].reset_index(drop=True)
                Y = Y[mask, :]
    
            if relative:
                col_means = Y.mean(axis=0)
                col_means[col_means == 0] = 1.0
                Y = Y / col_means
    
            # --- spatial weights ---
            import libpysal
            wm = weights_method.lower()
            if wm == "queen":
                w = libpysal.weights.Queen.from_dataframe(gdf, use_index=True)
            elif wm == "rook":
                w = libpysal.weights.Rook.from_dataframe(gdf, use_index=True)
            elif wm == "distance":
                thr = distance_threshold
                if target_crs.upper() == "EPSG:4326":
                    thr = distance_threshold / 111000.0  # meters → degrees
                w = libpysal.weights.DistanceBand.from_dataframe(gdf, threshold=thr, binary=False)
            else:
                return {"status":"error","message":f"Unknown weights_method: {weights_method}"}
            w.transform = "r"
    
            # drop islands (units with no neighbors)
            if w.islands:
                keep = [i for i in range(gdf.shape[0]) if i not in set(w.islands)]
                if not keep:
                    return {"status":"error","message":"All units are islands under current weights."}
                gdf = gdf.iloc[keep].reset_index(drop=True)
                Y = Y[keep, :]
                if wm == "queen":
                    w = libpysal.weights.Queen.from_dataframe(gdf, use_index=True)
                elif wm == "rook":
                    w = libpysal.weights.Rook.from_dataframe(gdf, use_index=True)
                else:
                    thr = distance_threshold if target_crs.upper() != "EPSG:4326" else distance_threshold/111000.0
                    w = libpysal.weights.DistanceBand.from_dataframe(gdf, threshold=thr, binary=False)
                w.transform = "r"
    
            # --- Dynamic LISA (Rose) ---
            from giddy.directional import Rose
            rose = Rose(Y, w, k=int(k))  # computes cuts, counts, theta, r, bins, lag internally
    
            # permutation inference (optional)
            pvals = None
            expected_perm = larger_perm = smaller_perm = None
            if permutations and permutations > 0:
                rose.permute(permutations=int(permutations), alternative=alternative)
                pvals = np.asarray(rose.p).tolist() if hasattr(rose, "p") else None
                expected_perm = np.asarray(getattr(rose, "expected_perm", [])).tolist()
                larger_perm = np.asarray(getattr(rose, "larger_perm", [])).tolist()
                smaller_perm = np.asarray(getattr(rose, "smaller_perm", [])).tolist()
    
            # preview
            def _wkt_safe(g):
                try:
                    return g.wkt
                except Exception:
                    return str(g)
    
            preview = gdf[[*cols, gdf.geometry.name]].head(5).copy()
            preview[gdf.geometry.name] = preview[gdf.geometry.name].apply(_wkt_safe)
            preview = preview.rename(columns={gdf.geometry.name: "geometry_wkt"}).to_dict(orient="records")
    
            result = {
                "n_regions": int(Y.shape[0]),
                "k_sectors": int(k),
                "weights_method": wm,
                "value_columns": cols,
                "cuts_radians": np.asarray(rose.cuts).tolist(),        # sector edges
                "sector_counts": np.asarray(rose.counts).tolist(),     # count per sector
                "angles_theta_rad": np.asarray(rose.theta).tolist(),   # one per region
                "vector_lengths_r": np.asarray(rose.r).tolist(),       # one per region
                "bins_used": np.asarray(rose.bins).tolist(),
                "inference": {
                    "permutations": int(permutations),
                    "alternative": alternative,
                    "p_values_by_sector": pvals,
                    "expected_counts_perm": expected_perm,
                    "larger_or_equal_counts": larger_perm,
                    "smaller_or_equal_counts": smaller_perm
                },
                "data_preview": preview
            }
    
            msg = "Dynamic LISA (Rose) completed successfully"
            if wm == "distance" and target_crs.upper() == "EPSG:4326":
                msg += f" (threshold interpreted as {distance_threshold/111000.0:.6f} degrees)."
    
            return {"status":"success","message":msg,"result":result}
    
        except Exception as e:
            return {"status":"error","message":f"Failed to run Dynamic LISA: {str(e)}"}
  • Imports pysal_functions.py, which triggers registration of the dynamic_lisa tool via @gis_mcp.tool() decorator.
    from .pysal_functions import * 
    from .save_tool import *
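Conceptually, the Rose statistic bins the movement of each observation in Moran scatterplot space, from (y_t0, lag_t0) to (y_t1, lag_t1), into k circular sectors. A minimal NumPy sketch of that idea (a conceptual illustration, not giddy's actual implementation):

```python
import numpy as np

def directional_vectors(Y, W, k=8):
    """Sketch: bin each unit's movement angle in Moran scatterplot space."""
    wy = W @ Y                                  # spatial lags for both periods
    dx = Y[:, 1] - Y[:, 0]                      # change in each unit's value
    dy = wy[:, 1] - wy[:, 0]                    # change in its spatial lag
    theta = np.arctan2(dy, dx) % (2 * np.pi)    # movement angle in [0, 2*pi)
    r = np.hypot(dx, dy)                        # movement vector length
    cuts = np.linspace(0.0, 2 * np.pi, k + 1)   # sector edges
    counts, _ = np.histogram(theta, bins=cuts)  # count per sector
    return theta, r, cuts, counts

# Toy example: 4 units on a ring, row-standardized weights.
W = np.array([[0, .5, 0, .5],
              [.5, 0, .5, 0],
              [0, .5, 0, .5],
              [.5, 0, .5, 0]])
Y = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 3.5],
              [4.0, 3.0]])
theta, r, cuts, counts = directional_vectors(Y, W, k=8)
```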

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states what the tool returns but doesn't describe computational behavior, performance characteristics, memory usage, error conditions, or side effects. For a complex statistical tool with 10 parameters, this is insufficient. The description doesn't contradict annotations (since none exist), but it fails to provide essential behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences that efficiently state the action and outputs. It's front-loaded with the core purpose. However, given the tool's complexity, this brevity comes at the cost of completeness rather than being optimally informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex statistical tool with 10 parameters, 0% schema description coverage, no annotations, and many sibling spatial analysis tools, the description is inadequate. While an output schema exists (which helps with return values), the description doesn't provide enough context about when to use this specific spatial statistic, what the parameters mean, or how it behaves computationally. It's insufficient for an AI agent to make informed decisions about tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage and 10 parameters (8 with defaults), the description provides no information about any parameters. It doesn't explain what 'shapefile_path' should contain, what 'value_columns' represent, what the various statistical parameters control, or how they interact. The description fails to compensate for the complete lack of schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Run dynamic LISA (directional LISA) with giddy.directional.Rose' and specifies what it returns ('sector counts, angles, vector lengths, and (optionally) permutation p-values'). This is specific about the statistical method and output, though it doesn't explicitly differentiate from sibling tools like 'moran_local' or 'getis_ord_g_local' which are also spatial statistics tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions the optional permutation p-values but doesn't explain when this is needed or how this tool compares to other spatial analysis tools in the sibling list. There's no mention of prerequisites, typical use cases, or limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
