log1p

Apply log(X+1) transformation to single-cell RNA sequencing data matrices for preprocessing and normalization, enabling downstream analysis.

Instructions

Logarithmize the data matrix (X = log(X + 1))

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| base | No | Base of the logarithm. Natural logarithm is used by default. | |
| chunked | No | Process the data matrix in chunks, which will save memory. | |
| chunk_size | No | Number of observations in the chunks to process the data in. | |
| layer | No | Entry of `layers` to transform. | |
| obsm | No | Entry of `obsm` to transform. | |
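The effect of these parameters on the data matrix can be sketched with NumPy. This is a hypothetical illustration of the transform, not the scanpy implementation; `log1p_sketch` and its signature are assumptions made for the example:

```python
import numpy as np

def log1p_sketch(X, base=None, chunk_size=None):
    """Sketch of an X = log(X + 1) transform on a dense matrix.

    base: logarithm base (natural log when None).
    chunk_size: when set, transform the matrix in row blocks to bound
    peak memory, mirroring the tool's `chunked` option.
    """
    X = np.asarray(X, dtype=float)
    out = np.empty_like(X)
    step = chunk_size or X.shape[0]
    for start in range(0, X.shape[0], step):
        block = np.log1p(X[start:start + step])
        if base is not None:
            # Change of base: log_b(1 + x) = ln(1 + x) / ln(b)
            block /= np.log(base)
        out[start:start + step] = block
    return out

counts = np.array([[0.0, 1.0], [3.0, 7.0]])
print(log1p_sketch(counts, base=2, chunk_size=1))  # [[0. 1.] [2. 3.]]
```

With `base=2`, counts of 1, 3, and 7 map to exactly 1, 2, and 3, which makes the change-of-base step easy to check by hand.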

Implementation Reference

  • Generic handler that executes the scanpy preprocessing function (sc.pp.log1p for 'log1p') on the active AnnData object with validated arguments and inplace=True, and logs the operation.

        def run_pp_func(ads, func, arguments):
            adata = ads.adata_dic[ads.active]
            if func not in pp_func:
                raise ValueError(f"Unsupported function: {func}")
            run_func = pp_func[func]
            parameters = inspect.signature(run_func).parameters
            arguments["inplace"] = True
            # Keep only the arguments that run_func actually accepts.
            kwargs = {k: arguments.get(k) for k in parameters if k in arguments}
            try:
                res = run_func(adata, **kwargs)
                add_op_log(adata, run_func, kwargs)
            except KeyError as e:
                raise KeyError(f"Cannot find {e} column in adata.obs or adata.var")
            return res
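The handler's kwarg-filtering step can be shown in isolation: `inspect.signature` lists a function's parameters, and only matching keys from the client-supplied arguments are forwarded. `demo_scale` here is a hypothetical stand-in for a scanpy preprocessing function, made up for this sketch:

```python
import inspect

def demo_scale(adata, max_value=None, zero_center=True, inplace=False):
    # Hypothetical stand-in for a scanpy preprocessing function.
    return {"max_value": max_value, "zero_center": zero_center, "inplace": inplace}

arguments = {"max_value": 10, "unknown_key": "dropped", "inplace": True}
parameters = inspect.signature(demo_scale).parameters
# Keep only the keys that demo_scale actually accepts.
kwargs = {k: arguments.get(k) for k in parameters if k in arguments}
print(kwargs)  # {'max_value': 10, 'inplace': True}
```

This guards the call against extra keys in the validated input, so a client sending an unrecognized field cannot cause a `TypeError` at call time.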
  • Pydantic model for input validation of log1p tool parameters: base, chunked, chunk_size, layer, obsm.
        class Log1PModel(JSONParsingModel):
            """Input schema for the log1p preprocessing tool."""
            base: Optional[Union[int, float]] = Field(
                default=None,
                description="Base of the logarithm. Natural logarithm is used by default."
            )
            chunked: Optional[bool] = Field(
                default=None,
                description="Process the data matrix in chunks, which will save memory."
            )
            chunk_size: Optional[int] = Field(
                default=None,
                description="Number of observations in the chunks to process the data in."
            )
            layer: Optional[str] = Field(
                default=None,
                description="Entry of layers to transform."
            )
            obsm: Optional[str] = Field(
                default=None,
                description="Entry of obsm to transform."
            )

            @field_validator('chunk_size')
            def validate_chunk_size(cls, v: Optional[int]) -> Optional[int]:
                """Validate that chunk_size is a positive integer."""
                if v is not None and v <= 0:
                    raise ValueError("chunk_size must be a positive integer")
                return v
  • Registers the 'log1p' tool using mcp.types.Tool with name, description, and reference to Log1PModel schema.
        log1p = types.Tool(
            name="log1p",
            description="Logarithmize the data matrix (X = log(X + 1))",
            inputSchema=Log1PModel.model_json_schema(),
        )
  • Maps 'log1p' tool name to the underlying sc.pp.log1p function used by the handler.
        pp_func = {
            "filter_genes": sc.pp.filter_genes,
            "filter_cells": sc.pp.filter_cells,
            "calculate_qc_metrics": partial(sc.pp.calculate_qc_metrics, inplace=True),
            "log1p": sc.pp.log1p,
            "normalize_total": sc.pp.normalize_total,
            "pca": sc.pp.pca,
            "highly_variable_genes": sc.pp.highly_variable_genes,
            "regress_out": sc.pp.regress_out,
            "scale": sc.pp.scale,
            "combat": sc.pp.combat,
            "scrublet": sc.pp.scrublet,
            "neighbors": sc.pp.neighbors,
        }
  • Adds the log1p Tool object to the pp_tools dictionary, which is used for listing tools in the MCP server.
        pp_tools = {
            "filter_genes": filter_genes,
            "filter_cells": filter_cells,
            "calculate_qc_metrics": calculate_qc_metrics,
            "log1p": log1p,
            "normalize_total": normalize_total,
            "pca": pca,
            "highly_variable_genes": highly_variable_genes,
            "regress_out": regress_out,
            "scale": scale,
            "combat": combat,
            "scrublet": scrublet,
            "neighbors": neighbors,
        }
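The two registries form a simple dispatch pattern: one dict maps tool names to callables for execution, the other maps them to metadata for listing. A minimal standalone sketch, using stdlib `math` functions and made-up registry names in place of the scanpy functions and MCP Tool objects:

```python
import math

# Hypothetical registries mirroring the pp_func / pp_tools split.
tool_funcs = {"log1p": math.log1p, "sqrt": math.sqrt}
tool_meta = {name: {"name": name, "description": f"Apply {name}"} for name in tool_funcs}

def list_tools():
    # What an MCP server returns when a client asks which tools exist.
    return list(tool_meta.values())

def call_tool(name, value):
    # Reject unknown names before dispatching, as the handler above does.
    if name not in tool_funcs:
        raise ValueError(f"Unsupported function: {name}")
    return tool_funcs[name](value)

print([t["name"] for t in list_tools()])  # ['log1p', 'sqrt']
print(call_tool("log1p", math.e - 1))
```

Keeping execution and listing in separate dicts lets the server describe a tool (schema, description) without importing or touching the data it operates on.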
