log1p

Transform single-cell RNA sequencing data by applying the natural logarithm to one plus each value (X = log(X + 1)), stabilizing variance and normalizing expression values for downstream analysis.

Instructions

Logarithmize the data matrix (X = log(X + 1))
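
Under the hood this corresponds to Scanpy's sc.pp.log1p. A minimal sketch of the equivalent direct call on a synthetic AnnData (the data and sizes here are illustrative, not taken from this server):

    import numpy as np
    import anndata as ad
    import scanpy as sc

    # Toy count matrix: 100 cells x 50 genes of Poisson-distributed counts.
    adata = ad.AnnData(np.random.poisson(1.0, size=(100, 50)).astype(np.float32))

    sc.pp.normalize_total(adata, target_sum=1e4)  # typical step before log1p
    sc.pp.log1p(adata)                            # X = log(X + 1), in place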

Input Schema

Name        Required  Description                                                    Default
base        No        Base of the logarithm. Natural logarithm is used by default.   None
chunk_size  No        Number of observations in the chunks to process the data in.   None
chunked     No        Process the data matrix in chunks, which will save memory.     None
layer       No        Entry of layers to transform.                                  None
obsm        No        Entry of obsm to transform.                                    None
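
For illustration, a hypothetical argument payload for this tool; every field is optional and the values below are examples only:

    arguments = {
        "base": 2,           # use log base 2 instead of the natural logarithm
        "chunked": True,     # process the matrix in chunks to save memory
        "chunk_size": 1000,  # 1000 observations per chunk
    }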

Implementation Reference

  • Handler function that executes the log1p tool by retrieving sc.pp.log1p from pp_func and calling it with validated arguments on the active AnnData.
    def run_pp_func(ads, func, arguments):
        # Operate on the currently active AnnData object.
        adata = ads.adata_dic[ads.active]
        if func not in pp_func:
            raise ValueError(f"Unsupported function: {func}")
        run_func = pp_func[func]
        # Keep only the arguments the target Scanpy function actually accepts.
        parameters = inspect.signature(run_func).parameters
        arguments["inplace"] = True
        kwargs = {k: arguments.get(k) for k in parameters if k in arguments}
        try:
            res = run_func(adata, **kwargs)
            add_op_log(adata, run_func, kwargs)  # record the operation on the AnnData
        except KeyError as e:
            raise KeyError(f"Cannot find {e} column in adata.obs or adata.var")
        except Exception as e:
            raise e
        return res
  • Pydantic model defining the input schema and validation for the log1p tool parameters (see the validation sketch after this list).
    class Log1PModel(JSONParsingModel):
        """Input schema for the log1p preprocessing tool."""

        base: Optional[Union[int, float]] = Field(
            default=None,
            description="Base of the logarithm. Natural logarithm is used by default."
        )
        chunked: Optional[bool] = Field(
            default=None,
            description="Process the data matrix in chunks, which will save memory."
        )
        chunk_size: Optional[int] = Field(
            default=None,
            description="Number of observations in the chunks to process the data in."
        )
        layer: Optional[str] = Field(
            default=None,
            description="Entry of layers to transform."
        )
        obsm: Optional[str] = Field(
            default=None,
            description="Entry of obsm to transform."
        )

        @field_validator('chunk_size')
        def validate_chunk_size(cls, v: Optional[int]) -> Optional[int]:
            """Validate that chunk_size is a positive integer."""
            if v is not None and v <= 0:
                raise ValueError("chunk_size must be a positive integer")
            return v
  • MCP Tool object creation and registration for the log1p tool using the Log1PModel schema.
    log1p = types.Tool(
        name="log1p",
        description="Logarithmize the data matrix (X = log(X + 1))",
        inputSchema=Log1PModel.model_json_schema(),
    )
  • Inclusion of the log1p tool in the pp_tools dictionary, which is imported and used for tool registration in the MCP server (see the server wiring sketch after this list).
    pp_tools = {
        "filter_genes": filter_genes,
        "filter_cells": filter_cells,
        "calculate_qc_metrics": calculate_qc_metrics,
        "log1p": log1p,
        "normalize_total": normalize_total,
        "pca": pca,
        "highly_variable_genes": highly_variable_genes,
        "regress_out": regress_out,
        "scale": scale,
        "combat": combat,
        "scrublet": scrublet,
        "neighbors": neighbors,
    }
  • Mapping of tool names to the underlying Scanpy functions, with log1p mapped to sc.pp.log1p for execution by the handler.
    pp_func = {
        "filter_genes": sc.pp.filter_genes,
        "filter_cells": sc.pp.filter_cells,
        "calculate_qc_metrics": partial(sc.pp.calculate_qc_metrics, inplace=True),
        "log1p": sc.pp.log1p,
        "normalize_total": sc.pp.normalize_total,
        "pca": sc.pp.pca,
        "highly_variable_genes": sc.pp.highly_variable_genes,
        "regress_out": sc.pp.regress_out,
        "scale": sc.pp.scale,
        "combat": sc.pp.combat,
        "scrublet": sc.pp.scrublet,
        "neighbors": sc.pp.neighbors,
    }
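
A minimal validation sketch for Log1PModel, assuming JSONParsingModel behaves like a standard Pydantic v2 BaseModel (the values are illustrative):

    from pydantic import ValidationError

    Log1PModel(base=2, chunk_size=500)  # valid: log base 2, chunks of 500 cells

    try:
        Log1PModel(chunk_size=0)        # rejected by the chunk_size validator
    except ValidationError as err:
        print(err)                      # "chunk_size must be a positive integer"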
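
And a sketch of how these pieces could be wired into an MCP server, assuming the official mcp Python SDK's low-level Server API; pp_tools, pp_func, run_pp_func, and ads refer to the snippets above, and everything else here is illustrative:

    from mcp.server import Server
    import mcp.types as types

    server = Server("scmcp")

    @server.list_tools()
    async def list_tools() -> list[types.Tool]:
        # Expose every registered preprocessing tool, including log1p.
        return list(pp_tools.values())

    @server.call_tool()
    async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
        # Dispatch to the shared handler, which resolves `name` to the Scanpy
        # function via pp_func and applies it to the active AnnData in `ads`.
        run_pp_func(ads, name, arguments)
        return [types.TextContent(type="text", text=f"{name} applied")]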
