manus_usage_team_log

Retrieve per-user team statistics including task counts and credit totals. Filter by date range, sort by task count or credits, and paginate results. Each team member sees only their own data.

Instructions

Per-user team statistics (task counts + credit totals). Team accounts only; members see only their own row. Enterprise teams have T+1 latency.

Input Schema

Name        Required  Description                                   Default
limit       No        Page size (1–100)                             —
cursor      No        Pagination cursor from a previous response    —
start_date  No        Date-range start (integer)                    —
end_date    No        Date-range end (integer)                      —
sort_by     No        "task_count" or "credits"                     —
is_asc      No        Sort ascending when true                      —

Implementation Reference

  • Handler function for the manus_usage_team_log tool. Sends a GET request to /v2/usage.teamLog with query params from UsageTeamLogQuery, returning UsageTeamLogResponse.
    @manus_tool(
        name="manus_usage_team_log",
        description=(
            "Per-user team statistics (task counts + credit totals). Team accounts only; members "
            "see only their own row. Enterprise teams have T+1 latency."
        ),
        input_schema=UsageTeamLogQuery,
        output_schema=UsageTeamLogResponse,
    )
    async def usage_team_log(q: UsageTeamLogQuery, ctx: ToolCtx) -> UsageTeamLogResponse:
        return await ctx.client.call(
            "GET",
            "/v2/usage.teamLog",
            params=q.model_dump(exclude_none=True),
            response_model=UsageTeamLogResponse,
            rate_limit_key="usage.teamLog",
        )
  • Input/output schemas for the tool: UsageTeamLogQuery (with limit, cursor, date filters, sort options) and UsageTeamLogResponse (with list of TeamLogEntry items and pagination).
    class TeamLogEntry(ManusModel):
        model_config = ConfigDict(extra="allow")
        user_id: str | None = None
        user_name: str | None = None
        email: str | None = None
        task_count: IntField | None = None
        credits: IntField | None = None
    
    
    class UsageTeamLogQuery(ManusModel):
        limit: int | None = Field(default=None, ge=1, le=100)
        cursor: str | None = None
        start_date: int | None = None
        end_date: int | None = None
        sort_by: Literal["task_count", "credits"] | None = None
        is_asc: bool | None = None
    
    
    class UsageTeamLogResponse(ResponseEnvelope):
        data: list[TeamLogEntry] = []
        has_more: bool | None = None
        next_cursor: str | None = None
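The `has_more`/`next_cursor` pair implies standard cursor pagination. A minimal driver loop, with `fake_fetch` and its two canned pages standing in for the real `GET /v2/usage.teamLog` call:

```python
from typing import Any


def fake_fetch(params: dict[str, Any]) -> dict[str, Any]:
    # Stand-in for the API call; serves two canned pages keyed by cursor.
    pages = {
        None: {"data": [{"user_id": "u1", "task_count": 3, "credits": 120}],
               "has_more": True, "next_cursor": "c1"},
        "c1": {"data": [{"user_id": "u2", "task_count": 1, "credits": 40}],
               "has_more": False, "next_cursor": None},
    }
    return pages[params.get("cursor")]


def fetch_all_entries(limit: int = 100) -> list[dict[str, Any]]:
    """Follow next_cursor until has_more is falsy, accumulating rows."""
    entries: list[dict[str, Any]] = []
    cursor: str | None = None
    while True:
        params: dict[str, Any] = {"limit": limit}
        if cursor is not None:
            params["cursor"] = cursor
        page = fake_fetch(params)
        entries.extend(page["data"])
        if not page.get("has_more"):
            return entries
        cursor = page["next_cursor"]


print(len(fetch_all_entries()))  # 2
```

The loop treats a missing or falsy `has_more` as the end of the result set, which is the safe reading given that both fields are optional in the response model.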
  • The @manus_tool() decorator registers tools into _REGISTRY. The manus_usage_team_log tool is registered when the decorator on usage_team_log fires at module import time.
    def wrap(
        handler: Callable[[TIn, ToolCtx], Awaitable[TOut]],
    ) -> Callable[[TIn, ToolCtx], Awaitable[TOut]]:
        if name in _REGISTRY:
            raise RuntimeError(f"Duplicate tool name: {name}")
        _REGISTRY[name] = ToolDef(
            name=name,
            description=description,
            input_schema=input_schema,
            output_schema=output_schema,
            handler=handler,
            rate_limit_key=rate_limit_key,
        )
  • The all_tools() helper retrieves all registered tools (including manus_usage_team_log) for the MCP server.
    def all_tools() -> list[ToolDef[Any, Any]]:
        """Return a stable-ordered copy of every registered tool."""
        return sorted(_REGISTRY.values(), key=lambda t: t.name)
    
    
    def clear_registry() -> None:
        """Test helper."""
        _REGISTRY.clear()
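Sorting by name in `all_tools()` makes the listing order independent of registration (dict insertion) order, which keeps the tool list stable across refactors. A self-contained sketch of that behavior, with hypothetical registry contents:

```python
from dataclasses import dataclass


@dataclass
class ToolDef:
    name: str


_REGISTRY: dict[str, ToolDef] = {}


def all_tools() -> list[ToolDef]:
    """Stable-ordered snapshot: sorted by name, not by insertion order."""
    return sorted(_REGISTRY.values(), key=lambda t: t.name)


# Register in non-alphabetical order; listing is still alphabetical.
for n in ("manus_usage_team_log", "manus_usage_list"):
    _REGISTRY[n] = ToolDef(name=n)

print([t.name for t in all_tools()])
# ['manus_usage_list', 'manus_usage_team_log']
```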
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must bear the full burden. It implies a read operation (statistics) and states latency for enterprise, but lacks explicit safety declarations (e.g., read-only, auth requirements, rate limits). Not misleading, but could be more thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words, and the most critical information (what it does, who can use it, latency) is front-loaded. Excellent conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so the description should explain return format, pagination (cursor, limit), date range usage, and sorting behavior. It only mentions 'task counts + credit totals' without structure or parameter guidance, leaving significant gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% with 6 parameters. The description does not mention any parameters, so it adds no meaning beyond the schema's names and types. For a tool with many parameters, this is a significant gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that it provides per-user team statistics, including task counts and credit totals, specifically for team accounts. Its explicit scope and constraints make it easy to distinguish from siblings like 'manus_usage_list' (individual usage) and 'manus_usage_team_statistic' (likely a team-wide aggregate).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description clarifies it is only for team accounts and that members see only their own row. It also mentions enterprise latency. However, it does not explicitly state when not to use it or name alternative tools for non-team scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
