
WeWork MCP Server

by FOX2920

analyze_project_tasks

Analyze tasks within a project to identify patterns, track progress, and optionally export data to CSV for further review.

Instructions

Analyze the tasks in a project.

Args:
    project_id: ID of the project
    export_csv: Whether to export a CSV file (default: False)

Returns:
    Task analysis as a dictionary

Input Schema

Name         Required   Description   Default
project_id   Yes        —             —
export_csv   No         —             False
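FastMCP derives the JSON input schema from the function's type hints rather than from explicit annotations. A hypothetical sketch of the schema it would generate for `project_id: str` and `export_csv: bool = False` (field names follow JSON Schema conventions; the exact output of the library may differ):

```python
# Hypothetical input schema as FastMCP might derive it from the type hints.
input_schema = {
    "type": "object",
    "properties": {
        "project_id": {"type": "string"},
        "export_csv": {"type": "boolean", "default": False},
    },
    # Only parameters without defaults become required
    "required": ["project_id"],
}
```

This is why the table above shows `project_id` as required and `export_csv` as optional with a `False` default.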

Implementation Reference

  • The core handler function for the 'analyze_project_tasks' MCP tool. It is decorated with @mcp.tool() for automatic registration, fetches project details and task analysis from WeWorkClient, processes data with pandas, computes task status statistics, converts to dictionary format, and optionally exports results to CSV.
    @mcp.tool()
    def analyze_project_tasks(project_id: str, export_csv: bool = False) -> Dict[str, Any]:
        """
        Analyze the tasks in a project.
        
        Args:
            project_id: ID of the project
            export_csv: Whether to export a CSV file (default: False)
        
        Returns:
            Task analysis as a dictionary
        """
        try:
            if not wework_client:
                return {'error': 'WeWork client not initialized'}
            
            logger.info(f"Analyzing tasks for project ID: {project_id}")
            
            # Fetch project details
            project_info = wework_client.get_project_info(project_id)
            if not project_info:
                return {
                    'error': f'No project found with ID: {project_id}',
                    'success': False
                }
            
            # Analyze tasks
            df = wework_client.get_project_analysis(project_id)
            
            if df.empty:
                return {
                    'success': True,
                    'project_name': project_info['name'],
                    'project_id': project_id,
                    'tasks': [],
                    'summary': {
                        'total_tasks': 0,
                        'completed_tasks': 0,
                        'in_progress_tasks': 0,
                        'failed_tasks': 0
                    }
                }
            
            # Compute status statistics ('Trạng thái' is the status column)
            status_counts = df['Trạng thái'].value_counts().to_dict() if 'Trạng thái' in df.columns else {}
            
            # Convert the DataFrame to a list of record dictionaries
            tasks_data = df.to_dict(orient='records')
            
            # Export to CSV if requested
            csv_filename = None
            if export_csv:
                csv_filename = f"{project_info['name']}_tasks_analysis.csv"
                try:
                    df.to_csv(csv_filename, index=False, encoding='utf-8-sig')
                    logger.info(f"CSV exported to: {csv_filename}")
                except Exception as csv_error:
                    logger.error(f"Failed to export CSV: {csv_error}")
            
            result = {
                'success': True,
                'project_name': project_info['name'],
                'project_id': project_id,
                'tasks': tasks_data,
                'total_tasks': len(df),
                'summary': {
                    'total_tasks': len(df),
                    'completed_tasks': status_counts.get('Hoàn thành', 0),       # Completed
                    'in_progress_tasks': status_counts.get('Đang thực hiện', 0), # In progress
                    'failed_tasks': status_counts.get('Thất bại', 0),            # Failed
                    'status_breakdown': status_counts
                }
            }
            
            if csv_filename:
                result['csv_file'] = csv_filename
                
            return result
            
        except Exception as e:
            logger.error(f"Error in analyze_project_tasks: {e}")
            return {'error': str(e), 'success': False}
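The summary block in the handler reduces to counting occurrences of each status label. A minimal standalone sketch of that logic, using `collections.Counter` in place of pandas' `value_counts().to_dict()` and hypothetical sample data (the Vietnamese status labels are the ones the handler above keys on):

```python
from collections import Counter

# Hypothetical sample statuses; 'Hoàn thành' = Completed,
# 'Đang thực hiện' = In progress, 'Thất bại' = Failed.
statuses = ["Hoàn thành", "Đang thực hiện", "Hoàn thành", "Thất bại"]

# Counter stands in for df['Trạng thái'].value_counts().to_dict()
status_counts = dict(Counter(statuses))

summary = {
    "total_tasks": len(statuses),
    "completed_tasks": status_counts.get("Hoàn thành", 0),
    "in_progress_tasks": status_counts.get("Đang thực hiện", 0),
    "failed_tasks": status_counts.get("Thất bại", 0),
    "status_breakdown": status_counts,
}
```

Any status label outside the three hard-coded keys still appears in `status_breakdown` but is not counted in the named totals, which mirrors the handler's behavior.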
  • The input schema is derived from the type-hinted function parameters (project_id: str, export_csv: bool = False), and the output type is Dict[str, Any]. The parameters are documented in the docstring's Args and Returns sections.
    def analyze_project_tasks(project_id: str, export_csv: bool = False) -> Dict[str, Any]:
        """
        Analyze the tasks in a project.
        
        Args:
            project_id: ID of the project
            export_csv: Whether to export a CSV file (default: False)
        
        Returns:
            Task analysis as a dictionary
        """
  • The @mcp.tool() decorator registers the analyze_project_tasks function as an MCP tool in the FastMCP server.
    @mcp.tool()
  • Import of the analyze_project_tasks tool function from the MCP server module for use in the HTTP server endpoints.
    from wework_mcp_server import (
        search_projects, get_project_details, analyze_project_tasks,
  • Usage of the analyze_project_tasks function within the HTTP server's project analysis endpoint.
    result = analyze_project_tasks(project_id, export_csv)
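The HTTP endpoint presumably just validates the request parameters and forwards them to the tool function. A minimal sketch of that wrapper, with a stub standing in for the real tool (the wrapper name, stub, and error message are assumptions, not the server's actual code):

```python
def analyze_project_tasks(project_id, export_csv=False):
    # Stub standing in for the real MCP tool function
    return {"success": True, "project_id": project_id, "tasks": []}

def handle_analyze_endpoint(params: dict) -> dict:
    # Validate the required parameter before delegating to the tool,
    # mirroring the tool's own {'error': ..., 'success': False} shape
    project_id = params.get("project_id")
    if not project_id:
        return {"error": "project_id is required", "success": False}
    export_csv = bool(params.get("export_csv", False))
    return analyze_project_tasks(project_id, export_csv)
```

Reusing the tool function directly keeps the MCP and HTTP surfaces consistent: both return the same dictionary shape, including the `success` flag.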
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that the tool can export a CSV file, which implies file generation or download behavior, but it doesn't specify where the CSV is saved, whether it is returned as data, or any permissions or rate limits. This leaves significant gaps for a tool with potential side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and structured with clear sections for Args and Returns, making it easy to scan. However, the first sentence, 'Phân tích các tasks trong dự án' ('Analyze the tasks in the project'), is somewhat redundant with the tool name and could be more specific to add value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of an analysis tool with 2 parameters, no annotations, and no output schema, the description is minimally adequate. It covers the basic purpose and parameters but lacks details on the analysis output format (beyond 'dictionary'), error handling, or integration with sibling tools, leaving room for improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context beyond the input schema, which has 0% description coverage. It explains that 'project_id' is the ID of the project and 'export_csv' controls whether to export a CSV file with a default of False. This clarifies the purpose of each parameter, compensating for the lack of schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool 'analyzes tasks in a project' which provides a basic purpose, but it's vague about what specific analysis is performed (e.g., metrics, trends, completion rates). It doesn't clearly distinguish from siblings like 'get_project_statistics' which might overlap in functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'get_project_statistics' or 'get_project_details'. The description lacks context about prerequisites, such as needing an existing project, and doesn't mention when not to use it (e.g., for simple task listing vs. analysis).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
