---
description: Guidelines for writing unit tests for MCP tool functions and related components
globs: tests/tools/*
alwaysApply: false
---
# Unit Testing Guidelines
This document provides detailed guidelines for writing effective unit tests for MCP tool functions and related components.
**Test file organization:** Place tests for each MCP tool in its dedicated module under `tests/tools/<tool_category>/test_<tool_function>.py` (for example, `tests/tools/address/test_get_address_info.py`). Do not combine multiple tools in a single test file—create a new module whenever a new tool is introduced.
## **HIGH PRIORITY: Keep Unit Tests Simple and Focused**
**Each unit test must be narrow and specific.** A single test should verify one specific behavior or scenario. If a test attempts to cover multiple scenarios or different groups of input parameters, **split it into separate tests**.
**Simple tests are:**
- Easier to understand and maintain
- Faster to debug when they fail
- More reliable and less prone to false positives
- Better at pinpointing the exact cause of failures
**Example - Split complex tests:**
```python
# BAD: One test covering multiple scenarios
def test_lookup_token_complex():
    # Tests both success and error cases and multiple
    # input parameter combinations; hard to debug when it fails.
    ...

# GOOD: Separate focused tests
def test_lookup_token_success():
    # Tests only the success scenario
    ...

def test_lookup_token_invalid_symbol():
    # Tests only the invalid-symbol error case
    ...

def test_lookup_token_network_error():
    # Tests only network error handling
    ...
```
## Key Testing Patterns & Guidelines
### A. Use the `mock_ctx` Fixture
A reusable `pytest` fixture named `mock_ctx` is defined in `tests/conftest.py`. This fixture provides a pre-configured mock of the MCP `Context` object with an `AsyncMock` for `report_progress`.
**DO NOT** create a manual `MagicMock` for the context within your test functions.
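For reference, the fixture might look roughly like the following. This is a hypothetical sketch only; the actual definition lives in `tests/conftest.py` and may differ:
```python
# Hypothetical sketch of tests/conftest.py; the real fixture may differ.
from unittest.mock import AsyncMock, MagicMock

import pytest

@pytest.fixture
def mock_ctx():
    ctx = MagicMock()  # Stands in for the MCP Context object
    ctx.report_progress = AsyncMock()  # Tools await ctx.report_progress(...)
    return ctx
```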
**Correct Usage:**
```python
import pytest

@pytest.mark.asyncio
async def test_some_tool_success(mock_ctx):  # Request the fixture as an argument
    # ARRANGE
    # The mock_ctx object is ready to use.

    # ACT
    result = await some_tool(..., ctx=mock_ctx)

    # ASSERT
    # You can now make assertions on the fixture.
    assert mock_ctx.report_progress.call_count > 0
```
### B. Testing Structured Tool Responses
For tools that return a `ToolResponse` object containing structured data, **DO NOT** parse JSON from string results in your test. Instead, **mock the serialization function (`json.dumps`)** if it's used internally, and make assertions on the structured `ToolResponse` object and its attributes.
The exact approach, however, depends on the tool's complexity. For a simple tool, the test can be as direct as the sketch below.
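A minimal sketch, assuming a simple `get_address_info` tool whose data passes through unchanged (the import path and patch target are hypothetical):
```python
from unittest.mock import AsyncMock, patch

import pytest

from blockscout_mcp_server.models import ToolResponse
# Hypothetical import path, for illustration only.
from blockscout_mcp_server.tools.address import get_address_info

@pytest.mark.asyncio
async def test_get_address_info_success(mock_ctx):
    # ARRANGE: mock the API helper so no real request is made.
    mock_api_response = {"hash": "0xabc...", "is_contract": False}
    with patch(
        "blockscout_mcp_server.tools.address.make_blockscout_request",  # hypothetical target
        new_callable=AsyncMock,
        return_value=mock_api_response,
    ):
        # ACT
        result = await get_address_info(address="0xabc...", ctx=mock_ctx)

    # ASSERT: inspect the structured object directly; no JSON parsing.
    assert isinstance(result, ToolResponse)
    assert result.data == mock_api_response  # assumes the tool passes data through
```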
### C. Handling Repetitive Data in Assertions (DAMP vs. DRY)
When testing tools that transform a list of items (e.g., `lookup_token_by_symbol`), explicitly writing out the entire `expected_result` can lead to large, repetitive, and hard-to-maintain test code.
In these cases, it is better to **programmatically generate the `expected_result`** from the `mock_api_response`. This keeps the test maintainable while still explicitly documenting the transformation logic itself.
**Correct Usage:**
```python
import copy

import pytest

from blockscout_mcp_server.models import ToolResponse

@pytest.mark.asyncio
async def test_lookup_token_by_symbol_success(mock_ctx):
    # ARRANGE
    mock_api_response = {
        "items": [
            {"address_hash": "0xabc...", "name": "Token A"},
            {"address_hash": "0xdef...", "name": "Token B"},
        ]
    }

    # Generate the expected result programmatically.
    expected_data = []
    for item in mock_api_response["items"]:
        new_item = copy.deepcopy(item)
        # Explicitly document the transformation logic.
        new_item["address"] = new_item.pop("address_hash")
        new_item["token_type"] = ""  # Add default fields
        expected_data.append(new_item)

    # Wrap the expected data in a ToolResponse.
    expected_result = ToolResponse(data=expected_data)

    # ... (patching and ACT phase)

    # ASSERT
    # The final assertion compares the entire ToolResponse structure.
    assert result == expected_result
```
### D. General Assertions
- **Progress Tracking:** Always verify the number of calls to `mock_ctx.report_progress` to ensure the user is kept informed.
- **API Calls:** Assert that the mocked API helper functions (`make_blockscout_request`, etc.) are called exactly once with the correct `api_path` and `params`.
- **Wrapper Integration:** For tools using `make_request_with_periodic_progress`, mock the wrapper itself and assert that it was called with the correct arguments (`request_function`, `request_args`, etc.). A combined example of these assertions is sketched after this list.
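A hedged sketch combining the first two assertion types (the tool name, import path, patch target, API path, and progress call count are illustrative assumptions):
```python
from unittest.mock import AsyncMock, patch

import pytest

# Hypothetical import path, for illustration only.
from blockscout_mcp_server.tools.some_module import some_tool

@pytest.mark.asyncio
async def test_some_tool_general_assertions(mock_ctx):
    # ARRANGE: patch the API helper (hypothetical target path).
    with patch(
        "blockscout_mcp_server.tools.some_module.make_blockscout_request",
        new_callable=AsyncMock,
        return_value={"items": []},
    ) as mock_request:
        # ACT
        await some_tool(address="0xabc...", ctx=mock_ctx)

    # API calls: exactly once, with the correct api_path and params.
    mock_request.assert_called_once_with(api_path="/addresses/0xabc...", params=None)

    # Progress tracking: the user was kept informed (count is illustrative).
    assert mock_ctx.report_progress.call_count == 2
```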
## Testing `direct_api_call` Handlers
Handlers for `direct_api_call` must be tested in isolation.
- **Location**: `tests/tools/direct_api/handlers/test_<handler_name>.py`.
- **Method**: Do not mock API calls. Instead, call the handler function directly with a sample `response_json` dict and a real `re.Match` created via `re.compile(pattern).fullmatch(endpoint_path)`.
- **Assertion**: Assert that the handler correctly transforms the input JSON into the expected `ToolResponse[SpecificModel]`, as in the sketch below.
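A minimal sketch of such a handler test (the handler name, import path, URL pattern, and model fields are hypothetical):
```python
import re

from blockscout_mcp_server.models import ToolResponse
# Hypothetical import path, for illustration only.
from blockscout_mcp_server.tools.direct_api.handlers.example import example_handler

def test_example_handler_transforms_response():
    # Build a real re.Match the same way the dispatcher would.
    pattern = r"/api/v2/example/(?P<item_id>\w+)"  # hypothetical pattern
    match = re.compile(pattern).fullmatch("/api/v2/example/42")
    assert match is not None

    # Sample payload standing in for the raw API response.
    response_json = {"id": "42", "value": "foo"}

    # ACT: call the handler directly; no API mocking involved.
    result = example_handler(response_json, match)

    # ASSERT: the handler produced the expected structured response.
    assert isinstance(result, ToolResponse)
    assert result.data.value == "foo"  # assumes a model with a `value` field
```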
## File Size Limitations
**Unit test files must not exceed 500 LOC.** If a file approaches this limit, split tests into multiple focused modules (e.g., `test_get_transactions_by_address.py` and `test_get_transactions_by_address_pagination.py`) so each file stays readable while still covering the same tool.
## Test Organization
- Write tests covering success scenarios, error cases, and edge cases.
- Ensure all external API calls are properly mocked using `unittest.mock.patch` and `AsyncMock`.
- Group related tests using descriptive class names or clear function naming patterns, as in the sketch below.
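For example, grouping by class (a sketch; names are illustrative):
```python
import pytest

class TestGetAddressInfo:
    """Groups all tests for the get_address_info tool."""

    @pytest.mark.asyncio
    async def test_success(self, mock_ctx):
        ...

    @pytest.mark.asyncio
    async def test_invalid_address_returns_error(self, mock_ctx):
        ...
```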