# Error Resolution Plan - UNLOCK MLS MCP Server
**Document Version**: 1.0
**Created**: July 27, 2025
**Status**: APPROVED FOR IMPLEMENTATION
**Estimated Completion**: 4-8 hours
## **Executive Summary**
The UNLOCK MLS MCP Server has a critical authentication API mismatch that prevents server startup and causes a 70% test failure rate (137+ of 195+ tests). This plan provides a systematic approach to resolving all identified errors and restoring full functionality.
### **Current Status**
- **Server Status**: ❌ Cannot start (AttributeError on OAuth method)
- **Test Success Rate**: 30% (69/195+ tests passing)
- **Working Components**: Core MCP tools (21 tests), OAuth2 implementation (19 tests), Data mapper (29 tests)
- **Critical Blocker**: OAuth2Handler missing `get_access_token()` method
### **Root Cause Analysis**
**Primary Issue**: OAuth2Handler implements `get_valid_token()` but 43 locations across 7 files expect `get_access_token()`
**Impact Chain**:
```
OAuth2Handler.get_access_token() missing
├── Server startup fails (server.py:1757)
├── All authentication-dependent operations fail
├── All tests with authentication mocks fail
└── Complete system non-functional
```
## **Phase 1: Critical Authentication Fix**
**Priority**: CRITICAL
**Estimated Time**: 30 minutes
**Risk Level**: Low
### **1.1 Add Missing OAuth2Handler Method**
**Objective**: Add `get_access_token()` method as alias for existing `get_valid_token()`
**Implementation**:
```python
# File: src/auth/oauth2.py
# Add to OAuth2Handler class:
async def get_access_token(self) -> str:
    """
    Get access token (alias for get_valid_token for API consistency).

    This method provides API consistency for callers expecting get_access_token()
    while maintaining the existing get_valid_token() functionality.

    Returns:
        Valid access token string

    Raises:
        OAuth2Error: If unable to obtain valid token
    """
    return await self.get_valid_token()
```
**Files Affected**: 1 file (`src/auth/oauth2.py`)
**Impact**: Resolves 43 method calls across 7 files
**Backward Compatibility**: ✅ Maintains existing `get_valid_token()` method
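For orientation, a hedged sketch of the call-site pattern the alias is meant to serve; the helper below is illustrative only and does not reproduce the actual code at `server.py:1757` or the other call sites:
```python
# Illustrative only: how a caller might consume the new alias.
async def fetch_with_auth(oauth_handler, http_get, url):
    """Attach a bearer token obtained via get_access_token() to a request."""
    token = await oauth_handler.get_access_token()
    return await http_get(url, headers={"Authorization": f"Bearer {token}"})
```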
### **1.2 Validate Authentication Fix**
**Tests to Run**:
```bash
# 1. Test server startup
python -m main
# 2. Test core functionality
pytest tests/test_tools.py tests/test_oauth2.py tests/test_data_mapper.py -v
# 3. Verify OAuth functionality
python -c "from src.auth.oauth2 import OAuth2Handler; print([m for m in dir(OAuth2Handler) if 'token' in m])"
```
**Success Criteria**:
- ✅ Server starts without AttributeError
- ✅ 69/69 core tests still passing
- ✅ Both `get_access_token()` and `get_valid_token()` available
**Rollback Plan**: If issues arise, remove the new method and revert `src/auth/oauth2.py` to the previous commit
## **Phase 2: Test Infrastructure Recovery**
**Priority**: HIGH
**Estimated Time**: 2-3 hours
**Risk Level**: Medium
### **2.1 Fix Error Scenario Tests**
**File**: `tests/test_error_scenarios.py`
**Issue**: 24 tests failing because their mocks configure `get_access_token()`, which does not yet exist on `OAuth2Handler`
**Current Pattern**: `server.oauth_handler.get_access_token.side_effect = Exception(...)`
**Expected Outcome**: All error scenarios testable with proper authentication mocking
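Once the Phase 1 alias exists, the mocks can target it directly. A minimal sketch of the intended configuration, assuming the `server.oauth_handler` attribute shown in the pattern above; the helper names and token value are illustrative:
```python
from unittest.mock import AsyncMock

def configure_failing_auth(server):
    """Simulate an authentication outage: every token request raises."""
    server.oauth_handler.get_access_token = AsyncMock(
        side_effect=Exception("OAuth token endpoint unavailable")
    )

def configure_working_auth(server, token="test-access-token"):
    """Return a fixed token so downstream API mocks can be exercised."""
    server.oauth_handler.get_access_token = AsyncMock(return_value=token)
```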
**Validation**:
```bash
pytest tests/test_error_scenarios.py -v
# Target: 24/24 tests passing
```
### **2.2 Fix Integration Tests**
**File**: `tests/test_integration.py`
**Issue**: 4/10 tests failing due to authentication workflow problems
**Focus Areas**:
- `test_complete_property_search_workflow`
- `test_comprehensive_real_estate_research_workflow`
**Validation**:
```bash
pytest tests/test_integration.py -v
# Target: 10/10 tests passing
```
### **2.3 Fix Mock Data Type Issues**
**Files**: `tests/fixtures/test_utilities.py`, various test files
**Issue**: Mocks returning Mock objects instead of proper data types
**Example Problems**:
```python
# ❌ Current (returns Mock):
server.data_mapper.map_properties.return_value = Mock()

# ✅ Fixed (returns actual list):
server.data_mapper.map_properties.return_value = [
    {"listing_id": "123", "price": 500000},
    {"listing_id": "456", "price": 600000}
]
```
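One way to avoid hand-writing these structures in every test is a small fixture helper. A hedged sketch reusing the field names from the example above; the helper name and defaults are illustrative, not existing code in `tests/fixtures/test_utilities.py`:
```python
def make_property_fixtures(count=2, base_price=500_000):
    """Build a list of property dicts so len() and iteration behave in tests."""
    return [
        {"listing_id": str(123 + i), "price": base_price + i * 100_000}
        for i in range(count)
    ]

# Usage in a test:
# server.data_mapper.map_properties.return_value = make_property_fixtures()
```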
**Common Fix Pattern**:
- `len()` operations expect lists/tuples, not Mock objects
- Property/agent data should return structured dictionaries
- Market analysis should return statistical data structures
## **Phase 3: Comprehensive Test Validation**
**Priority**: MEDIUM
**Estimated Time**: 1-2 hours
**Risk Level**: Low
### **3.1 Validate Unknown Test Modules**
**Files to Test**:
- `test_validators.py` (estimated ~15 tests)
- `test_reso_client.py` (estimated ~25 tests)
**Approach**:
```bash
# Test each module individually
pytest tests/test_validators.py -v
pytest tests/test_reso_client.py -v
```
**Expected Issues**: Likely OAuth-related, should be resolved by Phase 1 fix
### **3.2 Fix Performance and Load Tests**
**Files**: `test_performance.py`, `test_load.py`
**Current Issues**:
- Timeout due to authentication failures
- Cannot establish performance baselines
- Load testing blocked by server startup issues
**Post-Auth Fix Expected Outcomes**:
- Performance tests execute and provide metrics
- Load tests simulate concurrent users successfully
- Baseline performance benchmarks established
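To make the concurrency expectation concrete, a minimal, illustrative timing check; it assumes only that an async search callable is available and is not a substitute for the project's `test_load.py`:
```python
import asyncio
import time

async def run_concurrent(search, n=20):
    """Time n concurrent calls to an async search callable and count failures."""
    start = time.perf_counter()
    results = await asyncio.gather(
        *(search({"city": "Austin"}) for _ in range(n)),
        return_exceptions=True,
    )
    elapsed = time.perf_counter() - start
    failures = sum(isinstance(r, Exception) for r in results)
    return elapsed, failures

# Example: elapsed, failures = asyncio.run(run_concurrent(search_properties))
```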
### **3.3 Full Test Suite Execution**
**Final Validation Command**:
```bash
pytest tests/ -v --tb=short
# Target: 195+ tests with 85%+ success rate
# Execution time: < 2 minutes
```
**Success Metrics**:
- Total tests: 195+
- Passing tests: 165+ (85%+)
- Failed tests: <30 (<15%)
- Execution time: <120 seconds
## **Phase 4: Quality Assurance**
**Priority**: MEDIUM
**Estimated Time**: 1-2 hours
**Risk Level**: Low
### **4.1 Server Functionality Validation**
**MCP Tools Testing**:
- `search_properties` - Natural language and structured search
- `get_property_details` - Property detail retrieval
- `analyze_market` - Market analysis and trends
- `find_agent` - Agent search and contact information
**MCP Resources Testing**:
- All 8 resources accessible via MCP protocol
- Content quality and markdown formatting
- Resource documentation accuracy
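A hedged smoke-test sketch for the resource checks above; `list_resources()` and `read_resource()` are placeholder names for whatever accessors the server actually exposes, and the `server` fixture is assumed to exist:
```python
import pytest

@pytest.mark.asyncio  # assumes pytest-asyncio is installed
async def test_all_resources_return_content(server):
    resources = await server.list_resources()       # placeholder accessor
    assert len(resources) == 8                      # plan expects 8 MCP resources
    for resource in resources:
        content = await server.read_resource(resource.uri)  # placeholder accessor
        assert content and content.strip(), f"empty resource: {resource.uri}"
```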
### **4.2 Documentation Updates**
**Files to Review**:
- `docs/configuration.md:272` - Contains OAuth method reference
- `README.md` - API examples and usage
- `CLAUDE.md` - Development guide updates
**Actions**:
- Update any references to correct OAuth method names
- Verify all code examples work with current API
- Update development command examples
### **4.3 Coverage Verification**
**Coverage Analysis**:
```bash
pytest --cov=src --cov-report=html --cov-report=term
```
**Targets**:
- Code coverage: 89% (maintain current level)
- Coverage includes all working tests
- HTML report generated for detailed analysis
## **Phase 5: Production Readiness Validation**
**Priority**: LOW
**Estimated Time**: 1-2 hours
**Risk Level**: Low
### **5.1 Real API Integration Test** (Optional)
**Condition**: If Bridge Interactive credentials available
**Actions**:
- Test authentication against real Bridge Interactive API
- Validate property search with actual MLS data
- Confirm market analysis with real market data
- Test agent search against real agent database
**Benefits**:
- Validates end-to-end integration
- Confirms API compatibility
- Tests real-world performance
### **5.2 Performance Benchmarking**
**Baseline Metrics to Establish**:
- Property search response time: <500ms average
- Market analysis response time: <1s for 1000+ properties
- Concurrent operations: 15+ operations/second
- Memory usage: Linear scaling for datasets up to 5000 properties
**Load Testing Validation**:
- Sustained load: 100+ operations with performance tracking
- Concurrent users: 20+ simultaneous users
- Peak load: 50+ concurrent requests with 80%+ success rate
## **Success Criteria & Milestones**
### **Milestone 1: Critical Fix (30 minutes)**
- ✅ Server starts without errors: `python -m main`
- ✅ Core tools tests still pass: 21/21
- ✅ OAuth tests still pass: 19/19
- ✅ Authentication method available: `get_access_token()` callable
### **Milestone 2: Test Recovery (3 hours)**
- ✅ Error scenario tests pass: 24/24
- ✅ Integration tests pass: 10/10
- ✅ Test success rate: 70%+ (140+/195+ tests)
- ✅ No authentication-related test failures
### **Milestone 3: Full Validation (5 hours)**
- ✅ All test modules functional
- ✅ Test success rate: 85%+ (165+/195+ tests)
- ✅ Test suite executes in < 2 minutes
- ✅ Performance tests providing metrics
### **Milestone 4: Production Ready (8 hours)**
- ✅ Documentation updated and accurate
- ✅ Code coverage maintained at 89%
- ✅ Load tests operational
- ✅ Real API integration validated (if possible)
## **Risk Mitigation & Rollback**
### **Risk Assessment**
| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|------------|
| OAuth fix breaks existing functionality | Low | High | Thorough testing + rollback plan |
| Test fixes introduce new issues | Medium | Medium | Incremental validation |
| Performance degradation | Low | Medium | Before/after benchmarking |
| Integration issues with real API | Medium | Low | Optional validation step |
### **Rollback Procedures**
**Phase 1 Rollback (OAuth Fix)**:
```bash
# Option 1: Remove added method
git checkout HEAD -- src/auth/oauth2.py
# Option 2: Update all callers to use get_valid_token()
# (More extensive but cleaner long-term)
```
**Phase 2 Rollback (Test Fixes)**:
```bash
# Revert specific test file changes
git checkout HEAD -- tests/test_error_scenarios.py
git checkout HEAD -- tests/test_integration.py
```
**Complete Rollback**:
```bash
# Return to last known good state
git reset --hard HEAD~[number-of-commits]
```
### **Validation Strategy**
**Incremental Testing**:
1. Run core tests after each major change
2. Validate server startup after OAuth fix
3. Test individual modules before full suite
4. Monitor performance impact of changes
**Quality Gates**:
- No regression in working tests (69 tests must continue passing)
- Server startup must work after OAuth fix
- Test execution time must remain reasonable (<2 minutes)
- Code coverage must not decrease below 89%
## **Expected Outcomes & Benefits**
### **Technical Outcomes**
- ✅ **Fully Functional MCP Server**: Can start, authenticate, and serve all tools/resources
- ✅ **Reliable Test Suite**: 195+ tests with 85%+ success rate providing confidence in changes
- ✅ **Production Readiness**: Comprehensive testing validates scalability and performance
- ✅ **Developer Productivity**: Working test suite enables rapid iteration and validation
### **Business Impact**
- ✅ **User Experience**: MCP server can be deployed and used by Claude users
- ✅ **Quality Assurance**: Test infrastructure validates all functionality
- ✅ **Development Velocity**: Developers can iterate quickly with working tests
- ✅ **Technical Credibility**: Claims of 89% coverage and 195+ tests validated
### **Long-term Benefits**
- ✅ **Maintainability**: Consistent OAuth API reduces confusion
- ✅ **Reliability**: Comprehensive error testing ensures robust error handling
- ✅ **Scalability**: Load testing validates performance under realistic conditions
- ✅ **Documentation**: Accurate docs support user adoption and developer onboarding
## **Implementation Notes**
### **Best Practices**
- **Commit frequently**: Each phase should have dedicated commits for easy rollback
- **Test incrementally**: Validate changes before proceeding to next phase
- **Document issues**: Track any unexpected problems for future reference
- **Monitor performance**: Ensure changes don't degrade system performance
### **Communication**
- Update todo list after each completed phase
- Document any deviations from the plan
- Note lessons learned for future similar issues
- Provide clear status updates on progress
### **Quality Assurance**
- All changes must pass existing tests before adding new functionality
- New test failures must be addressed before proceeding
- Performance must not regress significantly
- Documentation must be updated to reflect any API changes
---
**This plan provides a comprehensive, systematic approach to resolving all identified critical errors while maintaining system quality and enabling future development velocity.**