# Axion Planetary AI Module
Foundation model integration for intelligent planetary data analysis.
## Overview
This module integrates Claude 3.5 Sonnet via AWS Bedrock to provide:
- Automated planetary data classification
- Spectral analysis and interpretation
- Anomaly detection in sensor data
- Mission planning assistance
- Natural language insights generation
## Architecture
```
ai/
├── services/          # LLM client implementations
├── processors/        # Analysis pipelines
├── prompts/           # Prompt templates
├── storage/           # Vector embeddings store
└── model-config.json  # Model configuration
```
## Setup
### Prerequisites
- AWS Bedrock access with Claude 3.5 Sonnet enabled
- AWS credentials configured
### Installation
```bash
cd ai
npm install
```
### Configuration
Edit `model-config.json` to customize:
- Model selection (Claude, Titan, etc.)
- Temperature and sampling parameters
- Rate limits
- Feature flags
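A minimal sketch of what `model-config.json` might contain. The key names below are illustrative assumptions, not the file's actual schema; check the file itself for the supported options:

```json
{
  "model": "anthropic.claude-3-5-sonnet-20240620-v1:0",
  "fallbackModel": "anthropic.claude-3-haiku-20240307-v1:0",
  "temperature": 0.2,
  "maxTokens": 2048,
  "rateLimit": { "requestsPerMinute": 60 },
  "features": { "streaming": true, "caching": true }
}
```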
## Usage
### Basic Analysis
```typescript
import { AIDataAnalyzer } from './ai/processors/data-analyzer';
const analyzer = new AIDataAnalyzer();
const result = await analyzer.analyzePlanetaryData(planetData);
console.log(result.classification, result.confidence);
```
### Streaming Responses
```typescript
import { BedrockClient } from './ai/services/bedrock-client';
const client = new BedrockClient();
const stream = await client.generateText(prompt, { streaming: true });
for await (const chunk of stream) {
  process.stdout.write(chunk);
}
```
### Vector Search
```typescript
import { VectorStore } from './ai/storage/vector-store';
const store = new VectorStore();
await store.addDocument('doc1', 'Mars surface composition...', { source: 'MRO' });
const results = await store.search('iron oxide deposits');
console.log(results);
```
## API Endpoints
### POST `/api/ai/analyze`
Analyze planetary data using foundation models.
**Request:**
```json
{
  "type": "planetary",
  "data": {
    "name": "Kepler-442b",
    "radius": 1.34,
    "mass": 2.3,
    "temperature": 233
  }
}
```
**Response:**
```json
{
  "success": true,
  "result": {
    "classification": "Super-Earth",
    "confidence": 0.87,
    "features": { ... },
    "recommendations": [ ... ]
  }
}
```
### POST `/api/ai/stream`
Stream AI responses in real-time.
**Request:**
```json
{
  "prompt": "Explain the significance of water ice on Mars"
}
```
**Response:** Server-Sent Events stream
## Model Selection
Current models:
- **Primary:** Claude 3.5 Sonnet (200K context)
- **Fallback:** Claude 3 Haiku (fast responses)
- **Embeddings:** Titan Embeddings v2 (1024 dimensions)
## Prompt Engineering
Prompts are optimized for:
- Structured output (JSON)
- Scientific accuracy
- Confidence scoring
- Actionable recommendations
See `prompts/planetary-analysis.ts` for templates.
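To illustrate the structured-output style described above, a template might look like the following. This is a hypothetical sketch in the spirit of `prompts/planetary-analysis.ts`, not its actual contents; the function name and data shape are assumptions:

```typescript
// Hypothetical prompt template: asks the model for JSON matching the
// fields the analysis pipeline expects (classification, confidence,
// recommendations). The real templates live in prompts/planetary-analysis.ts.
interface PlanetaryData {
  name: string;
  radius: number;      // Earth radii
  mass: number;        // Earth masses
  temperature: number; // Kelvin
}

export function planetaryAnalysisPrompt(data: PlanetaryData): string {
  return [
    "You are a planetary scientist. Classify the following body.",
    "Respond with JSON only, using this schema:",
    '{"classification": string, "confidence": number, "recommendations": string[]}',
    "",
    `Data: ${JSON.stringify(data)}`,
  ].join("\n");
}
```

Keeping the schema inside the prompt makes downstream `JSON.parse` of the model output far more reliable.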
## Performance
- Response time: ~2-5s (non-streaming)
- Streaming latency: ~200ms to first token
- Embedding generation: ~100ms
- Cache hit ratio: ~40% (typical)
## Cost Management
Bedrock pricing (approximate):
- Claude 3.5 Sonnet: $3/M input tokens, $15/M output tokens
- Claude 3 Haiku: $0.25/M input tokens, $1.25/M output tokens
- Titan Embeddings: $0.0001/1K tokens
Implement caching and rate limiting to control costs.
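As a back-of-the-envelope check, the approximate rates above can be turned into a simple estimator. This is a sketch; `estimateCostUSD` is a hypothetical helper, not part of the module:

```typescript
// Rough per-request cost estimate from the approximate Bedrock rates above.
// Rates are USD per token (quoted per million tokens in the pricing table).
const RATES = {
  sonnet: { input: 3 / 1_000_000, output: 15 / 1_000_000 },
  haiku: { input: 0.25 / 1_000_000, output: 1.25 / 1_000_000 },
} as const;

function estimateCostUSD(
  inputTokens: number,
  outputTokens: number,
  model: keyof typeof RATES,
): number {
  const r = RATES[model];
  return inputTokens * r.input + outputTokens * r.output;
}

// e.g. 100K input + 10K output tokens on Sonnet ≈ $0.45
console.log(estimateCostUSD(100_000, 10_000, "sonnet"));
```

With a ~40% cache hit ratio, effective per-request cost drops proportionally, which is why caching is the first lever to pull.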
## Roadmap
- [ ] Multi-modal analysis (images + text)
- [ ] Fine-tuned models for domain-specific tasks
- [ ] Agent workflows with tool use
- [ ] Batch processing optimization
- [ ] Real-time collaboration features
## License
MIT