## **Your Complete Trading Infrastructure Overview**

### **🖥️ Server Architecture**

```yaml
9900K (ITX) - LIVE TRADING ENGINE
├── Purpose: Live + Paper Trading (Isolated)
├── Hardware: 8C/16T, 32GB RAM, NVMe storage
├── Databases:
│   ├── QuestDB: Hot OHLC (90 days)
│   └── Redis: Positions/Signals/State
├── Processes:
│   ├── Live Trading (cores 0-3)
│   ├── Paper Trading (cores 4-6)
│   └── Shared Services (core 7)
└── Network: 1GbE (sufficient for trading)

12900K - DATA PIPELINE & ANALYTICS
├── Purpose: ML, Backtesting, Data Processing
├── Hardware: 16C/24T, 64GB RAM, Multi-NVMe
├── Databases:
│   ├── PostgreSQL: Metadata/Fundamentals
│   ├── MongoDB: Raw provider data (8GB cache limit ✓)
│   └── QuestDB: Historical features
├── Processes:
│   ├── Data Pipeline
│   ├── ML Training
│   ├── Backtesting Engine
│   └── Dask Workers
└── Network: 10GbE (planned)

TRUENAS - STORAGE
├── Purpose: Cold Storage, Backups, Archives
├── Hardware: 6700K, 32TB (6x8TB RAIDZ1), 2.5GbE
├── Services:
│   ├── MinIO/S3: Parquet files
│   ├── Frigate: 2 cameras + Google Coral
│   └── Backup targets
└── Consideration: RAIDZ1 → RAIDZ2 upgrade

PROXMOX - INFRASTRUCTURE
├── Hardware: Ryzen 9 8945HS, 8C/16T
├── VMs/Containers:
│   ├── OPNsense: 4 threads (reduced from 8)
│   ├── GitLab: 3 threads
│   ├── Pi-hole: 1 thread
│   └── Monitoring: 4 threads (Prometheus/Grafana/Loki)
└── Purpose: Network, CI/CD, Monitoring
```
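
The 9900K's core pinning (live on 0-3, paper on 4-6, shared services on 7) can be applied from Python via Linux's scheduler-affinity API. A minimal sketch — the role names and core map are illustrative, not part of the actual deployment:

```python
import os

# Core layout mirroring the 9900K diagram above (role names are illustrative)
CORE_MAP = {
    "live_trading": {0, 1, 2, 3},
    "paper_trading": {4, 5, 6},
    "shared_services": {7},
}

def pin_current_process(role):
    """Pin the calling process to the cores assigned to `role` (Linux only)."""
    os.sched_setaffinity(0, CORE_MAP[role])  # PID 0 = current process
    return os.sched_getaffinity(0)           # return the effective mask
```

Each trading process would call `pin_current_process(...)` once at startup, so the live engine can never be scheduled onto paper-trading cores.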

### **📊 Data Flow**

```
1. INGESTION: Providers → 12900K MongoDB/Pipeline
2. PROCESSING: Raw → Features → 12900K QuestDB
3. HOT SYNC: Last 90 days → 9900K QuestDB
4. TRADING: 9900K reads hot data → Execute trades
5. ARCHIVE: Old data → TrueNAS MinIO (Parquet)
```
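
Step 3's hot sync boils down to a rolling-cutoff query against the analytics QuestDB before shipping rows to the 9900K. A minimal sketch, assuming a table named `ohlc` with a `timestamp` column (both names are illustrative):

```python
from datetime import datetime, timedelta, timezone

HOT_WINDOW_DAYS = 90  # the hot window pushed to the 9900K (step 3 above)

def hot_sync_query(table="ohlc", now=None):
    """Build the SELECT that pulls the hot window from the 12900K QuestDB.
    Table and column names are illustrative; QuestDB accepts standard SQL."""
    now = now or datetime.now(timezone.utc)
    cutoff = (now - timedelta(days=HOT_WINDOW_DAYS)).strftime("%Y-%m-%dT%H:%M:%S")
    return f"SELECT * FROM {table} WHERE timestamp >= '{cutoff}'"
```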

### **🔑 Key Design Decisions**

```python
architecture_principles = {
    "Isolation": "Live/Paper completely separated",
    "Performance": "Hot data on NVMe, cold on HDD",
    "Scalability": "200K symbols, 40+ years data",
    "Processing": "Dask over Spark (Python native)",
    "Safety": "Live trading NEVER competes for resources",
}
```
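
The safety principle can also be enforced mechanically: a startup guard that refuses to launch heavy analytics on the live box. A sketch, assuming the live machine's hostname resolves to the illustrative value `9900k`:

```python
import socket

LIVE_HOSTS = {"9900k"}  # illustrative hostname of the live-trading ITX box

def assert_not_live_host(task, hostname=None):
    """Abort heavy workloads (backtests, ML training) if they would start
    on the live host, enforcing the 'Safety' principle above."""
    host = (hostname or socket.gethostname()).lower()
    if host in LIVE_HOSTS:
        raise RuntimeError(f"refusing to start {task!r} on live host {host!r}")
```

For example, `assert_not_live_host("backtest")` at the top of the backtest entrypoint makes an accidental deploy to the wrong machine fail fast instead of competing with live trading.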

### **💾 Database Strategy**

| Database | Location | Purpose | Size |
|----------|----------|---------|------|
| QuestDB | 9900K | Hot OHLC/Ticks | ~200GB |
| Redis | 9900K | Live State | ~2GB |
| PostgreSQL | 12900K | Metadata/Config | ~100GB |
| MongoDB | 12900K | Raw Data (8GB cache) | ~500GB |
| QuestDB | 12900K | Historical | ~500GB |
| MinIO | TrueNAS | Archives | ~TBs |
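
The 8GB MongoDB cache cap in the table corresponds to WiredTiger's `cacheSizeGB` setting; a `mongod.conf` excerpt for the 12900K:

```yaml
# mongod.conf (12900K): cap the WiredTiger cache so raw-data ingestion
# cannot crowd out the pipeline, ML, and backtesting processes
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 8
```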
### **🚀 Performance Optimizations**

```yaml
Completed:
  ✓ MongoDB cache limited to 8GB
  ✓ CPU affinity for live/paper trading
  ✓ Network architecture planned
  ✓ Database separation by access pattern
  ✓ Ubuntu 24.04 LTS chosen

Next Steps:
  - Set up Dask cluster on 12900K
  - Implement data sync pipeline
  - Configure monitoring stack
  - Test 20-year backtest performance
```
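
For the monitoring step, the Prometheus instance on Proxmox mainly needs scrape targets for each host. An illustrative excerpt, assuming node_exporter on its default port 9100 and placeholder `.lan` hostnames:

```yaml
# prometheus.yml (monitoring VM on Proxmox) — hostnames are placeholders
scrape_configs:
  - job_name: "node"
    scrape_interval: 15s
    static_configs:
      - targets:
          - "9900k.lan:9100"    # live trading engine
          - "12900k.lan:9100"   # data pipeline & analytics
          - "truenas.lan:9100"  # storage
```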

### **📈 Capacity**

```
- Symbols: 200,000 total (10-20K active)
- History: 40+ years
- Storage: 32TB available
- Backtest: 20 years feasible in ~2 days
- Live latency: <1ms target
```

Your infrastructure is well designed for institutional-grade algo trading, with proper separation of concerns and room to scale. The MongoDB cache fix freed up significant resources for actual trading and analysis work.