Lowered memory on MongoDB and updated markdown docs

This commit is contained in:
Boki 2025-07-07 12:19:37 -04:00
parent f69181a8bc
commit 93d3ac134a
3 changed files with 129 additions and 0 deletions


@@ -85,6 +85,7 @@ services: # Dragonfly - Redis replacement for caching and events
MONGO_INITDB_DATABASE: stock
ports:
- "27017:27017"
command: --wiredTigerCacheSizeGB 8
volumes:
- mongodb_data:/data/db
- ./database/mongodb/init:/docker-entrypoint-initdb.d
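The new `--wiredTigerCacheSizeGB 8` flag caps MongoDB's WiredTiger cache. Without it, MongoDB defaults to roughly 50% of (RAM − 1 GB), so on a 64 GB box it would claim about 31.5 GB. A minimal sketch of that arithmetic (the function name is illustrative):

```python
def default_wiredtiger_cache_gb(ram_gb: float) -> float:
    """MongoDB's documented default: the larger of 256 MB
    or 50% of (total RAM - 1 GB)."""
    return max(0.25, 0.5 * (ram_gb - 1.0))

# On the 64 GB 12900K the default would be ~31.5 GB;
# the explicit 8 GB cap frees the rest for Dask/ML work.
print(default_wiredtiger_cache_gb(64))  # 31.5
```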

docs/servers.md Normal file

@@ -0,0 +1,111 @@
## **Your Complete Trading Infrastructure Overview**
### **🖥️ Server Architecture**
```yaml
9900K (ITX) - LIVE TRADING ENGINE
├── Purpose: Live + Paper Trading (Isolated)
├── Hardware: 8C/16T, 32GB RAM, NVMe storage
├── Databases:
│ ├── QuestDB: Hot OHLC (90 days)
│ └── Redis: Positions/Signals/State
├── Processes:
│ ├── Live Trading (cores 0-3)
│ ├── Paper Trading (cores 4-6)
│ └── Shared Services (core 7)
└── Network: 1GbE (sufficient for trading)
12900K - DATA PIPELINE & ANALYTICS
├── Purpose: ML, Backtesting, Data Processing
├── Hardware: 16C/24T, 64GB RAM, Multi-NVMe
├── Databases:
│ ├── PostgreSQL: Metadata/Fundamentals
│ ├── MongoDB: Raw provider data (8GB cache limited ✓)
│ └── QuestDB: Historical features
├── Processes:
│ ├── Data Pipeline
│ ├── ML Training
│ ├── Backtesting Engine
│ └── Dask Workers
└── Network: 10GbE (planned)
TRUENAS - STORAGE
├── Purpose: Cold Storage, Backups, Archives
├── Hardware: 6700K, 32TB (6x8TB RAIDZ1), 2.5GbE
├── Services:
│ ├── MinIO/S3: Parquet files
│ ├── Frigate: 2 cameras + Google Coral
│ └── Backup targets
└── Consideration: RAIDZ1 → RAIDZ2 upgrade
PROXMOX - INFRASTRUCTURE
├── Hardware: Ryzen 9 8945HS, 8C/16T
├── VMs/Containers:
│ ├── OPNSense: 4 threads (reduced from 8)
│ ├── GitLab: 3 threads
│ ├── Pihole: 1 thread
│ └── Monitoring: 4 threads (Prometheus/Grafana/Loki)
└── Purpose: Network, CI/CD, Monitoring
```
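The core layout above (live on cores 0-3, paper on 4-6, shared services on 7) can be applied at runtime on Linux with `os.sched_setaffinity`. A hedged sketch, Linux-only, with an illustrative core map taken from the tree above:

```python
import os

# Illustrative core map mirroring the 9900K layout above (Linux-only).
CORE_MAP = {
    "live_trading": {0, 1, 2, 3},
    "paper_trading": {4, 5, 6},
    "shared_services": {7},
}

def pin_current_process(role: str) -> set[int]:
    """Pin the calling process to the cores assigned to `role`,
    falling back to whatever cores are actually available."""
    wanted = CORE_MAP[role]
    available = os.sched_getaffinity(0)
    cores = (wanted & available) or available  # never pin to an empty set
    os.sched_setaffinity(0, cores)
    return os.sched_getaffinity(0)
```

In practice the same pinning is usually made persistent via systemd (`CPUAffinity=`) or `taskset -c 0-3` rather than from inside the process.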
### **📊 Data Flow**
```
1. INGESTION: Providers → 12900K MongoDB/Pipeline
2. PROCESSING: Raw → Features → 12900K QuestDB
3. HOT SYNC: Last 90 days → 9900K QuestDB
4. TRADING: 9900K reads hot data → Execute trades
5. ARCHIVE: Old data → TrueNAS MinIO (Parquet)
```
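Step 3's hot sync is essentially a cutoff filter: only rows inside the 90-day window move to the 9900K QuestDB. A minimal stdlib sketch (row shape and names are hypothetical, not the actual pipeline):

```python
from datetime import datetime, timedelta, timezone

HOT_WINDOW = timedelta(days=90)

def select_hot_rows(rows, now=None):
    """Keep only OHLC rows inside the 90-day hot window;
    `rows` are (symbol, timestamp, price, ...) tuples."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - HOT_WINDOW
    return [r for r in rows if r[1] >= cutoff]

# Example: only the 5-day-old bar survives the cutoff.
now = datetime(2025, 7, 7, tzinfo=timezone.utc)
rows = [
    ("AAPL", now - timedelta(days=5), 210.0),
    ("AAPL", now - timedelta(days=400), 170.0),
]
print(select_hot_rows(rows, now))
```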
### **🔑 Key Design Decisions**
```python
architecture_principles = {
"Isolation": "Live/Paper completely separated",
"Performance": "Hot data on NVMe, cold on HDD",
"Scalability": "200K symbols, 40+ years data",
"Processing": "Dask over Spark (Python native)",
"Safety": "Live trading NEVER competes for resources"
}
```
### **💾 Database Strategy**
| Database | Location | Purpose | Size |
|----------|----------|---------|------|
| QuestDB | 9900K | Hot OHLC/Ticks | ~200GB |
| Redis | 9900K | Live State | ~2GB |
| PostgreSQL | 12900K | Metadata/Config | ~100GB |
| MongoDB | 12900K | Raw Data (8GB cache) | ~500GB |
| QuestDB | 12900K | Historical | ~500GB |
| MinIO | TrueNAS | Archives | ~TBs |
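The table above implies a simple routing rule by access pattern. A sketch of that rule as a lookup (the category names are illustrative; the mapping mirrors the table):

```python
# Route each data category to the (database, host) pair from the table above.
ROUTES = {
    "hot_ohlc":     ("QuestDB", "9900K"),
    "live_state":   ("Redis", "9900K"),
    "metadata":     ("PostgreSQL", "12900K"),
    "raw_provider": ("MongoDB", "12900K"),
    "historical":   ("QuestDB", "12900K"),
    "archive":      ("MinIO", "TrueNAS"),
}

def route(category: str) -> tuple[str, str]:
    """Return (database, host) for a data category; raises KeyError otherwise."""
    return ROUTES[category]
```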
### **🚀 Performance Optimizations**
```yaml
Completed:
✓ MongoDB cache limited to 8GB
✓ CPU affinity for live/paper trading
✓ Network architecture planned
✓ Database separation by access pattern
✓ Ubuntu 24.04 LTS chosen
Next Steps:
- Set up Dask cluster on 12900K
- Implement data sync pipeline
- Configure monitoring stack
- Test 20-year backtest performance
```
### **📈 Capacity**
```
- Symbols: 200,000 total (10-20K active)
- History: 40+ years
- Storage: 32TB available
- Backtest: 20 years feasible in ~2 days
- Live latency: <1ms target
```
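The capacity figures can be sanity-checked with back-of-envelope arithmetic. A sketch under stated assumptions (daily OHLCV bars at ~40 bytes/row uncompressed; these are assumptions, not measurements):

```python
# Back-of-envelope storage estimate for the daily-bar history.
symbols = 200_000        # total universe from the capacity list
years = 40
trading_days = 252       # per year, approximate
bytes_per_row = 40       # timestamp + OHLCV as 64-bit values, roughly

rows = symbols * years * trading_days
tb = rows * bytes_per_row / 1e12
# Daily bars fit easily; tick data is what actually needs the 32 TB.
print(f"{rows:,} rows ≈ {tb:.1f} TB uncompressed")
```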
The infrastructure is well-suited to institutional-grade algo trading, with proper separation of concerns and room to scale. The MongoDB cache cap freed up significant memory on the 12900K for actual trading/analysis work.


@@ -6,3 +6,20 @@ Data Ingestion
- CEO - watch all and save them to posts. do it every minute
- Test our handler specific rate limits
- Fix up handler worker counts
Servers
Proxmox
- move GitLab to router server
- set up monitoring solution
TrueNAS
- switch TrueNAS to RAIDZ2
- set up Frigate with Coral
- set up MinIO
9900K
- set up new servers with XFS/LVM + Ubuntu
12900K
- set up new servers with XFS/LVM + Ubuntu