# Stock Bot Multi-Database Architecture Documentation

## Overview

The Stock Bot platform uses a multi-database architecture designed to handle different types of data efficiently. This document outlines the configuration system, database choices, and monitoring setup.

## Configuration System

### Migration from Custom Config to Envalid

The platform has migrated from a complex Valibot-based configuration system to a simpler, more maintainable **envalid** approach:

```typescript
// New configuration pattern used throughout
import { cleanEnv, str, num, bool } from 'envalid';

export const configName = cleanEnv(process.env, {
  ENV_VAR: str({ default: 'value', desc: 'Description' }),
  NUMERIC_VAR: num({ default: 3000, desc: 'Port number' }),
  BOOLEAN_VAR: bool({ default: false, desc: 'Feature flag' }),
});
```

### Configuration Modules

| Module | Purpose | File |
|--------|---------|------|
| `database` | PostgreSQL operational data | `libs/config/src/database.ts` |
| `questdb` | Time-series data storage | `libs/config/src/questdb.ts` |
| `mongodb` | Document and unstructured data | `libs/config/src/mongodb.ts` |
| `dragonfly` | Caching and event streaming | `libs/config/src/dragonfly.ts` |
| `monitoring` | Prometheus and Grafana | `libs/config/src/monitoring.ts` |
| `loki` | Log aggregation | `libs/config/src/loki.ts` |
| `logging` | Application logging | `libs/config/src/logging.ts` |

## Database Architecture

### 1. PostgreSQL - Operational Data Store

**Purpose**: Primary relational database for structured operational data

- **Data Types**: Orders, positions, strategies, user accounts, trading rules
- **Strengths**: ACID compliance, complex queries, transactions
- **Configuration**: `libs/config/src/database.ts`

```typescript
// Example usage
import { databaseConfig } from '@trading-bot/config';
// Connects to operational PostgreSQL instance
```
### 2. QuestDB - Time-Series Database

**Purpose**: High-performance time-series data storage

- **Data Types**: OHLCV data, technical indicators, performance metrics, tick data
- **Strengths**: Fast ingestion, SQL queries on time-series, columnar storage
- **Configuration**: `libs/config/src/questdb.ts`

```typescript
// Example usage
import { questdbConfig } from '@trading-bot/config';
// Optimized for time-series queries and analytics
```

### 3. MongoDB - Document Store

**Purpose**: Flexible document storage for unstructured data

- **Data Types**: Market sentiment, news articles, research reports, ML model outputs
- **Strengths**: Schema flexibility, horizontal scaling, complex document queries
- **Configuration**: `libs/config/src/mongodb.ts`

```typescript
// Example usage
import { mongodbConfig } from '@trading-bot/config';
// Handles variable schema and complex nested data
```

### 4. Dragonfly - Cache & Event Store

**Purpose**: High-performance caching and real-time event streaming

- **Data Types**: Market data cache, session data, real-time events, pub/sub messages
- **Strengths**: Redis compatibility, better performance, memory efficiency
- **Configuration**: `libs/config/src/dragonfly.ts`

```typescript
// Example usage
import { dragonflyConfig } from '@trading-bot/config';
// Drop-in Redis replacement with better performance
```

## Monitoring & Observability Stack

### Prometheus - Metrics Collection

- **Purpose**: Time-series metrics and monitoring
- **Metrics**: System performance, trading metrics, database metrics
- **Configuration**: `libs/config/src/monitoring.ts`

### Grafana - Visualization

- **Purpose**: Dashboards and alerting
- **Dashboards**: Trading performance, system health, database monitoring
- **Configuration**: `libs/config/src/monitoring.ts`

### Loki - Log Aggregation

- **Purpose**: Centralized log collection and analysis
- **Logs**: Application logs, database logs, system logs
- **Configuration**: `libs/config/src/loki.ts`
### Application Logging

**Purpose**: Structured application logging

- **Features**: Multiple formats, file rotation, log levels
- **Configuration**: `libs/config/src/logging.ts`

## Environment Files

### `.env` - Development (Local)

- **Purpose**: Local development with services on localhost
- **Databases**: All services running on localhost with standard ports
- **Logging**: Pretty-formatted console output

### `.env.docker` - Docker Compose

- **Purpose**: Container orchestration with Docker Compose
- **Databases**: Container names as hostnames (e.g., `postgres`, `mongodb`)
- **Features**: Health checks, resource limits, volume mounts

### `.env.complete` - Full Development

- **Purpose**: Complete feature set for development testing
- **Features**: All services enabled, verbose logging, debug mode
- **Use Case**: Testing the full platform locally

### `.env.prod` - Production

- **Purpose**: Production deployment configuration
- **Security**: Environment variable references, secure defaults
- **Features**: Optimized logging, monitoring enabled

## Admin Interfaces

### PgAdmin - PostgreSQL Management

- **URL**: http://localhost:8080 (development)
- **Purpose**: Database administration, query execution, monitoring

### Mongo Express - MongoDB Management

- **URL**: http://localhost:8081 (development)
- **Purpose**: Document browsing, collection management, query testing

### Redis Insight - Dragonfly/Redis Management

- **URL**: http://localhost:8001 (development)
- **Purpose**: Cache monitoring, key browsing, performance analysis

## Data Flow Architecture

```
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│   Market Data   │────▶│    Dragonfly     │────▶│     QuestDB     │
│      Feed       │     │  (Cache/Events)  │     │  (Time-Series)  │
└─────────────────┘     └──────────────────┘     └─────────────────┘
                                 │
                                 ▼
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│     Trading     │◀───▶│    PostgreSQL    │     │     MongoDB     │
│     Engine      │     │  (Operational)   │     │   (Documents)   │
└─────────────────┘     └──────────────────┘     └─────────────────┘
         │                                                 ▲
         ▼                                                 │
┌─────────────────┐     ┌──────────────────┐               │
│   Monitoring    │◀────│    Prometheus    │               │
│   Dashboard     │     │    (Metrics)     │               │
└─────────────────┘     └──────────────────┘               │
                                                           │
┌─────────────────┐     ┌──────────────────┐               │
│  Log Analysis   │◀────│       Loki       │───────────────┘
│                 │     │      (Logs)      │
└─────────────────┘     └──────────────────┘
```

## Best Practices

### Database Selection Guidelines

1. **PostgreSQL**: Use for transactional data requiring ACID properties
   - Orders, positions, account balances, strategy configurations
2. **QuestDB**: Use for time-series data requiring fast analytics
   - OHLCV data, technical indicators, performance metrics
3. **MongoDB**: Use for flexible, document-based data
   - Market sentiment, news articles, ML model outputs
4. **Dragonfly**: Use for temporary data requiring fast access
   - Real-time market data cache, session data, event streams

### Configuration Best Practices

1. **Environment Separation**: Use the appropriate `.env` file for each environment
2. **Security**: Never commit sensitive credentials to version control
3. **Validation**: All configuration uses envalid for runtime validation
4. **Documentation**: Each config variable includes descriptive help text

### Monitoring Best Practices

1. **Metrics**: Monitor database performance, trading metrics, system health
2. **Logging**: Use structured logging with appropriate log levels
3. **Alerting**: Set up Grafana alerts for critical system metrics
4. **Log Retention**: Configure appropriate retention periods for each environment

## Migration Guide

If migrating from the old configuration system:

1. **Update Imports**: Change from custom config to the new envalid-based modules
2. **Environment Variables**: Update `.env` files to include all new services
3. **Docker Setup**: Use `.env.docker` for container-based deployments
4. **Monitoring**: Enable Prometheus, Grafana, and Loki for observability

## Troubleshooting

### Common Issues

1. **Connection Failures**: Check container names in Docker environments
2. **Port Conflicts**: Verify port mappings in environment files
3. **Permission Errors**: Ensure proper database credentials and permissions
4. **Memory Issues**: Adjust resource limits in Docker configuration

### Debug Commands

```bash
# Check container status
docker-compose ps

# View container logs
docker-compose logs [service-name]

# Test database connections
docker-compose exec postgres pg_isready
docker-compose exec mongodb mongosh --eval "db.runCommand('ping')"
docker-compose exec questdb curl -f http://localhost:9000/status
docker-compose exec dragonfly redis-cli ping
```

## Future Enhancements

1. **Database Sharding**: Implement horizontal scaling for high-volume data
2. **Read Replicas**: Add read replicas for improved query performance
3. **Backup Strategy**: Implement automated backup and recovery procedures
4. **Security**: Add encryption at rest and in transit
5. **Performance**: Implement connection pooling and query optimization