updated some docs and removed some

This commit is contained in:
Bojan Kucera 2025-06-08 20:06:35 -04:00
parent 2aaeba2f6c
commit fe96cf6679
7 changed files with 109 additions and 734 deletions


@@ -1,238 +0,0 @@
# Stock Bot Multi-Database Architecture Documentation
## Overview
The Stock Bot platform uses a multi-database architecture that routes each class of data to the store best suited for it. This document outlines the configuration system, database choices, and monitoring setup.
## Configuration System
### Migration from Custom Config to Envalid
The platform has migrated from a complex Valibot-based configuration system to a simpler, more maintainable **envalid** approach:
```typescript
// New configuration pattern used throughout
import { cleanEnv, str, num, bool } from 'envalid';

export const configName = cleanEnv(process.env, {
  ENV_VAR: str({ default: 'value', desc: 'Description' }),
  NUMERIC_VAR: num({ default: 3000, desc: 'Port number' }),
  BOOLEAN_VAR: bool({ default: false, desc: 'Feature flag' }),
});
```
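Under the hood, `cleanEnv` validates every variable at startup and fails fast on missing or malformed values. The stand-in below is not envalid's actual implementation — it is a minimal hand-rolled sketch of that validation behavior, for readers unfamiliar with the library:

```typescript
// Illustrative only: a tiny stand-in for envalid's cleanEnv behavior.
type Spec<T> = { parse: (raw: string) => T; default?: T; desc?: string };

const str = (opts: { default?: string; desc?: string } = {}): Spec<string> => ({
  parse: (raw) => raw,
  ...opts,
});

const num = (opts: { default?: number; desc?: string } = {}): Spec<number> => ({
  parse: (raw) => {
    const n = Number(raw);
    if (Number.isNaN(n)) throw new Error(`expected a number, got "${raw}"`);
    return n;
  },
  ...opts,
});

function cleanEnvSketch<S extends Record<string, Spec<any>>>(
  env: Record<string, string | undefined>,
  specs: S,
): { [K in keyof S]: ReturnType<S[K]['parse']> } {
  const out: any = {};
  for (const [key, spec] of Object.entries(specs)) {
    const raw = env[key];
    if (raw === undefined) {
      // No value supplied: fall back to the default or fail fast
      if (spec.default === undefined) throw new Error(`missing env var ${key}`);
      out[key] = spec.default;
    } else {
      out[key] = spec.parse(raw);
    }
  }
  return out;
}

// Defaults apply when a variable is unset; bad values throw immediately.
const cfg = cleanEnvSketch({ PORT: '8080' }, {
  PORT: num({ default: 3000, desc: 'Port number' }),
  HOST: str({ default: 'localhost', desc: 'Bind host' }),
});
// cfg.PORT === 8080, cfg.HOST === 'localhost'
```

The real envalid goes further: by default it reports every offending variable at once and exits the process, so misconfiguration is caught before any service logic runs.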
### Configuration Modules
| Module | Purpose | File |
|--------|---------|------|
| `database` | PostgreSQL operational data | `libs/config/src/database.ts` |
| `questdb` | Time-series data storage | `libs/config/src/questdb.ts` |
| `mongodb` | Document and unstructured data | `libs/config/src/mongodb.ts` |
| `dragonfly` | Caching and event streaming | `libs/config/src/dragonfly.ts` |
| `monitoring` | Prometheus and Grafana | `libs/config/src/monitoring.ts` |
| `loki` | Log aggregation | `libs/config/src/loki.ts` |
| `logging` | Application logging | `libs/config/src/logging.ts` |
## Database Architecture
### 1. PostgreSQL - Operational Data Store
**Purpose**: Primary relational database for structured operational data
- **Data Types**: Orders, positions, strategies, user accounts, trading rules
- **Strengths**: ACID compliance, complex queries, transactions
- **Configuration**: `libs/config/src/database.ts`
```typescript
// Example usage
import { databaseConfig } from '@trading-bot/config';
// Connects to operational PostgreSQL instance
```
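The config object exposes validated connection settings. The field names below are hypothetical (the real ones live in `libs/config/src/database.ts`); the sketch shows how such a config could be turned into a connection string for a driver like `pg`:

```typescript
// Hypothetical config shape; the actual fields are defined in libs/config/src/database.ts.
interface PostgresConfig {
  DATABASE_HOST: string;
  DATABASE_PORT: number;
  DATABASE_USER: string;
  DATABASE_PASSWORD: string;
  DATABASE_NAME: string;
}

// Build a connection string a PostgreSQL driver could consume.
function postgresUrl(cfg: PostgresConfig): string {
  // URL-encode credentials so special characters don't break the URL
  const auth = `${encodeURIComponent(cfg.DATABASE_USER)}:${encodeURIComponent(cfg.DATABASE_PASSWORD)}`;
  return `postgres://${auth}@${cfg.DATABASE_HOST}:${cfg.DATABASE_PORT}/${cfg.DATABASE_NAME}`;
}

const url = postgresUrl({
  DATABASE_HOST: 'localhost',
  DATABASE_PORT: 5432,
  DATABASE_USER: 'trader',
  DATABASE_PASSWORD: 'secret',
  DATABASE_NAME: 'stockbot',
});
// → postgres://trader:secret@localhost:5432/stockbot
```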
### 2. QuestDB - Time-Series Database
**Purpose**: High-performance time-series data storage
- **Data Types**: OHLCV data, technical indicators, performance metrics, tick data
- **Strengths**: Fast ingestion, SQL queries on time-series, columnar storage
- **Configuration**: `libs/config/src/questdb.ts`
```typescript
// Example usage
import { questdbConfig } from '@trading-bot/config';
// Optimized for time-series queries and analytics
```
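QuestDB accepts high-volume writes over the InfluxDB Line Protocol (ILP) in addition to SQL reads. A sketch of building one ILP line for a hypothetical trades table (table and column names are illustrative, not the platform's actual schema):

```typescript
// Build one InfluxDB Line Protocol (ILP) line: table, symbol columns (tags),
// numeric columns, and a nanosecond timestamp.
function ilpLine(
  table: string,
  symbols: Record<string, string>,
  columns: Record<string, number>,
  tsNanos: bigint,
): string {
  const syms = Object.entries(symbols).map(([k, v]) => `,${k}=${v}`).join('');
  const cols = Object.entries(columns).map(([k, v]) => `${k}=${v}`).join(',');
  return `${table}${syms} ${cols} ${tsNanos}`;
}

const line = ilpLine('trades', { symbol: 'MSFT' }, { price: 410.75, size: 100 }, 1700000000000000000n);
// → trades,symbol=MSFT price=410.75,size=100 1700000000000000000
```

In production this line would be sent through QuestDB's ILP ingestion endpoint (or an official client) rather than assembled by hand; the sketch only shows the wire format.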
### 3. MongoDB - Document Store
**Purpose**: Flexible document storage for unstructured data
- **Data Types**: Market sentiment, news articles, research reports, ML model outputs
- **Strengths**: Schema flexibility, horizontal scaling, complex document queries
- **Configuration**: `libs/config/src/mongodb.ts`
```typescript
// Example usage
import { mongodbConfig } from '@trading-bot/config';
// Handles variable schema and complex nested data
```
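Because document shape varies by source, a sentiment record can carry optional fields without schema migrations. The interface and filter below are hypothetical shapes for illustration, not the platform's actual collections:

```typescript
// Hypothetical sentiment document; fields vary by source, which is why a
// document store fits better than a fixed relational schema.
interface SentimentDoc {
  symbol: string;
  source: string;
  score: number;          // -1 (bearish) .. 1 (bullish)
  entities?: string[];    // present only for NLP-enriched documents
  raw?: Record<string, unknown>;
}

// Build a MongoDB-style query filter for strongly bullish documents.
// The same object would be passed to collection.find() with the official driver.
function bullishFilter(symbol: string, minScore: number) {
  return { symbol, score: { $gte: minScore } };
}

const filter = bullishFilter('MSFT', 0.6);
// → { symbol: 'MSFT', score: { $gte: 0.6 } }
```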
### 4. Dragonfly - Cache & Event Store
**Purpose**: High-performance caching and real-time event streaming
- **Data Types**: Market data cache, session data, real-time events, pub/sub messages
- **Strengths**: Redis API compatibility, higher throughput than Redis, memory efficiency
- **Configuration**: `libs/config/src/dragonfly.ts`
```typescript
// Example usage
import { dragonflyConfig } from '@trading-bot/config';
// Drop-in Redis replacement with better performance
```
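A typical use is the cache-aside pattern: check Dragonfly first, fall back to the source on a miss, and store the result with a short TTL. The sketch below uses an in-memory stub standing in for a Redis-compatible client such as ioredis (Dragonfly speaks the Redis protocol):

```typescript
// The subset of the Redis-style API the pattern needs.
interface CacheLike {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

// In-memory stub so the sketch is self-contained; swap in a real client in production.
class MemoryCache implements CacheLike {
  private store = new Map<string, { value: string; expiresAt: number }>();
  async get(key: string) {
    const hit = this.store.get(key);
    if (!hit || hit.expiresAt < Date.now()) return null;
    return hit.value;
  }
  async set(key: string, value: string, ttlSeconds: number) {
    this.store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  }
}

// Fetch a quote through the cache, calling the loader only on a miss.
async function cachedQuote(
  cache: CacheLike,
  symbol: string,
  load: (symbol: string) => Promise<number>,
): Promise<number> {
  const key = `quote:${symbol}`;
  const hit = await cache.get(key);
  if (hit !== null) return Number(hit);
  const fresh = await load(symbol);
  await cache.set(key, String(fresh), 5); // short TTL: market data goes stale fast
  return fresh;
}
```

The short TTL reflects the guideline above: Dragonfly holds hot, ephemeral data, while the systems of record stay in PostgreSQL and QuestDB.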
## Monitoring & Observability Stack
### Prometheus - Metrics Collection
- **Purpose**: Time-series metrics and monitoring
- **Metrics**: System performance, trading metrics, database metrics
- **Configuration**: `libs/config/src/monitoring.ts`
### Grafana - Visualization
- **Purpose**: Dashboards and alerting
- **Dashboards**: Trading performance, system health, database monitoring
- **Configuration**: `libs/config/src/monitoring.ts`
### Loki - Log Aggregation
- **Purpose**: Centralized log collection and analysis
- **Logs**: Application logs, database logs, system logs
- **Configuration**: `libs/config/src/loki.ts`
### Application Logging
- **Purpose**: Structured application logging
- **Features**: Multiple formats, file rotation, log levels
- **Configuration**: `libs/config/src/logging.ts`
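Structured logging means each entry is a single machine-parseable object rather than free text. A sketch of the idea (field names are illustrative, not the platform's actual log schema):

```typescript
type Level = 'debug' | 'info' | 'warn' | 'error';

// Illustrative structured log entry: fixed envelope plus arbitrary extra fields.
interface LogEntry {
  ts: string;
  level: Level;
  service: string;
  msg: string;
  [key: string]: unknown;
}

function formatEntry(
  service: string,
  level: Level,
  msg: string,
  fields: Record<string, unknown> = {},
): string {
  const entry: LogEntry = { ts: new Date().toISOString(), level, service, msg, ...fields };
  return JSON.stringify(entry); // one JSON object per line is easy for Loki to label and query
}

const line = formatEntry('trading-engine', 'info', 'order filled', { symbol: 'MSFT', qty: 10 });
// parses back to an object with level 'info', service 'trading-engine', symbol 'MSFT'
```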
## Environment Files
### `.env` - Development (Local)
- **Purpose**: Local development with services on localhost
- **Databases**: All services running on localhost with standard ports
- **Logging**: Pretty-formatted console output
### `.env.docker` - Docker Compose
- **Purpose**: Container orchestration with Docker Compose
- **Databases**: Container names as hostnames (e.g., `postgres`, `mongodb`)
- **Features**: Health checks, resource limits, volume mounts
### `.env.complete` - Full Development
- **Purpose**: Complete feature set for development testing
- **Features**: All services enabled, verbose logging, debug mode
- **Use Case**: Testing the full platform locally
### `.env.prod` - Production
- **Purpose**: Production deployment configuration
- **Security**: Environment variable references, secure defaults
- **Features**: Optimized logging, monitoring enabled
## Admin Interfaces
### PgAdmin - PostgreSQL Management
- **URL**: http://localhost:8080 (development)
- **Purpose**: Database administration, query execution, monitoring
### Mongo Express - MongoDB Management
- **URL**: http://localhost:8081 (development)
- **Purpose**: Document browsing, collection management, query testing
### Redis Insight - Dragonfly/Redis Management
- **URL**: http://localhost:8001 (development)
- **Purpose**: Cache monitoring, key browsing, performance analysis
## Data Flow Architecture
```
┌─────────────────┐      ┌──────────────────┐      ┌─────────────────┐
│   Market Data   │─────▶│    Dragonfly     │─────▶│     QuestDB     │
│      Feed       │      │  (Cache/Events)  │      │  (Time-Series)  │
└─────────────────┘      └──────────────────┘      └─────────────────┘

┌─────────────────┐      ┌──────────────────┐      ┌─────────────────┐
│    Trading      │◀────▶│   PostgreSQL     │      │     MongoDB     │
│    Engine       │      │  (Operational)   │      │   (Documents)   │
└─────────────────┘      └──────────────────┘      └────────┬────────┘
         │                        ▲                         │
         ▼                        │                         │
┌─────────────────┐      ┌──────────────────┐               │
│   Monitoring    │◀─────│   Prometheus     │               │
│   Dashboard     │      │   (Metrics)      │               │
└─────────────────┘      └──────────────────┘               │
                                                            │
┌─────────────────┐      ┌──────────────────┐               │
│  Log Analysis   │◀─────│       Loki       │───────────────┘
│                 │      │      (Logs)      │
└─────────────────┘      └──────────────────┘
```
## Best Practices
### Database Selection Guidelines
1. **PostgreSQL**: Use for transactional data requiring ACID properties
- Orders, positions, account balances, strategy configurations
2. **QuestDB**: Use for time-series data requiring fast analytics
- OHLCV data, technical indicators, performance metrics
3. **MongoDB**: Use for flexible, document-based data
- Market sentiment, news articles, ML model outputs
4. **Dragonfly**: Use for temporary data requiring fast access
- Real-time market data cache, session data, event streams
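The guidelines above amount to a routing rule from data kind to store; the helper below is purely illustrative (the category names are hypothetical):

```typescript
type Store = 'postgres' | 'questdb' | 'mongodb' | 'dragonfly';
type DataKind =
  | 'order' | 'position'        // transactional
  | 'ohlcv' | 'indicator'       // time-series
  | 'sentiment' | 'news'        // documents
  | 'quote-cache' | 'session';  // hot, short-lived

// Map each kind of data to the store the guidelines recommend.
function storeFor(kind: DataKind): Store {
  switch (kind) {
    case 'order':
    case 'position':
      return 'postgres';   // needs ACID transactions
    case 'ohlcv':
    case 'indicator':
      return 'questdb';    // fast time-series analytics
    case 'sentiment':
    case 'news':
      return 'mongodb';    // flexible document shapes
    case 'quote-cache':
    case 'session':
      return 'dragonfly';  // temporary data, fast access
  }
}
```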
### Configuration Best Practices
1. **Environment Separation**: Use appropriate `.env` file for each environment
2. **Security**: Never commit sensitive credentials to version control
3. **Validation**: All configuration uses envalid for runtime validation
4. **Documentation**: Each config variable includes descriptive help text
### Monitoring Best Practices
1. **Metrics**: Monitor database performance, trading metrics, system health
2. **Logging**: Use structured logging with appropriate log levels
3. **Alerting**: Set up Grafana alerts for critical system metrics
4. **Log Retention**: Configure appropriate retention periods for each environment
## Migration Guide
If migrating from the old configuration system:
1. **Update Imports**: Change from custom config to new envalid-based modules
2. **Environment Variables**: Update `.env` files to include all new services
3. **Docker Setup**: Use `.env.docker` for container-based deployments
4. **Monitoring**: Enable Prometheus, Grafana, and Loki for observability
## Troubleshooting
### Common Issues
1. **Connection Failures**: Check container names in Docker environments
2. **Port Conflicts**: Verify port mappings in environment files
3. **Permission Errors**: Ensure proper database credentials and permissions
4. **Memory Issues**: Adjust resource limits in Docker configuration
### Debug Commands
```bash
# Check container status
docker-compose ps
# View container logs
docker-compose logs [service-name]
# Test database connections
docker-compose exec postgres pg_isready
docker-compose exec mongodb mongosh --eval "db.runCommand('ping')"
docker-compose exec questdb curl -f http://localhost:9000/status
docker-compose exec dragonfly redis-cli ping
```
## Future Enhancements
1. **Database Sharding**: Implement horizontal scaling for high-volume data
2. **Read Replicas**: Add read replicas for improved query performance
3. **Backup Strategy**: Implement automated backup and recovery procedures
4. **Security**: Add encryption at rest and in transit
5. **Performance**: Implement connection pooling and query optimization


@@ -1,153 +0,0 @@
# Bun Test Output Customization Guide
## Command Line Options
### Basic Output Control
```bash
# Run with coverage report
bun test --coverage
# Use specific coverage reporter (text or lcov)
bun test --coverage --coverage-reporter=text
bun test --coverage --coverage-reporter=lcov
bun test --coverage --coverage-reporter=text,lcov
# Set custom coverage directory
bun test --coverage --coverage-dir=my-coverage
# Run only specific test name patterns
bun test --test-name-pattern="HttpClient"
bun test -t "should handle"
# Stop after first failure
bun test --bail
bun test --bail=3 # Stop after 3 failures
# Run only tests marked with .only()
bun test --only
# Include todo tests
bun test --todo
```
### Reporters
```bash
# Use JUnit XML reporter for CI/CD
bun test --reporter=junit --reporter-outfile=test-results.xml
# Default console output (no flag needed)
bun test
```
### Test Filtering
```bash
# Run specific test files
bun test http.test.ts
bun test "**/*integration*"
# Run tests multiple times to catch flaky tests
bun test --rerun-each=3
```
## Configuration in bunfig.toml
You can set default options in your bunfig.toml file:
```toml
[test]
# Always generate coverage
coverage = true

# Coverage options (bunfig.toml keys are camelCase, unlike the kebab-case CLI flags)
coverageReporter = ["text", "lcov"]
coverageDir = "coverage"

# Preload setup files before the test run
preload = ["./test/setup.ts"]

# Note: per-test timeouts are set on the CLI (`bun test --timeout 5000`, in
# milliseconds), and test-only environment variables can live in a `.env.test`
# file rather than in bunfig.toml.

[test.reporter]
# Write JUnit XML for CI/CD
junit = "test-results.xml"
```
## Custom Test Setup
You can customize test output through your test setup files:
### In test/setup.ts:
```typescript
// Make this file a module so `declare global` below is valid
export {};

// Preserve the original console.log so the override can delegate to it
const originalLog = console.log;

// Customize console output with a timestamped [TEST] prefix
console.log = (...args: unknown[]) => {
  originalLog(`[TEST] ${new Date().toISOString()}`, ...args);
};

// Declare the helper so TypeScript knows about the global
declare global {
  // eslint-disable-next-line no-var
  var testHelpers: {
    logTestStart: (testName: string) => void;
    logTestEnd: (testName: string, passed: boolean) => void;
  };
}

// Add custom test utilities
globalThis.testHelpers = {
  logTestStart: (testName: string) => {
    console.info(`🧪 Starting test: ${testName}`);
  },
  logTestEnd: (testName: string, passed: boolean) => {
    const icon = passed ? '✅' : '❌';
    console.info(`${icon} Completed test: ${testName}`);
  },
};
```
### In individual test files:
```typescript
import { describe, test, expect, beforeEach } from 'bun:test';
describe('HttpClient', () => {
beforeEach(() => {
global.testHelpers?.logTestStart('HttpClient test');
});
test('should work', () => {
// Your test code
global.testHelpers?.logTestEnd('should work', true);
});
});
```
## Package.json Scripts
Add custom test scripts with different output configurations:
```json
{
"scripts": {
"test": "bun test",
"test:coverage": "bun test --coverage",
"test:junit": "bun test --reporter=junit --reporter-outfile=test-results.xml",
"test:watch": "bun test --watch",
"test:verbose": "bun test --coverage --coverage-reporter=text",
"test:ci": "bun test --coverage --reporter=junit --reporter-outfile=test-results.xml --bail",
"test:specific": "bun test --test-name-pattern"
}
}
```
## Environment Variables
Control output through environment variables:
```bash
# Set log level for tests
LOG_LEVEL=debug bun test
# Silent mode
LOG_LEVEL=silent bun test
# Custom formatting
TEST_OUTPUT_FORMAT=json bun test
```
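A setup file can translate `LOG_LEVEL` into a filtering decision. The ordering logic below is an illustrative sketch, not the platform's actual logger:

```typescript
// Levels ordered from most to least restrictive.
const levels = ['silent', 'error', 'warn', 'info', 'debug'] as const;
type LogLevel = (typeof levels)[number];

// A message is emitted only if its level is at or above the configured threshold.
function shouldLog(current: LogLevel, messageLevel: Exclude<LogLevel, 'silent'>): boolean {
  return levels.indexOf(messageLevel) <= levels.indexOf(current);
}

// Read the threshold from the environment, defaulting to 'info'.
const current = (process.env.LOG_LEVEL ?? 'info') as LogLevel;
if (shouldLog(current, 'warn')) {
  console.warn('visible unless LOG_LEVEL is silent or error');
}
```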


@@ -1,18 +1,16 @@
# Loki Logging for Stock Bot
This document outlines how to use the Loki logging system integrated with the Stock Bot platform.
This document outlines how to use the Loki logging system integrated with the Stock Bot platform (Updated June 2025).
## Overview
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be cost-effective and easy to operate. Unlike other logging systems, Loki indexes only metadata about your logs (labels), not the full text, which makes it more resource-efficient than traditional log storage systems.
Loki provides centralized logging for all Stock Bot services with:
For Stock Bot, Loki provides:
1. Centralized logging for all services
2. Log aggregation and filtering by service, level, and custom labels
3. Integration with Grafana for visualization
4. Query capabilities for log analysis
5. Alert capabilities for critical issues
1. **Centralized logging** for all microservices
2. **Log aggregation** and filtering by service, level, and custom labels
3. **Grafana integration** for visualization and dashboards
4. **Query capabilities** using LogQL for log analysis
5. **Alert capabilities** for critical issues
## Getting Started
@@ -23,7 +21,7 @@ For Stock Bot, Loki provides:
scripts\docker.ps1 monitoring
```
You can also start Loki directly using Docker Compose:
Or start services individually:
```cmd
# Start Loki service only
@@ -43,21 +41,21 @@ Once started:
## Using the Logger in Your Services
The Stock Bot logger has been enhanced to automatically send logs to Loki. Here's how to use it:
The Stock Bot logger automatically sends logs to Loki using the updated pattern:
```typescript
import { Logger, LogLevel } from '@stock-bot/utils';
import { getLogger } from '@stock-bot/logger';
// Create a logger for your service
const logger = new Logger('your-service-name', LogLevel.INFO);
const logger = getLogger('your-service-name');
// Log at different levels
logger.debug('Detailed information for debugging');
logger.info('General information about operations');
logger.warn('Potential issues that don't affect operation');
logger.warn('Potential issues that don\'t affect operation');
logger.error('Critical errors that require attention');
// Log with structured data (will be searchable in Loki)
// Log with structured data (searchable in Loki)
logger.info('Processing trade', {
symbol: 'MSFT',
price: 410.75,
@@ -107,10 +105,10 @@ Inside Grafana, you can use these LogQL queries to analyze your logs:
## Testing the Logging Integration
A test script is provided to verify the logging integration:
Test the logging integration using Bun:
```bash
# Run from project root
```cmd
# Run from project root using Bun (current runtime)
bun run tools/test-loki-logging.ts
```
@@ -120,8 +118,8 @@ Our logging implementation follows this architecture:
```
┌─────────────────┐ ┌─────────────────┐
│ Trading Services│────►│ @stock-bot/utils
└─────────────────┘ │ Logger
│ Trading Services│────►│ @stock-bot/logger
└─────────────────┘ │ getLogger()
└────────┬────────┘
@@ -163,9 +161,9 @@ If logs aren't appearing in Grafana:
type .env | findstr "LOKI_"
```
4. Ensure your service has the latest @stock-bot/utils package
4. Ensure your service has the latest @stock-bot/logger package
5. Check for errors in the Loki container logs:
```cmd
docker logs trading-bot-loki
docker logs stock-bot-loki
```


@@ -1,12 +1,12 @@
# Testing with Bun in Stock Bot Platform
This project uses [Bun Test](https://bun.sh/docs/cli/test) for all testing needs. Bun Test provides a fast, modern testing experience with Jest-like API compatibility.
The Stock Bot platform uses [Bun Test](https://bun.sh/docs/cli/test) as the primary testing framework (Updated June 2025). Bun Test provides fast, modern testing with Jest-like API compatibility.
## Getting Started
To run tests:
Run tests using these commands:
```bash
```cmd
# Run all tests (using Turbo)
bun test