Boki 2025-06-22 17:55:51 -04:00
parent d858222af7
commit 7d9044ab29
202 changed files with 10755 additions and 10972 deletions

Binary file not shown.

@@ -0,0 +1,58 @@
# Code Style and Conventions
## TypeScript Configuration
- **Strict mode enabled**: All strict checks are on
- **Target**: ES2022
- **Module**: ESNext with bundler resolution
- **Path aliases**: `@stock-bot/*` maps to `libs/*/src`
- **Decorators**: Enabled for dependency injection
## Code Style Rules (ESLint)
- **No unused variables**: Error (except prefixed with `_`)
- **No explicit any**: Warning
- **No non-null assertion**: Warning
- **No console**: Warning (except in tests)
- **Prefer const**: Enforced
- **Strict equality**: Always use `===`
- **Curly braces**: Required for all blocks
## Formatting (Prettier)
- **Semicolons**: Always
- **Single quotes**: Yes
- **Trailing comma**: ES5
- **Print width**: 100 characters
- **Tab width**: 2 spaces
- **Arrow parens**: Avoid when possible
- **End of line**: LF
## Import Order
1. Node built-ins
2. Third-party modules
3. `@stock-bot/*` imports
4. Relative imports (parent directories first)
5. Current directory imports
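As a sketch of that ordering (every module below other than the Node built-in is a hypothetical example, shown commented out so the snippet stays self-contained):

```typescript
// 1. Node built-ins
import path from 'node:path';

// 2. Third-party modules (hypothetical example)
// import { Hono } from 'hono';

// 3. @stock-bot/* imports (hypothetical example)
// import { getLogger } from '@stock-bot/logger';

// 4. Relative imports, parent directories first (hypothetical example)
// import { parseBar } from '../parsers/bar-parser';

// 5. Current directory imports (hypothetical example)
// import { toOhlcv } from './to-ohlcv';

console.log(path.basename('libs/core/logger/src/index.ts')); // index.ts
```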
## Naming Conventions
- **Files**: kebab-case (e.g., `database-setup.ts`)
- **Classes**: PascalCase
- **Functions/Variables**: camelCase
- **Constants**: UPPER_SNAKE_CASE
- **Interfaces/Types**: PascalCase with 'I' or 'T' prefix optional
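A compact, hypothetical illustration of these conventions in a single file (imagine it saved as `database-setup.ts`):

```typescript
// database-setup.ts — file name in kebab-case

const MAX_RETRIES = 3; // constant in UPPER_SNAKE_CASE

interface RetryOptions { // interface in PascalCase ('I' prefix optional)
  attempts: number;
}

class DatabaseSetup { // class in PascalCase
  clampAttempts(options: RetryOptions): number { // method in camelCase
    return Math.min(options.attempts, MAX_RETRIES);
  }
}

const setup = new DatabaseSetup(); // variable in camelCase
console.log(setup.clampAttempts({ attempts: 5 })); // prints 3
```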
## Library Standards
- **Named exports only**: No default exports
- **Factory patterns**: For complex initialization
- **Singleton pattern**: For global services (config, logger)
- **Direct class exports**: For DI-managed services
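A minimal sketch of the named-export plus singleton pattern described above (all names here are illustrative, not the actual `@stock-bot` API):

```typescript
// logger.ts — named exports only, no default export
export interface Logger {
  info(msg: string): string;
}

let instance: Logger | undefined;

// Factory-style accessor that lazily creates a global singleton;
// the name passed on first call is captured for the lifetime of the process
export function getLogger(name: string): Logger {
  if (!instance) {
    instance = {
      info: (msg) => {
        const line = `[${name}] ${msg}`;
        console.log(line);
        return line;
      },
    };
  }
  return instance;
}

export const logLine = getLogger('config').info('configuration loaded');
```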
## Testing
- **File naming**: `*.test.ts` or `*.spec.ts`
- **Test structure**: Bun's built-in test runner
- **Integration tests**: Use TestContainers for databases
- **Mocking**: Mock external dependencies
## Documentation
- **JSDoc**: For all public APIs
- **README.md**: Required for each library
- **Usage examples**: Include in documentation
- **Error messages**: Descriptive with context

@@ -0,0 +1,41 @@
# Current Refactoring Context
## Data Ingestion Service Refactor
The project is currently undergoing a major refactoring to move away from singleton patterns to a dependency injection approach using service containers.
### What's Been Done
- Created connection pool pattern with `ServiceContainer`
- Refactored data-ingestion service to use DI container
- Updated handlers to accept container parameter
- Added proper resource disposal with `ctx.dispose()`
### Migration Status
- QM handler: ✅ Fully migrated to container pattern
- IB handler: ⚠️ Partially migrated (using migration helper)
- Proxy handler: ✅ Updated to accept container
- WebShare handler: ✅ Updated to accept container
### Key Patterns
1. **Service Container**: Central DI container managing all connections
2. **Operation Context**: Provides scoped database access within operations
3. **Factory Pattern**: Connection factories for different databases
4. **Resource Disposal**: Always call `ctx.dispose()` after operations
### Example Pattern
```typescript
const ctx = OperationContext.create('handler', 'operation', { container });
try {
  // Use databases through context
  await ctx.mongodb.insertOne(data);
  await ctx.postgres.query('...');
  return { success: true };
} finally {
  await ctx.dispose(); // Always cleanup
}
```
### Next Steps
- Complete migration of remaining IB operations
- Remove migration helper once complete
- Apply same pattern to other services
- Add monitoring for connection pools

@@ -0,0 +1,55 @@
# Stock Bot Trading Platform
## Project Purpose
This is an advanced trading bot platform with a microservice architecture designed for automated stock trading. The system includes:
- Market data ingestion from multiple providers (Yahoo Finance, QuoteMedia, Interactive Brokers, WebShare)
- Data processing and technical indicator calculation
- Trading strategy development and backtesting
- Order execution and risk management
- Portfolio tracking and performance analytics
- Web dashboard for monitoring
## Architecture Overview
The project follows a **microservices architecture** with shared libraries:
### Core Services (apps/)
- **data-ingestion**: Ingests market data from multiple providers
- **data-pipeline**: Processes and transforms data
- **web-api**: REST API service
- **web-app**: React-based dashboard
### Shared Libraries (libs/)
**Core Libraries:**
- config: Environment configuration with Zod validation
- logger: Structured logging with Loki integration
- di: Dependency injection container
- types: Shared TypeScript types
- handlers: Common handler patterns
**Data Libraries:**
- postgres: PostgreSQL client for transactional data
- questdb: Time-series database for market data
- mongodb: Document storage for configurations
**Service Libraries:**
- queue: BullMQ-based job processing
- event-bus: Dragonfly/Redis event bus
- shutdown: Graceful shutdown management
**Utils:**
- Financial calculations and technical indicators
- Date utilities
- Position sizing calculations
## Database Strategy
- **PostgreSQL**: Transactional data (orders, positions, strategies)
- **QuestDB**: Time-series data (OHLCV, indicators, performance metrics)
- **MongoDB**: Document storage (configurations, raw API responses)
- **Dragonfly/Redis**: Event bus and caching layer
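The routing rule above can be sketched as a small lookup (the data-kind names here are illustrative, not a real API):

```typescript
type Store = 'postgres' | 'questdb' | 'mongodb' | 'dragonfly';
type DataKind = 'order' | 'ohlcv' | 'rawApiResponse' | 'event';

// Map each kind of data to the store the strategy above assigns it
function storeFor(kind: DataKind): Store {
  switch (kind) {
    case 'order':          return 'postgres';  // transactional
    case 'ohlcv':          return 'questdb';   // time series
    case 'rawApiResponse': return 'mongodb';   // documents
    case 'event':          return 'dragonfly'; // event bus / caching
  }
}

console.log(storeFor('ohlcv')); // questdb
```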
## Current Development Phase
Phase 1: Data Foundation Layer (In Progress)
- Enhancing data provider reliability
- Implementing data validation
- Optimizing time-series storage
- Building robust HTTP client with circuit breakers

@@ -0,0 +1,62 @@
# Project Structure
## Root Directory
```
stock-bot/
├── apps/                    # Microservice applications
│   ├── data-ingestion/      # Market data ingestion service
│   ├── data-pipeline/       # Data processing pipeline
│   ├── web-api/             # REST API service
│   └── web-app/             # React dashboard
├── libs/                    # Shared libraries
│   ├── core/                # Core functionality
│   │   ├── config/          # Configuration management
│   │   ├── logger/          # Logging infrastructure
│   │   ├── di/              # Dependency injection
│   │   ├── types/           # Shared TypeScript types
│   │   └── handlers/        # Common handler patterns
│   ├── data/                # Database clients
│   │   ├── postgres/        # PostgreSQL client
│   │   ├── questdb/         # QuestDB time-series client
│   │   └── mongodb/         # MongoDB document storage
│   ├── services/            # Service utilities
│   │   ├── queue/           # BullMQ job processing
│   │   ├── event-bus/       # Dragonfly event bus
│   │   └── shutdown/        # Graceful shutdown
│   └── utils/               # Utility functions
├── database/                # Database schemas and migrations
├── scripts/                 # Build and utility scripts
├── config/                  # Configuration files
├── monitoring/              # Monitoring configurations
├── docs/                    # Documentation
└── test/                    # Global test utilities
```
## Key Files
- `package.json` - Root package configuration
- `turbo.json` - Turbo monorepo configuration
- `tsconfig.json` - TypeScript configuration
- `eslint.config.js` - ESLint rules
- `.prettierrc` - Prettier formatting rules
- `docker-compose.yml` - Infrastructure setup
- `.env` - Environment variables
## Monorepo Structure
- Uses Bun workspaces with Turbo for orchestration
- Each app and library has its own package.json
- Shared dependencies at root level
- Libraries published as `@stock-bot/*` packages
## Service Architecture Pattern
Each service typically follows:
```
service/
├── src/
│   ├── index.ts         # Entry point
│   ├── routes/          # API routes (Hono)
│   ├── handlers/        # Business logic
│   ├── services/        # Service layer
│   └── types/           # Service-specific types
├── test/                # Tests
├── package.json
└── tsconfig.json
```

@@ -0,0 +1,73 @@
# Suggested Commands for Development
## Package Management (Bun)
- `bun install` - Install all dependencies
- `bun add <package>` - Add a new dependency
- `bun add -D <package>` - Add a dev dependency
- `bun update` - Update dependencies
## Development
- `bun run dev` - Start all services in development mode (uses Turbo)
- `bun run dev:full` - Start infrastructure + admin tools + dev mode
- `bun run dev:clean` - Reset infrastructure and start fresh
## Building
- `bun run build` - Build all services and libraries
- `bun run build:libs` - Build only shared libraries
- `bun run build:all:clean` - Clean build with cache removal
- `./scripts/build-all.sh` - Custom build script with options
## Testing
- `bun test` - Run all tests
- `bun test --watch` - Run tests in watch mode
- `bun run test:coverage` - Run tests with coverage report
- `bun run test:libs` - Test only shared libraries
- `bun run test:apps` - Test only applications
- `bun test <file>` - Run specific test file
## Code Quality (IMPORTANT - Run before committing!)
- `bun run lint` - Check for linting errors
- `bun run lint:fix` - Auto-fix linting issues
- `bun run format` - Format code with Prettier
- `./scripts/format.sh` - Alternative format script
## Infrastructure Management
- `bun run infra:up` - Start databases (PostgreSQL, QuestDB, MongoDB, Dragonfly)
- `bun run infra:down` - Stop infrastructure
- `bun run infra:reset` - Reset with clean volumes
- `bun run docker:admin` - Start admin GUIs (pgAdmin, Mongo Express, Redis Insight)
- `bun run docker:monitoring` - Start monitoring stack
## Database Operations
- `bun run db:setup-ib` - Setup Interactive Brokers database schema
- `bun run db:init` - Initialize all database schemas
## Utility Commands
- `bun run clean` - Clean build artifacts
- `bun run clean:all` - Deep clean including node_modules
- `turbo run <task>` - Run task across monorepo
## Git Commands (Linux)
- `git status` - Check current status
- `git add .` - Stage all changes
- `git commit -m "message"` - Commit changes
- `git push` - Push to remote
- `git pull` - Pull from remote
- `git checkout -b <branch>` - Create new branch
## System Commands (Linux)
- `ls -la` - List files with details
- `cd <directory>` - Change directory
- `grep -r "pattern" .` - Search for pattern
- `find . -name "*.ts"` - Find files by pattern
- `which <command>` - Find command location
## MCP Setup (for database access in IDE)
- `./scripts/setup-mcp.sh` - Setup Model Context Protocol servers
- Requires infrastructure to be running first
## Service URLs
- Dashboard: http://localhost:4200
- QuestDB Console: http://localhost:9000
- Grafana: http://localhost:3000
- pgAdmin: http://localhost:8080

@@ -0,0 +1,55 @@
# Task Completion Checklist
When you complete any coding task, ALWAYS run these commands in order:
## 1. Code Quality Checks (MANDATORY)
```bash
# Run linting to catch code issues
bun run lint
# If there are errors, fix them automatically
bun run lint:fix
# Format the code
bun run format
```
## 2. Testing (if applicable)
```bash
# Run tests if you modified existing functionality
bun test
# Run specific test file if you added/modified tests
bun test <path-to-test-file>
```
## 3. Build Verification (for significant changes)
```bash
# Build the affected libraries/apps
bun run build:libs # if you changed libraries
bun run build # for full build
```
## 4. Final Verification Steps
- Ensure no TypeScript errors in the IDE
- Check that imports are properly ordered (Prettier should handle this)
- Verify no console.log statements in production code
- Confirm all new code follows the established patterns
## 5. Git Commit Guidelines
- Stage changes: `git add .`
- Write descriptive commit messages
- Reference issue numbers if applicable
- Use conventional commit format when possible:
- `feat:` for new features
- `fix:` for bug fixes
- `refactor:` for code refactoring
- `docs:` for documentation
- `test:` for tests
- `chore:` for maintenance
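As an illustration (this helper is hypothetical, not part of the repo), a commit message can be checked against that format like so:

```typescript
const COMMIT_TYPES = ['feat', 'fix', 'refactor', 'docs', 'test', 'chore'];

// Accepts "type: subject" or "type(scope): subject"
function isConventional(message: string): boolean {
  const match = /^([a-z]+)(\([\w-]+\))?: .+/.exec(message);
  return match !== null && COMMIT_TYPES.includes(match[1]);
}

console.log(isConventional('feat(data-ingestion): add provider retry')); // true
console.log(isConventional('updated stuff')); // false
```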
## Important Notes
- NEVER skip the linting and formatting steps
- The project uses ESLint and Prettier - let them do their job
- If lint errors persist after auto-fix, they need manual attention
- Always test your changes, even if just running the service locally

@@ -0,0 +1,49 @@
# Technology Stack
## Runtime & Package Manager
- **Bun**: v1.1.0+ (primary runtime and package manager)
- **Node.js**: v18.0.0+ (compatibility)
- **TypeScript**: v5.8.3
## Core Technologies
- **Turbo**: Monorepo build system
- **ESBuild**: Fast bundling (integrated with Bun)
- **Hono**: Lightweight web framework for services
## Databases
- **PostgreSQL**: Primary transactional database
- **QuestDB**: Time-series database for market data
- **MongoDB**: Document storage
- **Dragonfly**: Redis-compatible cache and event bus
## Queue & Messaging
- **BullMQ**: Job queue processing
- **IORedis**: Redis client for Dragonfly
## Web Technologies
- **React**: Frontend framework (web-app)
- **Angular**: Present in parts of the codebase (inferred from a `polyfills.ts` reference)
- **PrimeNG**: UI component library
- **TailwindCSS**: CSS framework
## Testing
- **Bun Test**: Built-in test runner
- **TestContainers**: Database integration testing
- **Supertest**: API testing
## Monitoring & Observability
- **Loki**: Log aggregation
- **Prometheus**: Metrics collection
- **Grafana**: Visualization dashboards
## Development Tools
- **ESLint**: Code linting
- **Prettier**: Code formatting
- **Docker Compose**: Local infrastructure
- **Model Context Protocol (MCP)**: Database access in IDE
## Key Dependencies
- **Awilix**: Dependency injection container
- **Zod**: Schema validation
- **pg**: PostgreSQL client
- **Playwright**: Browser automation for proxy testing

.serena/project.yml

@@ -0,0 +1,66 @@
# language of the project (csharp, python, rust, java, typescript, javascript, go, cpp, or ruby)
# Special requirements:
# * csharp: Requires the presence of a .sln file in the project folder.
language: typescript
# whether to use the project's gitignore file to ignore files
# Added on 2025-04-07
ignore_all_files_in_gitignore: true
# list of additional paths to ignore
# same syntax as gitignore, so you can use * and **
# Was previously called `ignored_dirs`, please update your config if you are using that.
# Added (renamed) on 2025-04-07
ignored_paths: []
# whether the project is in read-only mode
# If set to true, all editing tools will be disabled and attempts to use them will result in an error
# Added on 2025-04-18
read_only: false
# list of tool names to exclude. We recommend not excluding any tools, see the readme for more details.
# Below is the complete list of tools for convenience.
# To make sure you have the latest list of tools, and to view their descriptions,
# execute `uv run scripts/print_tool_overview.py`.
#
# * `activate_project`: Activates a project by name.
# * `check_onboarding_performed`: Checks whether project onboarding was already performed.
# * `create_text_file`: Creates/overwrites a file in the project directory.
# * `delete_lines`: Deletes a range of lines within a file.
# * `delete_memory`: Deletes a memory from Serena's project-specific memory store.
# * `execute_shell_command`: Executes a shell command.
# * `find_referencing_code_snippets`: Finds code snippets in which the symbol at the given location is referenced.
# * `find_referencing_symbols`: Finds symbols that reference the symbol at the given location (optionally filtered by type).
# * `find_symbol`: Performs a global (or local) search for symbols with/containing a given name/substring (optionally filtered by type).
# * `get_current_config`: Prints the current configuration of the agent, including the active and available projects, tools, contexts, and modes.
# * `get_symbols_overview`: Gets an overview of the top-level symbols defined in a given file or directory.
# * `initial_instructions`: Gets the initial instructions for the current project.
# Should only be used in settings where the system prompt cannot be set,
# e.g. in clients you have no control over, like Claude Desktop.
# * `insert_after_symbol`: Inserts content after the end of the definition of a given symbol.
# * `insert_at_line`: Inserts content at a given line in a file.
# * `insert_before_symbol`: Inserts content before the beginning of the definition of a given symbol.
# * `list_dir`: Lists files and directories in the given directory (optionally with recursion).
# * `list_memories`: Lists memories in Serena's project-specific memory store.
# * `onboarding`: Performs onboarding (identifying the project structure and essential tasks, e.g. for testing or building).
# * `prepare_for_new_conversation`: Provides instructions for preparing for a new conversation (in order to continue with the necessary context).
# * `read_file`: Reads a file within the project directory.
# * `read_memory`: Reads the memory with the given name from Serena's project-specific memory store.
# * `remove_project`: Removes a project from the Serena configuration.
# * `replace_lines`: Replaces a range of lines within a file with new content.
# * `replace_symbol_body`: Replaces the full definition of a symbol.
# * `restart_language_server`: Restarts the language server, may be necessary when edits not through Serena happen.
# * `search_for_pattern`: Performs a search for a pattern in the project.
# * `summarize_changes`: Provides instructions for summarizing the changes made to the codebase.
# * `switch_modes`: Activates modes by providing a list of their names
# * `think_about_collected_information`: Thinking tool for pondering the completeness of collected information.
# * `think_about_task_adherence`: Thinking tool for determining whether the agent is still on track with the current task.
# * `think_about_whether_you_are_done`: Thinking tool for determining whether the task is truly completed.
# * `write_memory`: Writes a named memory (for future reference) to Serena's project-specific memory store.
excluded_tools: []
# initial prompt for the project. It will always be given to the LLM upon activating the project
# (contrary to the memories, which are loaded on demand).
initial_prompt: ""
project_name: "stock-bot"

.vscode/mcp.json

@@ -1,21 +1,3 @@
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://trading_user:trading_pass_dev@localhost:5432/trading_bot"
      ]
    },
    "mongodb": {
      "command": "npx",
      "args": [
        "-y",
        "mongodb-mcp-server",
        "--connectionString",
        "mongodb://trading_admin:trading_mongo_dev@localhost:27017/stock?authSource=admin"
      ]
    }
  }
}

CLAUDE.md

@@ -1,171 +0,0 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Development Commands
**Package Manager**: Bun (v1.1.0+)
**Build & Development**:
- `bun install` - Install dependencies
- `bun run dev` - Start all services in development mode (uses Turbo)
- `bun run build` - Build all services and libraries
- `bun run build:libs` - Build only shared libraries
- `./scripts/build-all.sh` - Custom build script with options
**Testing**:
- `bun test` - Run all tests
- `bun run test:libs` - Test only shared libraries
- `bun run test:apps` - Test only applications
- `bun run test:coverage` - Run tests with coverage
**Code Quality**:
- `bun run lint` - Lint TypeScript files
- `bun run lint:fix` - Auto-fix linting issues
- `bun run format` - Format code using Prettier
- `./scripts/format.sh` - Format script
**Infrastructure**:
- `bun run infra:up` - Start database infrastructure (PostgreSQL, QuestDB, MongoDB, Dragonfly)
- `bun run infra:down` - Stop infrastructure
- `bun run infra:reset` - Reset infrastructure with clean volumes
- `bun run docker:admin` - Start admin GUIs (pgAdmin, Mongo Express, Redis Insight)
**Database Setup**:
- `bun run db:setup-ib` - Setup Interactive Brokers database schema
- `bun run db:init` - Initialize database schemas
## Architecture Overview
**Microservices Architecture** with shared libraries and multi-database storage:
### Core Services (`apps/`)
- **data-ingestion** - Market data ingestion from multiple providers (Yahoo, QuoteMedia, IB)
- **processing-service** - Data cleaning, validation, and technical indicators
- **strategy-service** - Trading strategies and backtesting (multi-mode: live, event-driven, vectorized, hybrid)
- **execution-service** - Order management and risk controls
- **portfolio-service** - Position tracking and performance analytics
- **web-app** - React dashboard with real-time updates
### Shared Libraries (`libs/`)
- **config** - Environment configuration with Zod validation
- **logger** - Loki-integrated structured logging (use `getLogger()` pattern)
- **http** - HTTP client with proxy support and rate limiting
- **cache** - Redis/Dragonfly caching layer
- **queue** - BullMQ-based job processing with batch support
- **postgres-client** - PostgreSQL operations with transactions
- **questdb-client** - Time-series data storage
- **mongodb-client** - Document storage operations
- **utils** - Financial calculations and technical indicators
### Database Strategy
- **PostgreSQL** - Transactional data (orders, positions, strategies)
- **QuestDB** - Time-series data (OHLCV, indicators, performance metrics)
- **MongoDB** - Document storage (configurations, raw responses)
- **Dragonfly** - Event bus and caching (Redis-compatible)
## Key Patterns & Conventions
**Library Usage**:
- Import from shared libraries: `import { getLogger } from '@stock-bot/logger'`
- Use configuration: `import { databaseConfig } from '@stock-bot/config'`
- Logger pattern: `const logger = getLogger('service-name')`
**Service Structure**:
- Each service has `src/index.ts` as entry point
- Routes in `src/routes/` using Hono framework
- Handlers/services in `src/handlers/` or `src/services/`
- Use dependency injection pattern
**Data Processing**:
- Raw data → QuestDB via handlers
- Processed data → PostgreSQL via processing service
- Event-driven communication via Dragonfly
- Queue-based batch processing for large datasets
**Multi-Mode Backtesting**:
- **Live Mode** - Real-time trading with brokers
- **Event-Driven** - Realistic simulation with market conditions
- **Vectorized** - Fast mathematical backtesting for optimization
- **Hybrid** - Validation by comparing vectorized vs event-driven results
## Development Workflow
1. **Start Infrastructure**: `bun run infra:up`
2. **Build Libraries**: `bun run build:libs`
3. **Start Development**: `bun run dev`
4. **Access UIs**:
- Dashboard: http://localhost:4200
- QuestDB Console: http://localhost:9000
- Grafana: http://localhost:3000
- pgAdmin: http://localhost:8080
## Important Files & Locations
**Configuration**:
- Environment variables in `.env` files
- Service configs in `libs/config/src/`
- Database init scripts in `database/postgres/init/`
**Key Scripts**:
- `scripts/build-all.sh` - Production build with cleanup
- `scripts/docker.sh` - Docker management
- `scripts/format.sh` - Code formatting
- `scripts/setup-mcp.sh` - Setup Model Context Protocol servers for database access
**Documentation**:
- `SIMPLIFIED-ARCHITECTURE.md` - Detailed architecture overview
- `DEVELOPMENT-ROADMAP.md` - Development phases and priorities
- Individual library READMEs in `libs/*/README.md`
## Current Development Phase
**Phase 1: Data Foundation Layer** (In Progress)
- Enhancing data provider reliability and rate limiting
- Implementing data validation and quality metrics
- Optimizing QuestDB storage for time-series data
- Building robust HTTP client with circuit breakers
Focus on data quality and provider fault tolerance before advancing to strategy implementation.
## Testing & Quality
- Use Bun's built-in test runner
- Integration tests with TestContainers for databases
- ESLint for code quality with TypeScript rules
- Prettier for code formatting
- All services should have health check endpoints
## Model Context Protocol (MCP) Setup
**MCP Database Servers** are configured in `.vscode/mcp.json` for direct database access:
- **PostgreSQL MCP Server**: Provides read-only access to PostgreSQL database
- Connection: `postgresql://trading_user:trading_pass_dev@localhost:5432/trading_bot`
- Package: `@modelcontextprotocol/server-postgres`
- **MongoDB MCP Server**: Official MongoDB team server for database and Atlas interaction
- Connection: `mongodb://trading_admin:trading_mongo_dev@localhost:27017/stock?authSource=admin`
- Package: `mongodb-mcp-server` (official MongoDB JavaScript team package)
**Setup Commands**:
- `./scripts/setup-mcp.sh` - Setup and test MCP servers
- `bun run infra:up` - Start database infrastructure (required for MCP)
**Usage**: Once configured, Claude Code can directly query and inspect database schemas and data through natural language commands.
## Environment Variables
Key environment variables (see `.env` example):
- `NODE_ENV` - Environment (development/production)
- `DATA_SERVICE_PORT` - Port for data service
- `DRAGONFLY_HOST/PORT` - Cache/event bus connection
- Database connection strings for PostgreSQL, QuestDB, MongoDB
## Monitoring & Observability
- **Logging**: Structured JSON logs to Loki
- **Metrics**: Prometheus metrics collection
- **Visualization**: Grafana dashboards
- **Queue Monitoring**: Bull Board for job queues
- **Health Checks**: All services expose `/health` endpoints

@@ -1,183 +0,0 @@
# Migration Guide: From Singleton to Connection Pool Pattern
## Overview
This guide explains how to migrate from the singleton anti-pattern to a proper connection pool pattern using the new `@stock-bot/connection-factory` library.
## Current State (Singleton Anti-Pattern)
```typescript
// ❌ Old pattern - global singleton
import { connectMongoDB, getMongoDBClient } from '@stock-bot/mongodb-client';
import { connectPostgreSQL, getPostgreSQLClient } from '@stock-bot/postgres-client';
// Initialize once at startup
await connectMongoDB(config);
await connectPostgreSQL(config);
// Use everywhere
const mongo = getMongoDBClient();
const postgres = getPostgreSQLClient();
```
### Problems with this approach:
- Global state makes testing difficult
- All operations share the same connection pool
- Can't optimize pool sizes for different use cases
- Memory leaks from persistent connections
- Hard to implement graceful shutdown
## New Pattern (Connection Factory + Service Container)
### Step 1: Set up Connection Factory
```typescript
// ✅ New pattern - connection factory
import { setupServiceContainer } from './setup/database-setup';
// Initialize service container at startup
const container = await setupServiceContainer();
// Register cleanup
shutdown.register(async () => {
  await container.dispose();
});
```
### Step 2: Update Handlers to Use Container
```typescript
// ✅ Use OperationContext with container
export class MyHandler {
  constructor(private readonly container: ServiceContainer) {}

  async handleOperation(data: any) {
    const context = OperationContext.create('my-handler', 'operation', {
      container: this.container
    });
    try {
      // Connections are managed by the container
      await context.mongodb.insertOne(data);
      await context.postgres.query('...');
      await context.cache.set('key', 'value');
    } finally {
      // Clean up resources
      await context.dispose();
    }
  }
}
```
### Step 3: Update Route Handlers
```typescript
// Pass container to route handlers
export function createRoutes(container: ServiceContainer) {
  const router = new Hono();
  const handler = new MyHandler(container);

  router.get('/data', async (c) => {
    const result = await handler.handleOperation(c.req.query());
    return c.json(result);
  });

  return router;
}
```
## Migration Checklist
### For Each Service:
1. **Create database setup module**
```typescript
// apps/[service-name]/src/setup/database-setup.ts
export async function setupServiceContainer(): Promise<ServiceContainer> {
  // Configure connection pools based on service needs
}
```
2. **Update main index.ts**
- Remove direct `connectMongoDB()` and `connectPostgreSQL()` calls
- Replace with `setupServiceContainer()`
- Pass container to route handlers and job processors
3. **Update handlers**
- Accept `ServiceContainer` in constructor
- Create `OperationContext` with container
- Remove direct database client imports
- Add `context.dispose()` in finally blocks
4. **Update job handlers**
```typescript
// Before
export async function myJobHandler(job: Job) {
  const mongo = getMongoDBClient();
  // ...
}

// After
export function createMyJobHandler(container: ServiceContainer) {
  return async (job: Job) => {
    const context = OperationContext.create('job', job.name, {
      container
    });
    try {
      // Use context.mongodb, context.postgres, etc.
    } finally {
      await context.dispose();
    }
  };
}
```
## Pool Size Recommendations
The `PoolSizeCalculator` provides optimal pool sizes based on service type:
| Service | Min | Max | Use Case |
|---------|-----|-----|----------|
| data-ingestion | 5 | 50 | High-volume batch imports |
| data-pipeline | 3 | 30 | Data processing pipelines |
| web-api | 2 | 10 | Low-latency API requests |
| processing-service | 2 | 20 | CPU-intensive operations |
| portfolio-service | 2 | 15 | Portfolio calculations |
| strategy-service | 3 | 25 | Strategy backtesting |
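A sketch of how such a lookup might behave (the real `PoolSizeCalculator` API is not shown here; the function name and the conservative default are assumptions):

```typescript
interface PoolSize {
  min: number;
  max: number;
}

// Values mirror the recommendation table above
const POOL_SIZES: Record<string, PoolSize> = {
  'data-ingestion': { min: 5, max: 50 },
  'data-pipeline': { min: 3, max: 30 },
  'web-api': { min: 2, max: 10 },
  'processing-service': { min: 2, max: 20 },
  'portfolio-service': { min: 2, max: 15 },
  'strategy-service': { min: 3, max: 25 },
};

// Unknown services fall back to a small, conservative pool (assumed default)
function poolSizeFor(service: string): PoolSize {
  return POOL_SIZES[service] ?? { min: 2, max: 10 };
}

console.log(poolSizeFor('data-ingestion').max); // 50
```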
## Benefits After Migration
1. **Better Resource Management**
- Each service gets appropriately sized connection pools
- Automatic cleanup with dispose pattern
- No more connection leaks
2. **Improved Testing**
- Easy to mock containers for tests
- No global state to reset between tests
- Can test with different configurations
3. **Enhanced Performance**
- Optimized pool sizes per service
- Isolated pools for heavy operations
- Better connection reuse
4. **Operational Benefits**
- Connection pool metrics per service
- Graceful shutdown handling
- Better error isolation
## Backward Compatibility
The `OperationContext` maintains backward compatibility:
- If no container is provided, it falls back to singleton pattern
- This allows gradual migration service by service
- Warning logs indicate when fallback is used
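The fallback behaviour can be pictured like this (a self-contained sketch; the real `OperationContext` internals may differ):

```typescript
interface Db {
  label: string;
}

// Stand-in for the legacy global client
const legacySingleton: Db = { label: 'singleton' };

function resolveDb(container?: { postgres: Db }): Db {
  if (container) {
    return container.postgres; // preferred: pooled client from the container
  }
  // No container given: warn and fall back to the singleton pattern
  console.warn('OperationContext: no container provided, falling back to singleton');
  return legacySingleton;
}

console.log(resolveDb({ postgres: { label: 'pooled' } }).label); // pooled
console.log(resolveDb().label); // singleton
```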
## Example: Complete Service Migration
See `/apps/data-ingestion/src/handlers/example-handler.ts` for a complete example of:
- Using the service container
- Creating operation contexts
- Handling batch operations with scoped containers
- Proper resource cleanup

@@ -1,10 +1,14 @@
 import { getRandomUserAgent } from '@stock-bot/utils';
 import type { CeoHandler } from '../ceo.handler';
-export async function processIndividualSymbol(this: CeoHandler, payload: any, _context: any): Promise<unknown> {
+export async function processIndividualSymbol(
+  this: CeoHandler,
+  payload: any,
+  _context: any
+): Promise<unknown> {
   const { ceoId, symbol, timestamp } = payload;
   const proxy = this.proxy?.getProxy();
-  if(!proxy) {
+  if (!proxy) {
     this.logger.warn('No proxy available for processing individual CEO symbol');
     return;
   }
@@ -15,15 +19,16 @@ export async function processIndividualSymbol(this: CeoHandler, payload: any, _c
   });
   try {
     // Fetch detailed information for the individual symbol
-    const response = await this.http.get(`https://api.ceo.ca/api/get_spiels?channel=${ceoId}&load_more=top`
-      + (timestamp ? `&until=${timestamp}` : ''),
+    const response = await this.http.get(
+      `https://api.ceo.ca/api/get_spiels?channel=${ceoId}&load_more=top` +
+        (timestamp ? `&until=${timestamp}` : ''),
       {
         proxy: proxy,
         headers: {
-          'User-Agent': getRandomUserAgent()
-        }
-      });
+          'User-Agent': getRandomUserAgent(),
+        },
+      }
+    );
     if (!response.ok) {
       throw new Error(`Failed to fetch details for ceoId ${ceoId}: ${response.statusText}`);
@@ -32,7 +37,7 @@ export async function processIndividualSymbol(this: CeoHandler, payload: any, _c
     const data = await response.json();
     const spielCount = data.spiels.length;
-    if(spielCount === 0) {
+    if (spielCount === 0) {
       this.logger.warn(`No spiels found for ceoId ${ceoId}`);
       return null; // No data to process
     }
@@ -67,44 +72,45 @@ export async function processIndividualSymbol(this: CeoHandler, payload: any, _c
       savedId: spiel.saved_id,
       savedTimestamp: spiel.saved_timestamp,
       poll: spiel.poll,
-      votedInPoll: spiel.voted_in_poll
+      votedInPoll: spiel.voted_in_poll,
     }));
     await this.mongodb.batchUpsert('ceoPosts', posts, ['spielId']);
     this.logger.info(`Fetched ${spielCount} spiels for ceoId ${ceoId}`);
     // Update Shorts
-    const shortRes = await this.http.get(`https://api.ceo.ca/api/short_positions/one?symbol=${symbol}`,
+    const shortRes = await this.http.get(
+      `https://api.ceo.ca/api/short_positions/one?symbol=${symbol}`,
       {
         proxy: proxy,
         headers: {
-          'User-Agent': getRandomUserAgent()
-        }
-      });
+          'User-Agent': getRandomUserAgent(),
+        },
+      }
+    );
     if (shortRes.ok) {
       const shortData = await shortRes.json();
-      if(shortData && shortData.positions) {
+      if (shortData && shortData.positions) {
         await this.mongodb.batchUpsert('ceoShorts', shortData.positions, ['id']);
       }
       await this.scheduleOperation('process-individual-symbol', {
         ceoId: ceoId,
-        timestamp: latestSpielTime
+        timestamp: latestSpielTime,
       });
     }
-    this.logger.info(`Successfully processed channel ${ceoId} and added channel ${ceoId} at timestamp ${latestSpielTime}`);
+    this.logger.info(
+      `Successfully processed channel ${ceoId} and added channel ${ceoId} at timestamp ${latestSpielTime}`
+    );
     return { ceoId, spielCount, timestamp };
   } catch (error) {
     this.logger.error('Failed to process individual symbol', {
       error,
       ceoId,
-      timestamp
+      timestamp,
     });
     throw error;
   }


@@ -1,38 +1,43 @@
import { getRandomUserAgent } from '@stock-bot/utils';
import type { CeoHandler } from '../ceo.handler';
export async function updateCeoChannels(this: CeoHandler, payload: number | undefined): Promise<unknown> {
export async function updateCeoChannels(
this: CeoHandler,
payload: number | undefined
): Promise<unknown> {
const proxy = this.proxy?.getProxy();
if(!proxy) {
if (!proxy) {
this.logger.warn('No proxy available for CEO channels update');
return;
}
let page;
if(payload === undefined) {
page = 1
}else{
if (payload === undefined) {
page = 1;
} else {
page = payload;
}
this.logger.info(`Fetching CEO channels for page ${page} with proxy ${proxy}`);
const response = await this.http.get('https://api.ceo.ca/api/home?exchange=all&sort_by=symbol&sector=All&tab=companies&page='+page, {
proxy: proxy,
headers: {
'User-Agent': getRandomUserAgent()
const response = await this.http.get(
'https://api.ceo.ca/api/home?exchange=all&sort_by=symbol&sector=All&tab=companies&page=' + page,
{
proxy: proxy,
headers: {
'User-Agent': getRandomUserAgent(),
},
}
})
);
const results = await response.json();
const channels = results.channel_categories[0].channels;
const totalChannels = results.channel_categories[0].total_channels;
const totalPages = Math.ceil(totalChannels / channels.length);
const exchanges: {exchange: string, countryCode: string}[] = []
const symbols = channels.map((channel: any) =>{
const exchanges: { exchange: string; countryCode: string }[] = [];
const symbols = channels.map((channel: any) => {
// check if exchange is in the exchanges array object
if(!exchanges.find((e: any) => e.exchange === channel.exchange)) {
if (!exchanges.find((e: any) => e.exchange === channel.exchange)) {
exchanges.push({
exchange: channel.exchange,
countryCode: 'CA'
countryCode: 'CA',
});
}
const details = channel.company_details || {};
@@ -49,16 +54,16 @@ export async function updateCeoChannels(this: CeoHandler, payload: number | unde
issueType: details.issue_type,
sharesOutstanding: details.shares_outstanding,
float: details.float,
}
})
};
});
await this.mongodb.batchUpsert('ceoSymbols', symbols, ['symbol', 'exchange']);
await this.mongodb.batchUpsert('ceoExchanges', exchanges, ['exchange']);
if(page === 1) {
for( let i = 2; i <= totalPages; i++) {
if (page === 1) {
for (let i = 2; i <= totalPages; i++) {
this.logger.info(`Scheduling page ${i} of ${totalPages} for CEO channels`);
await this.scheduleOperation('update-ceo-channels', i)
await this.scheduleOperation('update-ceo-channels', i);
}
}


@@ -1,6 +1,10 @@
import type { CeoHandler } from '../ceo.handler';
export async function updateUniqueSymbols(this: CeoHandler, _payload: unknown, _context: any): Promise<unknown> {
export async function updateUniqueSymbols(
this: CeoHandler,
_payload: unknown,
_context: any
): Promise<unknown> {
this.logger.info('Starting update to get unique CEO symbols by ceoId');
try {
@@ -12,7 +16,8 @@ export async function updateUniqueSymbols(this: CeoHandler, _payload: unknown, _
// Get detailed records for each unique ceoId (latest/first record)
const uniqueSymbols = [];
for (const ceoId of uniqueCeoIds) {
const symbol = await this.mongodb.collection('ceoSymbols')
const symbol = await this.mongodb
.collection('ceoSymbols')
.findOne({ ceoId }, { sort: { _id: -1 } }); // Get latest record
if (symbol) {
@@ -41,21 +46,24 @@ export async function updateUniqueSymbols(this: CeoHandler, _payload: unknown, _
this.logger.info(`Successfully scheduled ${scheduledJobs} individual symbol update jobs`);
// Cache the results for monitoring
await this.cacheSet('unique-symbols-last-run', {
timestamp: new Date().toISOString(),
totalUniqueIds: uniqueCeoIds.length,
totalRecords: uniqueSymbols.length,
scheduledJobs
}, 1800); // Cache for 30 minutes
await this.cacheSet(
'unique-symbols-last-run',
{
timestamp: new Date().toISOString(),
totalUniqueIds: uniqueCeoIds.length,
totalRecords: uniqueSymbols.length,
scheduledJobs,
},
1800
); // Cache for 30 minutes
return {
success: true,
uniqueCeoIds: uniqueCeoIds.length,
uniqueRecords: uniqueSymbols.length,
scheduledJobs,
timestamp: new Date().toISOString()
timestamp: new Date().toISOString(),
};
} catch (error) {
this.logger.error('Failed to update unique CEO symbols', { error });
throw error;


@@ -3,13 +3,9 @@ import {
Handler,
Operation,
ScheduledOperation,
type IServiceContainer
type IServiceContainer,
} from '@stock-bot/handlers';
import {
processIndividualSymbol,
updateCeoChannels,
updateUniqueSymbols
} from './actions';
import { processIndividualSymbol, updateCeoChannels, updateUniqueSymbols } from './actions';
@Handler('ceo')
// @Disabled()
@@ -21,7 +17,7 @@ export class CeoHandler extends BaseHandler {
@ScheduledOperation('update-ceo-channels', '0 */15 * * *', {
priority: 7,
immediately: false,
description: 'Get all CEO symbols and exchanges'
description: 'Get all CEO symbols and exchanges',
})
updateCeoChannels = updateCeoChannels;
@@ -29,7 +25,7 @@ export class CeoHandler extends BaseHandler {
@ScheduledOperation('process-unique-symbols', '0 0 1 * *', {
priority: 5,
immediately: false,
description: 'Process unique CEO symbols and schedule individual jobs'
description: 'Process unique CEO symbols and schedule individual jobs',
})
updateUniqueSymbols = updateUniqueSymbols;


@@ -12,12 +12,9 @@ export class ExampleHandler {
*/
async performOperation(data: any): Promise<void> {
// Create operation context with container
const context = new OperationContext(
'example-handler',
'perform-operation',
this.container,
{ data }
);
const context = new OperationContext('example-handler', 'perform-operation', this.container, {
data,
});
try {
// Log operation start
@@ -30,16 +27,16 @@ export class ExampleHandler {
// Use PostgreSQL through service resolution
const postgres = context.resolve<any>('postgres');
await postgres.query(
'INSERT INTO operations (id, status) VALUES ($1, $2)',
[result.insertedId, 'completed']
);
await postgres.query('INSERT INTO operations (id, status) VALUES ($1, $2)', [
result.insertedId,
'completed',
]);
// Use cache through service resolution
const cache = context.resolve<any>('cache');
await cache.set(`operation:${result.insertedId}`, {
status: 'completed',
timestamp: new Date()
timestamp: new Date(),
});
context.logger.info('Operation completed successfully');
@@ -56,12 +53,9 @@ export class ExampleHandler {
// Create a scoped container for this batch operation
const scopedContainer = this.container.createScope();
const context = new OperationContext(
'example-handler',
'batch-operation',
scopedContainer,
{ itemCount: items.length }
);
const context = new OperationContext('example-handler', 'batch-operation', scopedContainer, {
itemCount: items.length,
});
try {
context.logger.info('Starting batch operation', { itemCount: items.length });
@@ -90,7 +84,6 @@ export class ExampleHandler {
await Promise.all(promises);
context.logger.info('Batch operation completed');
} finally {
// Clean up scoped resources
await scopedContainer.dispose();


@@ -9,7 +9,7 @@ import {
Operation,
ScheduledOperation,
type ExecutionContext,
type IServiceContainer
type IServiceContainer,
} from '@stock-bot/handlers';
@Handler('example')
@@ -42,13 +42,13 @@ export class ExampleHandler extends BaseHandler {
*/
@ScheduledOperation('cleanup-old-items', '0 2 * * *', {
priority: 5,
description: 'Clean up items older than 30 days'
description: 'Clean up items older than 30 days',
})
async cleanupOldItems(): Promise<{ deleted: number }> {
const thirtyDaysAgo = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000);
const result = await this.collection('items').deleteMany({
createdAt: { $lt: thirtyDaysAgo }
createdAt: { $lt: thirtyDaysAgo },
});
this.log('info', 'Cleanup completed', { deleted: result.deletedCount });
@@ -73,7 +73,7 @@ export class ExampleHandler extends BaseHandler {
// Use HTTP client with proxy
const response = await this.http.get(input.url, {
proxy: proxyUrl,
timeout: 10000
timeout: 10000,
});
// Cache the result


@@ -0,0 +1,38 @@
import type { IbHandler } from '../ib.handler';
export async function fetchExchangesAndSymbols(this: IbHandler): Promise<unknown> {
this.logger.info('Starting IB exchanges and symbols fetch job');
try {
// Fetch session headers first
const sessionHeaders = await this.fetchSession();
if (!sessionHeaders) {
this.logger.error('Failed to get session headers for IB job');
return { success: false, error: 'No session headers' };
}
this.logger.info('Session headers obtained, fetching exchanges...');
// Fetch exchanges
const exchanges = await this.fetchExchanges();
this.logger.info('Fetched exchanges from IB', { count: exchanges?.length || 0 });
// Fetch symbols
this.logger.info('Fetching symbols...');
const symbols = await this.fetchSymbols();
this.logger.info('Fetched symbols from IB', { count: symbols?.length || 0 });
return {
success: true,
exchangesCount: exchanges?.length || 0,
symbolsCount: symbols?.length || 0,
};
} catch (error) {
this.logger.error('Failed to fetch IB exchanges and symbols', { error });
return {
success: false,
error: error instanceof Error ? error.message : 'Unknown error',
};
}
}


@@ -1,16 +1,15 @@
/**
* IB Exchanges Operations - Fetching exchange data from IB API
*/
import { OperationContext } from '@stock-bot/di';
import type { ServiceContainer } from '@stock-bot/di';
import type { IbHandler } from '../ib.handler';
import { IB_CONFIG } from '../shared/config';
export async function fetchExchanges(sessionHeaders: Record<string, string>, container: ServiceContainer): Promise<unknown[] | null> {
const ctx = OperationContext.create('ib', 'exchanges', { container });
export async function fetchExchanges(this: IbHandler): Promise<unknown[] | null> {
try {
ctx.logger.info('🔍 Fetching exchanges with session headers...');
// First get session headers
const sessionHeaders = await this.fetchSession();
if (!sessionHeaders) {
throw new Error('Failed to get session headers');
}
this.logger.info('🔍 Fetching exchanges with session headers...');
// The URL for the exchange data API
const exchangeUrl = IB_CONFIG.BASE_URL + IB_CONFIG.EXCHANGE_API;
@@ -28,7 +27,7 @@ export async function fetchExchanges(sessionHeaders: Record<string, string>, con
'X-Requested-With': 'XMLHttpRequest',
};
ctx.logger.info('📤 Making request to exchange API...', {
this.logger.info('📤 Making request to exchange API...', {
url: exchangeUrl,
headerCount: Object.keys(requestHeaders).length,
});
@@ -41,7 +40,7 @@ export async function fetchExchanges(sessionHeaders: Record<string, string>, con
});
if (!response.ok) {
ctx.logger.error('❌ Exchange API request failed', {
this.logger.error('❌ Exchange API request failed', {
status: response.status,
statusText: response.statusText,
});
@@ -50,19 +49,18 @@ export async function fetchExchanges(sessionHeaders: Record<string, string>, con
const data = await response.json();
const exchanges = data?.exchanges || [];
ctx.logger.info('✅ Exchange data fetched successfully');
this.logger.info('✅ Exchange data fetched successfully');
ctx.logger.info('Saving IB exchanges to MongoDB...');
await ctx.mongodb.batchUpsert('ibExchanges', exchanges, ['id', 'country_code']);
ctx.logger.info('✅ Exchange IB data saved to MongoDB:', {
this.logger.info('Saving IB exchanges to MongoDB...');
await this.mongodb.batchUpsert('ibExchanges', exchanges, ['id', 'country_code']);
this.logger.info('✅ Exchange IB data saved to MongoDB:', {
count: exchanges.length,
});
return exchanges;
} catch (error) {
ctx.logger.error('❌ Failed to fetch exchanges', { error });
this.logger.error('❌ Failed to fetch exchanges', { error });
return null;
} finally {
await ctx.dispose();
}
}


@@ -0,0 +1,83 @@
import { Browser } from '@stock-bot/browser';
import type { IbHandler } from '../ib.handler';
import { IB_CONFIG } from '../shared/config';
export async function fetchSession(this: IbHandler): Promise<Record<string, string> | undefined> {
try {
await Browser.initialize({
headless: true,
timeout: IB_CONFIG.BROWSER_TIMEOUT,
blockResources: false,
});
this.logger.info('✅ Browser initialized');
const { page } = await Browser.createPageWithProxy(
IB_CONFIG.BASE_URL + IB_CONFIG.PRODUCTS_PAGE,
IB_CONFIG.DEFAULT_PROXY
);
this.logger.info('✅ Page created with proxy');
const headersPromise = new Promise<Record<string, string> | undefined>(resolve => {
let resolved = false;
page.onNetworkEvent(event => {
if (event.url.includes('/webrest/search/product-types/summary')) {
if (event.type === 'request') {
try {
resolve(event.headers);
} catch (e) {
resolve(undefined);
this.logger.debug('Raw Summary Response error', { error: (e as Error).message });
}
}
}
});
// Timeout fallback
setTimeout(() => {
if (!resolved) {
resolved = true;
this.logger.warn('Timeout waiting for headers');
resolve(undefined);
}
}, IB_CONFIG.HEADERS_TIMEOUT);
});
this.logger.info('⏳ Waiting for page load...');
await page.waitForLoadState('domcontentloaded', { timeout: IB_CONFIG.PAGE_LOAD_TIMEOUT });
this.logger.info('✅ Page loaded');
//Products tabs
this.logger.info('🔍 Looking for Products tab...');
const productsTab = page.locator('#productSearchTab[role="tab"][href="#products"]');
await productsTab.waitFor({ timeout: IB_CONFIG.ELEMENT_TIMEOUT });
this.logger.info('✅ Found Products tab');
this.logger.info('🖱️ Clicking Products tab...');
await productsTab.click();
this.logger.info('✅ Products tab clicked');
// New Products Checkbox
this.logger.info('🔍 Looking for "New Products Only" radio button...');
const radioButton = page.locator('span.checkbox-text:has-text("New Products Only")');
await radioButton.waitFor({ timeout: IB_CONFIG.ELEMENT_TIMEOUT });
this.logger.info(`🎯 Found "New Products Only" radio button`);
await radioButton.first().click();
this.logger.info('✅ "New Products Only" radio button clicked');
// Wait for and return headers immediately when captured
this.logger.info('⏳ Waiting for headers to be captured...');
const headers = await headersPromise;
page.close();
if (headers) {
this.logger.info('✅ Headers captured successfully');
} else {
this.logger.warn('⚠️ No headers were captured');
}
return headers;
} catch (error) {
this.logger.error('Failed to fetch IB symbol summary', { error });
return;
}
}


@@ -1,17 +1,15 @@
/**
* IB Symbols Operations - Fetching symbol data from IB API
*/
import { OperationContext } from '@stock-bot/di';
import type { ServiceContainer } from '@stock-bot/di';
import type { IbHandler } from '../ib.handler';
import { IB_CONFIG } from '../shared/config';
// Fetch symbols from IB using the session headers
export async function fetchSymbols(sessionHeaders: Record<string, string>, container: ServiceContainer): Promise<unknown[] | null> {
const ctx = OperationContext.create('ib', 'symbols', { container });
export async function fetchSymbols(this: IbHandler): Promise<unknown[] | null> {
try {
ctx.logger.info('🔍 Fetching symbols with session headers...');
// First get session headers
const sessionHeaders = await this.fetchSession();
if (!sessionHeaders) {
throw new Error('Failed to get session headers');
}
this.logger.info('🔍 Fetching symbols with session headers...');
// Prepare headers - include all session headers plus any additional ones
const requestHeaders = {
@@ -39,18 +37,15 @@ export async function fetchSymbols(sessionHeaders: Record<string, string>, conta
};
// Get Summary
const summaryResponse = await fetch(
IB_CONFIG.BASE_URL + IB_CONFIG.SUMMARY_API,
{
method: 'POST',
headers: requestHeaders,
proxy: IB_CONFIG.DEFAULT_PROXY,
body: JSON.stringify(requestBody),
}
);
const summaryResponse = await fetch(IB_CONFIG.BASE_URL + IB_CONFIG.SUMMARY_API, {
method: 'POST',
headers: requestHeaders,
proxy: IB_CONFIG.DEFAULT_PROXY,
body: JSON.stringify(requestBody),
});
if (!summaryResponse.ok) {
ctx.logger.error('❌ Summary API request failed', {
this.logger.error('❌ Summary API request failed', {
status: summaryResponse.status,
statusText: summaryResponse.statusText,
});
@@ -58,36 +53,33 @@ export async function fetchSymbols(sessionHeaders: Record<string, string>, conta
}
const summaryData = await summaryResponse.json();
ctx.logger.info('✅ IB Summary data fetched successfully', {
this.logger.info('✅ IB Summary data fetched successfully', {
totalCount: summaryData[0].totalCount,
});
const symbols = [];
requestBody.pageSize = IB_CONFIG.PAGE_SIZE;
const pageCount = Math.ceil(summaryData[0].totalCount / IB_CONFIG.PAGE_SIZE) || 0;
ctx.logger.info('Fetching Symbols for IB', { pageCount });
this.logger.info('Fetching Symbols for IB', { pageCount });
const symbolPromises = [];
for (let page = 1; page <= pageCount; page++) {
requestBody.pageNumber = page;
// Fetch symbols for the current page
const symbolsResponse = fetch(
IB_CONFIG.BASE_URL + IB_CONFIG.PRODUCTS_API,
{
method: 'POST',
headers: requestHeaders,
proxy: IB_CONFIG.DEFAULT_PROXY,
body: JSON.stringify(requestBody),
}
);
const symbolsResponse = fetch(IB_CONFIG.BASE_URL + IB_CONFIG.PRODUCTS_API, {
method: 'POST',
headers: requestHeaders,
proxy: IB_CONFIG.DEFAULT_PROXY,
body: JSON.stringify(requestBody),
});
symbolPromises.push(symbolsResponse);
}
const responses = await Promise.all(symbolPromises);
for (const response of responses) {
if (!response.ok) {
ctx.logger.error('❌ Symbols API request failed', {
this.logger.error('❌ Symbols API request failed', {
status: response.status,
statusText: response.statusText,
});
@@ -98,29 +90,28 @@ export async function fetchSymbols(sessionHeaders: Record<string, string>, conta
if (symJson && symJson.length > 0) {
symbols.push(...symJson);
} else {
ctx.logger.warn('⚠️ No symbols found in response');
this.logger.warn('⚠️ No symbols found in response');
continue;
}
}
if (symbols.length === 0) {
ctx.logger.warn('⚠️ No symbols fetched from IB');
this.logger.warn('⚠️ No symbols fetched from IB');
return null;
}
ctx.logger.info('✅ IB symbols fetched successfully, saving to DB...', {
this.logger.info('✅ IB symbols fetched successfully, saving to DB...', {
totalSymbols: symbols.length,
});
await ctx.mongodb.batchUpsert('ib_symbols', symbols, ['symbol', 'exchangeId']);
ctx.logger.info('Saved IB symbols to DB', {
await this.mongodb.batchUpsert('ib_symbols', symbols, ['symbol', 'exchangeId']);
this.logger.info('Saved IB symbols to DB', {
totalSymbols: symbols.length,
});
return symbols;
} catch (error) {
ctx.logger.error('❌ Failed to fetch symbols', { error });
this.logger.error('❌ Failed to fetch symbols', { error });
return null;
} finally {
await ctx.dispose();
}
}


@@ -0,0 +1,5 @@
export { fetchSession } from './fetch-session.action';
export { fetchExchanges } from './fetch-exchanges.action';
export { fetchSymbols } from './fetch-symbols.action';
export { fetchExchangesAndSymbols } from './fetch-exchanges-and-symbols.action';


@@ -1,90 +1,33 @@
/**
* Interactive Brokers Provider for new queue system
*/
import { getLogger } from '@stock-bot/logger';
import {
createJobHandler,
handlerRegistry,
type HandlerConfigWithSchedule,
} from '@stock-bot/queue';
import type { ServiceContainer } from '@stock-bot/di';
BaseHandler,
Handler,
Operation,
ScheduledOperation,
type IServiceContainer,
} from '@stock-bot/handlers';
import { fetchExchanges, fetchExchangesAndSymbols, fetchSession, fetchSymbols } from './actions';
const logger = getLogger('ib-provider');
@Handler('ib')
export class IbHandler extends BaseHandler {
constructor(services: IServiceContainer) {
super(services);
}
// Initialize and register the IB provider
export function initializeIBProvider(container: ServiceContainer) {
logger.debug('Registering IB provider with scheduled jobs...');
@Operation('fetch-session')
fetchSession = fetchSession;
const ibProviderConfig: HandlerConfigWithSchedule = {
name: 'ib',
operations: {
'fetch-session': createJobHandler(async () => {
// payload contains session configuration (not used in current implementation)
logger.debug('Processing session fetch request');
const { fetchSession } = await import('./operations/session.operations');
return fetchSession(container);
}),
@Operation('fetch-exchanges')
fetchExchanges = fetchExchanges;
'fetch-exchanges': createJobHandler(async () => {
// payload should contain session headers
logger.debug('Processing exchanges fetch request');
const { fetchSession } = await import('./operations/session.operations');
const { fetchExchanges } = await import('./operations/exchanges.operations');
const sessionHeaders = await fetchSession(container);
if (sessionHeaders) {
return fetchExchanges(sessionHeaders, container);
}
throw new Error('Failed to get session headers');
}),
@Operation('fetch-symbols')
fetchSymbols = fetchSymbols;
'fetch-symbols': createJobHandler(async () => {
// payload should contain session headers
logger.debug('Processing symbols fetch request');
const { fetchSession } = await import('./operations/session.operations');
const { fetchSymbols } = await import('./operations/symbols.operations');
const sessionHeaders = await fetchSession(container);
if (sessionHeaders) {
return fetchSymbols(sessionHeaders, container);
}
throw new Error('Failed to get session headers');
}),
'ib-exchanges-and-symbols': createJobHandler(async () => {
// Legacy operation for scheduled jobs
logger.info('Fetching symbol summary from IB');
const { fetchSession } = await import('./operations/session.operations');
const { fetchExchanges } = await import('./operations/exchanges.operations');
const { fetchSymbols } = await import('./operations/symbols.operations');
const sessionHeaders = await fetchSession(container);
logger.info('Fetched symbol summary from IB');
if (sessionHeaders) {
logger.debug('Fetching exchanges from IB');
const exchanges = await fetchExchanges(sessionHeaders, container);
logger.info('Fetched exchanges from IB', { count: exchanges?.length });
logger.debug('Fetching symbols from IB');
const symbols = await fetchSymbols(sessionHeaders, container);
logger.info('Fetched symbols from IB', { symbols });
return { exchangesCount: exchanges?.length, symbolsCount: symbols?.length };
}
return null;
}),
},
scheduledJobs: [
{
type: 'ib-exchanges-and-symbols',
operation: 'ib-exchanges-and-symbols',
cronPattern: '0 0 * * 0', // Every Sunday at midnight
priority: 5,
description: 'Fetch and update IB exchanges and symbols data',
// immediately: true, // Don't run immediately during startup to avoid conflicts
},
],
};
handlerRegistry.registerWithSchedule(ibProviderConfig);
logger.debug('IB provider registered successfully with scheduled jobs');
@Operation('ib-exchanges-and-symbols')
@ScheduledOperation('ib-exchanges-and-symbols', '0 0 * * 0', {
priority: 5,
description: 'Fetch and update IB exchanges and symbols data',
immediately: false,
})
fetchExchangesAndSymbols = fetchExchangesAndSymbols;
}


@@ -1,91 +0,0 @@
/**
* IB Session Operations - Browser automation for session headers
*/
import { Browser } from '@stock-bot/browser';
import { OperationContext } from '@stock-bot/di';
import type { ServiceContainer } from '@stock-bot/di';
import { IB_CONFIG } from '../shared/config';
export async function fetchSession(container: ServiceContainer): Promise<Record<string, string> | undefined> {
const ctx = OperationContext.create('ib', 'session', { container });
try {
await Browser.initialize({
headless: true,
timeout: IB_CONFIG.BROWSER_TIMEOUT,
blockResources: false
});
ctx.logger.info('✅ Browser initialized');
const { page } = await Browser.createPageWithProxy(
IB_CONFIG.BASE_URL + IB_CONFIG.PRODUCTS_PAGE,
IB_CONFIG.DEFAULT_PROXY
);
ctx.logger.info('✅ Page created with proxy');
const headersPromise = new Promise<Record<string, string> | undefined>(resolve => {
let resolved = false;
page.onNetworkEvent(event => {
if (event.url.includes('/webrest/search/product-types/summary')) {
if (event.type === 'request') {
try {
resolve(event.headers);
} catch (e) {
resolve(undefined);
ctx.logger.debug('Raw Summary Response error', { error: (e as Error).message });
}
}
}
});
// Timeout fallback
setTimeout(() => {
if (!resolved) {
resolved = true;
ctx.logger.warn('Timeout waiting for headers');
resolve(undefined);
}
}, IB_CONFIG.HEADERS_TIMEOUT);
});
ctx.logger.info('⏳ Waiting for page load...');
await page.waitForLoadState('domcontentloaded', { timeout: IB_CONFIG.PAGE_LOAD_TIMEOUT });
ctx.logger.info('✅ Page loaded');
//Products tabs
ctx.logger.info('🔍 Looking for Products tab...');
const productsTab = page.locator('#productSearchTab[role=\"tab\"][href=\"#products\"]');
await productsTab.waitFor({ timeout: IB_CONFIG.ELEMENT_TIMEOUT });
ctx.logger.info('✅ Found Products tab');
ctx.logger.info('🖱️ Clicking Products tab...');
await productsTab.click();
ctx.logger.info('✅ Products tab clicked');
// New Products Checkbox
ctx.logger.info('🔍 Looking for \"New Products Only\" radio button...');
const radioButton = page.locator('span.checkbox-text:has-text(\"New Products Only\")');
await radioButton.waitFor({ timeout: IB_CONFIG.ELEMENT_TIMEOUT });
ctx.logger.info(`🎯 Found \"New Products Only\" radio button`);
await radioButton.first().click();
ctx.logger.info('✅ \"New Products Only\" radio button clicked');
// Wait for and return headers immediately when captured
ctx.logger.info('⏳ Waiting for headers to be captured...');
const headers = await headersPromise;
page.close();
if (headers) {
ctx.logger.info('✅ Headers captured successfully');
} else {
ctx.logger.warn('⚠️ No headers were captured');
}
return headers;
} catch (error) {
ctx.logger.error('Failed to fetch IB symbol summary', { error });
return;
} finally {
await ctx.dispose();
}
}


@@ -21,3 +21,4 @@ export const IB_CONFIG = {
PRODUCT_COUNTRIES: ['CA', 'US'],
PRODUCT_TYPES: ['STK'],
};


@@ -6,11 +6,12 @@
import type { IServiceContainer } from '@stock-bot/handlers';
import { autoRegisterHandlers } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
// Import handlers for bundling (ensures they're included in the build)
import './qm/qm.handler';
import './webshare/webshare.handler';
import './ceo/ceo.handler';
import './ib/ib.handler';
// Add more handler imports as needed
const logger = getLogger('handler-init');
@@ -21,19 +22,15 @@ const logger = getLogger('handler-init');
export async function initializeAllHandlers(serviceContainer: IServiceContainer): Promise<void> {
try {
// Auto-register all handlers in this directory
const result = await autoRegisterHandlers(
__dirname,
serviceContainer,
{
pattern: '.handler.',
exclude: ['test', 'spec'],
dryRun: false
}
);
const result = await autoRegisterHandlers(__dirname, serviceContainer, {
pattern: '.handler.',
exclude: ['test', 'spec'],
dryRun: false,
});
logger.info('Handler auto-registration complete', {
registered: result.registered,
failed: result.failed
failed: result.failed,
});
if (result.failed.length > 0) {
@@ -62,7 +59,6 @@ async function manualHandlerRegistration(serviceContainer: any): Promise<void> {
// const webShareHandler = new WebShareHandler(serviceContainer);
// webShareHandler.register();
logger.info('Manual handler registration complete');
} catch (error) {
logger.error('Manual handler registration failed', { error });


@@ -1,11 +1,10 @@
/**
* Proxy Check Operations - Checking proxy functionality
*/
import type { ProxyInfo } from '@stock-bot/proxy';
import { OperationContext } from '@stock-bot/di';
import { getLogger } from '@stock-bot/logger';
import type { ProxyInfo } from '@stock-bot/proxy';
import { fetch } from '@stock-bot/utils';
import { PROXY_CONFIG } from '../shared/config';
/**
@@ -16,7 +15,7 @@ export async function checkProxy(proxy: ProxyInfo): Promise<ProxyInfo> {
logger: getLogger('proxy-check'),
resolve: <T>(_name: string) => {
throw new Error(`Service container not available for proxy operations`);
}
},
} as any;
let success = false;
@@ -28,14 +27,15 @@ export async function checkProxy(proxy: ProxyInfo): Promise<ProxyInfo> {
try {
// Test the proxy using fetch with proxy support
const proxyUrl = proxy.username && proxy.password
? `${proxy.protocol}://${encodeURIComponent(proxy.username)}:${encodeURIComponent(proxy.password)}@${proxy.host}:${proxy.port}`
: `${proxy.protocol}://${proxy.host}:${proxy.port}`;
const proxyUrl =
proxy.username && proxy.password
? `${proxy.protocol}://${encodeURIComponent(proxy.username)}:${encodeURIComponent(proxy.password)}@${proxy.host}:${proxy.port}`
: `${proxy.protocol}://${proxy.host}:${proxy.port}`;
const response = await fetch(PROXY_CONFIG.CHECK_URL, {
proxy: proxyUrl,
signal: AbortSignal.timeout(PROXY_CONFIG.CHECK_TIMEOUT),
logger: ctx.logger
logger: ctx.logger,
} as any);
const data = await response.text();
@@ -94,7 +94,11 @@ export async function checkProxy(proxy: ProxyInfo): Promise<ProxyInfo> {
/**
* Update proxy data in cache with working/total stats and average response time
*/
async function updateProxyInCache(proxy: ProxyInfo, isWorking: boolean, ctx: OperationContext): Promise<void> {
async function updateProxyInCache(
proxy: ProxyInfo,
isWorking: boolean,
ctx: OperationContext
): Promise<void> {
const _cacheKey = `${PROXY_CONFIG.CACHE_KEY}:${proxy.protocol}://${proxy.host}:${proxy.port}`;
try {


@@ -1,9 +1,8 @@
/**
* Proxy Query Operations - Getting active proxies from cache
*/
import type { ProxyInfo } from '@stock-bot/proxy';
import { OperationContext } from '@stock-bot/di';
import type { ProxyInfo } from '@stock-bot/proxy';
import { PROXY_CONFIG } from '../shared/config';
/**
@@ -56,7 +55,10 @@ export async function getRandomActiveProxy(
return proxyData;
}
} catch (error) {
ctx.logger.debug('Error reading proxy from cache', { key, error: (error as Error).message });
ctx.logger.debug('Error reading proxy from cache', {
key,
error: (error as Error).message,
});
continue;
}
}

View file

@@ -1,9 +1,9 @@
/**
* Proxy Queue Operations - Queueing proxy operations
*/
import { OperationContext } from '@stock-bot/di';
import type { ProxyInfo } from '@stock-bot/proxy';
import { QueueManager } from '@stock-bot/queue';
import { OperationContext } from '@stock-bot/di';
export async function queueProxyFetch(): Promise<string> {
const ctx = OperationContext.create('proxy', 'queue-fetch');


@@ -1,10 +1,14 @@
/**
* Proxy Provider for new queue system
*/
import type { ProxyInfo } from '@stock-bot/proxy';
import { getLogger } from '@stock-bot/logger';
import { handlerRegistry, createJobHandler, type HandlerConfigWithSchedule } from '@stock-bot/queue';
import type { ServiceContainer } from '@stock-bot/di';
import { getLogger } from '@stock-bot/logger';
import type { ProxyInfo } from '@stock-bot/proxy';
import {
createJobHandler,
handlerRegistry,
type HandlerConfigWithSchedule,
} from '@stock-bot/queue';
const handlerLogger = getLogger('proxy-handler');

View file

@@ -6,16 +6,14 @@ import type { IServiceContainer } from '@stock-bot/handlers';
export async function fetchExchanges(services: IServiceContainer): Promise<any[]> {
// Get exchanges from MongoDB
const exchanges = await services.mongodb.collection('qm_exchanges')
.find({}).toArray();
const exchanges = await services.mongodb.collection('qm_exchanges').find({}).toArray();
return exchanges;
}
export async function getExchangeByCode(services: IServiceContainer, code: string): Promise<any> {
// Get specific exchange by code
const exchange = await services.mongodb.collection('qm_exchanges')
.findOne({ code });
const exchange = await services.mongodb.collection('qm_exchanges').findOne({ code });
return exchange;
}

View file

@@ -24,7 +24,7 @@ export async function checkSessions(handler: BaseHandler): Promise<{
const currentCount = sessionManager.getSessions(sessionId).length;
const neededSessions = SESSION_CONFIG.MAX_SESSIONS - currentCount;
for (let i = 0; i < neededSessions; i++) {
await handler.scheduleOperation('create-session', { sessionId , sessionType });
await handler.scheduleOperation('create-session', { sessionId, sessionType });
handler.logger.info(`Queued job to create session for ${sessionType}`);
queuedCount++;
}
@@ -34,7 +34,7 @@ export async function checkSessions(handler: BaseHandler): Promise<{
return {
cleaned: cleanedCount,
queued: queuedCount,
message: `Session check completed: cleaned ${cleanedCount}, queued ${queuedCount}`
message: `Session check completed: cleaned ${cleanedCount}, queued ${queuedCount}`,
};
}
@@ -45,7 +45,6 @@ export async function createSingleSession(
handler: BaseHandler,
input: any
): Promise<{ sessionId: string; status: string; sessionType: string }> {
const { sessionId, sessionType } = input || {};
const sessionManager = QMSessionManager.getInstance();
@@ -60,7 +59,7 @@ export async function createSingleSession(
// lastUsed: new Date()
// };
handler.logger.info(`Creating session for ${sessionType}`)
handler.logger.info(`Creating session for ${sessionType}`);
// Add session to manager
// sessionManager.addSession(sessionType, session);
@@ -68,7 +67,6 @@ export async function createSingleSession(
return {
sessionId: sessionType,
status: 'created',
sessionType
sessionType,
};
}

View file

@@ -9,7 +9,6 @@ export async function spiderSymbolSearch(
services: IServiceContainer,
config: SymbolSpiderJob
): Promise<{ foundSymbols: number; depth: number }> {
// Simple spider implementation
// TODO: Implement actual API calls to discover symbols
@@ -18,7 +17,7 @@ export async function spiderSymbolSearch(
return {
foundSymbols,
depth: config.depth
depth: config.depth,
};
}

View file

@@ -6,16 +6,14 @@ import type { IServiceContainer } from '@stock-bot/handlers';
export async function searchSymbols(services: IServiceContainer): Promise<any[]> {
// Get symbols from MongoDB
const symbols = await services.mongodb.collection('qm_symbols')
.find({}).limit(50).toArray();
const symbols = await services.mongodb.collection('qm_symbols').find({}).limit(50).toArray();
return symbols;
}
export async function fetchSymbolData(services: IServiceContainer, symbol: string): Promise<any> {
// Fetch data for a specific symbol
const symbolData = await services.mongodb.collection('qm_symbols')
.findOne({ symbol });
const symbolData = await services.mongodb.collection('qm_symbols').findOne({ symbol });
return symbolData;
}

View file

@@ -1,8 +1,4 @@
import {
BaseHandler,
Handler,
type IServiceContainer
} from '@stock-bot/handlers';
import { BaseHandler, Handler, type IServiceContainer } from '@stock-bot/handlers';
@Handler('qm')
export class QMHandler extends BaseHandler {
@@ -104,5 +100,4 @@ export class QMHandler extends BaseHandler {
// spiderJobId
// };
// }
}

View file

@@ -2,11 +2,10 @@
* Shared configuration for QM operations
*/
// QM Session IDs for different endpoints
export const QM_SESSION_IDS = {
LOOKUP: 'dc8c9930437f65d30f6597768800957017bac203a0a50342932757c8dfa158d6', // lookup endpoint
// '5ad521e05faf5778d567f6d0012ec34d6cdbaeb2462f41568f66558bc7b4ced9': [], //4488d072b
// cc1cbdaf040f76db8f4c94f7d156b9b9b716e1a7509ec9c74a48a47f6b6b9f87: [], //97ff00cf3 // getQuotes
// '74963ff42f1db2320d051762b5d3950ff9eab23f9d5c5b592551b4ca0441d086': [], //32ca24e394b // getSplitsBySymbol getBrokerRatingsBySymbol getDividendsBySymbol getEarningsSurprisesBySymbol getEarningsEventsBySymbol
// '1e1d7cb1de1fd2fe52684abdea41a446919a5fe12776dfab88615ac1ce1ec2f6': [], //fb5721812d2c // getEnhancedQuotes getProfiles

View file

@@ -35,7 +35,9 @@ export class QMSessionManager {
}
// Filter out sessions with excessive failures
const validSessions = sessions.filter(session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS);
const validSessions = sessions.filter(
session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS
);
if (validSessions.length === 0) {
return null;
}
@@ -100,7 +102,9 @@ export class QMSessionManager {
*/
needsMoreSessions(sessionId: string): boolean {
const sessions = this.sessionCache[sessionId] || [];
const validSessions = sessions.filter(session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS);
const validSessions = sessions.filter(
session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS
);
return validSessions.length < SESSION_CONFIG.MIN_SESSIONS;
}
@@ -119,13 +123,17 @@ export class QMSessionManager {
const stats: Record<string, { total: number; valid: number; failed: number }> = {};
Object.entries(this.sessionCache).forEach(([sessionId, sessions]) => {
const validSessions = sessions.filter(session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS);
const failedSessions = sessions.filter(session => session.failedCalls > SESSION_CONFIG.MAX_FAILED_CALLS);
const validSessions = sessions.filter(
session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS
);
const failedSessions = sessions.filter(
session => session.failedCalls > SESSION_CONFIG.MAX_FAILED_CALLS
);
stats[sessionId] = {
total: sessions.length,
valid: validSessions.length,
failed: failedSessions.length
failed: failedSessions.length,
};
});

View file

@@ -1,9 +1,8 @@
/**
* WebShare Fetch Operations - API integration
*/
import type { ProxyInfo } from '@stock-bot/proxy';
import { OperationContext } from '@stock-bot/di';
import type { ProxyInfo } from '@stock-bot/proxy';
import { WEBSHARE_CONFIG } from '../shared/config';
/**
@@ -30,14 +29,17 @@ export async function fetchWebShareProxies(): Promise<ProxyInfo[]> {
ctx.logger.info('Fetching proxies from WebShare API', { apiUrl });
const response = await fetch(`${apiUrl}proxy/list/?mode=${WEBSHARE_CONFIG.DEFAULT_MODE}&page=${WEBSHARE_CONFIG.DEFAULT_PAGE}&page_size=${WEBSHARE_CONFIG.DEFAULT_PAGE_SIZE}`, {
method: 'GET',
headers: {
Authorization: `Token ${apiKey}`,
'Content-Type': 'application/json',
},
signal: AbortSignal.timeout(WEBSHARE_CONFIG.TIMEOUT),
});
const response = await fetch(
`${apiUrl}proxy/list/?mode=${WEBSHARE_CONFIG.DEFAULT_MODE}&page=${WEBSHARE_CONFIG.DEFAULT_PAGE}&page_size=${WEBSHARE_CONFIG.DEFAULT_PAGE_SIZE}`,
{
method: 'GET',
headers: {
Authorization: `Token ${apiKey}`,
'Content-Type': 'application/json',
},
signal: AbortSignal.timeout(WEBSHARE_CONFIG.TIMEOUT),
}
);
if (!response.ok) {
ctx.logger.error('WebShare API request failed', {
@@ -55,22 +57,19 @@ export async function fetchWebShareProxies(): Promise<ProxyInfo[]> {
}
// Transform proxy data to ProxyInfo format
const proxies: ProxyInfo[] = data.results.map((proxy: {
username: string;
password: string;
proxy_address: string;
port: number;
}) => ({
source: 'webshare',
protocol: 'http' as const,
host: proxy.proxy_address,
port: proxy.port,
username: proxy.username,
password: proxy.password,
isWorking: true, // WebShare provides working proxies
firstSeen: new Date(),
lastChecked: new Date(),
}));
const proxies: ProxyInfo[] = data.results.map(
(proxy: { username: string; password: string; proxy_address: string; port: number }) => ({
source: 'webshare',
protocol: 'http' as const,
host: proxy.proxy_address,
port: proxy.port,
username: proxy.username,
password: proxy.password,
isWorking: true, // WebShare provides working proxies
firstSeen: new Date(),
lastChecked: new Date(),
})
);
ctx.logger.info('Successfully fetched proxies from WebShare', {
count: proxies.length,

View file

@@ -4,7 +4,7 @@ import {
Operation,
QueueSchedule,
type ExecutionContext,
type IServiceContainer
type IServiceContainer,
} from '@stock-bot/handlers';
@Handler('webshare')
@@ -17,7 +17,7 @@ export class WebShareHandler extends BaseHandler {
@QueueSchedule('0 */6 * * *', {
priority: 3,
immediately: true,
description: 'Fetch fresh proxies from WebShare API'
description: 'Fetch fresh proxies from WebShare API',
})
async fetchProxies(_input: unknown, _context: ExecutionContext): Promise<unknown> {
this.logger.info('Fetching proxies from WebShare API');
@@ -28,6 +28,14 @@ export class WebShareHandler extends BaseHandler {
if (proxies.length > 0) {
// Update the centralized proxy manager using the injected service
if (!this.proxy) {
this.logger.warn('Proxy manager is not initialized, cannot update proxies');
return {
success: false,
proxiesUpdated: 0,
error: 'Proxy manager not initialized',
};
}
await this.proxy.updateProxies(proxies);
this.logger.info('Updated proxy manager with WebShare proxies', {
@@ -37,7 +45,11 @@ export class WebShareHandler extends BaseHandler {
// Cache proxy stats for monitoring
await this.cache.set('webshare-proxy-count', proxies.length, 3600);
await this.cache.set('webshare-working-count', proxies.filter(p => p.isWorking !== false).length, 3600);
await this.cache.set(
'webshare-working-count',
proxies.filter(p => p.isWorking !== false).length,
3600
);
await this.cache.set('last-webshare-fetch', new Date().toISOString(), 1800);
return {
@@ -59,4 +71,3 @@
}
}
}

View file

@@ -4,20 +4,18 @@
*/
// Framework imports
import { initializeServiceConfig } from '@stock-bot/config';
import { Hono } from 'hono';
import { cors } from 'hono/cors';
import { initializeServiceConfig } from '@stock-bot/config';
// Library imports
import {
createServiceContainer,
initializeServices as initializeAwilixServices,
type ServiceContainer
type ServiceContainer,
} from '@stock-bot/di';
import { getLogger, setLoggerConfig, shutdownLoggers } from '@stock-bot/logger';
import { Shutdown } from '@stock-bot/shutdown';
import { handlerRegistry } from '@stock-bot/types';
// Local imports
import { createRoutes } from './routes/create-routes';
import { initializeAllHandlers } from './handlers';
@@ -178,7 +176,7 @@ async function initializeServices() {
logger.error('Failed to initialize services', {
error: error instanceof Error ? error.message : String(error),
stack: error instanceof Error ? error.stack : undefined,
details: JSON.stringify(error, null, 2)
details: JSON.stringify(error, null, 2),
});
throw error;
}
@@ -236,13 +234,19 @@ shutdown.onShutdownMedium(async () => {
if (container) {
// Disconnect database clients
const mongoClient = container.resolve('mongoClient');
if (mongoClient?.disconnect) await mongoClient.disconnect();
if (mongoClient?.disconnect) {
await mongoClient.disconnect();
}
const postgresClient = container.resolve('postgresClient');
if (postgresClient?.disconnect) await postgresClient.disconnect();
if (postgresClient?.disconnect) {
await postgresClient.disconnect();
}
const questdbClient = container.resolve('questdbClient');
if (questdbClient?.disconnect) await questdbClient.disconnect();
if (questdbClient?.disconnect) {
await questdbClient.disconnect();
}
logger.info('All services disposed successfully');
}

View file

@@ -28,12 +28,14 @@ export function createRoutes(services: IServiceContainer): Hono {
});
// Add a new endpoint to test the improved DI
app.get('/api/di-test', async (c) => {
app.get('/api/di-test', async c => {
try {
const services = c.get('services') as IServiceContainer;
// Test MongoDB connection
const mongoStats = services.mongodb?.getPoolMetrics?.() || { status: services.mongodb ? 'connected' : 'disabled' };
const mongoStats = services.mongodb?.getPoolMetrics?.() || {
status: services.mongodb ? 'connected' : 'disabled',
};
// Test PostgreSQL connection
const pgConnected = services.postgres?.connected || false;
@@ -51,17 +53,20 @@ export function createRoutes(services: IServiceContainer): Hono {
mongodb: mongoStats,
postgres: { connected: pgConnected },
cache: { ready: cacheReady },
queue: queueStats
queue: queueStats,
},
timestamp: new Date().toISOString()
timestamp: new Date().toISOString(),
});
} catch (error) {
const services = c.get('services') as IServiceContainer;
services.logger.error('DI test endpoint failed', { error });
return c.json({
success: false,
error: error instanceof Error ? error.message : String(error)
}, 500);
return c.json(
{
success: false,
error: error instanceof Error ? error.message : String(error),
},
500
);
}
});

View file

@@ -11,7 +11,7 @@ exchange.get('/', async c => {
return c.json({
status: 'success',
data: [],
message: 'Exchange endpoints will be implemented with database integration'
message: 'Exchange endpoints will be implemented with database integration',
});
} catch (error) {
logger.error('Failed to get exchanges', { error });

View file

@@ -14,7 +14,7 @@ queue.get('/status', async c => {
return c.json({
status: 'success',
data: globalStats,
message: 'Queue status retrieved successfully'
message: 'Queue status retrieved successfully',
});
} catch (error) {
logger.error('Failed to get queue status', { error });

View file

@@ -1,5 +1,5 @@
import { getLogger } from '@stock-bot/logger';
import { sleep } from '@stock-bot/di';
import { getLogger } from '@stock-bot/logger';
const logger = getLogger('symbol-search-util');

View file

@@ -3,7 +3,6 @@
/**
* Test script for CEO handler operations
*/
import { initializeServiceConfig } from '@stock-bot/config';
import { createServiceContainer, initializeServices } from '@stock-bot/di';
import { getLogger } from '@stock-bot/logger';
@@ -70,12 +69,15 @@ async function testCeoOperations() {
logger.info('Testing process-individual-symbol operation...');
const sampleSymbol = await collection.findOne({});
if (sampleSymbol) {
const individualResult = await ceoHandler.processIndividualSymbol({
ceoId: sampleSymbol.ceoId,
symbol: sampleSymbol.symbol,
exchange: sampleSymbol.exchange,
name: sampleSymbol.name,
}, {});
const individualResult = await ceoHandler.processIndividualSymbol(
{
ceoId: sampleSymbol.ceoId,
symbol: sampleSymbol.symbol,
exchange: sampleSymbol.exchange,
name: sampleSymbol.name,
},
{}
);
logger.info('Process individual symbol result:', individualResult);
}
} else {

View file

@@ -1,5 +1,5 @@
import { PostgreSQLClient } from '@stock-bot/postgres';
import { MongoDBClient } from '@stock-bot/mongodb';
import { PostgreSQLClient } from '@stock-bot/postgres';
let postgresClient: PostgreSQLClient | null = null;
let mongodbClient: MongoDBClient | null = null;

View file

@@ -21,9 +21,7 @@ export async function clearPostgreSQLData(payload: JobPayload): Promise<{
const exchangeCountResult = await postgresClient.query(
'SELECT COUNT(*) as count FROM exchanges'
);
const symbolCountResult = await postgresClient.query(
'SELECT COUNT(*) as count FROM symbols'
);
const symbolCountResult = await postgresClient.query('SELECT COUNT(*) as count FROM symbols');
const mappingCountResult = await postgresClient.query(
'SELECT COUNT(*) as count FROM provider_mappings'
);

View file

@@ -1,11 +1,11 @@
import { syncAllExchanges } from './sync-all-exchanges.operations';
import { syncQMExchanges } from './qm-exchanges.operations';
import { syncIBExchanges } from './sync-ib-exchanges.operations';
import { syncQMProviderMappings } from './sync-qm-provider-mappings.operations';
import { clearPostgreSQLData } from './clear-postgresql-data.operations';
import { getSyncStatus } from './enhanced-sync-status.operations';
import { getExchangeStats } from './exchange-stats.operations';
import { getProviderMappingStats } from './provider-mapping-stats.operations';
import { getSyncStatus } from './enhanced-sync-status.operations';
import { syncQMExchanges } from './qm-exchanges.operations';
import { syncAllExchanges } from './sync-all-exchanges.operations';
import { syncIBExchanges } from './sync-ib-exchanges.operations';
import { syncQMProviderMappings } from './sync-qm-provider-mappings.operations';
export const exchangeOperations = {
syncAllExchanges,

View file

@@ -4,7 +4,9 @@ import type { JobPayload } from '../../../types/job-payloads';
const logger = getLogger('sync-qm-exchanges');
export async function syncQMExchanges(payload: JobPayload): Promise<{ processed: number; created: number; updated: number }> {
export async function syncQMExchanges(
payload: JobPayload
): Promise<{ processed: number; created: number; updated: number }> {
logger.info('Starting QM exchanges sync...');
try {
@@ -72,7 +74,11 @@ async function createExchange(qmExchange: any, postgresClient: any): Promise<voi
]);
}
async function updateExchange(exchangeId: string, qmExchange: any, postgresClient: any): Promise<void> {
async function updateExchange(
exchangeId: string,
qmExchange: any,
postgresClient: any
): Promise<void> {
const query = `
UPDATE exchanges
SET name = COALESCE($2, name),
@@ -88,7 +94,12 @@ async function updateExchange(exchangeId: string, qmExchange: any, postgresClien
]);
}
async function updateSyncStatus(provider: string, dataType: string, count: number, postgresClient: any): Promise<void> {
async function updateSyncStatus(
provider: string,
dataType: string,
count: number,
postgresClient: any
): Promise<void> {
const query = `
UPDATE sync_status
SET last_sync_at = NOW(),

View file

@@ -228,9 +228,13 @@ function getBasicExchangeMapping(providerCode: string): string | null {
return mappings[providerCode.toUpperCase()] || null;
}
async function findProviderExchangeMapping(provider: string, providerExchangeCode: string): Promise<any> {
async function findProviderExchangeMapping(
provider: string,
providerExchangeCode: string
): Promise<any> {
const postgresClient = getPostgreSQLClient();
const query = 'SELECT * FROM provider_exchange_mappings WHERE provider = $1 AND provider_exchange_code = $2';
const query =
'SELECT * FROM provider_exchange_mappings WHERE provider = $1 AND provider_exchange_code = $2';
const result = await postgresClient.query(query, [provider, providerExchangeCode]);
return result.rows[0] || null;
}
@@ -242,7 +246,12 @@ async function findExchangeByCode(code: string): Promise<any> {
return result.rows[0] || null;
}
async function updateSyncStatus(provider: string, dataType: string, count: number, postgresClient: any): Promise<void> {
async function updateSyncStatus(
provider: string,
dataType: string,
count: number,
postgresClient: any
): Promise<void> {
const query = `
INSERT INTO sync_status (provider, data_type, last_sync_at, last_sync_count, sync_errors)
VALUES ($1, $2, NOW(), $3, NULL)

View file

@@ -1,7 +1,7 @@
import { getLogger } from '@stock-bot/logger';
import type { MasterExchange } from '@stock-bot/mongodb';
import { getMongoDBClient } from '../../../clients';
import type { JobPayload } from '../../../types/job-payloads';
import type { MasterExchange } from '@stock-bot/mongodb';
const logger = getLogger('sync-ib-exchanges');
@@ -14,7 +14,9 @@ interface IBExchange {
currency?: string;
}
export async function syncIBExchanges(payload: JobPayload): Promise<{ syncedCount: number; totalExchanges: number }> {
export async function syncIBExchanges(
payload: JobPayload
): Promise<{ syncedCount: number; totalExchanges: number }> {
logger.info('Syncing IB exchanges from database...');
try {

View file

@@ -131,9 +131,13 @@ async function createProviderExchangeMapping(
]);
}
async function findProviderExchangeMapping(provider: string, providerExchangeCode: string): Promise<any> {
async function findProviderExchangeMapping(
provider: string,
providerExchangeCode: string
): Promise<any> {
const postgresClient = getPostgreSQLClient();
const query = 'SELECT * FROM provider_exchange_mappings WHERE provider = $1 AND provider_exchange_code = $2';
const query =
'SELECT * FROM provider_exchange_mappings WHERE provider = $1 AND provider_exchange_code = $2';
const result = await postgresClient.query(query, [provider, providerExchangeCode]);
return result.rows[0] || null;
}

View file

@@ -1,6 +1,6 @@
import { syncQMSymbols } from './qm-symbols.operations';
import { syncSymbolsFromProvider } from './sync-symbols-from-provider.operations';
import { getSyncStatus } from './sync-status.operations';
import { syncSymbolsFromProvider } from './sync-symbols-from-provider.operations';
export const symbolOperations = {
syncQMSymbols,

View file

@@ -4,7 +4,9 @@ import type { JobPayload } from '../../../types/job-payloads';
const logger = getLogger('sync-qm-symbols');
export async function syncQMSymbols(payload: JobPayload): Promise<{ processed: number; created: number; updated: number }> {
export async function syncQMSymbols(
payload: JobPayload
): Promise<{ processed: number; created: number; updated: number }> {
logger.info('Starting QM symbols sync...');
try {
@@ -21,7 +23,10 @@ export async function syncQMSymbols(payload: JobPayload): Promise<{ processed: n
for (const symbol of qmSymbols) {
try {
// 2. Resolve exchange
const exchangeId = await resolveExchange(symbol.exchangeCode || symbol.exchange, postgresClient);
const exchangeId = await resolveExchange(
symbol.exchangeCode || symbol.exchange,
postgresClient
);
if (!exchangeId) {
logger.warn('Unknown exchange, skipping symbol', {
@@ -64,7 +69,9 @@ export async function syncQMSymbols(payload: JobPayload): Promise<{ processed: n
// Helper functions
async function resolveExchange(exchangeCode: string, postgresClient: any): Promise<string | null> {
if (!exchangeCode) return null;
if (!exchangeCode) {
return null;
}
// Simple mapping - expand this as needed
const exchangeMap: Record<string, string> = {
@@ -92,7 +99,11 @@ async function findSymbol(symbol: string, exchangeId: string, postgresClient: an
return result.rows[0] || null;
}
async function createSymbol(qmSymbol: any, exchangeId: string, postgresClient: any): Promise<string> {
async function createSymbol(
qmSymbol: any,
exchangeId: string,
postgresClient: any
): Promise<string> {
const query = `
INSERT INTO symbols (symbol, exchange_id, company_name, country, currency)
VALUES ($1, $2, $3, $4, $5)
@@ -153,7 +164,12 @@ async function upsertProviderMapping(
]);
}
async function updateSyncStatus(provider: string, dataType: string, count: number, postgresClient: any): Promise<void> {
async function updateSyncStatus(
provider: string,
dataType: string,
count: number,
postgresClient: any
): Promise<void> {
const query = `
UPDATE sync_status
SET last_sync_at = NOW(),

View file

@@ -87,7 +87,11 @@ export async function syncSymbolsFromProvider(payload: JobPayload): Promise<Sync
}
}
async function processSingleSymbol(symbol: any, provider: string, result: SyncResult): Promise<void> {
async function processSingleSymbol(
symbol: any,
provider: string,
result: SyncResult
): Promise<void> {
const symbolCode = symbol.symbol || symbol.code;
const exchangeCode = symbol.exchangeCode || symbol.exchange || symbol.exchange_id;
@@ -121,7 +125,10 @@ async function processSingleSymbol(symbol: any, provider: string, result: SyncRe
}
}
async function findActiveProviderExchangeMapping(provider: string, providerExchangeCode: string): Promise<any> {
async function findActiveProviderExchangeMapping(
provider: string,
providerExchangeCode: string
): Promise<any> {
const postgresClient = getPostgreSQLClient();
const query = `
SELECT pem.*, e.code as master_exchange_code
@@ -178,7 +185,11 @@ async function updateSymbol(symbolId: string, symbol: any): Promise<void> {
]);
}
async function upsertProviderMapping(symbolId: string, provider: string, symbol: any): Promise<void> {
async function upsertProviderMapping(
symbolId: string,
provider: string,
symbol: any
): Promise<void> {
const postgresClient = getPostgreSQLClient();
const query = `
INSERT INTO provider_mappings
@@ -199,7 +210,12 @@ async function upsertProviderMapping(symbolId: string, provider: string, symbol:
]);
}
async function updateSyncStatus(provider: string, dataType: string, count: number, postgresClient: any): Promise<void> {
async function updateSyncStatus(
provider: string,
dataType: string,
count: number,
postgresClient: any
): Promise<void> {
const query = `
INSERT INTO sync_status (provider, data_type, last_sync_at, last_sync_count, sync_errors)
VALUES ($1, $2, NOW(), $3, NULL)

View file

@@ -1,16 +1,16 @@
// Framework imports
import { initializeServiceConfig } from '@stock-bot/config';
import { Hono } from 'hono';
import { cors } from 'hono/cors';
import { initializeServiceConfig } from '@stock-bot/config';
// Library imports
import { getLogger, setLoggerConfig, shutdownLoggers } from '@stock-bot/logger';
import { MongoDBClient } from '@stock-bot/mongodb';
import { PostgreSQLClient } from '@stock-bot/postgres';
import { QueueManager, type QueueManagerConfig } from '@stock-bot/queue';
import { Shutdown } from '@stock-bot/shutdown';
import { setMongoDBClient, setPostgreSQLClient } from './clients';
// Local imports
import { enhancedSyncRoutes, healthRoutes, statsRoutes, syncRoutes } from './routes';
import { setMongoDBClient, setPostgreSQLClient } from './clients';
const config = initializeServiceConfig();
console.log('Data Sync Service Configuration:', JSON.stringify(config, null, 2));
@@ -66,17 +66,20 @@ async function initializeServices() {
// Initialize MongoDB client
logger.debug('Connecting to MongoDB...');
const mongoConfig = databaseConfig.mongodb;
mongoClient = new MongoDBClient({
uri: mongoConfig.uri,
database: mongoConfig.database,
host: mongoConfig.host || 'localhost',
port: mongoConfig.port || 27017,
timeouts: {
connectTimeout: 30000,
socketTimeout: 30000,
serverSelectionTimeout: 5000,
mongoClient = new MongoDBClient(
{
uri: mongoConfig.uri,
database: mongoConfig.database,
host: mongoConfig.host || 'localhost',
port: mongoConfig.port || 27017,
timeouts: {
connectTimeout: 30000,
socketTimeout: 30000,
serverSelectionTimeout: 5000,
},
},
}, logger);
logger
);
await mongoClient.connect();
setMongoDBClient(mongoClient);
logger.info('MongoDB connected');
@@ -84,18 +87,21 @@ async function initializeServices() {
// Initialize PostgreSQL client
logger.debug('Connecting to PostgreSQL...');
const pgConfig = databaseConfig.postgres;
postgresClient = new PostgreSQLClient({
host: pgConfig.host,
port: pgConfig.port,
database: pgConfig.database,
username: pgConfig.user,
password: pgConfig.password,
poolSettings: {
min: 2,
max: pgConfig.poolSize || 10,
idleTimeoutMillis: pgConfig.idleTimeout || 30000,
postgresClient = new PostgreSQLClient(
{
host: pgConfig.host,
port: pgConfig.port,
database: pgConfig.database,
username: pgConfig.user,
password: pgConfig.password,
poolSettings: {
min: 2,
max: pgConfig.poolSize || 10,
idleTimeoutMillis: pgConfig.idleTimeout || 30000,
},
},
}, logger);
logger
);
await postgresClient.connect();
setPostgreSQLClient(postgresClient);
logger.info('PostgreSQL connected');
@@ -124,7 +130,7 @@ async function initializeServices() {
enableDLQ: true,
},
enableScheduledJobs: true,
delayWorkerStart: true, // Prevent workers from starting until all singletons are ready
delayWorkerStart: true, // Prevent workers from starting until all singletons are ready
};
queueManager = QueueManager.getOrInitialize(queueManagerConfig);

View file

@@ -39,7 +39,11 @@ enhancedSync.post('/provider-mappings/qm', async c => {
payload: {},
});
return c.json({ success: true, jobId: job.id, message: 'QM provider mappings sync job queued' });
return c.json({
success: true,
jobId: job.id,
message: 'QM provider mappings sync job queued',
});
} catch (error) {
logger.error('Failed to queue QM provider mappings sync job', { error });
return c.json(

View file

@@ -1,5 +1,5 @@
import { PostgreSQLClient } from '@stock-bot/postgres';
import { MongoDBClient } from '@stock-bot/mongodb';
import { PostgreSQLClient } from '@stock-bot/postgres';
let postgresClient: PostgreSQLClient | null = null;
let mongodbClient: MongoDBClient | null = null;

View file

@@ -77,17 +77,20 @@ async function initializeServices() {
// Initialize MongoDB client
logger.debug('Connecting to MongoDB...');
const mongoConfig = databaseConfig.mongodb;
mongoClient = new MongoDBClient({
uri: mongoConfig.uri,
database: mongoConfig.database,
host: mongoConfig.host,
port: mongoConfig.port,
timeouts: {
connectTimeout: 30000,
socketTimeout: 30000,
serverSelectionTimeout: 5000,
mongoClient = new MongoDBClient(
{
uri: mongoConfig.uri,
database: mongoConfig.database,
host: mongoConfig.host,
port: mongoConfig.port,
timeouts: {
connectTimeout: 30000,
socketTimeout: 30000,
serverSelectionTimeout: 5000,
},
},
}, logger);
logger
);
await mongoClient.connect();
setMongoDBClient(mongoClient);
logger.info('MongoDB connected');
@@ -95,18 +98,21 @@ async function initializeServices() {
// Initialize PostgreSQL client
logger.debug('Connecting to PostgreSQL...');
const pgConfig = databaseConfig.postgres;
postgresClient = new PostgreSQLClient({
host: pgConfig.host,
port: pgConfig.port,
database: pgConfig.database,
username: pgConfig.user,
password: pgConfig.password,
poolSettings: {
min: 2,
max: pgConfig.poolSize || 10,
idleTimeoutMillis: pgConfig.idleTimeout || 30000,
postgresClient = new PostgreSQLClient(
{
host: pgConfig.host,
port: pgConfig.port,
database: pgConfig.database,
username: pgConfig.user,
password: pgConfig.password,
poolSettings: {
min: 2,
max: pgConfig.poolSize || 10,
idleTimeoutMillis: pgConfig.idleTimeout || 30000,
},
},
}, logger);
logger
);
await postgresClient.connect();
setPostgreSQLClient(postgresClient);
logger.info('PostgreSQL connected');

View file

@@ -4,13 +4,13 @@
import { Hono } from 'hono';
import { getLogger } from '@stock-bot/logger';
import { exchangeService } from '../services/exchange.service';
import { createSuccessResponse, handleError } from '../utils/error-handler';
import {
validateCreateExchange,
validateUpdateExchange,
validateCreateProviderMapping,
validateUpdateExchange,
validateUpdateProviderMapping,
} from '../utils/validation';
import { handleError, createSuccessResponse } from '../utils/error-handler';
const logger = getLogger('exchange-routes');
export const exchangeRoutes = new Hono();
@@ -44,7 +44,7 @@ exchangeRoutes.get('/:id', async c => {
logger.info('Successfully retrieved exchange details', {
exchangeId,
exchangeCode: result.exchange.code,
mappingCount: result.provider_mappings.length
mappingCount: result.provider_mappings.length,
});
return c.json(createSuccessResponse(result));
} catch (error) {
@@ -68,13 +68,10 @@ exchangeRoutes.post('/', async c => {
logger.info('Exchange created successfully', {
exchangeId: exchange.id,
code: exchange.code,
name: exchange.name
name: exchange.name,
});
return c.json(
createSuccessResponse(exchange, 'Exchange created successfully'),
201
);
return c.json(createSuccessResponse(exchange, 'Exchange created successfully'), 201);
} catch (error) {
logger.error('Failed to create exchange', { error });
return handleError(c, error, 'to create exchange');
@@ -103,14 +100,14 @@ exchangeRoutes.patch('/:id', async c => {
logger.info('Exchange updated successfully', {
exchangeId,
code: exchange.code,
updates: validatedUpdates
updates: validatedUpdates,
});
// Log special actions
if (validatedUpdates.visible === false) {
logger.warn('Exchange marked as hidden - provider mappings will be deleted', {
exchangeId,
code: exchange.code
code: exchange.code,
});
}
@@ -144,13 +141,7 @@ exchangeRoutes.get('/provider-mappings/:provider', async c => {
const mappings = await exchangeService.getProviderMappingsByProvider(provider);
logger.info('Successfully retrieved provider mappings', { provider, count: mappings.length });
return c.json(
createSuccessResponse(
mappings,
undefined,
mappings.length
)
);
return c.json(createSuccessResponse(mappings, undefined, mappings.length));
} catch (error) {
logger.error('Failed to get provider mappings', { error, provider });
return handleError(c, error, 'to get provider mappings');
@@ -180,7 +171,7 @@ exchangeRoutes.patch('/provider-mappings/:id', async c => {
mappingId,
provider: mapping.provider,
providerExchangeCode: mapping.provider_exchange_code,
updates: validatedUpdates
updates: validatedUpdates,
});
return c.json(createSuccessResponse(mapping, 'Provider mapping updated successfully'));
@@ -206,13 +197,10 @@ exchangeRoutes.post('/provider-mappings', async c => {
mappingId: mapping.id,
provider: mapping.provider,
providerExchangeCode: mapping.provider_exchange_code,
masterExchangeId: mapping.master_exchange_id
masterExchangeId: mapping.master_exchange_id,
});
return c.json(
createSuccessResponse(mapping, 'Provider mapping created successfully'),
201
);
return c.json(createSuccessResponse(mapping, 'Provider mapping created successfully'), 201);
} catch (error) {
logger.error('Failed to create provider mapping', { error });
return handleError(c, error, 'to create provider mapping');
@ -242,16 +230,10 @@ exchangeRoutes.get('/provider-exchanges/unmapped/:provider', async c => {
const exchanges = await exchangeService.getUnmappedProviderExchanges(provider);
logger.info('Successfully retrieved unmapped provider exchanges', {
provider,
count: exchanges.length
count: exchanges.length,
});
return c.json(
createSuccessResponse(
exchanges,
undefined,
exchanges.length
)
);
return c.json(createSuccessResponse(exchanges, undefined, exchanges.length));
} catch (error) {
logger.error('Failed to get unmapped provider exchanges', { error, provider });
return handleError(c, error, 'to get unmapped provider exchanges');

View file

@ -3,7 +3,7 @@
*/
import { Hono } from 'hono';
import { getLogger } from '@stock-bot/logger';
import { getPostgreSQLClient, getMongoDBClient } from '../clients';
import { getMongoDBClient, getPostgreSQLClient } from '../clients';
const logger = getLogger('health-routes');
export const healthRoutes = new Hono();
@ -84,13 +84,13 @@ healthRoutes.get('/detailed', async c => {
if (allHealthy) {
logger.info('Detailed health check successful - all systems healthy', {
mongodb: health.checks.mongodb.status,
postgresql: health.checks.postgresql.status
postgresql: health.checks.postgresql.status,
});
} else {
logger.warn('Detailed health check failed - some systems unhealthy', {
mongodb: health.checks.mongodb.status,
postgresql: health.checks.postgresql.status,
overallStatus: health.status
overallStatus: health.status,
});
}

View file

@ -1,15 +1,15 @@
import { getLogger } from '@stock-bot/logger';
import { getPostgreSQLClient, getMongoDBClient } from '../clients';
import { getMongoDBClient, getPostgreSQLClient } from '../clients';
import {
Exchange,
ExchangeWithMappings,
ProviderMapping,
CreateExchangeRequest,
UpdateExchangeRequest,
CreateProviderMappingRequest,
UpdateProviderMappingRequest,
ProviderExchange,
Exchange,
ExchangeStats,
ExchangeWithMappings,
ProviderExchange,
ProviderMapping,
UpdateExchangeRequest,
UpdateProviderMappingRequest,
} from '../types/exchange.types';
const logger = getLogger('exchange-service');
@ -63,14 +63,17 @@ export class ExchangeService {
const mappingsResult = await this.postgresClient.query(mappingsQuery);
// Group mappings by exchange ID
const mappingsByExchange = mappingsResult.rows.reduce((acc, mapping) => {
const exchangeId = mapping.master_exchange_id;
if (!acc[exchangeId]) {
acc[exchangeId] = [];
}
acc[exchangeId].push(mapping);
return acc;
}, {} as Record<string, ProviderMapping[]>);
const mappingsByExchange = mappingsResult.rows.reduce(
(acc, mapping) => {
const exchangeId = mapping.master_exchange_id;
if (!acc[exchangeId]) {
acc[exchangeId] = [];
}
acc[exchangeId].push(mapping);
return acc;
},
{} as Record<string, ProviderMapping[]>
);
// Attach mappings to exchanges
return exchangesResult.rows.map(exchange => ({
@ -79,7 +82,9 @@ export class ExchangeService {
}));
}
async getExchangeById(id: string): Promise<{ exchange: Exchange; provider_mappings: ProviderMapping[] } | null> {
async getExchangeById(
id: string
): Promise<{ exchange: Exchange; provider_mappings: ProviderMapping[] } | null> {
const exchangeQuery = 'SELECT * FROM exchanges WHERE id = $1 AND visible = true';
const exchangeResult = await this.postgresClient.query(exchangeQuery, [id]);
@ -230,7 +235,10 @@ export class ExchangeService {
return result.rows[0];
}
async updateProviderMapping(id: string, updates: UpdateProviderMappingRequest): Promise<ProviderMapping | null> {
async updateProviderMapping(
id: string,
updates: UpdateProviderMappingRequest
): Promise<ProviderMapping | null> {
const updateFields = [];
const values = [];
let paramIndex = 1;
@ -359,7 +367,6 @@ export class ExchangeService {
break;
}
default:
throw new Error(`Unknown provider: ${provider}`);
}

View file

@ -1,7 +1,7 @@
import { Context } from 'hono';
import { getLogger } from '@stock-bot/logger';
import { ValidationError } from './validation';
import { ApiResponse } from '../types/exchange.types';
import { ValidationError } from './validation';
const logger = getLogger('error-handler');

View file

@ -1,7 +1,10 @@
import { CreateExchangeRequest, CreateProviderMappingRequest } from '../types/exchange.types';
export class ValidationError extends Error {
constructor(message: string, public field?: string) {
constructor(
message: string,
public field?: string
) {
super(message);
this.name = 'ValidationError';
}
@ -38,7 +41,10 @@ export function validateCreateExchange(data: unknown): CreateExchangeRequest {
}
if (currency.length !== 3) {
throw new ValidationError('Currency must be exactly 3 characters (e.g., USD, EUR, CAD)', 'currency');
throw new ValidationError(
'Currency must be exactly 3 characters (e.g., USD, EUR, CAD)',
'currency'
);
}
return {

View file

@ -1,16 +1,16 @@
import { useCallback, useEffect, useState } from 'react';
import { exchangeApi } from '../services/exchangeApi';
import {
CreateExchangeRequest,
CreateProviderMappingRequest,
Exchange,
ExchangeDetails,
ExchangeStats,
ProviderMapping,
ProviderExchange,
CreateExchangeRequest,
ProviderMapping,
UpdateExchangeRequest,
CreateProviderMappingRequest,
UpdateProviderMappingRequest,
} from '../types';
import { exchangeApi } from '../services/exchangeApi';
export function useExchanges() {
const [exchanges, setExchanges] = useState<Exchange[]>([]);
@ -62,18 +62,15 @@ export function useExchanges() {
[fetchExchanges]
);
const fetchExchangeDetails = useCallback(
async (id: string): Promise<ExchangeDetails | null> => {
try {
return await exchangeApi.getExchangeById(id);
} catch (err) {
// Error fetching exchange details - error state will show in UI
setError(err instanceof Error ? err.message : 'Failed to fetch exchange details');
return null;
}
},
[]
);
const fetchExchangeDetails = useCallback(async (id: string): Promise<ExchangeDetails | null> => {
try {
return await exchangeApi.getExchangeById(id);
} catch (err) {
// Error fetching exchange details - error state will show in UI
setError(err instanceof Error ? err.message : 'Failed to fetch exchange details');
return null;
}
}, []);
const fetchStats = useCallback(async (): Promise<ExchangeStats | null> => {
try {

View file

@ -1,22 +1,22 @@
import { useState, useCallback } from 'react';
import { useCallback, useState } from 'react';
import { FormErrors } from '../types';
export function useFormValidation<T>(
initialData: T,
validateFn: (data: T) => FormErrors
) {
export function useFormValidation<T>(initialData: T, validateFn: (data: T) => FormErrors) {
const [formData, setFormData] = useState<T>(initialData);
const [errors, setErrors] = useState<FormErrors>({});
const [isSubmitting, setIsSubmitting] = useState(false);
const updateField = useCallback((field: keyof T, value: T[keyof T]) => {
setFormData(prev => ({ ...prev, [field]: value }));
const updateField = useCallback(
(field: keyof T, value: T[keyof T]) => {
setFormData(prev => ({ ...prev, [field]: value }));
// Clear error when user starts typing
if (errors[field as string]) {
setErrors(prev => ({ ...prev, [field as string]: '' }));
}
}, [errors]);
// Clear error when user starts typing
if (errors[field as string]) {
setErrors(prev => ({ ...prev, [field as string]: '' }));
}
},
[errors]
);
const validate = useCallback((): boolean => {
const newErrors = validateFn(formData);
@ -30,24 +30,29 @@ export function useFormValidation<T>(
setIsSubmitting(false);
}, [initialData]);
const handleSubmit = useCallback(async (
onSubmit: (data: T) => Promise<void>,
onSuccess?: () => void,
onError?: (error: unknown) => void
) => {
if (!validate()) {return;}
const handleSubmit = useCallback(
async (
onSubmit: (data: T) => Promise<void>,
onSuccess?: () => void,
onError?: (error: unknown) => void
) => {
if (!validate()) {
return;
}
setIsSubmitting(true);
try {
await onSubmit(formData);
reset();
onSuccess?.();
} catch (error) {
onError?.(error);
} finally {
setIsSubmitting(false);
}
}, [formData, validate, reset]);
setIsSubmitting(true);
try {
await onSubmit(formData);
reset();
onSuccess?.();
} catch (error) {
onError?.(error);
} finally {
setIsSubmitting(false);
}
},
[formData, validate, reset]
);
return {
formData,

View file

@ -1,23 +1,20 @@
import {
ApiResponse,
CreateExchangeRequest,
CreateProviderMappingRequest,
Exchange,
ExchangeDetails,
ExchangeStats,
ProviderMapping,
ProviderExchange,
CreateExchangeRequest,
ProviderMapping,
UpdateExchangeRequest,
CreateProviderMappingRequest,
UpdateProviderMappingRequest,
} from '../types';
const API_BASE_URL = import.meta.env.VITE_API_BASE_URL || 'http://localhost:4000/api';
class ExchangeApiService {
private async request<T>(
endpoint: string,
options?: RequestInit
): Promise<ApiResponse<T>> {
private async request<T>(endpoint: string, options?: RequestInit): Promise<ApiResponse<T>> {
const url = `${API_BASE_URL}${endpoint}`;
const response = await fetch(url, {

View file

@ -32,7 +32,9 @@ export interface AddExchangeDialogProps extends BaseDialogProps {
export interface AddProviderMappingDialogProps extends BaseDialogProps {
exchangeId: string;
exchangeName: string;
onCreateMapping: (request: import('./request.types').CreateProviderMappingRequest) => Promise<unknown>;
onCreateMapping: (
request: import('./request.types').CreateProviderMappingRequest
) => Promise<unknown>;
}
export interface DeleteExchangeDialogProps extends BaseDialogProps {

View file

@ -19,7 +19,11 @@ export function formatPercentage(value: number): string {
}
export function getValueColor(value: number): string {
if (value > 0) {return 'text-success';}
if (value < 0) {return 'text-danger';}
if (value > 0) {
return 'text-success';
}
if (value < 0) {
return 'text-danger';
}
return 'text-text-secondary';
}

View file

@ -23,9 +23,15 @@ export function formatPercentage(value: number, decimals = 2): string {
* Format large numbers with K, M, B suffixes
*/
export function formatNumber(num: number): string {
if (num >= 1e9) {return (num / 1e9).toFixed(1) + 'B';}
if (num >= 1e6) {return (num / 1e6).toFixed(1) + 'M';}
if (num >= 1e3) {return (num / 1e3).toFixed(1) + 'K';}
if (num >= 1e9) {
return (num / 1e9).toFixed(1) + 'B';
}
if (num >= 1e6) {
return (num / 1e6).toFixed(1) + 'M';
}
if (num >= 1e3) {
return (num / 1e3).toFixed(1) + 'K';
}
return num.toString();
}
@ -33,8 +39,12 @@ export function formatNumber(num: number): string {
* Get color class based on numeric value (profit/loss)
*/
export function getValueColor(value: number): string {
if (value > 0) {return 'text-success';}
if (value < 0) {return 'text-danger';}
if (value > 0) {
return 'text-success';
}
if (value < 0) {
return 'text-danger';
}
return 'text-text-secondary';
}
@ -42,6 +52,8 @@ export function getValueColor(value: number): string {
* Truncate text to specified length
*/
export function truncateText(text: string, length: number): string {
if (text.length <= length) {return text;}
if (text.length <= length) {
return text;
}
return text.slice(0, length) + '...';
}

View file

@ -1,148 +0,0 @@
# Enhanced Cache Provider Usage
The Redis cache provider now supports advanced TTL handling and conditional operations.
## Basic Usage (Backward Compatible)
```typescript
import { RedisCache } from '@stock-bot/cache';
const cache = new RedisCache({
keyPrefix: 'trading:',
defaultTTL: 3600 // 1 hour
});
// Simple set with TTL (old way - still works)
await cache.set('user:123', userData, 1800); // 30 minutes
// Simple get
const user = await cache.get<UserData>('user:123');
```
## Enhanced Set Options
```typescript
// Preserve existing TTL when updating
await cache.set('user:123', updatedUserData, { preserveTTL: true });
// Only set if key exists (update operation)
const oldValue = await cache.set('user:123', newData, {
onlyIfExists: true,
getOldValue: true
});
// Only set if key doesn't exist (create operation)
await cache.set('user:456', newUser, {
onlyIfNotExists: true,
ttl: 7200 // 2 hours
});
// Get old value when setting new one
const previousData = await cache.set('session:abc', sessionData, {
getOldValue: true,
ttl: 1800
});
```
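To make the conditional semantics above concrete, here is a minimal in-memory model of how `set` with these options behaves. This is a hypothetical sketch for illustration only, not the Redis implementation; the class and field names are invented.

```typescript
interface SetOptions {
  ttl?: number;
  preserveTTL?: boolean;
  onlyIfExists?: boolean;
  onlyIfNotExists?: boolean;
  getOldValue?: boolean;
}

interface Entry {
  value: unknown;
  ttl?: number;
}

// Hypothetical in-memory model of the conditional-set semantics shown above.
class MemoryCache {
  private store = new Map<string, Entry>();

  set(key: string, value: unknown, options: SetOptions = {}): unknown {
    const existing = this.store.get(key);
    if (options.onlyIfExists && !existing) return null; // update-only: no-op on miss
    if (options.onlyIfNotExists && existing) return null; // create-only: no-op on hit
    // preserveTTL keeps the existing expiration instead of resetting it
    const ttl = options.preserveTTL && existing ? existing.ttl : options.ttl;
    this.store.set(key, { value, ttl });
    return options.getOldValue ? (existing?.value ?? null) : undefined;
  }

  get(key: string): unknown {
    return this.store.get(key)?.value ?? null;
  }
}
```

The real provider applies the same decision table server-side (Redis `SET` supports `NX`, `XX`, `KEEPTTL`, and `GET` flags for exactly these cases), so the check and the write happen in one round trip.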
## Convenience Methods
```typescript
// Update value preserving TTL
await cache.update('user:123', updatedUserData);
// Set only if exists
const updated = await cache.setIfExists('user:123', newData, 3600);
// Set only if not exists (returns true if created)
const created = await cache.setIfNotExists('user:456', userData);
// Replace existing key with new TTL
const oldData = await cache.replace('user:123', newData, 7200);
// Atomic field updates
await cache.updateField('counter:views', (current) => (current || 0) + 1);
await cache.updateField('user:123', (user) => ({
...user,
lastSeen: new Date().toISOString(),
loginCount: (user?.loginCount || 0) + 1
}));
```
## Stock Bot Use Cases
### 1. Rate Limiting
```typescript
// Only create rate limit if not exists
const created = await cache.setIfNotExists(
`ratelimit:${userId}:${endpoint}`,
{ count: 1, resetTime: Date.now() + 60000 },
60 // 1 minute
);
if (!created) {
// Window already exists - increment its counter
await cache.updateField(`ratelimit:${userId}:${endpoint}`, (data) => ({
...data,
count: data.count + 1
}));
}
```
### 2. Session Management
```typescript
// Update session data without changing expiration
await cache.update(`session:${sessionId}`, {
...sessionData,
lastActivity: Date.now()
});
```
### 3. Cache Warming
```typescript
// Only update existing cached data, don't create new entries
const warmed = await cache.setIfExists(`stock:${symbol}:price`, latestPrice);
if (warmed) {
console.log(`Warmed cache for ${symbol}`);
}
```
### 4. Atomic Counters
```typescript
// Thread-safe counter increments
await cache.updateField('metrics:api:calls', (count) => (count || 0) + 1);
await cache.updateField('metrics:errors:500', (count) => (count || 0) + 1);
```
### 5. TTL Preservation for Frequently Updated Data
```typescript
// Keep original expiration when updating frequently changing data
await cache.set(`portfolio:${userId}:positions`, positions, { preserveTTL: true });
```
## Error Handling
The cache provider includes robust error handling:
```typescript
try {
await cache.set('key', value);
} catch (error) {
// Errors are logged and fallback values returned
// The cache operations are non-blocking
}
// Check cache health
const isHealthy = await cache.health();
// Wait for cache to be ready
await cache.waitForReady(10000); // 10 second timeout
```
## Performance Benefits
1. **Atomic Operations**: `updateField` uses Lua scripts to prevent race conditions
2. **TTL Preservation**: Avoids unnecessary TTL resets on updates
3. **Conditional Operations**: Reduces network round trips
4. **Shared Connections**: Efficient connection pooling
5. **Error Recovery**: Graceful degradation when Redis is unavailable

View file

@ -1,169 +0,0 @@
# Loki Logging for Stock Bot
This document outlines how to use the Loki logging system integrated with the Stock Bot platform (Updated June 2025).
## Overview
Loki provides centralized logging for all Stock Bot services with:
1. **Centralized logging** for all microservices
2. **Log aggregation** and filtering by service, level, and custom labels
3. **Grafana integration** for visualization and dashboards
4. **Query capabilities** using LogQL for log analysis
5. **Alert capabilities** for critical issues
## Getting Started
### Starting the Logging Stack
```cmd
# Start the monitoring stack (includes Loki and Grafana)
scripts\docker.ps1 monitoring
```
Or start services individually:
```cmd
# Start Loki service only
docker-compose up -d loki
# Start Loki and Grafana
docker-compose up -d loki grafana
```
### Viewing Logs
Once started:
1. Access Grafana at http://localhost:3000 (login with admin/admin)
2. Navigate to the "Stock Bot Logs" dashboard
3. View and query your logs
## Using the Logger in Your Services
The Stock Bot logger automatically sends logs to Loki using the updated pattern:
```typescript
import { getLogger } from '@stock-bot/logger';
// Create a logger for your service
const logger = getLogger('your-service-name');
// Log at different levels
logger.debug('Detailed information for debugging');
logger.info('General information about operations');
logger.warn('Potential issues that don\'t affect operation');
logger.error('Critical errors that require attention');
// Log with structured data (searchable in Loki)
logger.info('Processing trade', {
symbol: 'MSFT',
price: 410.75,
quantity: 50
});
```
## Configuration Options
Logger configuration is managed through the `@stock-bot/config` package and can be set in your `.env` file:
```bash
# Logging configuration
LOG_LEVEL=debug # debug, info, warn, error
LOG_CONSOLE=true # Log to console in addition to Loki
LOKI_HOST=localhost # Loki server hostname
LOKI_PORT=3100 # Loki server port
LOKI_RETENTION_DAYS=30 # Days to retain logs
LOKI_LABELS=environment=development,service=stock-bot # Default labels
LOKI_BATCH_SIZE=100 # Number of logs to batch before sending
LOKI_BATCH_WAIT=5 # Max time to wait before sending logs
```
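As a rough illustration of how these variables map to a typed config object (a sketch only; the real `@stock-bot/config` loader may parse and validate differently, and `parseLokiConfig` is an invented name):

```typescript
interface LokiConfig {
  host: string;
  port: number;
  batchSize: number;
  batchWait: number;
  labels: Record<string, string>;
}

// Hypothetical parser for the LOKI_* environment variables shown above.
function parseLokiConfig(env: Record<string, string | undefined>): LokiConfig {
  // LOKI_LABELS is a comma-separated list of key=value pairs
  const labels: Record<string, string> = {};
  for (const pair of (env.LOKI_LABELS ?? '').split(',').filter(Boolean)) {
    const [key, value] = pair.split('=');
    if (key && value) labels[key.trim()] = value.trim();
  }
  return {
    host: env.LOKI_HOST ?? 'localhost',
    port: Number(env.LOKI_PORT ?? 3100),
    batchSize: Number(env.LOKI_BATCH_SIZE ?? 100),
    batchWait: Number(env.LOKI_BATCH_WAIT ?? 5),
    labels,
  };
}
```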
## Useful Loki Queries
Inside Grafana, you can use these LogQL queries to analyze your logs:
1. **All logs from a specific service**:
```
{service="market-data-gateway"}
```
2. **All error logs across all services**:
```
{level="error"}
```
3. **Logs containing specific text**:
```
{service="market-data-gateway"} |= "trade"
```
4. **Count of error logs by service over time**:
```
sum by(service) (count_over_time({level="error"}[5m]))
```
## Testing the Logging Integration
Test the logging integration using Bun:
```cmd
# Run from project root using Bun (current runtime)
bun run tools/test-loki-logging.ts
```
## Architecture
Our logging implementation follows this architecture:
```
┌─────────────────┐     ┌──────────────────┐
│ Trading Services│────►│ @stock-bot/logger│
└─────────────────┘     │   getLogger()    │
                        └────────┬─────────┘
                                 │
                                 ▼
┌────────────────────────────────────────┐
│                  Loki                  │
└────────────────┬───────────────────────┘
                 │
                 ▼
┌────────────────────────────────────────┐
│                Grafana                 │
└────────────────────────────────────────┘
```
## Adding New Dashboards
To create new Grafana dashboards for log visualization:
1. Build your dashboard in the Grafana UI
2. Export it to JSON
3. Add it to `monitoring/grafana/provisioning/dashboards/json/`
4. Restart the monitoring stack
## Troubleshooting
If logs aren't appearing in Grafana:
1. Run the status check script to verify Loki and Grafana are working:
```cmd
tools\check-loki-status.bat
```
2. Check that Loki and Grafana containers are running:
```cmd
docker ps | findstr "loki grafana"
```
3. Verify .env configuration for Loki host and port:
```cmd
type .env | findstr "LOKI_"
```
4. Ensure your service has the latest @stock-bot/logger package
5. Check for errors in the Loki container logs:
```cmd
docker logs stock-bot-loki
```

View file

@ -1,212 +0,0 @@
# MongoDB Client Multi-Database Migration Guide
## Overview
Your MongoDB client has been enhanced to support multiple databases dynamically while maintaining full backward compatibility.
## Key Features Added
### 1. **Dynamic Database Switching**
```typescript
// Set default database (all operations will use this unless overridden)
client.setDefaultDatabase('analytics');
// Get current default database
const currentDb = client.getDefaultDatabase(); // Returns: 'analytics'
```
### 2. **Database Parameter in Methods**
All methods now accept an optional `database` parameter:
```typescript
// Old way (still works - uses default database)
await client.batchUpsert('symbols', data, 'symbol');
// New way (specify database explicitly)
await client.batchUpsert('symbols', data, 'symbol', { database: 'stock' });
```
### 3. **Convenience Methods**
Pre-configured methods for common databases:
```typescript
// Stock database operations
await client.batchUpsertStock('symbols', data, 'symbol');
// Analytics database operations
await client.batchUpsertAnalytics('metrics', data, 'metric_name');
// Trading documents database operations
await client.batchUpsertTrading('orders', data, 'order_id');
```
### 4. **Direct Database Access**
```typescript
// Get specific database instances
const stockDb = client.getDatabase('stock');
const analyticsDb = client.getDatabase('analytics');
// Get collections with database override
const collection = client.getCollection('symbols', 'stock');
```
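The routing behind these calls can be sketched as follows. This is a hypothetical model of the resolution logic only (the class and `resolveNamespace` are invented names; the real client returns driver `Db` and `Collection` objects rather than strings):

```typescript
// Hypothetical sketch: one client, many databases, resolved per call.
class MultiDbClient {
  private defaultDb = 'stock';

  setDefaultDatabase(name: string): void {
    this.defaultDb = name;
  }

  getDefaultDatabase(): string {
    return this.defaultDb;
  }

  // Returns the fully qualified namespace a call would hit:
  // the explicit override wins, otherwise the current default is used.
  resolveNamespace(collection: string, database?: string): string {
    return `${database ?? this.defaultDb}.${collection}`;
  }
}
```

Because resolution happens per call, switching the default never invalidates the underlying connection; only the namespace each subsequent operation targets changes.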
## Migration Steps
### Step 1: No Changes Required (Backward Compatible)
Your existing code continues to work unchanged:
```typescript
// This still works exactly as before
const client = MongoDBClient.getInstance();
await client.connect();
await client.batchUpsert('exchanges', exchangeData, 'exchange_id');
```
### Step 2: Organize Data by Database (Recommended)
Update your data service to use appropriate databases:
```typescript
// In your data service initialization
export class DataService {
private mongoClient = MongoDBClient.getInstance();
async initialize() {
await this.mongoClient.connect();
// Set stock as default for most operations
this.mongoClient.setDefaultDatabase('stock');
}
async saveInteractiveBrokersData(exchanges: any[], symbols: any[]) {
// Stock market data goes to 'stock' database (default)
await this.mongoClient.batchUpsert('exchanges', exchanges, 'exchange_id');
await this.mongoClient.batchUpsert('symbols', symbols, 'symbol');
}
async saveAnalyticsData(performance: any[]) {
// Analytics data goes to 'analytics' database
await this.mongoClient.batchUpsert(
'performance',
performance,
'date',
{ database: 'analytics' }
);
}
}
```
### Step 3: Use Convenience Methods (Optional)
Replace explicit database parameters with convenience methods:
```typescript
// Instead of:
await client.batchUpsert('symbols', data, 'symbol', { database: 'stock' });
// Use:
await client.batchUpsertStock('symbols', data, 'symbol');
```
## Factory Functions
New factory functions are available for easier database management:
```typescript
import {
connectMongoDB,
setDefaultDatabase,
getCurrentDatabase,
getDatabase
} from '@stock-bot/mongodb-client';
// Set default database globally
setDefaultDatabase('analytics');
// Get current default
const current = getCurrentDatabase();
// Get specific database
const stockDb = getDatabase('stock');
```
## Database Recommendations
### Stock Database (`stock`)
- Market data (symbols, exchanges, prices)
- Financial instruments
- Market events
- Real-time data
### Analytics Database (`analytics`)
- Performance metrics
- Calculated indicators
- Reports and dashboards
- Aggregated data
### Trading Documents Database (`trading_documents`)
- Trade orders and executions
- User portfolios
- Transaction logs
- Audit trails
## Example: Updating Your Data Service
```typescript
// Before (still works)
export class DataService {
async saveExchanges(exchanges: any[]) {
const client = MongoDBClient.getInstance();
await client.batchUpsert('exchanges', exchanges, 'exchange_id');
}
}
// After (recommended)
export class DataService {
private mongoClient = MongoDBClient.getInstance();
async initialize() {
await this.mongoClient.connect();
this.mongoClient.setDefaultDatabase('stock'); // Set appropriate default
}
async saveExchanges(exchanges: any[]) {
// Uses default 'stock' database
await this.mongoClient.batchUpsert('exchanges', exchanges, 'exchange_id');
// Or use convenience method
await this.mongoClient.batchUpsertStock('exchanges', exchanges, 'exchange_id');
}
async savePerformanceMetrics(metrics: any[]) {
// Save to analytics database
await this.mongoClient.batchUpsertAnalytics('metrics', metrics, 'metric_name');
}
}
```
## Testing
Your existing tests continue to work. For new multi-database features:
```typescript
import { MongoDBClient } from '@stock-bot/mongodb-client';
const client = MongoDBClient.getInstance();
await client.connect();
// Test database switching
client.setDefaultDatabase('test_db');
expect(client.getDefaultDatabase()).toBe('test_db');
// Test explicit database parameter
await client.batchUpsert('test_collection', data, 'id', { database: 'other_db' });
```
## Benefits
1. **Organized Data**: Separate databases for different data types
2. **Better Performance**: Smaller, focused databases
3. **Easier Maintenance**: Clear data boundaries
4. **Scalability**: Can scale databases independently
5. **Backward Compatibility**: No breaking changes
## Next Steps
1. Update your data service to use appropriate default database
2. Gradually migrate to using specific databases for different data types
3. Consider using convenience methods for cleaner code
4. Update tests to cover multi-database scenarios

View file

@ -1,17 +1,17 @@
#!/usr/bin/env bun
/* eslint-disable no-console */
import { parseArgs } from 'util';
import { join } from 'path';
import { ConfigManager } from './config-manager';
import { appConfigSchema } from './schemas';
import { parseArgs } from 'util';
import { redactSecrets } from './utils/secrets';
import {
validateConfig,
formatValidationResult,
checkDeprecations,
checkRequiredEnvVars,
validateCompleteness
formatValidationResult,
validateCompleteness,
validateConfig,
} from './utils/validation';
import { redactSecrets } from './utils/secrets';
import { ConfigManager } from './config-manager';
import { appConfigSchema } from './schemas';
import type { Environment } from './types';
interface CliOptions {
@ -36,9 +36,7 @@ const REQUIRED_PATHS = [
'database.postgres.database',
];
const REQUIRED_ENV_VARS = [
'NODE_ENV',
];
const REQUIRED_ENV_VARS = ['NODE_ENV'];
const SECRET_PATHS = [
'database.postgres.password',
@ -179,7 +177,6 @@ async function main() {
console.log('No action specified. Use --help for usage information.');
process.exit(1);
}
} catch (error) {
if (values.json) {
console.error(JSON.stringify({ error: String(error) }));

View file

@ -6,14 +6,20 @@ export class ConfigError extends Error {
}
export class ConfigValidationError extends ConfigError {
constructor(message: string, public errors: unknown) {
constructor(
message: string,
public errors: unknown
) {
super(message);
this.name = 'ConfigValidationError';
}
}
export class ConfigLoaderError extends ConfigError {
constructor(message: string, public loader: string) {
constructor(
message: string,
public loader: string
) {
super(`${loader}: ${message}`);
this.name = 'ConfigLoaderError';
}

View file

@ -1,87 +1,105 @@
export * from './base.schema';
export * from './database.schema';
export * from './provider.schema';
export * from './service.schema';
import { z } from 'zod';
import { baseConfigSchema, environmentSchema } from './base.schema';
import { providerConfigSchema, webshareProviderConfigSchema } from './provider.schema';
import { httpConfigSchema, queueConfigSchema } from './service.schema';
export * from './base.schema';
export * from './database.schema';
export * from './provider.schema';
export * from './service.schema';
// Flexible service schema with defaults
const flexibleServiceConfigSchema = z.object({
name: z.string().default('default-service'),
port: z.number().min(1).max(65535).default(3000),
host: z.string().default('0.0.0.0'),
healthCheckPath: z.string().default('/health'),
metricsPath: z.string().default('/metrics'),
shutdownTimeout: z.number().default(30000),
cors: z.object({
enabled: z.boolean().default(true),
origin: z.union([z.string(), z.array(z.string())]).default('*'),
credentials: z.boolean().default(true),
}).default({}),
}).default({});
const flexibleServiceConfigSchema = z
.object({
name: z.string().default('default-service'),
port: z.number().min(1).max(65535).default(3000),
host: z.string().default('0.0.0.0'),
healthCheckPath: z.string().default('/health'),
metricsPath: z.string().default('/metrics'),
shutdownTimeout: z.number().default(30000),
cors: z
.object({
enabled: z.boolean().default(true),
origin: z.union([z.string(), z.array(z.string())]).default('*'),
credentials: z.boolean().default(true),
})
.default({}),
})
.default({});
// Flexible database schema with defaults
const flexibleDatabaseConfigSchema = z.object({
postgres: z.object({
host: z.string().default('localhost'),
port: z.number().default(5432),
database: z.string().default('test_db'),
user: z.string().default('test_user'),
password: z.string().default('test_pass'),
ssl: z.boolean().default(false),
poolSize: z.number().min(1).max(100).default(10),
connectionTimeout: z.number().default(30000),
idleTimeout: z.number().default(10000),
}).default({}),
questdb: z.object({
host: z.string().default('localhost'),
ilpPort: z.number().default(9009),
httpPort: z.number().default(9000),
pgPort: z.number().default(8812),
database: z.string().default('questdb'),
user: z.string().default('admin'),
password: z.string().default('quest'),
bufferSize: z.number().default(65536),
flushInterval: z.number().default(1000),
}).default({}),
mongodb: z.object({
uri: z.string().url().optional(),
host: z.string().default('localhost'),
port: z.number().default(27017),
database: z.string().default('test_mongo'),
user: z.string().optional(),
password: z.string().optional(),
authSource: z.string().default('admin'),
replicaSet: z.string().optional(),
poolSize: z.number().min(1).max(100).default(10),
}).default({}),
dragonfly: z.object({
host: z.string().default('localhost'),
port: z.number().default(6379),
password: z.string().optional(),
db: z.number().min(0).max(15).default(0),
keyPrefix: z.string().optional(),
ttl: z.number().optional(),
maxRetries: z.number().default(3),
retryDelay: z.number().default(100),
}).default({}),
}).default({});
const flexibleDatabaseConfigSchema = z
.object({
postgres: z
.object({
host: z.string().default('localhost'),
port: z.number().default(5432),
database: z.string().default('test_db'),
user: z.string().default('test_user'),
password: z.string().default('test_pass'),
ssl: z.boolean().default(false),
poolSize: z.number().min(1).max(100).default(10),
connectionTimeout: z.number().default(30000),
idleTimeout: z.number().default(10000),
})
.default({}),
questdb: z
.object({
host: z.string().default('localhost'),
ilpPort: z.number().default(9009),
httpPort: z.number().default(9000),
pgPort: z.number().default(8812),
database: z.string().default('questdb'),
user: z.string().default('admin'),
password: z.string().default('quest'),
bufferSize: z.number().default(65536),
flushInterval: z.number().default(1000),
})
.default({}),
mongodb: z
.object({
uri: z.string().url().optional(),
host: z.string().default('localhost'),
port: z.number().default(27017),
database: z.string().default('test_mongo'),
user: z.string().optional(),
password: z.string().optional(),
authSource: z.string().default('admin'),
replicaSet: z.string().optional(),
poolSize: z.number().min(1).max(100).default(10),
})
.default({}),
dragonfly: z
.object({
host: z.string().default('localhost'),
port: z.number().default(6379),
password: z.string().optional(),
db: z.number().min(0).max(15).default(0),
keyPrefix: z.string().optional(),
ttl: z.number().optional(),
maxRetries: z.number().default(3),
retryDelay: z.number().default(100),
})
.default({}),
})
.default({});
// Flexible log schema with defaults (renamed from logging)
const flexibleLogConfigSchema = z.object({
level: z.enum(['trace', 'debug', 'info', 'warn', 'error', 'fatal']).default('info'),
format: z.enum(['json', 'pretty']).default('json'),
hideObject: z.boolean().default(false),
loki: z.object({
enabled: z.boolean().default(false),
host: z.string().default('localhost'),
port: z.number().default(3100),
labels: z.record(z.string()).default({}),
}).optional(),
}).default({});
const flexibleLogConfigSchema = z
.object({
level: z.enum(['trace', 'debug', 'info', 'warn', 'error', 'fatal']).default('info'),
format: z.enum(['json', 'pretty']).default('json'),
hideObject: z.boolean().default(false),
loki: z
.object({
enabled: z.boolean().default(false),
host: z.string().default('localhost'),
port: z.number().default(3100),
labels: z.record(z.string()).default({}),
})
.optional(),
})
.default({});
// Complete application configuration schema
export const appConfigSchema = baseConfigSchema.extend({

View file

@@ -5,10 +5,12 @@ export const baseProviderConfigSchema = z.object({
name: z.string(),
enabled: z.boolean().default(true),
priority: z.number().default(0),
rateLimit: z.object({
maxRequests: z.number().default(100),
windowMs: z.number().default(60000),
}).optional(),
rateLimit: z
.object({
maxRequests: z.number().default(100),
windowMs: z.number().default(60000),
})
.optional(),
timeout: z.number().default(30000),
retries: z.number().default(3),
});

View file

@@ -8,23 +8,27 @@ export const serviceConfigSchema = z.object({
healthCheckPath: z.string().default('/health'),
metricsPath: z.string().default('/metrics'),
shutdownTimeout: z.number().default(30000),
cors: z.object({
enabled: z.boolean().default(true),
origin: z.union([z.string(), z.array(z.string())]).default('*'),
credentials: z.boolean().default(true),
}).default({}),
cors: z
.object({
enabled: z.boolean().default(true),
origin: z.union([z.string(), z.array(z.string())]).default('*'),
credentials: z.boolean().default(true),
})
.default({}),
});
// Logging configuration
export const loggingConfigSchema = z.object({
level: z.enum(['trace', 'debug', 'info', 'warn', 'error', 'fatal']).default('info'),
format: z.enum(['json', 'pretty']).default('json'),
loki: z.object({
enabled: z.boolean().default(false),
host: z.string().default('localhost'),
port: z.number().default(3100),
labels: z.record(z.string()).default({}),
}).optional(),
loki: z
.object({
enabled: z.boolean().default(false),
host: z.string().default('localhost'),
port: z.number().default(3100),
labels: z.record(z.string()).default({}),
})
.optional(),
});
// Queue configuration
@@ -35,15 +39,19 @@ export const queueConfigSchema = z.object({
password: z.string().optional(),
db: z.number().default(1),
}),
defaultJobOptions: z.object({
attempts: z.number().default(3),
backoff: z.object({
type: z.enum(['exponential', 'fixed']).default('exponential'),
delay: z.number().default(1000),
}).default({}),
removeOnComplete: z.number().default(10),
removeOnFail: z.number().default(5),
}).default({}),
defaultJobOptions: z
.object({
attempts: z.number().default(3),
backoff: z
.object({
type: z.enum(['exponential', 'fixed']).default('exponential'),
delay: z.number().default(1000),
})
.default({}),
removeOnComplete: z.number().default(10),
removeOnFail: z.number().default(5),
})
.default({}),
});
// HTTP client configuration
@@ -52,12 +60,16 @@ export const httpConfigSchema = z.object({
retries: z.number().default(3),
retryDelay: z.number().default(1000),
userAgent: z.string().optional(),
proxy: z.object({
enabled: z.boolean().default(false),
url: z.string().url().optional(),
auth: z.object({
username: z.string(),
password: z.string(),
}).optional(),
}).optional(),
proxy: z
.object({
enabled: z.boolean().default(false),
url: z.string().url().optional(),
auth: z
.object({
username: z.string(),
password: z.string(),
})
.optional(),
})
.optional(),
});

View file

@@ -56,20 +56,15 @@ export class SecretValue<T = string> {
* Zod schema for secret values
*/
export const secretSchema = <T extends z.ZodTypeAny>(_schema: T) => {
return z.custom<SecretValue<z.infer<T>>>(
(val) => val instanceof SecretValue,
{
message: 'Expected SecretValue instance',
}
);
return z.custom<SecretValue<z.infer<T>>>(val => val instanceof SecretValue, {
message: 'Expected SecretValue instance',
});
};
/**
* Transform string to SecretValue in Zod schema
*/
export const secretStringSchema = z
.string()
.transform((val) => new SecretValue(val));
export const secretStringSchema = z.string().transform(val => new SecretValue(val));
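Both helpers above wrap raw strings in a `SecretValue`. A minimal sketch of the wrapper's redaction behavior (hypothetical; the actual class is defined elsewhere in this library and may differ):

```typescript
// Hypothetical sketch of the SecretValue pattern: hold the value privately and
// redact it on any implicit stringification or JSON serialization.
class SecretValue<T = string> {
  constructor(private readonly value: T) {}

  // Reveal the wrapped secret explicitly.
  reveal(): T {
    return this.value;
  }

  // Accidental logging shows a placeholder, not the secret.
  toString(): string {
    return '[REDACTED]';
  }

  toJSON(): string {
    return '[REDACTED]';
  }
}

const apiKey = new SecretValue('super-secret');
const logged = `key=${apiKey}`; // template literals call toString()
```

The explicit `reveal()` call makes every use of the plaintext secret easy to grep for.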
/**
* Create a secret value

View file

@@ -17,10 +17,7 @@ export interface ValidationResult {
/**
* Validate configuration against a schema
*/
export function validateConfig<T>(
config: unknown,
schema: z.ZodSchema<T>
): ValidationResult {
export function validateConfig<T>(config: unknown, schema: z.ZodSchema<T>): ValidationResult {
try {
schema.parse(config);
return { valid: true };
@@ -77,9 +74,7 @@ export function checkDeprecations(
/**
* Check for required environment variables
*/
export function checkRequiredEnvVars(
required: string[]
): ValidationResult {
export function checkRequiredEnvVars(required: string[]): ValidationResult {
const errors: ValidationResult['errors'] = [];
for (const envVar of required) {
@@ -169,9 +164,7 @@ export function formatValidationResult(result: ValidationResult): string {
/**
* Create a strict schema that doesn't allow extra properties
*/
export function createStrictSchema<T extends z.ZodRawShape>(
shape: T
): z.ZodObject<T, 'strict'> {
export function createStrictSchema<T extends z.ZodRawShape>(shape: T): z.ZodObject<T, 'strict'> {
return z.object(shape).strict();
}

View file

@@ -1,8 +1,8 @@
import { describe, test, expect, beforeEach } from 'bun:test';
import { beforeEach, describe, expect, test } from 'bun:test';
import { z } from 'zod';
import { ConfigManager } from '../src/config-manager';
import { ConfigLoader } from '../src/types';
import { ConfigValidationError } from '../src/errors';
import { ConfigLoader } from '../src/types';
// Mock loader for testing
class MockLoader implements ConfigLoader {
@@ -178,29 +178,35 @@ describe('ConfigManager', () => {
test('should handle deep merge correctly', async () => {
manager = new ConfigManager({
loaders: [
new MockLoader({
app: {
settings: {
feature1: true,
feature2: false,
nested: {
value: 'base',
new MockLoader(
{
app: {
settings: {
feature1: true,
feature2: false,
nested: {
value: 'base',
},
},
},
},
}, 0),
new MockLoader({
app: {
settings: {
feature2: true,
feature3: true,
nested: {
value: 'override',
extra: 'new',
0
),
new MockLoader(
{
app: {
settings: {
feature2: true,
feature3: true,
nested: {
value: 'override',
extra: 'new',
},
},
},
},
}, 10),
10
),
],
});
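The deep-merge test above expects nested objects from both loaders to be combined, with the higher-priority loader winning on conflicts. A sketch of such a merge (an assumption about ConfigManager's internals, shown only to illustrate the test's expectations):

```typescript
// Priority-based deep merge: the override wins on scalars, but nested plain
// objects are merged recursively rather than replaced wholesale.
type Obj = Record<string, unknown>;

function deepMerge(base: Obj, override: Obj): Obj {
  const out: Obj = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const existing = out[key];
    if (
      value && typeof value === 'object' && !Array.isArray(value) &&
      existing && typeof existing === 'object' && !Array.isArray(existing)
    ) {
      out[key] = deepMerge(existing as Obj, value as Obj);
    } else {
      out[key] = value;
    }
  }
  return out;
}

const merged = deepMerge(
  { app: { settings: { feature1: true, feature2: false, nested: { value: 'base' } } } },
  { app: { settings: { feature2: true, feature3: true, nested: { value: 'override', extra: 'new' } } } }
);
// merged.app.settings keeps feature1 from the base, takes feature2/feature3
// from the override, and merges nested to { value: 'override', extra: 'new' }
```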

View file

@@ -1,10 +1,10 @@
import { describe, test, expect, beforeEach, afterEach } from 'bun:test';
import { existsSync, mkdirSync, rmSync, writeFileSync } from 'fs';
import { join } from 'path';
import { mkdirSync, writeFileSync, rmSync, existsSync } from 'fs';
import { afterEach, beforeEach, describe, expect, test } from 'bun:test';
import { ConfigManager } from '../src/config-manager';
import { FileLoader } from '../src/loaders/file.loader';
import { EnvLoader } from '../src/loaders/env.loader';
import { initializeConfig, initializeServiceConfig, resetConfig } from '../src/index';
import { EnvLoader } from '../src/loaders/env.loader';
import { FileLoader } from '../src/loaders/file.loader';
import { appConfigSchema } from '../src/schemas';
// Test directories setup
@@ -188,7 +188,7 @@ describe('Dynamic Location Config Loading', () => {
// Initialize with custom config path
resetConfig();
const manager = new ConfigManager({
configPath: join(SCENARIOS.appService, 'config')
configPath: join(SCENARIOS.appService, 'config'),
});
const config = await manager.initialize(appConfigSchema);
@@ -217,7 +217,7 @@ function setupTestScenarios() {
version: '1.0.0',
service: {
name: 'monorepo-root',
port: 3000
port: 3000,
},
database: {
postgres: {
@@ -225,25 +225,25 @@ function setupTestScenarios() {
port: 5432,
database: 'test_db',
user: 'test_user',
password: 'test_pass'
password: 'test_pass',
},
questdb: {
host: 'localhost',
ilpPort: 9009
ilpPort: 9009,
},
mongodb: {
host: 'localhost',
port: 27017,
database: 'test_mongo'
database: 'test_mongo',
},
dragonfly: {
host: 'localhost',
port: 6379
}
port: 6379,
},
},
logging: {
level: 'info'
}
level: 'info',
},
};
writeFileSync(
@@ -261,13 +261,13 @@ function setupTestScenarios() {
name: 'test-service',
database: {
postgres: {
host: 'service-db'
}
host: 'service-db',
},
},
service: {
name: 'test-service',
port: 4000
}
port: 4000,
},
};
writeFileSync(
@@ -286,8 +286,8 @@ function setupTestScenarios() {
version: '2.0.0',
service: {
name: 'test-lib',
port: 5000
}
port: 5000,
},
};
writeFileSync(
@@ -305,13 +305,13 @@ function setupTestScenarios() {
name: 'deep-service',
database: {
postgres: {
host: 'deep-db'
}
host: 'deep-db',
},
},
service: {
name: 'deep-service',
port: 6000
}
port: 6000,
},
};
writeFileSync(
@@ -330,7 +330,7 @@ function setupTestScenarios() {
version: '0.1.0',
service: {
name: 'standalone-app',
port: 7000
port: 7000,
},
database: {
postgres: {
@@ -338,25 +338,25 @@ function setupTestScenarios() {
port: 5432,
database: 'standalone_db',
user: 'standalone_user',
password: 'standalone_pass'
password: 'standalone_pass',
},
questdb: {
host: 'localhost',
ilpPort: 9009
ilpPort: 9009,
},
mongodb: {
host: 'localhost',
port: 27017,
database: 'standalone_mongo'
database: 'standalone_mongo',
},
dragonfly: {
host: 'localhost',
port: 6379
}
port: 6379,
},
},
logging: {
level: 'debug'
}
level: 'debug',
},
};
writeFileSync(

View file

@@ -1,12 +1,12 @@
import { describe, test, expect, beforeEach, afterEach } from 'bun:test';
import { chmodSync, existsSync, mkdirSync, rmSync, writeFileSync } from 'fs';
import { join } from 'path';
import { mkdirSync, writeFileSync, rmSync, existsSync, chmodSync } from 'fs';
import { afterEach, beforeEach, describe, expect, test } from 'bun:test';
import { ConfigManager } from '../src/config-manager';
import { FileLoader } from '../src/loaders/file.loader';
import { EnvLoader } from '../src/loaders/env.loader';
import { initializeConfig, initializeServiceConfig, resetConfig } from '../src/index';
import { appConfigSchema } from '../src/schemas';
import { ConfigError, ConfigValidationError } from '../src/errors';
import { initializeConfig, initializeServiceConfig, resetConfig } from '../src/index';
import { EnvLoader } from '../src/loaders/env.loader';
import { FileLoader } from '../src/loaders/file.loader';
import { appConfigSchema } from '../src/schemas';
const TEST_DIR = join(__dirname, 'edge-case-tests');
@@ -39,7 +39,7 @@ describe('Edge Cases and Error Handling', () => {
test('should handle missing .env files gracefully', async () => {
// No .env file exists
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
// Should not throw even without .env file
@@ -52,13 +52,10 @@ describe('Edge Cases and Error Handling', () => {
mkdirSync(configDir, { recursive: true });
// Create corrupted JSON file
writeFileSync(
join(configDir, 'development.json'),
'{ "app": { "name": "test", invalid json }'
);
writeFileSync(join(configDir, 'development.json'), '{ "app": { "name": "test", invalid json }');
const manager = new ConfigManager({
loaders: [new FileLoader(configDir, 'development')]
loaders: [new FileLoader(configDir, 'development')],
});
// Should throw error for invalid JSON
@@ -69,7 +66,7 @@ describe('Edge Cases and Error Handling', () => {
const nonExistentDir = join(TEST_DIR, 'nonexistent');
const manager = new ConfigManager({
loaders: [new FileLoader(nonExistentDir, 'development')]
loaders: [new FileLoader(nonExistentDir, 'development')],
});
// Should not throw, should return empty config
@@ -89,7 +86,7 @@ describe('Edge Cases and Error Handling', () => {
chmodSync(configFile, 0o000);
const manager = new ConfigManager({
loaders: [new FileLoader(configDir, 'development')]
loaders: [new FileLoader(configDir, 'development')],
});
// Should handle permission error gracefully
@@ -116,19 +113,16 @@ describe('Edge Cases and Error Handling', () => {
app: {
name: 'test',
settings: {
ref: 'settings'
}
}
ref: 'settings',
},
},
})
);
process.env.APP_SETTINGS_NESTED_VALUE = 'deep-value';
const manager = new ConfigManager({
loaders: [
new FileLoader(configDir, 'development'),
new EnvLoader('')
]
loaders: [new FileLoader(configDir, 'development'), new EnvLoader('')],
});
const config = await manager.initialize(appConfigSchema);
@@ -140,7 +134,7 @@ describe('Edge Cases and Error Handling', () => {
process.env.LEVEL1_LEVEL2_LEVEL3_LEVEL4_LEVEL5_VALUE = 'deep-value';
const manager = new ConfigManager({
loaders: [new EnvLoader('', { nestedDelimiter: '_' })]
loaders: [new EnvLoader('', { nestedDelimiter: '_' })],
});
const config = await manager.initialize();
@@ -159,8 +153,8 @@ describe('Edge Cases and Error Handling', () => {
JSON.stringify({
database: {
host: 'localhost',
port: 5432
}
port: 5432,
},
})
);
@@ -168,10 +162,7 @@ describe('Edge Cases and Error Handling', () => {
process.env.DATABASE = 'simple-string';
const manager = new ConfigManager({
loaders: [
new FileLoader(configDir, 'development'),
new EnvLoader('')
]
loaders: [new FileLoader(configDir, 'development'), new EnvLoader('')],
});
const config = await manager.initialize(appConfigSchema);
@@ -231,7 +222,7 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}}
process.chdir(TEST_DIR);
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
const config = await manager.initialize();
@@ -250,7 +241,7 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}}
writeFileSync(join(configDir, 'development.json'), '{}');
const manager = new ConfigManager({
loaders: [new FileLoader(configDir, 'development')]
loaders: [new FileLoader(configDir, 'development')],
});
const config = await manager.initialize(appConfigSchema);
@@ -260,7 +251,7 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}}
test('should handle config initialization without schema', async () => {
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
// Initialize without schema
@@ -271,7 +262,7 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}}
test('should handle accessing config before initialization', () => {
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
// Should throw error when accessing uninitialized config
@@ -282,7 +273,7 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}}
test('should handle invalid config paths in getValue', async () => {
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
const config = await manager.initialize(appConfigSchema);
@@ -301,7 +292,7 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}}
process.env.EMPTY_VALUE = '';
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
const config = await manager.initialize();
@@ -318,7 +309,7 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}}
process.env.SERVICE_PORT = 'not-a-number'; // This should cause validation to fail
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
await expect(manager.initialize(appConfigSchema)).rejects.toThrow(ConfigValidationError);
@@ -326,7 +317,7 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}}
test('should handle config updates with invalid schema', async () => {
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
await manager.initialize(appConfigSchema);
@@ -335,8 +326,8 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}}
expect(() => {
manager.set({
service: {
port: 'invalid-port' as any
}
port: 'invalid-port' as any,
},
});
}).toThrow(ConfigValidationError);
});
@@ -356,8 +347,8 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}}
const manager = new ConfigManager({
loaders: [
new FileLoader(configDir, 'development'), // priority 50
new EnvLoader('') // priority 100
]
new EnvLoader(''), // priority 100
],
});
const config = await manager.initialize(appConfigSchema);
@@ -372,7 +363,7 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}}
// This should not cause the loader to fail
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
const config = await manager.initialize();

View file

@@ -1,18 +1,18 @@
import { describe, test, expect, beforeEach, afterEach } from 'bun:test';
import { writeFileSync, mkdirSync, rmSync } from 'fs';
import { mkdirSync, rmSync, writeFileSync } from 'fs';
import { join } from 'path';
import { afterEach, beforeEach, describe, expect, test } from 'bun:test';
import {
initializeConfig,
getConfig,
getConfigManager,
resetConfig,
getDatabaseConfig,
getServiceConfig,
getLoggingConfig,
getProviderConfig,
getServiceConfig,
initializeConfig,
isDevelopment,
isProduction,
isTest,
resetConfig,
} from '../src';
describe('Config Module', () => {
@@ -71,10 +71,7 @@ describe('Config Module', () => {
environment: 'test',
};
writeFileSync(
join(testConfigDir, 'default.json'),
JSON.stringify(config, null, 2)
);
writeFileSync(join(testConfigDir, 'default.json'), JSON.stringify(config, null, 2));
});
afterEach(() => {
@@ -178,10 +175,7 @@ describe('Config Module', () => {
},
};
writeFileSync(
join(testConfigDir, 'production.json'),
JSON.stringify(prodConfig, null, 2)
);
writeFileSync(join(testConfigDir, 'production.json'), JSON.stringify(prodConfig, null, 2));
const config = await initializeConfig(testConfigDir);

View file

@@ -1,6 +1,6 @@
import { describe, test, expect, beforeEach, afterEach } from 'bun:test';
import { writeFileSync, mkdirSync, rmSync } from 'fs';
import { mkdirSync, rmSync, writeFileSync } from 'fs';
import { join } from 'path';
import { afterEach, beforeEach, describe, expect, test } from 'bun:test';
import { EnvLoader } from '../src/loaders/env.loader';
import { FileLoader } from '../src/loaders/file.loader';
@@ -73,18 +73,18 @@ describe('EnvLoader', () => {
const loader = new EnvLoader('TEST_', {
parseValues: true,
nestedDelimiter: '__'
nestedDelimiter: '__',
});
const config = await loader.load();
expect(config.APP).toEqual({
NAME: 'nested-app',
SETTINGS: {
ENABLED: true
}
ENABLED: true,
},
});
expect(config.DATABASE).toEqual({
HOST: 'localhost'
HOST: 'localhost',
});
});
});
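The EnvLoader assertions above show `TEST_`-prefixed variables with a `__` delimiter expanding into nested objects, with `parseValues` coercing `'true'` into a boolean. A hypothetical sketch of that expansion (the real loader's implementation may differ):

```typescript
// Expand prefixed, delimiter-separated env vars into a nested object,
// coercing obvious booleans and numbers (mirroring parseValues: true).
function parseNestedEnv(
  env: Record<string, string>,
  prefix: string,
  delimiter: string
): Record<string, unknown> {
  const result: Record<string, unknown> = {};
  for (const [rawKey, rawValue] of Object.entries(env)) {
    if (!rawKey.startsWith(prefix)) continue;
    const path = rawKey.slice(prefix.length).split(delimiter);
    let value: unknown = rawValue;
    if (rawValue === 'true') value = true;
    else if (rawValue === 'false') value = false;
    else if (rawValue !== '' && !Number.isNaN(Number(rawValue))) value = Number(rawValue);
    // Walk/create intermediate objects, then set the leaf.
    let cursor = result;
    for (const segment of path.slice(0, -1)) {
      cursor[segment] = cursor[segment] ?? {};
      cursor = cursor[segment] as Record<string, unknown>;
    }
    cursor[path[path.length - 1]] = value;
  }
  return result;
}

const envConfig = parseNestedEnv(
  {
    TEST_APP__NAME: 'nested-app',
    TEST_APP__SETTINGS__ENABLED: 'true',
    TEST_DATABASE__HOST: 'localhost',
  },
  'TEST_',
  '__'
);
// envConfig.APP → { NAME: 'nested-app', SETTINGS: { ENABLED: true } }
```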
@@ -103,13 +103,10 @@ describe('FileLoader', () => {
test('should load JSON configuration file', async () => {
const config = {
app: { name: 'file-app', version: '1.0.0' },
database: { host: 'localhost', port: 5432 }
database: { host: 'localhost', port: 5432 },
};
writeFileSync(
join(testDir, 'default.json'),
JSON.stringify(config, null, 2)
);
writeFileSync(join(testDir, 'default.json'), JSON.stringify(config, null, 2));
const loader = new FileLoader(testDir);
const loaded = await loader.load();
@@ -120,30 +117,24 @@ describe('FileLoader', () => {
test('should load environment-specific configuration', async () => {
const defaultConfig = {
app: { name: 'app', port: 3000 },
database: { host: 'localhost' }
database: { host: 'localhost' },
};
const prodConfig = {
app: { port: 8080 },
database: { host: 'prod-db' }
database: { host: 'prod-db' },
};
writeFileSync(
join(testDir, 'default.json'),
JSON.stringify(defaultConfig, null, 2)
);
writeFileSync(join(testDir, 'default.json'), JSON.stringify(defaultConfig, null, 2));
writeFileSync(
join(testDir, 'production.json'),
JSON.stringify(prodConfig, null, 2)
);
writeFileSync(join(testDir, 'production.json'), JSON.stringify(prodConfig, null, 2));
const loader = new FileLoader(testDir, 'production');
const loaded = await loader.load();
expect(loaded).toEqual({
app: { name: 'app', port: 8080 },
database: { host: 'prod-db' }
database: { host: 'prod-db' },
});
});
@@ -155,10 +146,7 @@ describe('FileLoader', () => {
});
test('should throw on invalid JSON', async () => {
writeFileSync(
join(testDir, 'default.json'),
'invalid json content'
);
writeFileSync(join(testDir, 'default.json'), 'invalid json content');
const loader = new FileLoader(testDir);
@@ -168,10 +156,7 @@ describe('FileLoader', () => {
test('should support custom configuration', async () => {
const config = { custom: 'value' };
writeFileSync(
join(testDir, 'custom.json'),
JSON.stringify(config, null, 2)
);
writeFileSync(join(testDir, 'custom.json'), JSON.stringify(config, null, 2));
const loader = new FileLoader(testDir);
const loaded = await loader.loadFile('custom.json');

View file

@@ -1,11 +1,11 @@
import { describe, test, expect, beforeEach, afterEach } from 'bun:test';
import { existsSync, mkdirSync, rmSync, writeFileSync } from 'fs';
import { join } from 'path';
import { afterEach, beforeEach, describe, expect, test } from 'bun:test';
import { ConfigManager } from '../src/config-manager';
import { getProviderConfig, resetConfig } from '../src/index';
import { EnvLoader } from '../src/loaders/env.loader';
import { FileLoader } from '../src/loaders/file.loader';
import { appConfigSchema } from '../src/schemas';
import { resetConfig, getProviderConfig } from '../src/index';
import { join } from 'path';
import { mkdirSync, writeFileSync, rmSync, existsSync } from 'fs';
const TEST_DIR = join(__dirname, 'provider-tests');
@@ -44,7 +44,7 @@ describe('Provider Configuration Tests', () => {
process.env.WEBSHARE_ENABLED = 'true';
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
const config = await manager.initialize(appConfigSchema);
@@ -64,7 +64,7 @@ describe('Provider Configuration Tests', () => {
process.env.EOD_PRIORITY = '10';
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
const config = await manager.initialize(appConfigSchema);
@@ -88,7 +88,7 @@ describe('Provider Configuration Tests', () => {
process.env.IB_PRIORITY = '5';
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
const config = await manager.initialize(appConfigSchema);
@@ -113,7 +113,7 @@ describe('Provider Configuration Tests', () => {
process.env.QM_PRIORITY = '15';
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
const config = await manager.initialize(appConfigSchema);
@@ -136,7 +136,7 @@ describe('Provider Configuration Tests', () => {
process.env.YAHOO_PRIORITY = '20';
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
const config = await manager.initialize(appConfigSchema);
@@ -156,24 +156,28 @@ describe('Provider Configuration Tests', () => {
writeFileSync(
join(configDir, 'development.json'),
JSON.stringify({
providers: {
eod: {
name: 'EOD Historical Data',
apiKey: 'file-eod-key',
baseUrl: 'https://file.eod.com/api',
tier: 'free',
enabled: false,
priority: 1
JSON.stringify(
{
providers: {
eod: {
name: 'EOD Historical Data',
apiKey: 'file-eod-key',
baseUrl: 'https://file.eod.com/api',
tier: 'free',
enabled: false,
priority: 1,
},
yahoo: {
name: 'Yahoo Finance',
baseUrl: 'https://file.yahoo.com',
enabled: true,
priority: 2,
},
},
yahoo: {
name: 'Yahoo Finance',
baseUrl: 'https://file.yahoo.com',
enabled: true,
priority: 2
}
}
}, null, 2)
},
null,
2
)
);
// Set environment variables that should override file config
@@ -183,10 +187,7 @@ describe('Provider Configuration Tests', () => {
process.env.YAHOO_PRIORITY = '25';
const manager = new ConfigManager({
loaders: [
new FileLoader(configDir, 'development'),
new EnvLoader('')
]
loaders: [new FileLoader(configDir, 'development'), new EnvLoader('')],
});
const config = await manager.initialize(appConfigSchema);
@@ -211,7 +212,7 @@ describe('Provider Configuration Tests', () => {
process.env.IB_GATEWAY_PORT = 'not-a-number'; // Should be a number
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
// Should throw validation error
@@ -226,7 +227,7 @@ describe('Provider Configuration Tests', () => {
process.env.WEBSHARE_ENABLED = 'true';
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
await manager.initialize(appConfigSchema);
@@ -241,7 +242,9 @@ describe('Provider Configuration Tests', () => {
expect((webshareConfig as any).apiKey).toBe('test-webshare-key');
// Test non-existent provider
expect(() => getProviderConfig('nonexistent')).toThrow('Provider configuration not found: nonexistent');
expect(() => getProviderConfig('nonexistent')).toThrow(
'Provider configuration not found: nonexistent'
);
});
test('should handle boolean string parsing correctly', async () => {
@@ -253,7 +256,7 @@ describe('Provider Configuration Tests', () => {
process.env.WEBSHARE_ENABLED = 'yes'; // Should be treated as string, not boolean
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
const config = await manager.initialize(appConfigSchema);
@@ -272,7 +275,7 @@ describe('Provider Configuration Tests', () => {
process.env.IB_GATEWAY_CLIENT_ID = '999';
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
const config = await manager.initialize(appConfigSchema);
@@ -302,7 +305,7 @@ YAHOO_BASE_URL=https://env-file.yahoo.com
process.chdir(TEST_DIR);
const manager = new ConfigManager({
loaders: [new EnvLoader('')]
loaders: [new EnvLoader('')],
});
const config = await manager.initialize(appConfigSchema);

View file

@@ -1,6 +1,6 @@
import { afterEach, beforeEach, describe, expect, test } from 'bun:test';
import { existsSync, mkdirSync, rmSync, writeFileSync } from 'fs';
import { join } from 'path';
import { afterEach, beforeEach, describe, expect, test } from 'bun:test';
import {
getConfig,
getDatabaseConfig,
@@ -11,7 +11,7 @@ import {
isDevelopment,
isProduction,
isTest,
resetConfig
resetConfig,
} from '../src/index';
const TEST_DIR = join(__dirname, 'real-usage-tests');
@@ -168,7 +168,9 @@ describe('Real Usage Scenarios', () => {
const config = await initializeServiceConfig();
// Should throw for non-existent providers
expect(() => getProviderConfig('nonexistent')).toThrow('Provider configuration not found: nonexistent');
expect(() => getProviderConfig('nonexistent')).toThrow(
'Provider configuration not found: nonexistent'
);
// Should work for providers that exist but might not be configured
// (they should have defaults from schema)
@@ -209,8 +211,8 @@ describe('Real Usage Scenarios', () => {
// Update config at runtime (useful for testing)
configManager.set({
service: {
port: 9999
}
port: 9999,
},
});
const updatedConfig = getConfig();
@@ -263,7 +265,7 @@ function setupRealUsageScenarios() {
development: {
app: {
name: 'stock-bot-monorepo',
version: '1.0.0'
version: '1.0.0',
},
database: {
postgres: {
@@ -271,116 +273,125 @@ function setupRealUsageScenarios() {
port: 5432,
database: 'trading_bot',
username: 'trading_user',
password: 'trading_pass_dev'
password: 'trading_pass_dev',
},
questdb: {
host: 'localhost',
port: 9009,
database: 'questdb'
database: 'questdb',
},
mongodb: {
host: 'localhost',
port: 27017,
database: 'stock'
database: 'stock',
},
dragonfly: {
host: 'localhost',
port: 6379
}
port: 6379,
},
},
logging: {
level: 'info',
format: 'json'
format: 'json',
},
providers: {
yahoo: {
name: 'Yahoo Finance',
enabled: true,
priority: 1,
baseUrl: 'https://query1.finance.yahoo.com'
baseUrl: 'https://query1.finance.yahoo.com',
},
eod: {
name: 'EOD Historical Data',
enabled: false,
priority: 2,
apiKey: 'demo-api-key',
baseUrl: 'https://eodhistoricaldata.com/api'
}
}
baseUrl: 'https://eodhistoricaldata.com/api',
},
},
},
production: {
logging: {
level: 'warn'
level: 'warn',
},
database: {
postgres: {
host: 'prod-postgres.internal',
port: 5432
}
}
port: 5432,
},
},
},
test: {
logging: {
level: 'debug'
level: 'debug',
},
database: {
postgres: {
database: 'trading_bot_test'
}
}
}
database: 'trading_bot_test',
},
},
},
};
Object.entries(rootConfigs).forEach(([env, config]) => {
writeFileSync(
join(scenarios.root, 'config', `${env}.json`),
JSON.stringify(config, null, 2)
);
writeFileSync(join(scenarios.root, 'config', `${env}.json`), JSON.stringify(config, null, 2));
});
// Data service config
writeFileSync(
join(scenarios.dataService, 'config', 'development.json'),
JSON.stringify({
app: {
name: 'data-ingestion'
JSON.stringify(
{
app: {
name: 'data-ingestion',
},
service: {
name: 'data-ingestion',
port: 3001,
workers: 2,
},
},
service: {
name: 'data-ingestion',
port: 3001,
workers: 2
}
}, null, 2)
null,
2
)
);
// Web API config
writeFileSync(
join(scenarios.webApi, 'config', 'development.json'),
JSON.stringify({
app: {
name: 'web-api'
JSON.stringify(
{
app: {
name: 'web-api',
},
service: {
name: 'web-api',
port: 4000,
cors: {
origin: ['http://localhost:3000', 'http://localhost:4200'],
},
},
},
service: {
name: 'web-api',
port: 4000,
cors: {
origin: ['http://localhost:3000', 'http://localhost:4200']
}
}
}, null, 2)
null,
2
)
);
// Cache lib config
writeFileSync(
join(scenarios.cacheLib, 'config', 'development.json'),
JSON.stringify({
app: {
name: 'cache-lib'
JSON.stringify(
{
app: {
name: 'cache-lib',
},
service: {
name: 'cache-lib',
},
},
service: {
name: 'cache-lib'
}
}, null, 2)
null,
2
)
);
// Root .env file

View file

@@ -6,6 +6,5 @@
"composite": true
},
"include": ["src/**/*"],
"references": [
]
"references": []
}

View file

@@ -3,6 +3,8 @@
* Creates a decoupled, reusable dependency injection container
*/
import { asFunction, asValue, createContainer, InjectionMode, type AwilixContainer } from 'awilix';
import { z } from 'zod';
import { Browser } from '@stock-bot/browser';
import { createCache, type CacheProvider } from '@stock-bot/cache';
import type { IServiceContainer } from '@stock-bot/handlers';
@@ -12,8 +14,6 @@ import { PostgreSQLClient } from '@stock-bot/postgres';
import { ProxyManager } from '@stock-bot/proxy';
import { QuestDBClient } from '@stock-bot/questdb';
import { type QueueManager } from '@stock-bot/queue';
import { asFunction, asValue, createContainer, InjectionMode, type AwilixContainer } from 'awilix';
import { z } from 'zod';
// Configuration schema with validation
const appConfigSchema = z.object({
@@ -38,22 +38,28 @@ const appConfigSchema = z.object({
user: z.string(),
password: z.string(),
}),
questdb: z.object({
enabled: z.boolean().optional(),
host: z.string(),
httpPort: z.number().optional(),
pgPort: z.number().optional(),
influxPort: z.number().optional(),
database: z.string().optional(),
}).optional(),
proxy: z.object({
cachePrefix: z.string().optional(),
ttl: z.number().optional(),
}).optional(),
browser: z.object({
headless: z.boolean().optional(),
timeout: z.number().optional(),
}).optional(),
questdb: z
.object({
enabled: z.boolean().optional(),
host: z.string(),
httpPort: z.number().optional(),
pgPort: z.number().optional(),
influxPort: z.number().optional(),
database: z.string().optional(),
})
.optional(),
proxy: z
.object({
cachePrefix: z.string().optional(),
ttl: z.number().optional(),
})
.optional(),
browser: z
.object({
headless: z.boolean().optional(),
timeout: z.number().optional(),
})
.optional(),
});
export type AppConfig = z.infer<typeof appConfigSchema>;
@@ -99,7 +105,9 @@ export function createServiceContainer(rawConfig: unknown): AwilixContainer<Serv
redisConfig: asValue(config.redis),
mongoConfig: asValue(config.mongodb),
postgresConfig: asValue(config.postgres),
questdbConfig: asValue(config.questdb || { host: 'localhost', httpPort: 9000, pgPort: 8812, influxPort: 9009 }),
questdbConfig: asValue(
config.questdb || { host: 'localhost', httpPort: 9000, pgPort: 8812, influxPort: 9009 }
),
// Core services with dependency injection
logger: asFunction(() => getLogger('app')).singleton(),
@@ -126,11 +134,7 @@ export function createServiceContainer(rawConfig: unknown): AwilixContainer<Serv
logger.warn('Cache is disabled, ProxyManager will have limited functionality');
return null;
}
const manager = new ProxyManager(
cache,
config.proxy || {},
logger
);
const manager = new ProxyManager(cache, config.proxy || {}, logger);
return manager;
}).singleton();
@@ -197,7 +201,7 @@ export function createServiceContainer(rawConfig: unknown): AwilixContainer<Serv
return QueueManager.initialize({
redis: { host: redisConfig.host, port: redisConfig.port, db: redisConfig.db },
enableScheduledJobs: true,
delayWorkerStart: true // We'll start workers manually
delayWorkerStart: true, // We'll start workers manually
});
}).singleton();
@@ -207,16 +211,19 @@ export function createServiceContainer(rawConfig: unknown): AwilixContainer<Serv
}).singleton();
// Build the IServiceContainer for handlers
registrations.serviceContainer = asFunction((cradle) => ({
logger: cradle.logger,
cache: cradle.cache,
proxy: cradle.proxyManager,
browser: cradle.browser,
mongodb: cradle.mongoClient,
postgres: cradle.postgresClient,
questdb: cradle.questdbClient,
queue: cradle.queueManager,
} as IServiceContainer)).singleton();
registrations.serviceContainer = asFunction(
cradle =>
({
logger: cradle.logger,
cache: cradle.cache,
proxy: cradle.proxyManager,
browser: cradle.browser,
mongodb: cradle.mongoClient,
postgres: cradle.postgresClient,
questdb: cradle.questdbClient,
queue: cradle.queueManager,
}) as IServiceContainer
).singleton();
container.register(registrations);
return container;


@@ -9,5 +9,5 @@ export {
initializeServices,
type AppConfig,
type ServiceCradle,
type ServiceContainer
type ServiceContainer,
} from './awilix-container';


@@ -3,6 +3,7 @@
*/
import { getLogger, type Logger } from '@stock-bot/logger';
interface ServiceResolver {
resolve<T>(serviceName: string): T;
resolveAsync<T>(serviceName: string): Promise<T>;
@@ -30,10 +31,12 @@ export class OperationContext {
this.traceId = options.traceId || this.generateTraceId();
this.startTime = new Date();
this.logger = options.parentLogger || getLogger(`${options.handlerName}:${options.operationName}`, {
traceId: this.traceId,
metadata: this.metadata,
});
this.logger =
options.parentLogger ||
getLogger(`${options.handlerName}:${options.operationName}`, {
traceId: this.traceId,
metadata: this.metadata,
});
}
/**


@@ -20,8 +20,8 @@ export class PoolSizeCalculator {
// Handler-level defaults
'batch-import': { min: 10, max: 100, idle: 20 },
'real-time': { min: 2, max: 10, idle: 3 },
'analytics': { min: 5, max: 30, idle: 10 },
'reporting': { min: 3, max: 20, idle: 5 },
analytics: { min: 5, max: 30, idle: 10 },
reporting: { min: 3, max: 20, idle: 5 },
};
static calculate(
@@ -73,7 +73,9 @@ export class PoolSizeCalculator {
const recommendedSize = Math.ceil(minConnections * 1.2);
// Ensure we meet target latency
const latencyBasedSize = Math.ceil(expectedConcurrency * (averageQueryTimeMs / targetLatencyMs));
const latencyBasedSize = Math.ceil(
expectedConcurrency * (averageQueryTimeMs / targetLatencyMs)
);
return Math.max(recommendedSize, latencyBasedSize, 2); // Minimum 2 connections
}


@@ -62,7 +62,10 @@ export interface ConnectionFactory {
createPostgreSQL(config: PostgreSQLPoolConfig): Promise<ConnectionPool<any>>;
createCache(config: CachePoolConfig): Promise<ConnectionPool<any>>;
createQueue(config: QueuePoolConfig): Promise<ConnectionPool<any>>;
getPool(type: 'mongodb' | 'postgres' | 'cache' | 'queue', name: string): ConnectionPool<any> | undefined;
getPool(
type: 'mongodb' | 'postgres' | 'cache' | 'queue',
name: string
): ConnectionPool<any> | undefined;
listPools(): Array<{ type: string; name: string; metrics: PoolMetrics }>;
disposeAll(): Promise<void>;
}


@@ -1,8 +1,13 @@
/**
* Test DI library functionality
*/
import { test, expect, describe } from 'bun:test';
import { ServiceContainer, ConnectionFactory, OperationContext, PoolSizeCalculator } from '../src/index';
import { describe, expect, test } from 'bun:test';
import {
ConnectionFactory,
OperationContext,
PoolSizeCalculator,
ServiceContainer,
} from '../src/index';
describe('DI Library', () => {
test('ServiceContainer - sync resolution', () => {


@@ -10,8 +10,5 @@
},
"include": ["src/**/*.ts"],
"exclude": ["node_modules", "dist", "test"],
"references": [
{ "path": "../config" },
{ "path": "../logger" }
]
"references": [{ "path": "../config" }, { "path": "../logger" }]
}


@@ -1,7 +1,11 @@
import { getLogger } from '@stock-bot/logger';
import { createJobHandler, handlerRegistry, type HandlerConfigWithSchedule } from '@stock-bot/types';
import { fetch } from '@stock-bot/utils';
import type { Collection } from 'mongodb';
import { getLogger } from '@stock-bot/logger';
import {
createJobHandler,
handlerRegistry,
type HandlerConfigWithSchedule,
} from '@stock-bot/types';
import { fetch } from '@stock-bot/utils';
import type { IServiceContainer } from '../types/service-container';
import type { ExecutionContext, IHandler } from '../types/types';
@@ -35,7 +39,8 @@ export abstract class BaseHandler implements IHandler {
// Read handler name from decorator first, then fallback to parameter or class name
const constructor = this.constructor as any;
this.handlerName = constructor.__handlerName || handlerName || this.constructor.name.toLowerCase();
this.handlerName =
constructor.__handlerName || handlerName || this.constructor.name.toLowerCase();
}
/**
@@ -50,7 +55,7 @@ export abstract class BaseHandler implements IHandler {
this.logger.debug('Handler execute called', {
handler: this.handlerName,
operation,
availableOperations: operations.map((op: any) => ({ name: op.name, method: op.method }))
availableOperations: operations.map((op: any) => ({ name: op.name, method: op.method })),
});
// Find the operation metadata
@@ -58,7 +63,7 @@ export abstract class BaseHandler implements IHandler {
if (!operationMeta) {
this.logger.error('Operation not found', {
requestedOperation: operation,
availableOperations: operations.map((op: any) => op.name)
availableOperations: operations.map((op: any) => op.name),
});
throw new Error(`Unknown operation: ${operation}`);
}
@@ -71,7 +76,7 @@ export abstract class BaseHandler implements IHandler {
this.logger.debug('Executing operation method', {
operation,
method: operationMeta.method
method: operationMeta.method,
});
return await method.call(this, input, context);
@@ -85,23 +90,25 @@ export abstract class BaseHandler implements IHandler {
const jobData = {
handler: this.handlerName,
operation,
payload
payload,
};
await queue.add(operation, jobData, { delay });
}
/**
* Create execution context for operations
*/
protected createExecutionContext(type: 'http' | 'queue' | 'scheduled', metadata: Record<string, any> = {}): ExecutionContext {
protected createExecutionContext(
type: 'http' | 'queue' | 'scheduled',
metadata: Record<string, any> = {}
): ExecutionContext {
return {
type,
metadata: {
...metadata,
timestamp: Date.now(),
traceId: `${this.constructor.name}-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`
}
traceId: `${this.constructor.name}-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`,
},
};
}
@@ -152,7 +159,11 @@ export abstract class BaseHandler implements IHandler {
/**
* Schedule operation with delay in seconds
*/
protected async scheduleIn(operation: string, payload: unknown, delaySeconds: number): Promise<void> {
protected async scheduleIn(
operation: string,
payload: unknown,
delaySeconds: number
): Promise<void> {
return this.scheduleOperation(operation, payload, delaySeconds * 1000);
}
@@ -176,7 +187,7 @@ export abstract class BaseHandler implements IHandler {
method: 'POST',
body: JSON.stringify(data),
headers: { 'Content-Type': 'application/json', ...options?.headers },
logger: this.logger
logger: this.logger,
}),
put: (url: string, data?: any, options?: any) =>
fetch(url, {
@@ -184,7 +195,7 @@ export abstract class BaseHandler implements IHandler {
method: 'PUT',
body: JSON.stringify(data),
headers: { 'Content-Type': 'application/json', ...options?.headers },
logger: this.logger
logger: this.logger,
}),
delete: (url: string, options?: any) =>
fetch(url, { ...options, method: 'DELETE', logger: this.logger }),
@@ -220,10 +231,10 @@ export abstract class BaseHandler implements IHandler {
// Create operation handlers from decorator metadata
const operationHandlers: Record<string, any> = {};
for (const op of operations) {
operationHandlers[op.name] = createJobHandler(async (payload) => {
operationHandlers[op.name] = createJobHandler(async payload => {
const context: ExecutionContext = {
type: 'queue',
metadata: { source: 'queue', timestamp: Date.now() }
metadata: { source: 'queue', timestamp: Date.now() },
};
return await this.execute(op.name, payload, context);
});
@@ -257,8 +268,8 @@ export abstract class BaseHandler implements IHandler {
scheduledJobs: scheduledJobs.map((job: any) => ({
operation: job.operation,
cronPattern: job.cronPattern,
immediately: job.immediately
}))
immediately: job.immediately,
})),
});
}
@@ -278,7 +289,6 @@ export abstract class BaseHandler implements IHandler {
async onDispose?(): Promise<void>;
}
/**
* Specialized handler for operations that have scheduled jobs
*/


@@ -5,10 +5,7 @@
* @param name Handler name for registration
*/
export function Handler(name: string) {
return function <T extends { new (...args: any[]): {} }>(
target: T,
_context?: any
) {
return function <T extends { new (...args: any[]): {} }>(target: T, _context?: any) {
// Store handler name on the constructor
(target as any).__handlerName = name;
(target as any).__needsAutoRegistration = true;
@@ -22,11 +19,7 @@ export function Handler(name: string) {
* @param name Operation name
*/
export function Operation(name: string): any {
return function (
target: any,
methodName: string,
descriptor?: PropertyDescriptor
): any {
return function (target: any, methodName: string, descriptor?: PropertyDescriptor): any {
// Store metadata directly on the class constructor
const constructor = target.constructor;
@@ -55,11 +48,7 @@ export function QueueSchedule(
description?: string;
}
): any {
return function (
target: any,
methodName: string,
descriptor?: PropertyDescriptor
): any {
return function (target: any, methodName: string, descriptor?: PropertyDescriptor): any {
// Store metadata directly on the class constructor
const constructor = target.constructor;
@@ -81,10 +70,7 @@
* Handlers marked with @Disabled() will be skipped during auto-registration
*/
export function Disabled() {
return function <T extends { new (...args: any[]): {} }>(
target: T,
_context?: any
) {
return function <T extends { new (...args: any[]): {} }>(target: T, _context?: any) {
// Store disabled flag on the constructor
(target as any).__disabled = true;
@@ -108,11 +94,7 @@ export function ScheduledOperation(
description?: string;
}
): any {
return function (
target: any,
methodName: string,
descriptor?: PropertyDescriptor
): any {
return function (target: any, methodName: string, descriptor?: PropertyDescriptor): any {
// Apply both decorators
Operation(name)(target, methodName, descriptor);
QueueSchedule(cronPattern, options)(target, methodName, descriptor);


@@ -22,7 +22,13 @@ export type { IServiceContainer } from './types/service-container';
export { createJobHandler } from './types/types';
// Decorators
export { Handler, Operation, QueueSchedule, ScheduledOperation, Disabled } from './decorators/decorators';
export {
Handler,
Operation,
QueueSchedule,
ScheduledOperation,
Disabled,
} from './decorators/decorators';
// Auto-registration utilities
export { autoRegisterHandlers, createAutoHandlerRegistry } from './registry/auto-register';


@@ -1,5 +1,10 @@
import { getLogger } from '@stock-bot/logger';
import type { JobHandler, HandlerConfig, HandlerConfigWithSchedule, ScheduledJob } from '../types/types';
import type {
HandlerConfig,
HandlerConfigWithSchedule,
JobHandler,
ScheduledJob,
} from '../types/types';
const logger = getLogger('handler-registry');
@@ -102,10 +107,7 @@ class HandlerRegistry {
* Get all handlers with their full configurations for queue manager registration
*/
getAllHandlers(): Map<string, { operations: HandlerConfig; scheduledJobs?: ScheduledJob[] }> {
const result = new Map<
string,
{ operations: HandlerConfig; scheduledJobs?: ScheduledJob[] }
>();
const result = new Map<string, { operations: HandlerConfig; scheduledJobs?: ScheduledJob[] }>();
for (const [name, operations] of this.handlers) {
const scheduledJobs = this.handlerSchedules.get(name);
