diff --git a/.env b/.env index e648d2f..a029ae7 100644 --- a/.env +++ b/.env @@ -4,7 +4,7 @@ # Core Application Settings NODE_ENV=development -LOG_LEVEL=debug +LOG_LEVEL=trace LOG_HIDE_OBJECT=true # Data Service Configuration @@ -39,7 +39,7 @@ POSTGRES_SSL=false QUESTDB_HOST=localhost QUESTDB_PORT=9000 QUESTDB_DB=qdb -QUESTDB_USER=admin +QUESTDB_USERNAME=admin QUESTDB_PASSWORD=quest # MongoDB Configuration diff --git a/.serena/cache/typescript/document_symbols_cache_v20-05-25.pkl b/.serena/cache/typescript/document_symbols_cache_v20-05-25.pkl new file mode 100644 index 0000000..8375536 Binary files /dev/null and b/.serena/cache/typescript/document_symbols_cache_v20-05-25.pkl differ diff --git a/.serena/memories/code_style_conventions.md b/.serena/memories/code_style_conventions.md new file mode 100644 index 0000000..932ff2f --- /dev/null +++ b/.serena/memories/code_style_conventions.md @@ -0,0 +1,58 @@ +# Code Style and Conventions + +## TypeScript Configuration +- **Strict mode enabled**: All strict checks are on +- **Target**: ES2022 +- **Module**: ESNext with bundler resolution +- **Path aliases**: `@stock-bot/*` maps to `libs/*/src` +- **Decorators**: Enabled for dependency injection + +## Code Style Rules (ESLint) +- **No unused variables**: Error (except prefixed with `_`) +- **No explicit any**: Warning +- **No non-null assertion**: Warning +- **No console**: Warning (except in tests) +- **Prefer const**: Enforced +- **Strict equality**: Always use `===` +- **Curly braces**: Required for all blocks + +## Formatting (Prettier) +- **Semicolons**: Always +- **Single quotes**: Yes +- **Trailing comma**: ES5 +- **Print width**: 100 characters +- **Tab width**: 2 spaces +- **Arrow parens**: Avoid when possible +- **End of line**: LF + +## Import Order +1. Node built-ins +2. Third-party modules +3. `@stock-bot/*` imports +4. Relative imports (parent directories first) +5. 
Current directory imports + +## Naming Conventions +- **Files**: kebab-case (e.g., `database-setup.ts`) +- **Classes**: PascalCase +- **Functions/Variables**: camelCase +- **Constants**: UPPER_SNAKE_CASE +- **Interfaces/Types**: PascalCase with 'I' or 'T' prefix optional + +## Library Standards +- **Named exports only**: No default exports +- **Factory patterns**: For complex initialization +- **Singleton pattern**: For global services (config, logger) +- **Direct class exports**: For DI-managed services + +## Testing +- **File naming**: `*.test.ts` or `*.spec.ts` +- **Test structure**: Bun's built-in test runner +- **Integration tests**: Use TestContainers for databases +- **Mocking**: Mock external dependencies + +## Documentation +- **JSDoc**: For all public APIs +- **README.md**: Required for each library +- **Usage examples**: Include in documentation +- **Error messages**: Descriptive with context \ No newline at end of file diff --git a/.serena/memories/current_refactoring_context.md b/.serena/memories/current_refactoring_context.md new file mode 100644 index 0000000..39bed86 --- /dev/null +++ b/.serena/memories/current_refactoring_context.md @@ -0,0 +1,41 @@ +# Current Refactoring Context + +## Data Ingestion Service Refactor +The project is currently undergoing a major refactoring to move away from singleton patterns to a dependency injection approach using service containers. + +### What's Been Done +- Created connection pool pattern with `ServiceContainer` +- Refactored data-ingestion service to use DI container +- Updated handlers to accept container parameter +- Added proper resource disposal with `ctx.dispose()` + +### Migration Status +- QM handler: ✅ Fully migrated to container pattern +- IB handler: ⚠️ Partially migrated (using migration helper) +- Proxy handler: ✅ Updated to accept container +- WebShare handler: ✅ Updated to accept container + +### Key Patterns +1. **Service Container**: Central DI container managing all connections +2. 
**Operation Context**: Provides scoped database access within operations +3. **Factory Pattern**: Connection factories for different databases +4. **Resource Disposal**: Always call `ctx.dispose()` after operations + +### Example Pattern +```typescript +const ctx = OperationContext.create('handler', 'operation', { container }); +try { + // Use databases through context + await ctx.mongodb.insertOne(data); + await ctx.postgres.query('...'); + return { success: true }; +} finally { + await ctx.dispose(); // Always cleanup +} +``` + +### Next Steps +- Complete migration of remaining IB operations +- Remove migration helper once complete +- Apply same pattern to other services +- Add monitoring for connection pools \ No newline at end of file diff --git a/.serena/memories/project_overview.md b/.serena/memories/project_overview.md new file mode 100644 index 0000000..3d6df9b --- /dev/null +++ b/.serena/memories/project_overview.md @@ -0,0 +1,55 @@ +# Stock Bot Trading Platform + +## Project Purpose +This is an advanced trading bot platform with a microservice architecture designed for automated stock trading. 
The system includes: +- Market data ingestion from multiple providers (Yahoo Finance, QuoteMedia, Interactive Brokers, WebShare) +- Data processing and technical indicator calculation +- Trading strategy development and backtesting +- Order execution and risk management +- Portfolio tracking and performance analytics +- Web dashboard for monitoring + +## Architecture Overview +The project follows a **microservices architecture** with shared libraries: + +### Core Services (apps/) +- **data-ingestion**: Ingests market data from multiple providers +- **data-pipeline**: Processes and transforms data +- **web-api**: REST API service +- **web-app**: React-based dashboard + +### Shared Libraries (libs/) +**Core Libraries:** +- config: Environment configuration with Zod validation +- logger: Structured logging with Loki integration +- di: Dependency injection container +- types: Shared TypeScript types +- handlers: Common handler patterns + +**Data Libraries:** +- postgres: PostgreSQL client for transactional data +- questdb: Time-series database for market data +- mongodb: Document storage for configurations + +**Service Libraries:** +- queue: BullMQ-based job processing +- event-bus: Dragonfly/Redis event bus +- shutdown: Graceful shutdown management + +**Utils:** +- Financial calculations and technical indicators +- Date utilities +- Position sizing calculations + +## Database Strategy +- **PostgreSQL**: Transactional data (orders, positions, strategies) +- **QuestDB**: Time-series data (OHLCV, indicators, performance metrics) +- **MongoDB**: Document storage (configurations, raw API responses) +- **Dragonfly/Redis**: Event bus and caching layer + +## Current Development Phase +Phase 1: Data Foundation Layer (In Progress) +- Enhancing data provider reliability +- Implementing data validation +- Optimizing time-series storage +- Building robust HTTP client with circuit breakers \ No newline at end of file diff --git a/.serena/memories/project_structure.md 
b/.serena/memories/project_structure.md new file mode 100644 index 0000000..63e2044 --- /dev/null +++ b/.serena/memories/project_structure.md @@ -0,0 +1,63 @@ +# Project Structure + +## Root Directory +``` +stock-bot/ +├── apps/ # Microservice applications +│ ├── data-ingestion/ # Market data ingestion service +│ ├── data-pipeline/ # Data processing pipeline +│ ├── web-api/ # REST API service +│ └── web-app/ # React dashboard +├── libs/ # Shared libraries +│ ├── core/ # Core functionality +│ │ ├── config/ # Configuration management +│ │ ├── logger/ # Logging infrastructure +│ │ ├── di/ # Dependency injection +│ │ ├── types/ # Shared TypeScript types +│ │ └── handlers/ # Common handler patterns +│ ├── data/ # Database clients +│ │ ├── postgres/ # PostgreSQL client +│ │ ├── questdb/ # QuestDB time-series client +│ │ └── mongodb/ # MongoDB document storage +│ ├── services/ # Service utilities +│ │ ├── queue/ # BullMQ job processing +│ │ ├── event-bus/ # Dragonfly event bus +│ │ └── shutdown/ # Graceful shutdown +│ └── utils/ # Utility functions +├── database/ # Database schemas and migrations +├── scripts/ # Build and utility scripts +├── config/ # Configuration files +├── monitoring/ # Monitoring configurations +├── docs/ # Documentation +└── test/ # Global test utilities +``` + +## Key Files +- `package.json` - Root package configuration +- `turbo.json` - Turbo monorepo configuration +- `tsconfig.json` - TypeScript configuration +- `eslint.config.js` - ESLint rules +- `.prettierrc` - Prettier formatting rules +- `docker-compose.yml` - Infrastructure setup +- `.env` - Environment variables + +## Monorepo Structure +- Uses Bun workspaces with Turbo for orchestration +- Each app and library has its own package.json +- Shared dependencies at root level +- Libraries published as `@stock-bot/*` packages + +## Service Architecture Pattern +Each service typically follows: +``` +service/ +├── src/ +│ ├── index.ts # Entry point +│ ├── routes/ # API routes (Hono) +│ ├── handlers/ # 
Business logic +│ ├── services/ # Service layer +│ └── types/ # Service-specific types +├── test/ # Tests +├── package.json +└── tsconfig.json +``` \ No newline at end of file diff --git a/.serena/memories/suggested_commands.md b/.serena/memories/suggested_commands.md new file mode 100644 index 0000000..57d0915 --- /dev/null +++ b/.serena/memories/suggested_commands.md @@ -0,0 +1,73 @@ +# Suggested Commands for Development + +## Package Management (Bun) +- `bun install` - Install all dependencies +- `bun add <package>` - Add a new dependency +- `bun add -D <package>` - Add a dev dependency +- `bun update` - Update dependencies + +## Development +- `bun run dev` - Start all services in development mode (uses Turbo) +- `bun run dev:full` - Start infrastructure + admin tools + dev mode +- `bun run dev:clean` - Reset infrastructure and start fresh + +## Building +- `bun run build` - Build all services and libraries +- `bun run build:libs` - Build only shared libraries +- `bun run build:all:clean` - Clean build with cache removal +- `./scripts/build-all.sh` - Custom build script with options + +## Testing +- `bun test` - Run all tests +- `bun test --watch` - Run tests in watch mode +- `bun run test:coverage` - Run tests with coverage report +- `bun run test:libs` - Test only shared libraries +- `bun run test:apps` - Test only applications +- `bun test <file>` - Run specific test file + +## Code Quality (IMPORTANT - Run before committing!) 
+- `bun run lint` - Check for linting errors +- `bun run lint:fix` - Auto-fix linting issues +- `bun run format` - Format code with Prettier +- `./scripts/format.sh` - Alternative format script + +## Infrastructure Management +- `bun run infra:up` - Start databases (PostgreSQL, QuestDB, MongoDB, Dragonfly) +- `bun run infra:down` - Stop infrastructure +- `bun run infra:reset` - Reset with clean volumes +- `bun run docker:admin` - Start admin GUIs (pgAdmin, Mongo Express, Redis Insight) +- `bun run docker:monitoring` - Start monitoring stack + +## Database Operations +- `bun run db:setup-ib` - Setup Interactive Brokers database schema +- `bun run db:init` - Initialize all database schemas + +## Utility Commands +- `bun run clean` - Clean build artifacts +- `bun run clean:all` - Deep clean including node_modules +- `turbo run <task>` - Run task across monorepo + +## Git Commands (Linux) +- `git status` - Check current status +- `git add .` - Stage all changes +- `git commit -m "message"` - Commit changes +- `git push` - Push to remote +- `git pull` - Pull from remote +- `git checkout -b <branch-name>` - Create new branch + +## System Commands (Linux) +- `ls -la` - List files with details +- `cd <directory>` - Change directory +- `grep -r "pattern" .` - Search for pattern +- `find . -name "*.ts"` - Find files by pattern +- `which <command>` - Find command location + +## MCP Setup (for database access in IDE) +- `./scripts/setup-mcp.sh` - Setup Model Context Protocol servers +- Requires infrastructure to be running first + +## Service URLs +- Dashboard: http://localhost:4200 +- QuestDB Console: http://localhost:9000 +- Grafana: http://localhost:3000 +- pgAdmin: http://localhost:8080 \ No newline at end of file diff --git a/.serena/memories/task_completion_checklist.md b/.serena/memories/task_completion_checklist.md new file mode 100644 index 0000000..fb82656 --- /dev/null +++ b/.serena/memories/task_completion_checklist.md @@ -0,0 +1,55 @@ +# Task Completion Checklist + +When you complete any coding task, ALWAYS run these commands in order: + +## 1. Code Quality Checks (MANDATORY) +```bash +# Run linting to catch code issues +bun run lint + +# If there are errors, fix them automatically +bun run lint:fix + +# Format the code +bun run format +``` + +## 2. Testing (if applicable) +```bash +# Run tests if you modified existing functionality +bun test + +# Run specific test file if you added/modified tests +bun test <file> +``` + +## 3. Build Verification (for significant changes) +```bash +# Build the affected libraries/apps +bun run build:libs # if you changed libraries +bun run build # for full build +``` + +## 4. Final Verification Steps +- Ensure no TypeScript errors in the IDE +- Check that imports are properly ordered (Prettier should handle this) +- Verify no console.log statements in production code +- Confirm all new code follows the established patterns + +## 5. 
Git Commit Guidelines +- Stage changes: `git add .` +- Write descriptive commit messages +- Reference issue numbers if applicable +- Use conventional commit format when possible: + - `feat:` for new features + - `fix:` for bug fixes + - `refactor:` for code refactoring + - `docs:` for documentation + - `test:` for tests + - `chore:` for maintenance + +## Important Notes +- NEVER skip the linting and formatting steps +- The project uses ESLint and Prettier - let them do their job +- If lint errors persist after auto-fix, they need manual attention +- Always test your changes, even if just running the service locally \ No newline at end of file diff --git a/.serena/memories/tech_stack.md b/.serena/memories/tech_stack.md new file mode 100644 index 0000000..9697803 --- /dev/null +++ b/.serena/memories/tech_stack.md @@ -0,0 +1,49 @@ +# Technology Stack + +## Runtime & Package Manager +- **Bun**: v1.1.0+ (primary runtime and package manager) +- **Node.js**: v18.0.0+ (compatibility) +- **TypeScript**: v5.8.3 + +## Core Technologies +- **Turbo**: Monorepo build system +- **ESBuild**: Fast bundling (integrated with Bun) +- **Hono**: Lightweight web framework for services + +## Databases +- **PostgreSQL**: Primary transactional database +- **QuestDB**: Time-series database for market data +- **MongoDB**: Document storage +- **Dragonfly**: Redis-compatible cache and event bus + +## Queue & Messaging +- **BullMQ**: Job queue processing +- **IORedis**: Redis client for Dragonfly + +## Web Technologies +- **React**: Frontend framework (web-app) +- **Angular**: (based on polyfills.ts reference) +- **PrimeNG**: UI component library +- **TailwindCSS**: CSS framework + +## Testing +- **Bun Test**: Built-in test runner +- **TestContainers**: Database integration testing +- **Supertest**: API testing + +## Monitoring & Observability +- **Loki**: Log aggregation +- **Prometheus**: Metrics collection +- **Grafana**: Visualization dashboards + +## Development Tools +- **ESLint**: Code 
linting +- **Prettier**: Code formatting +- **Docker Compose**: Local infrastructure +- **Model Context Protocol (MCP)**: Database access in IDE + +## Key Dependencies +- **Awilix**: Dependency injection container +- **Zod**: Schema validation +- **pg**: PostgreSQL client +- **Playwright**: Browser automation for proxy testing \ No newline at end of file diff --git a/.serena/project.yml b/.serena/project.yml new file mode 100644 index 0000000..68debf8 --- /dev/null +++ b/.serena/project.yml @@ -0,0 +1,66 @@ +# language of the project (csharp, python, rust, java, typescript, javascript, go, cpp, or ruby) +# Special requirements: +# * csharp: Requires the presence of a .sln file in the project folder. +language: typescript + +# whether to use the project's gitignore file to ignore files +# Added on 2025-04-07 +ignore_all_files_in_gitignore: true +# list of additional paths to ignore +# same syntax as gitignore, so you can use * and ** +# Was previously called `ignored_dirs`, please update your config if you are using that. +# Added (renamed) on 2025-04-07 +ignored_paths: [] + +# whether the project is in read-only mode +# If set to true, all editing tools will be disabled and attempts to use them will result in an error +# Added on 2025-04-18 +read_only: false + + +# list of tool names to exclude. We recommend not excluding any tools, see the readme for more details. +# Below is the complete list of tools for convenience. +# To make sure you have the latest list of tools, and to view their descriptions, +# execute `uv run scripts/print_tool_overview.py`. +# +# * `activate_project`: Activates a project by name. +# * `check_onboarding_performed`: Checks whether project onboarding was already performed. +# * `create_text_file`: Creates/overwrites a file in the project directory. +# * `delete_lines`: Deletes a range of lines within a file. +# * `delete_memory`: Deletes a memory from Serena's project-specific memory store. 
+# * `execute_shell_command`: Executes a shell command. +# * `find_referencing_code_snippets`: Finds code snippets in which the symbol at the given location is referenced. +# * `find_referencing_symbols`: Finds symbols that reference the symbol at the given location (optionally filtered by type). +# * `find_symbol`: Performs a global (or local) search for symbols with/containing a given name/substring (optionally filtered by type). +# * `get_current_config`: Prints the current configuration of the agent, including the active and available projects, tools, contexts, and modes. +# * `get_symbols_overview`: Gets an overview of the top-level symbols defined in a given file or directory. +# * `initial_instructions`: Gets the initial instructions for the current project. +# Should only be used in settings where the system prompt cannot be set, +# e.g. in clients you have no control over, like Claude Desktop. +# * `insert_after_symbol`: Inserts content after the end of the definition of a given symbol. +# * `insert_at_line`: Inserts content at a given line in a file. +# * `insert_before_symbol`: Inserts content before the beginning of the definition of a given symbol. +# * `list_dir`: Lists files and directories in the given directory (optionally with recursion). +# * `list_memories`: Lists memories in Serena's project-specific memory store. +# * `onboarding`: Performs onboarding (identifying the project structure and essential tasks, e.g. for testing or building). +# * `prepare_for_new_conversation`: Provides instructions for preparing for a new conversation (in order to continue with the necessary context). +# * `read_file`: Reads a file within the project directory. +# * `read_memory`: Reads the memory with the given name from Serena's project-specific memory store. +# * `remove_project`: Removes a project from the Serena configuration. +# * `replace_lines`: Replaces a range of lines within a file with new content. 
+# * `replace_symbol_body`: Replaces the full definition of a symbol. +# * `restart_language_server`: Restarts the language server, may be necessary when edits not through Serena happen. +# * `search_for_pattern`: Performs a search for a pattern in the project. +# * `summarize_changes`: Provides instructions for summarizing the changes made to the codebase. +# * `switch_modes`: Activates modes by providing a list of their names +# * `think_about_collected_information`: Thinking tool for pondering the completeness of collected information. +# * `think_about_task_adherence`: Thinking tool for determining whether the agent is still on track with the current task. +# * `think_about_whether_you_are_done`: Thinking tool for determining whether the task is truly completed. +# * `write_memory`: Writes a named memory (for future reference) to Serena's project-specific memory store. +excluded_tools: [] + +# initial prompt for the project. It will always be given to the LLM upon activating the project +# (contrary to the memories, which are loaded on demand). +initial_prompt: "" + +project_name: "stock-bot" diff --git a/.vscode/mcp.json b/.vscode/mcp.json index c77541f..0e0dcd2 100644 --- a/.vscode/mcp.json +++ b/.vscode/mcp.json @@ -1,21 +1,3 @@ { - "mcpServers": { - "postgres": { - "command": "npx", - "args": [ - "-y", - "@modelcontextprotocol/server-postgres", - "postgresql://trading_user:trading_pass_dev@localhost:5432/trading_bot" - ] - }, - "mongodb": { - "command": "npx", - "args": [ - "-y", - "mongodb-mcp-server", - "--connectionString", - "mongodb://trading_admin:trading_mongo_dev@localhost:27017/stock?authSource=admin" - ] - } - } + } \ No newline at end of file diff --git a/CLAUDE.md b/CLAUDE.md index 9269ac8..b9f2760 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -1,171 +1,7 @@ -# CLAUDE.md - -This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. 
- -## Development Commands - -**Package Manager**: Bun (v1.1.0+) - -**Build & Development**: -- `bun install` - Install dependencies -- `bun run dev` - Start all services in development mode (uses Turbo) -- `bun run build` - Build all services and libraries -- `bun run build:libs` - Build only shared libraries -- `./scripts/build-all.sh` - Custom build script with options - -**Testing**: -- `bun test` - Run all tests -- `bun run test:libs` - Test only shared libraries -- `bun run test:apps` - Test only applications -- `bun run test:coverage` - Run tests with coverage - -**Code Quality**: -- `bun run lint` - Lint TypeScript files -- `bun run lint:fix` - Auto-fix linting issues -- `bun run format` - Format code using Prettier -- `./scripts/format.sh` - Format script - -**Infrastructure**: -- `bun run infra:up` - Start database infrastructure (PostgreSQL, QuestDB, MongoDB, Dragonfly) -- `bun run infra:down` - Stop infrastructure -- `bun run infra:reset` - Reset infrastructure with clean volumes -- `bun run docker:admin` - Start admin GUIs (pgAdmin, Mongo Express, Redis Insight) - -**Database Setup**: -- `bun run db:setup-ib` - Setup Interactive Brokers database schema -- `bun run db:init` - Initialize database schemas - -## Architecture Overview - -**Microservices Architecture** with shared libraries and multi-database storage: - -### Core Services (`apps/`) -- **data-service** - Market data ingestion from multiple providers (Yahoo, QuoteMedia, IB) -- **processing-service** - Data cleaning, validation, and technical indicators -- **strategy-service** - Trading strategies and backtesting (multi-mode: live, event-driven, vectorized, hybrid) -- **execution-service** - Order management and risk controls -- **portfolio-service** - Position tracking and performance analytics -- **web-app** - React dashboard with real-time updates - -### Shared Libraries (`libs/`) -- **config** - Environment configuration with Zod validation -- **logger** - Loki-integrated structured logging 
(use `getLogger()` pattern) -- **http** - HTTP client with proxy support and rate limiting -- **cache** - Redis/Dragonfly caching layer -- **queue** - BullMQ-based job processing with batch support -- **postgres-client** - PostgreSQL operations with transactions -- **questdb-client** - Time-series data storage -- **mongodb-client** - Document storage operations -- **utils** - Financial calculations and technical indicators - -### Database Strategy -- **PostgreSQL** - Transactional data (orders, positions, strategies) -- **QuestDB** - Time-series data (OHLCV, indicators, performance metrics) -- **MongoDB** - Document storage (configurations, raw responses) -- **Dragonfly** - Event bus and caching (Redis-compatible) - -## Key Patterns & Conventions - -**Library Usage**: -- Import from shared libraries: `import { getLogger } from '@stock-bot/logger'` -- Use configuration: `import { databaseConfig } from '@stock-bot/config'` -- Logger pattern: `const logger = getLogger('service-name')` - -**Service Structure**: -- Each service has `src/index.ts` as entry point -- Routes in `src/routes/` using Hono framework -- Handlers/services in `src/handlers/` or `src/services/` -- Use dependency injection pattern - -**Data Processing**: -- Raw data → QuestDB via handlers -- Processed data → PostgreSQL via processing service -- Event-driven communication via Dragonfly -- Queue-based batch processing for large datasets - -**Multi-Mode Backtesting**: -- **Live Mode** - Real-time trading with brokers -- **Event-Driven** - Realistic simulation with market conditions -- **Vectorized** - Fast mathematical backtesting for optimization -- **Hybrid** - Validation by comparing vectorized vs event-driven results - -## Development Workflow - -1. **Start Infrastructure**: `bun run infra:up` -2. **Build Libraries**: `bun run build:libs` -3. **Start Development**: `bun run dev` -4. 
**Access UIs**: - - Dashboard: http://localhost:4200 - - QuestDB Console: http://localhost:9000 - - Grafana: http://localhost:3000 - - pgAdmin: http://localhost:8080 - -## Important Files & Locations - -**Configuration**: -- Environment variables in `.env` files -- Service configs in `libs/config/src/` -- Database init scripts in `database/postgres/init/` - -**Key Scripts**: -- `scripts/build-all.sh` - Production build with cleanup -- `scripts/docker.sh` - Docker management -- `scripts/format.sh` - Code formatting -- `scripts/setup-mcp.sh` - Setup Model Context Protocol servers for database access - -**Documentation**: -- `SIMPLIFIED-ARCHITECTURE.md` - Detailed architecture overview -- `DEVELOPMENT-ROADMAP.md` - Development phases and priorities -- Individual library READMEs in `libs/*/README.md` - -## Current Development Phase - -**Phase 1: Data Foundation Layer** (In Progress) -- Enhancing data provider reliability and rate limiting -- Implementing data validation and quality metrics -- Optimizing QuestDB storage for time-series data -- Building robust HTTP client with circuit breakers - -Focus on data quality and provider fault tolerance before advancing to strategy implementation. 
- -## Testing & Quality - -- Use Bun's built-in test runner - Integration tests with TestContainers for databases - ESLint for code quality with TypeScript rules - Prettier for code formatting - All services should have health check endpoints - -## Model Context Protocol (MCP) Setup - -**MCP Database Servers** are configured in `.vscode/mcp.json` for direct database access: - -- **PostgreSQL MCP Server**: Provides read-only access to PostgreSQL database - - Connection: `postgresql://trading_user:trading_pass_dev@localhost:5432/trading_bot` - - Package: `@modelcontextprotocol/server-postgres` - -- **MongoDB MCP Server**: Official MongoDB team server for database and Atlas interaction - - Connection: `mongodb://trading_admin:trading_mongo_dev@localhost:27017/stock?authSource=admin` - - Package: `mongodb-mcp-server` (official MongoDB JavaScript team package) - -**Setup Commands**: - `./scripts/setup-mcp.sh` - Setup and test MCP servers - `bun run infra:up` - Start database infrastructure (required for MCP) - -**Usage**: Once configured, Claude Code can directly query and inspect database schemas and data through natural language commands. - -## Environment Variables - -Key environment variables (see `.env` example): - `NODE_ENV` - Environment (development/production) - `DATA_SERVICE_PORT` - Port for data service - `DRAGONFLY_HOST/PORT` - Cache/event bus connection - Database connection strings for PostgreSQL, QuestDB, MongoDB - -## Monitoring & Observability - -- **Logging**: Structured JSON logs to Loki - **Metrics**: Prometheus metrics collection - **Visualization**: Grafana dashboards - **Queue Monitoring**: Bull Board for job queues - **Health Checks**: All services expose `/health` endpoints \ No newline at end of file +Be brutally honest, don't be a yes man. +If I am wrong, point it out bluntly. +I need honest feedback on my code. + +You're paid by the hour, so there is no point in cutting corners; you get paid more the more work you do.
Always spend the extra time to fully understand a problem, and fully commit to fixing any issue preventing the completion of your primary task without cutting any corners. + +Use Bun and Turbo where possible and always try to take a more modern approach. \ No newline at end of file diff --git a/apps/data-service/config/default.json b/apps/data-service/config/default.json deleted file mode 100644 index 6c46d28..0000000 --- a/apps/data-service/config/default.json +++ /dev/null @@ -1,35 +0,0 @@ -{ - "service": { - "name": "data-service", - "port": 2001, - "host": "0.0.0.0", - "healthCheckPath": "/health", - "metricsPath": "/metrics", - "shutdownTimeout": 30000, - "cors": { - "enabled": true, - "origin": "*", - "credentials": false - } - }, - "queue": { - "redis": { - "host": "localhost", - "port": 6379, - "db": 0 - }, - "defaultJobOptions": { - "attempts": 3, - "backoff": { - "type": "exponential", - "delay": 1000 - }, - "removeOnComplete": true, - "removeOnFail": false - } - }, - "webshare": { - "apiKey": "", - "apiUrl": "https://proxy.webshare.io/api/v2/" - } -} \ No newline at end of file diff --git a/apps/data-service/src/handlers/ib/ib.handler.ts b/apps/data-service/src/handlers/ib/ib.handler.ts deleted file mode 100644 index d4ef5d8..0000000 --- a/apps/data-service/src/handlers/ib/ib.handler.ts +++ /dev/null @@ -1,89 +0,0 @@ -/** - * Interactive Brokers Provider for new queue system - */ -import { getLogger } from '@stock-bot/logger'; -import { - createJobHandler, - handlerRegistry, - type HandlerConfigWithSchedule, -} from '@stock-bot/queue'; - -const logger = getLogger('ib-provider'); - -// Initialize and register the IB provider -export function initializeIBProvider() { - logger.debug('Registering IB provider with scheduled jobs...'); - - const ibProviderConfig: HandlerConfigWithSchedule = { - name: 'ib', - operations: { - 'fetch-session': createJobHandler(async () => { - // payload contains session configuration (not used in current implementation) - 
logger.debug('Processing session fetch request'); - const { fetchSession } = await import('./operations/session.operations'); - return fetchSession(); - }), - - 'fetch-exchanges': createJobHandler(async () => { - // payload should contain session headers - logger.debug('Processing exchanges fetch request'); - const { fetchSession } = await import('./operations/session.operations'); - const { fetchExchanges } = await import('./operations/exchanges.operations'); - const sessionHeaders = await fetchSession(); - if (sessionHeaders) { - return fetchExchanges(sessionHeaders); - } - throw new Error('Failed to get session headers'); - }), - - 'fetch-symbols': createJobHandler(async () => { - // payload should contain session headers - logger.debug('Processing symbols fetch request'); - const { fetchSession } = await import('./operations/session.operations'); - const { fetchSymbols } = await import('./operations/symbols.operations'); - const sessionHeaders = await fetchSession(); - if (sessionHeaders) { - return fetchSymbols(sessionHeaders); - } - throw new Error('Failed to get session headers'); - }), - - 'ib-exchanges-and-symbols': createJobHandler(async () => { - // Legacy operation for scheduled jobs - logger.info('Fetching symbol summary from IB'); - const { fetchSession } = await import('./operations/session.operations'); - const { fetchExchanges } = await import('./operations/exchanges.operations'); - const { fetchSymbols } = await import('./operations/symbols.operations'); - - const sessionHeaders = await fetchSession(); - logger.info('Fetched symbol summary from IB'); - - if (sessionHeaders) { - logger.debug('Fetching exchanges from IB'); - const exchanges = await fetchExchanges(sessionHeaders); - logger.info('Fetched exchanges from IB', { count: exchanges?.length }); - - logger.debug('Fetching symbols from IB'); - const symbols = await fetchSymbols(sessionHeaders); - logger.info('Fetched symbols from IB', { symbols }); - - return { exchangesCount: 
exchanges?.length, symbolsCount: symbols?.length }; - } - return null; - }), - }, - scheduledJobs: [ - { - type: 'ib-exchanges-and-symbols', - operation: 'ib-exchanges-and-symbols', - cronPattern: '0 0 * * 0', // Every Sunday at midnight - priority: 5, - description: 'Fetch and update IB exchanges and symbols data', - // immediately: true, // Don't run immediately during startup to avoid conflicts - }, - ], - }; - - handlerRegistry.registerWithSchedule(ibProviderConfig); - logger.debug('IB provider registered successfully with scheduled jobs'); -} diff --git a/apps/data-service/src/handlers/ib/operations/session.operations.ts b/apps/data-service/src/handlers/ib/operations/session.operations.ts deleted file mode 100644 index e67f420..0000000 --- a/apps/data-service/src/handlers/ib/operations/session.operations.ts +++ /dev/null @@ -1,88 +0,0 @@ -/** - * IB Session Operations - Browser automation for session headers - */ -import { Browser } from '@stock-bot/browser'; -import { OperationContext } from '@stock-bot/utils'; - -import { IB_CONFIG } from '../shared/config'; - -export async function fetchSession(): Promise<Record<string, string> | undefined> { - const ctx = OperationContext.create('ib', 'session'); - - try { - await Browser.initialize({ - headless: true, - timeout: IB_CONFIG.BROWSER_TIMEOUT, - blockResources: false - }); - ctx.logger.info('✅ Browser initialized'); - - const { page } = await Browser.createPageWithProxy( - IB_CONFIG.BASE_URL + IB_CONFIG.PRODUCTS_PAGE, - IB_CONFIG.DEFAULT_PROXY - ); - ctx.logger.info('✅ Page created with proxy'); - - const headersPromise = new Promise<Record<string, string> | undefined>(resolve => { - let resolved = false; - - page.onNetworkEvent(event => { - if (event.url.includes('/webrest/search/product-types/summary')) { - if (event.type === 'request') { - try { - resolved = true; - resolve(event.headers); - } catch (e) { - resolved = true; - resolve(undefined); - ctx.logger.debug('Raw Summary Response error', { error: (e as Error).message }); - } - } - } - }); - - // Timeout fallback - setTimeout(() => {
- if (!resolved) { - resolved = true; - ctx.logger.warn('Timeout waiting for headers'); - resolve(undefined); - } - }, IB_CONFIG.HEADERS_TIMEOUT); - }); - - ctx.logger.info('⏳ Waiting for page load...'); - await page.waitForLoadState('domcontentloaded', { timeout: IB_CONFIG.PAGE_LOAD_TIMEOUT }); - ctx.logger.info('✅ Page loaded'); - - //Products tabs - ctx.logger.info('🔍 Looking for Products tab...'); - const productsTab = page.locator('#productSearchTab[role=\"tab\"][href=\"#products\"]'); - await productsTab.waitFor({ timeout: IB_CONFIG.ELEMENT_TIMEOUT }); - ctx.logger.info('✅ Found Products tab'); - ctx.logger.info('🖱️ Clicking Products tab...'); - await productsTab.click(); - ctx.logger.info('✅ Products tab clicked'); - - // New Products Checkbox - ctx.logger.info('🔍 Looking for \"New Products Only\" radio button...'); - const radioButton = page.locator('span.checkbox-text:has-text(\"New Products Only\")'); - await radioButton.waitFor({ timeout: IB_CONFIG.ELEMENT_TIMEOUT }); - ctx.logger.info(`🎯 Found \"New Products Only\" radio button`); - await radioButton.first().click(); - ctx.logger.info('✅ \"New Products Only\" radio button clicked'); - - // Wait for and return headers immediately when captured - ctx.logger.info('⏳ Waiting for headers to be captured...'); - const headers = await headersPromise; - page.close(); - if (headers) { - ctx.logger.info('✅ Headers captured successfully'); - } else { - ctx.logger.warn('⚠️ No headers were captured'); - } - - return headers; - } catch (error) { - ctx.logger.error('Failed to fetch IB symbol summary', { error }); - return; - } -} \ No newline at end of file diff --git a/apps/data-service/src/handlers/proxy/proxy.handler.ts b/apps/data-service/src/handlers/proxy/proxy.handler.ts deleted file mode 100644 index 7236f09..0000000 --- a/apps/data-service/src/handlers/proxy/proxy.handler.ts +++ /dev/null @@ -1,86 +0,0 @@ -/** - * Proxy Provider for new queue system - */ -import { ProxyInfo } from '@stock-bot/http'; -import { 
getLogger } from '@stock-bot/logger'; -import { handlerRegistry, createJobHandler, type HandlerConfigWithSchedule } from '@stock-bot/queue'; - -const handlerLogger = getLogger('proxy-handler'); - -// Initialize and register the Proxy provider -export function initializeProxyProvider() { - handlerLogger.debug('Registering proxy provider with scheduled jobs...'); - - const proxyProviderConfig: HandlerConfigWithSchedule = { - name: 'proxy', - - operations: { - 'fetch-from-sources': createJobHandler(async () => { - // Fetch proxies from all configured sources - handlerLogger.info('Processing fetch proxies from sources request'); - const { fetchProxiesFromSources } = await import('./operations/fetch.operations'); - const { processItems } = await import('@stock-bot/queue'); - - // Fetch all proxies from sources - const proxies = await fetchProxiesFromSources(); - handlerLogger.info('Fetched proxies from sources', { count: proxies.length }); - - if (proxies.length === 0) { - handlerLogger.warn('No proxies fetched from sources'); - return { processed: 0, successful: 0 }; - } - - // Batch process the proxies through check-proxy operation - const batchResult = await processItems(proxies, 'proxy', { - handler: 'proxy', - operation: 'check-proxy', - totalDelayHours: 0.083, // 5 minutes (5/60 hours) - batchSize: 50, // Process 50 proxies per batch - priority: 3, - useBatching: true, - retries: 1, - ttl: 30000, // 30 second timeout per proxy check - removeOnComplete: 5, - removeOnFail: 3, - }); - - handlerLogger.info('Batch proxy validation completed', { - totalProxies: proxies.length, - jobsCreated: batchResult.jobsCreated, - mode: batchResult.mode, - batchesCreated: batchResult.batchesCreated, - duration: `${batchResult.duration}ms`, - }); - - return { - processed: proxies.length, - jobsCreated: batchResult.jobsCreated, - batchesCreated: batchResult.batchesCreated, - mode: batchResult.mode, - }; - }), - - 'check-proxy': createJobHandler(async (payload: ProxyInfo) => { - // 
payload is now the raw proxy info object - handlerLogger.debug('Processing proxy check request', { - proxy: `${payload.host}:${payload.port}`, - }); - const { checkProxy } = await import('./operations/check.operations'); - return checkProxy(payload); - }), - }, - scheduledJobs: [ - { - type: 'proxy-fetch-and-check', - operation: 'fetch-from-sources', - cronPattern: '0 0 * * 0', // Every week at midnight on Sunday - priority: 0, - description: 'Fetch and validate proxy list from sources', - // immediately: true, // Don't run immediately during startup to avoid conflicts - }, - ], - }; - - handlerRegistry.registerWithSchedule(proxyProviderConfig); - handlerLogger.debug('Proxy provider registered successfully with scheduled jobs'); -} diff --git a/apps/data-service/src/handlers/proxy/shared/proxy-manager.ts b/apps/data-service/src/handlers/proxy/shared/proxy-manager.ts deleted file mode 100644 index 9712d80..0000000 --- a/apps/data-service/src/handlers/proxy/shared/proxy-manager.ts +++ /dev/null @@ -1,56 +0,0 @@ -/** - * Proxy Stats Manager - Singleton for managing proxy statistics - */ -import type { ProxySource } from './types'; -import { PROXY_CONFIG } from './config'; - -export class ProxyStatsManager { - private static instance: ProxyStatsManager | null = null; - private proxyStats: ProxySource[] = []; - - private constructor() { - this.resetStats(); - } - - static getInstance(): ProxyStatsManager { - if (!ProxyStatsManager.instance) { - ProxyStatsManager.instance = new ProxyStatsManager(); - } - return ProxyStatsManager.instance; - } - - resetStats(): void { - this.proxyStats = PROXY_CONFIG.PROXY_SOURCES.map(source => ({ - id: source.id, - total: 0, - working: 0, - lastChecked: new Date(), - protocol: source.protocol, - url: source.url, - })); - } - - getStats(): ProxySource[] { - return [...this.proxyStats]; - } - - updateSourceStats(sourceId: string, success: boolean): ProxySource | undefined { - const source = this.proxyStats.find(s => s.id === sourceId); - 
if (source) { - if (typeof source.working !== 'number') { - source.working = 0; - } - if (typeof source.total !== 'number') { - source.total = 0; - } - source.total += 1; - if (success) { - source.working += 1; - } - source.percentWorking = (source.working / source.total) * 100; - source.lastChecked = new Date(); - return source; - } - return undefined; - } -} \ No newline at end of file diff --git a/apps/data-service/src/handlers/qm/operations/exchanges.operations.ts b/apps/data-service/src/handlers/qm/operations/exchanges.operations.ts deleted file mode 100644 index 343c1a8..0000000 --- a/apps/data-service/src/handlers/qm/operations/exchanges.operations.ts +++ /dev/null @@ -1,41 +0,0 @@ -/** - * QM Exchanges Operations - Exchange fetching functionality - */ - -import { OperationContext } from '@stock-bot/utils'; - -import { initializeQMResources } from './session.operations'; - -export async function fetchExchanges(): Promise<null> { - const ctx = OperationContext.create('qm', 'exchanges'); - - try { - // Ensure resources are initialized - const { QMSessionManager } = await import('../shared/session-manager'); - const sessionManager = QMSessionManager.getInstance(); - - if (!sessionManager.getInitialized()) { - await initializeQMResources(); - } - - ctx.logger.info('QM exchanges fetch - not implemented yet'); - - // Cache the "not implemented" status - await ctx.cache.set('fetch-status', { - implemented: false, - message: 'QM exchanges fetching not yet implemented', - timestamp: new Date().toISOString() - }, { ttl: 3600 }); - - // TODO: Implement QM exchanges fetching logic - // This could involve: - // 1. Querying existing exchanges from MongoDB - // 2. Making API calls to discover new exchanges - // 3.
Processing and storing exchange metadata - - return null; - } catch (error) { - ctx.logger.error('Failed to fetch QM exchanges', { error }); - return null; - } -} \ No newline at end of file diff --git a/apps/data-service/src/handlers/qm/operations/session.operations.ts b/apps/data-service/src/handlers/qm/operations/session.operations.ts deleted file mode 100644 index 29ea9f2..0000000 --- a/apps/data-service/src/handlers/qm/operations/session.operations.ts +++ /dev/null @@ -1,184 +0,0 @@ -/** - * QM Session Operations - Session creation and management - */ - -import { OperationContext } from '@stock-bot/utils'; -import { isShutdownSignalReceived } from '@stock-bot/shutdown'; -import { getRandomProxy } from '@stock-bot/utils'; - -import { QMSessionManager } from '../shared/session-manager'; -import { QM_SESSION_IDS, QM_CONFIG, SESSION_CONFIG, getQmHeaders } from '../shared/config'; -import type { QMSession } from '../shared/types'; - -export async function createSessions(): Promise<void> { - const ctx = OperationContext.create('qm', 'session'); - - try { - ctx.logger.info('Creating QM sessions...'); - - // Get session manager instance - const sessionManager = QMSessionManager.getInstance(); - - // Check if already initialized - if (!sessionManager.getInitialized()) { - await initializeQMResources(); - } - - // Clean up failed sessions first - const removedCount = sessionManager.cleanupFailedSessions(); - if (removedCount > 0) { - ctx.logger.info(`Cleaned up ${removedCount} failed sessions`); - } - - // Cache session creation stats - const initialStats = sessionManager.getStats(); - await ctx.cache.set('pre-creation-stats', initialStats, { ttl: 300 }); - - // Create sessions for each session ID that needs them - for (const [sessionKey, sessionId] of Object.entries(QM_SESSION_IDS)) { - if (sessionManager.isAtCapacity(sessionId)) { - ctx.logger.debug(`Session ID ${sessionKey} is at capacity, skipping`); - continue; - } - - while (sessionManager.needsMoreSessions(sessionId))
{ - if (isShutdownSignalReceived()) { - ctx.logger.info('Shutting down, skipping session creation'); - return; - } - - await createSingleSession(sessionId, sessionKey, ctx); - } - } - - // Cache final stats and session count - const finalStats = sessionManager.getStats(); - const totalSessions = sessionManager.getSessionCount(); - - await ctx.cache.set('post-creation-stats', finalStats, { ttl: 3600 }); - await ctx.cache.set('session-count', totalSessions, { ttl: 900 }); - await ctx.cache.set('last-session-creation', new Date().toISOString()); - - ctx.logger.info('QM session creation completed', { - totalSessions, - sessionStats: finalStats - }); - - } catch (error) { - ctx.logger.error('Failed to create QM sessions', { error }); - throw error; - } -} - -async function createSingleSession( - sessionId: string, - sessionKey: string, - ctx: OperationContext -): Promise<void> { - ctx.logger.debug(`Creating new session for ${sessionKey}`, { sessionId }); - - const proxyInfo = await getRandomProxy(); - if (!proxyInfo) { - ctx.logger.error('No proxy available for QM session creation'); - return; - } - - // Convert ProxyInfo to string format - const auth = proxyInfo.username && proxyInfo.password ?
- `${proxyInfo.username}:${proxyInfo.password}@` : ''; - const proxy = `${proxyInfo.protocol}://${auth}${proxyInfo.host}:${proxyInfo.port}`; - - const newSession: QMSession = { - proxy: proxy, - headers: getQmHeaders(), - successfulCalls: 0, - failedCalls: 0, - lastUsed: new Date(), - }; - - try { - const sessionResponse = await fetch( - `${QM_CONFIG.BASE_URL}${QM_CONFIG.AUTH_PATH}/${sessionId}`, - { - method: 'GET', - headers: newSession.headers, - signal: AbortSignal.timeout(SESSION_CONFIG.SESSION_TIMEOUT), - } - ); - - ctx.logger.debug('Session response received', { - status: sessionResponse.status, - sessionKey, - }); - - if (!sessionResponse.ok) { - ctx.logger.error('Failed to create QM session', { - sessionKey, - sessionId, - status: sessionResponse.status, - statusText: sessionResponse.statusText, - }); - return; - } - - const sessionData = await sessionResponse.json(); - - // Add token to headers - newSession.headers['Datatool-Token'] = sessionData.token; - - // Add session to manager - const sessionManager = QMSessionManager.getInstance(); - sessionManager.addSession(sessionId, newSession); - - // Cache successful session creation - await ctx.cache.set( - `successful-session:${sessionKey}:${Date.now()}`, - { sessionId, proxy, tokenExists: !!sessionData.token }, - { ttl: 300 } - ); - - ctx.logger.info('QM session created successfully', { - sessionKey, - sessionId, - proxy: newSession.proxy, - sessionCount: sessionManager.getSessions(sessionId).length, - hasToken: !!sessionData.token - }); - - } catch (error) { - if (error instanceof Error && error.name === 'TimeoutError') { - ctx.logger.warn('QM session creation timed out', { sessionKey, sessionId }); - } else { - ctx.logger.error('Error creating QM session', { sessionKey, sessionId, error }); - } - - // Cache failed session attempt for debugging - await ctx.cache.set( - `failed-session:${sessionKey}:${Date.now()}`, - { sessionId, proxy, error: (error as Error).message }, - { ttl: 300 } - ); - } -} - -export async function
initializeQMResources(): Promise<void> { - const ctx = OperationContext.create('qm', 'init'); - - // Check if already initialized - const alreadyInitialized = await ctx.cache.get('initialized'); - if (alreadyInitialized) { - ctx.logger.debug('QM resources already initialized'); - return; - } - - ctx.logger.debug('Initializing QM resources...'); - - // Mark as initialized in cache and session manager - await ctx.cache.set('initialized', true, { ttl: 3600 }); - await ctx.cache.set('initialization-time', new Date().toISOString()); - - const sessionManager = QMSessionManager.getInstance(); - sessionManager.setInitialized(true); - - ctx.logger.info('QM resources initialized successfully'); -} \ No newline at end of file diff --git a/apps/data-service/src/handlers/qm/operations/spider.operations.ts b/apps/data-service/src/handlers/qm/operations/spider.operations.ts deleted file mode 100644 index 51e0be4..0000000 --- a/apps/data-service/src/handlers/qm/operations/spider.operations.ts +++ /dev/null @@ -1,268 +0,0 @@ -/** - * QM Spider Operations - Symbol spider search functionality - */ - -import { OperationContext } from '@stock-bot/utils'; -import { QueueManager } from '@stock-bot/queue'; - -import { QMSessionManager } from '../shared/session-manager'; -import { QM_SESSION_IDS } from '../shared/config'; -import type { SymbolSpiderJob, SpiderResult } from '../shared/types'; -import { initializeQMResources } from './session.operations'; -import { searchQMSymbolsAPI } from './symbols.operations'; - -export async function spiderSymbolSearch( - payload: SymbolSpiderJob -): Promise<SpiderResult> { - const ctx = OperationContext.create('qm', 'spider'); - - try { - const { prefix, depth, source = 'qm', maxDepth = 4 } = payload; - - ctx.logger.info('Starting spider search', { - prefix: prefix || 'ROOT', - depth, - source, - maxDepth - }); - - // Check cache for recent results - const cacheKey = `search-result:${prefix || 'ROOT'}:${depth}`; - const cachedResult = await ctx.cache.get(cacheKey); - if
(cachedResult) { - ctx.logger.debug('Using cached spider search result', { prefix, depth }); - return cachedResult; - } - - // Ensure resources are initialized - const sessionManager = QMSessionManager.getInstance(); - if (!sessionManager.getInitialized()) { - await initializeQMResources(); - } - - let result: SpiderResult; - - // Root job: Create A-Z jobs - if (prefix === null || prefix === undefined || prefix === '') { - result = await createAlphabetJobs(source, maxDepth, ctx); - } else { - // Leaf job: Search for symbols with this prefix - result = await searchAndSpawnJobs(prefix, depth, source, maxDepth, ctx); - } - - // Cache the result - await ctx.cache.set(cacheKey, result, { ttl: 3600 }); - - // Store spider operation metrics in cache instead of PostgreSQL for now - try { - const statsKey = `spider-stats:${prefix || 'ROOT'}:${depth}:${Date.now()}`; - await ctx.cache.set(statsKey, { - handler: 'qm', - operation: 'spider', - prefix: prefix || 'ROOT', - depth, - symbolsFound: result.symbolsFound, - jobsCreated: result.jobsCreated, - searchTime: new Date().toISOString() - }, { ttl: 86400 }); // Keep for 24 hours - } catch (error) { - ctx.logger.debug('Failed to store spider stats in cache', { error }); - } - - ctx.logger.info('Spider search completed', { - prefix: prefix || 'ROOT', - depth, - success: result.success, - symbolsFound: result.symbolsFound, - jobsCreated: result.jobsCreated - }); - - return result; - - } catch (error) { - ctx.logger.error('Spider symbol search failed', { error, payload }); - const failedResult = { success: false, symbolsFound: 0, jobsCreated: 0 }; - - // Cache failed result for a shorter time - const cacheKey = `search-result:${payload.prefix || 'ROOT'}:${payload.depth}`; - await ctx.cache.set(cacheKey, failedResult, { ttl: 300 }); - - return failedResult; - } -} - -async function createAlphabetJobs( - source: string, - maxDepth: number, - ctx: OperationContext -): Promise<SpiderResult> { - try { - const queueManager =
QueueManager.getInstance(); - const queue = queueManager.getQueue('qm'); - let jobsCreated = 0; - - ctx.logger.info('Creating alphabet jobs (A-Z)'); - - // Create jobs for A-Z - for (let i = 0; i < 26; i++) { - const letter = String.fromCharCode(65 + i); // A=65, B=66, etc. - - const job: SymbolSpiderJob = { - prefix: letter, - depth: 1, - source, - maxDepth, - }; - - await queue.add( - 'spider-symbol-search', - { - handler: 'qm', - operation: 'spider-symbol-search', - payload: job, - }, - { - priority: 5, - delay: i * 100, // Stagger jobs by 100ms - attempts: 3, - backoff: { type: 'exponential', delay: 2000 }, - } - ); - - jobsCreated++; - } - - // Cache alphabet job creation - await ctx.cache.set('alphabet-jobs-created', { - count: jobsCreated, - timestamp: new Date().toISOString(), - source, - maxDepth - }, { ttl: 3600 }); - - ctx.logger.info(`Created ${jobsCreated} alphabet jobs (A-Z)`); - return { success: true, symbolsFound: 0, jobsCreated }; - - } catch (error) { - ctx.logger.error('Failed to create alphabet jobs', { error }); - return { success: false, symbolsFound: 0, jobsCreated: 0 }; - } -} - -async function searchAndSpawnJobs( - prefix: string, - depth: number, - source: string, - maxDepth: number, - ctx: OperationContext -): Promise<SpiderResult> { - try { - // Ensure sessions exist for symbol search - const sessionManager = QMSessionManager.getInstance(); - const lookupSession = sessionManager.getSession(QM_SESSION_IDS.LOOKUP); - - if (!lookupSession) { - ctx.logger.info('No lookup sessions available, creating sessions first...'); - const { createSessions } = await import('./session.operations'); - await createSessions(); - - // Wait a bit for session creation - await new Promise(resolve => setTimeout(resolve, 1000)); - } - - // Search for symbols with this prefix - const symbols = await searchQMSymbolsAPI(prefix); - const symbolCount = symbols.length; - - ctx.logger.info(`Prefix "${prefix}" returned ${symbolCount} symbols`); - - let jobsCreated = 0; - - // Store
symbols in MongoDB - if (ctx.mongodb && symbols.length > 0) { - try { - const updatedSymbols = symbols.map((symbol: Record<string, unknown>) => ({ - ...symbol, - qmSearchCode: symbol.symbol, - symbol: (symbol.symbol as string)?.split(':')[0], - searchPrefix: prefix, - searchDepth: depth, - discoveredAt: new Date() - })); - - await ctx.mongodb.batchUpsert('qmSymbols', updatedSymbols, ['qmSearchCode']); - ctx.logger.debug('Stored symbols in MongoDB', { count: symbols.length }); - } catch (error) { - ctx.logger.warn('Failed to store symbols in MongoDB', { error }); - } - } - - // If we have 50+ symbols and haven't reached max depth, spawn sub-jobs - if (symbolCount >= 50 && depth < maxDepth) { - const queueManager = QueueManager.getInstance(); - const queue = queueManager.getQueue('qm'); - - ctx.logger.info(`Spawning sub-jobs for prefix "${prefix}" (${symbolCount} >= 50 symbols)`); - - // Create jobs for prefixA, prefixB, prefixC... prefixZ - for (let i = 0; i < 26; i++) { - const letter = String.fromCharCode(65 + i); - const newPrefix = prefix + letter; - - const job: SymbolSpiderJob = { - prefix: newPrefix, - depth: depth + 1, - source, - maxDepth, - }; - - await queue.add( - 'spider-symbol-search', - { - handler: 'qm', - operation: 'spider-symbol-search', - payload: job, - }, - { - priority: Math.max(1, 6 - depth), // Higher priority for deeper jobs - delay: i * 50, // Stagger sub-jobs by 50ms - attempts: 3, - backoff: { type: 'exponential', delay: 2000 }, - } - ); - - jobsCreated++; - } - - // Cache sub-job creation info - await ctx.cache.set(`sub-jobs:${prefix}`, { - parentPrefix: prefix, - depth, - symbolCount, - jobsCreated, - timestamp: new Date().toISOString() - }, { ttl: 3600 }); - - ctx.logger.info(`Created ${jobsCreated} sub-jobs for prefix "${prefix}"`); - } else { - // Terminal case: save symbols (already done above) - ctx.logger.info(`Terminal case for prefix "${prefix}": ${symbolCount} symbols saved`); - - // Cache terminal case info - await
ctx.cache.set(`terminal:${prefix}`, { - prefix, - depth, - symbolCount, - isTerminal: true, - reason: symbolCount < 50 ? 'insufficient_symbols' : 'max_depth_reached', - timestamp: new Date().toISOString() - }, { ttl: 3600 }); - } - - return { success: true, symbolsFound: symbolCount, jobsCreated }; - - } catch (error) { - ctx.logger.error(`Failed to search and spawn jobs for prefix "${prefix}"`, { error, depth }); - return { success: false, symbolsFound: 0, jobsCreated: 0 }; - } -} \ No newline at end of file diff --git a/apps/data-service/src/handlers/qm/operations/symbols.operations.ts b/apps/data-service/src/handlers/qm/operations/symbols.operations.ts deleted file mode 100644 index e060194..0000000 --- a/apps/data-service/src/handlers/qm/operations/symbols.operations.ts +++ /dev/null @@ -1,195 +0,0 @@ -/** - * QM Symbols Operations - Symbol fetching and API interactions - */ - -import { OperationContext } from '@stock-bot/utils'; -import { getRandomProxy } from '@stock-bot/utils'; - -import { QMSessionManager } from '../shared/session-manager'; -import { QM_SESSION_IDS, QM_CONFIG, SESSION_CONFIG } from '../shared/config'; -import type { SymbolSpiderJob, Exchange } from '../shared/types'; -import { initializeQMResources } from './session.operations'; -import { spiderSymbolSearch } from './spider.operations'; - -export async function fetchSymbols(): Promise<string[] | null> { - const ctx = OperationContext.create('qm', 'symbols'); - - try { - const sessionManager = QMSessionManager.getInstance(); - if (!sessionManager.getInitialized()) { - await initializeQMResources(); - } - - ctx.logger.info('Starting QM spider-based symbol search...'); - - // Check if we have a recent symbol fetch - const lastFetch = await ctx.cache.get('last-symbol-fetch'); - if (lastFetch) { - ctx.logger.info('Recent symbol fetch found, using spider search'); - } - - // Start the spider process with root job - const rootJob: SymbolSpiderJob = { - prefix: null, // Root job creates A-Z jobs - depth: 0, -
source: 'qm', - maxDepth: 4, - }; - - const result = await spiderSymbolSearch(rootJob); - - if (result.success) { - // Cache successful fetch info - await ctx.cache.set('last-symbol-fetch', { - timestamp: new Date().toISOString(), - jobsCreated: result.jobsCreated, - success: true - }, { ttl: 3600 }); - - ctx.logger.info( - `QM spider search initiated successfully. Created ${result.jobsCreated} initial jobs` - ); - return [`Spider search initiated with ${result.jobsCreated} jobs`]; - } else { - ctx.logger.error('Failed to initiate QM spider search'); - return null; - } - } catch (error) { - ctx.logger.error('Failed to start QM spider symbol search', { error }); - return null; - } -} - -export async function searchQMSymbolsAPI(query: string): Promise<Record<string, unknown>[]> { - const ctx = OperationContext.create('qm', 'api-search'); - - const proxyInfo = await getRandomProxy(); - if (!proxyInfo) { - throw new Error('No proxy available for QM API call'); - } - - const sessionManager = QMSessionManager.getInstance(); - const session = sessionManager.getSession(QM_SESSION_IDS.LOOKUP); - - if (!session) { - throw new Error(`No active session found for QM API with ID: ${QM_SESSION_IDS.LOOKUP}`); - } - - try { - ctx.logger.debug('Searching QM symbols API', { query, proxy: session.proxy }); - - // Check cache for recent API results - const cacheKey = `api-search:${query}`; - const cachedResult = await ctx.cache.get(cacheKey); - if (cachedResult) { - ctx.logger.debug('Using cached API search result', { query }); - return cachedResult; - } - - // QM lookup endpoint for symbol search - const searchParams = new URLSearchParams({ - marketType: 'equity', - pathName: '/demo/portal/company-summary.php', - q: query, - qmodTool: 'SmartSymbolLookup', - searchType: 'symbol', - showFree: 'false', - showHisa: 'false', - webmasterId: '500' - }); - - const apiUrl = `${QM_CONFIG.LOOKUP_URL}?${searchParams.toString()}`; - - const response = await fetch(apiUrl, { - method: 'GET', - headers: session.headers, -
signal: AbortSignal.timeout(SESSION_CONFIG.API_TIMEOUT), - }); - - if (!response.ok) { - throw new Error(`QM API request failed: ${response.status} ${response.statusText}`); - } - - const symbols = await response.json(); - - // Update session stats - session.successfulCalls++; - session.lastUsed = new Date(); - - // Process symbols and extract exchanges - if (ctx.mongodb && symbols.length > 0) { - try { - const updatedSymbols = symbols.map((symbol: Record<string, unknown>) => ({ - ...symbol, - qmSearchCode: symbol.symbol, - symbol: (symbol.symbol as string)?.split(':')[0], - searchQuery: query, - fetchedAt: new Date() - })); - - await ctx.mongodb.batchUpsert('qmSymbols', updatedSymbols, ['qmSearchCode']); - - // Extract and store unique exchanges - const exchanges: Exchange[] = []; - for (const symbol of symbols) { - if (!exchanges.some(ex => ex.exchange === symbol.exchange)) { - exchanges.push({ - exchange: symbol.exchange, - exchangeCode: symbol.exchangeCode, - exchangeShortName: symbol.exchangeShortName, - countryCode: symbol.countryCode, - source: 'qm', - }); - } - } - - if (exchanges.length > 0) { - await ctx.mongodb.batchUpsert('qmExchanges', exchanges, ['exchange']); - ctx.logger.debug('Stored exchanges in MongoDB', { count: exchanges.length }); - } - - } catch (error) { - ctx.logger.warn('Failed to store symbols/exchanges in MongoDB', { error }); - } - } - - // Cache the result - await ctx.cache.set(cacheKey, symbols, { ttl: 1800 }); // 30 minutes - - // Store API call stats - await ctx.cache.set(`api-stats:${query}:${Date.now()}`, { - query, - symbolCount: symbols.length, - proxy: session.proxy, - success: true, - timestamp: new Date().toISOString() - }, { ttl: 3600 }); - - ctx.logger.info( - `QM API returned ${symbols.length} symbols for query: ${query}`, - { proxy: session.proxy, symbolCount: symbols.length } - ); - - return symbols; - - } catch (error) { - // Update session failure stats - session.failedCalls++; - session.lastUsed = new Date(); - - // Cache failed API
call info - await ctx.cache.set(`api-failure:${query}:${Date.now()}`, { - query, - error: (error as Error).message, - proxy: session.proxy, - timestamp: new Date().toISOString() - }, { ttl: 600 }); - - ctx.logger.error(`Error searching QM symbols for query "${query}"`, { - error: (error as Error).message, - proxy: session.proxy - }); - - throw error; - } -} \ No newline at end of file diff --git a/apps/data-service/src/handlers/qm/qm.handler.ts b/apps/data-service/src/handlers/qm/qm.handler.ts deleted file mode 100644 index 65e08ed..0000000 --- a/apps/data-service/src/handlers/qm/qm.handler.ts +++ /dev/null @@ -1,78 +0,0 @@ -import { getLogger } from '@stock-bot/logger'; -import { - createJobHandler, - handlerRegistry, - type HandlerConfigWithSchedule -} from '@stock-bot/queue'; -import type { SymbolSpiderJob } from './shared/types'; - -const handlerLogger = getLogger('qm-handler'); - -// Initialize and register the QM provider -export function initializeQMProvider() { - handlerLogger.debug('Registering QM provider with scheduled jobs...'); - - const qmProviderConfig: HandlerConfigWithSchedule = { - name: 'qm', - operations: { - 'create-sessions': createJobHandler(async () => { - const { createSessions } = await import('./operations/session.operations'); - await createSessions(); - return { success: true, message: 'QM sessions created successfully' }; - }), - 'search-symbols': createJobHandler(async () => { - const { fetchSymbols } = await import('./operations/symbols.operations'); - const symbols = await fetchSymbols(); - - if (symbols && symbols.length > 0) { - return { - success: true, - message: 'QM symbol search completed successfully', - count: symbols.length, - symbols: symbols.slice(0, 10), // Return first 10 symbols as sample - }; - } else { - return { - success: false, - message: 'No symbols found', - count: 0, - }; - } - }), - 'spider-symbol-search': createJobHandler(async (payload: SymbolSpiderJob) => { - const { spiderSymbolSearch } = await
import('./operations/spider.operations'); - const result = await spiderSymbolSearch(payload); - - return result; - }), - }, - - scheduledJobs: [ - { - type: 'session-management', - operation: 'create-sessions', - cronPattern: '0 */15 * * *', // Every 15 minutes - priority: 7, - // immediately: true, // Don't run on startup to avoid blocking - description: 'Create and maintain QM sessions', - }, - { - type: 'qm-maintenance', - operation: 'spider-symbol-search', - payload: { - prefix: null, - depth: 1, - source: 'qm', - maxDepth: 4 - }, - cronPattern: '0 0 * * 0', // Every Sunday at midnight - priority: 10, - // immediately: true, // Don't run on startup - this is a heavy operation - description: 'Comprehensive symbol search using QM API', - }, - ], - }; - - handlerRegistry.registerWithSchedule(qmProviderConfig); - handlerLogger.debug('QM provider registered successfully with scheduled jobs'); -} diff --git a/apps/data-service/src/handlers/qm/qm.operations.ts.old b/apps/data-service/src/handlers/qm/qm.operations.ts.old deleted file mode 100644 index 0ae5880..0000000 --- a/apps/data-service/src/handlers/qm/qm.operations.ts.old +++ /dev/null @@ -1,420 +0,0 @@ -import { getRandomUserAgent } from '@stock-bot/http'; -import { getLogger } from '@stock-bot/logger'; -import { getMongoDBClient } from '@stock-bot/mongodb-client'; -import { QueueManager } from '@stock-bot/queue'; -import { isShutdownSignalReceived } from '@stock-bot/shutdown'; -import { getRandomProxy } from '@stock-bot/utils'; - -// Shared instances (module-scoped, not global) -let isInitialized = false; // Track if resources are initialized -let logger: ReturnType<typeof getLogger>; -// let cache: CacheProvider; - -export interface QMSession { - proxy: string; - headers: Record<string, string>; - successfulCalls: number; - failedCalls: number; - lastUsed: Date; -} - -export interface SymbolSpiderJob { - prefix: string | null; // null = root job (A-Z) - depth: number; // 1=A, 2=AA, 3=AAA, etc.
- source: string; // 'qm' - maxDepth?: number; // optional max depth limit -} - -interface Exchange { - exchange: string; - exchangeCode: string; - exchangeShortName: string; - countryCode: string; - source: string; -} - -function getQmHeaders(): Record<string, string> { - return { - 'User-Agent': getRandomUserAgent(), - Accept: '*/*', - 'Accept-Language': 'en', - 'Sec-Fetch-Mode': 'cors', - Origin: 'https://www.quotemedia.com', - Referer: 'https://www.quotemedia.com/', - }; -} - -const sessionCache: Record<string, QMSession[]> = { - // '5ad521e05faf5778d567f6d0012ec34d6cdbaeb2462f41568f66558bc7b4ced9': [], //4488d072b - // cc1cbdaf040f76db8f4c94f7d156b9b9b716e1a7509ec9c74a48a47f6b6b9f87: [], //97ff00cf3 // getQuotes - // '74963ff42f1db2320d051762b5d3950ff9eab23f9d5c5b592551b4ca0441d086': [], //32ca24e394b // getSplitsBySymbol getBrokerRatingsBySymbol getDividendsBySymbol getEarningsSurprisesBySymbol getEarningsEventsBySymbol - // '1e1d7cb1de1fd2fe52684abdea41a446919a5fe12776dfab88615ac1ce1ec2f6': [], //fb5721812d2c // getEnhancedQuotes getProfiles - // a900a06cc6b3e8036afb9eeb1bbf9783f0007698ed8f5cb1e373dc790e7be2e5: [], //cc882cd95f9 // getEnhancedQuotes - // a863d519e38f80e45d10e280fb1afc729816e23f0218db2f3e8b23005a9ad8dd: [], //05a09a41225 // getCompanyFilings getEnhancedQuotes - // b3cdb1873f3682c5aeeac097be6181529bfb755945e5a412a24f4b9316291427: [], //6a63f56a6 // getHeadlinesTickerStory - dc8c9930437f65d30f6597768800957017bac203a0a50342932757c8dfa158d6: [], //fceb3c4bdd // lookup - // '97b24911d7b034620aafad9441afdb2bc906ee5c992d86933c5903254ca29709': [], //c56424868d // detailed-quotes - // '8a394f09cb8540c8be8988780660a7ae5b583c331a1f6cb12834f051a0169a8f': [], //2a86d214e50e5 // getGlobalIndustrySectorPeers getKeyRatiosBySymbol getGlobalIndustrySectorCodeList - // '2f059f75e2a839437095c9e7e4991d2365bafa7bbb086672a87ae0cf8d92eb01': [], // 48fa36d // getNethouseBySymbol - // d7ae7e0091dd1d7011948c3dc4af09b5ec552285d92bb188be2618968bc78e3f: [], // 63548ee //getRecentTradesBySymbol getQuotes
getLevel2Quote getRecentTradesBySymbol - d22d1db8f67fe6e420b4028e5129b289ca64862aa6cee8459193747b68c01de3: [], // 84e9e - '6e0b22a7cbc02ac3fa07d45e2880b7696aaebeb29574dce81789e570570c9002': [], // -}; - -export async function initializeQMResources(): Promise<void> { - // Skip if already initialized - if (isInitialized) { - return; - } - logger = getLogger('qm-tasks'); - isInitialized = true; -} - -export async function createSessions(): Promise<void> { - try { - // For each session, check array length; if fewer than 10, create new sessions - if (!isInitialized) { - await initializeQMResources(); - } - logger.info('Creating QM sessions...'); - for (const [sessionId, sessionArray] of Object.entries(sessionCache)) { - const initialCount = sessionArray.length; - const filteredArray = sessionArray.filter(session => session.failedCalls <= 10); - sessionCache[sessionId] = filteredArray; - - const removedCount = initialCount - filteredArray.length; - if (removedCount > 0) { - logger.info( - `Removed ${removedCount} sessions with excessive failures for ${sessionId}. Remaining: ${filteredArray.length}` - ); - } - - while (sessionCache[sessionId].length < 10) { - if (isShutdownSignalReceived()) { - logger.info('Shutting down, skipping session creation'); - break; // Exit if shutting down - } - logger.info(`Creating new session for ${sessionId}`); - const proxyInfo = await getRandomProxy(); - if (!proxyInfo) { - logger.error('No proxy available for QM session creation'); - break; // Skip session creation if no proxy is available - } - - // Convert ProxyInfo to string format - const auth = proxyInfo.username && proxyInfo.password ?
`${proxyInfo.username}:${proxyInfo.password}@` : ''; - const proxy = `${proxyInfo.protocol}://${auth}${proxyInfo.host}:${proxyInfo.port}`; - const newSession: QMSession = { - proxy: proxy, // Proxy URL assembled above - headers: getQmHeaders(), - successfulCalls: 0, - failedCalls: 0, - lastUsed: new Date(), - }; - const sessionResponse = await fetch( - `https://app.quotemedia.com/auth/g/authenticate/dataTool/v0/500/${sessionId}`, - { - method: 'GET', - proxy: newSession.proxy, - headers: newSession.headers, - } - ); - - logger.debug('Session response received', { - status: sessionResponse.status, - sessionId, - }); - if (!sessionResponse.ok) { - logger.error('Failed to create QM session', { - sessionId, - status: sessionResponse.status, - statusText: sessionResponse.statusText, - }); - continue; // Skip this session if creation failed - } - const sessionData = await sessionResponse.json(); - logger.info('QM session created successfully', { - sessionId, - sessionData, - proxy: newSession.proxy, - sessionCount: sessionCache[sessionId].length + 1, - }); - newSession.headers['Datatool-Token'] = sessionData.token; - sessionCache[sessionId].push(newSession); - } - } - return undefined; - } catch (error) { - logger.error('❌ Failed to fetch QM session', { error }); - return undefined; - } -} - -// Spider-based symbol search functions -export async function spiderSymbolSearch( - payload: SymbolSpiderJob -): Promise<{ success: boolean; symbolsFound: number; jobsCreated: number }> { - try { - if (!isInitialized) { - await initializeQMResources(); - } - - const { prefix, depth, source = 'qm', maxDepth = 4 } = payload; - - logger.info(`Starting spider search`, { prefix: prefix || 'ROOT', depth, source }); - - // Root job: Create A-Z jobs - if (prefix === null || prefix === undefined || prefix === '') { - return await createAlphabetJobs(source, maxDepth); - } - - // Leaf job: Search for symbols with this prefix - return await searchAndSpawnJobs(prefix, depth,
source, maxDepth); - } catch (error) { - logger.error('Spider symbol search failed', { error, payload }); - return { success: false, symbolsFound: 0, jobsCreated: 0 }; - } -} - -async function createAlphabetJobs( - source: string, - maxDepth: number -): Promise<{ success: boolean; symbolsFound: number; jobsCreated: number }> { - try { - const queueManager = QueueManager.getInstance(); - const queue = queueManager.getQueue('qm'); - let jobsCreated = 0; - - // Create jobs for A-Z - for (let i = 0; i < 26; i++) { - const letter = String.fromCharCode(65 + i); // A=65, B=66, etc. - - const job: SymbolSpiderJob = { - prefix: letter, - depth: 1, - source, - maxDepth, - }; - - await queue.add( - 'spider-symbol-search', - { - handler: 'qm', - operation: 'spider-symbol-search', - payload: job, - }, - { - priority: 5, - delay: i * 100, // Stagger jobs by 100ms - attempts: 3, - backoff: { type: 'exponential', delay: 2000 }, - } - ); - - jobsCreated++; - } - - logger.info(`Created ${jobsCreated} alphabet jobs (A-Z)`); - return { success: true, symbolsFound: 0, jobsCreated }; - } catch (error) { - logger.error('Failed to create alphabet jobs', { error }); - return { success: false, symbolsFound: 0, jobsCreated: 0 }; - } -} - -async function searchAndSpawnJobs( - prefix: string, - depth: number, - source: string, - maxDepth: number -): Promise<{ success: boolean; symbolsFound: number; jobsCreated: number }> { - try { - // Ensure sessions exist - const sessionId = 'dc8c9930437f65d30f6597768800957017bac203a0a50342932757c8dfa158d6'; - const currentSessions = sessionCache[sessionId] || []; - - if (currentSessions.length === 0) { - logger.info('No sessions found, creating sessions first...'); - await createSessions(); - await new Promise(resolve => setTimeout(resolve, 1000)); - } - - // Search for symbols with this prefix - const symbols = await searchQMSymbolsAPI(prefix); - const symbolCount = symbols.length; - - logger.info(`Prefix "${prefix}" returned ${symbolCount} symbols`); - - 
let jobsCreated = 0; - - // If we have 50+ symbols and haven't reached max depth, spawn sub-jobs - if (symbolCount >= 50 && depth < maxDepth) { - const queueManager = QueueManager.getInstance(); - const queue = queueManager.getQueue('qm'); - - logger.info(`Spawning sub-jobs for prefix "${prefix}" (${symbolCount} >= 50 symbols)`); - - // Create jobs for prefixA, prefixB, prefixC... prefixZ - for (let i = 0; i < 26; i++) { - const letter = String.fromCharCode(65 + i); - const newPrefix = prefix + letter; - - const job: SymbolSpiderJob = { - prefix: newPrefix, - depth: depth + 1, - source, - maxDepth, - }; - - await queue.add( - 'spider-symbol-search', - { - handler: 'qm', - operation: 'spider-symbol-search', - payload: job, - }, - { - priority: Math.max(1, 6 - depth), // Higher priority for deeper jobs - delay: i * 50, // Stagger sub-jobs by 50ms - attempts: 3, - backoff: { type: 'exponential', delay: 2000 }, - } - ); - - jobsCreated++; - } - - logger.info(`Created ${jobsCreated} sub-jobs for prefix "${prefix}"`); - } else { - // Terminal case: save symbols and exchanges (already done in searchQMSymbolsAPI) - logger.info(`Terminal case for prefix "${prefix}": ${symbolCount} symbols saved`); - } - - return { success: true, symbolsFound: symbolCount, jobsCreated }; - } catch (error) { - logger.error(`Failed to search and spawn jobs for prefix "${prefix}"`, { error, depth }); - return { success: false, symbolsFound: 0, jobsCreated: 0 }; - } -} - -// API call function to search symbols via QM -async function searchQMSymbolsAPI(query: string): Promise<any[]> { - const proxyInfo = await getRandomProxy(); - - if (!proxyInfo) { - throw new Error('No proxy available for QM API call'); - } - const sessionId = 'dc8c9930437f65d30f6597768800957017bac203a0a50342932757c8dfa158d6'; // Use the session ID for symbol lookup - const session = - sessionCache[sessionId][Math.floor(Math.random() * sessionCache[sessionId].length)]; // lookup session - if (!session) { - throw new Error(`No active
session found for QM API with ID: ${sessionId}`); - } - try { - // QM lookup endpoint for symbol search - const apiUrl = `https://app.quotemedia.com/datatool/lookup.json?marketType=equity&pathName=%2Fdemo%2Fportal%2Fcompany-summary.php&q=${encodeURIComponent(query)}&qmodTool=SmartSymbolLookup&searchType=symbol&showFree=false&showHisa=false&webmasterId=500`; - - const response = await fetch(apiUrl, { - method: 'GET', - headers: session.headers, - proxy: session.proxy, - }); - - if (!response.ok) { - throw new Error(`QM API request failed: ${response.status} ${response.statusText}`); - } - - const symbols = await response.json(); - const mongoClient = getMongoDBClient(); - const updatedSymbols = symbols.map((symbol: Record<string, any>) => { - return { - ...symbol, - qmSearchCode: symbol.symbol, // Store original symbol for reference - symbol: symbol.symbol.split(':')[0], // Extract symbol from "symbol:exchange" - }; - }); - await mongoClient.batchUpsert('qmSymbols', updatedSymbols, ['qmSearchCode']); - const exchanges: Exchange[] = []; - for (const symbol of symbols) { - if (!exchanges.some(ex => ex.exchange === symbol.exchange)) { - exchanges.push({ - exchange: symbol.exchange, - exchangeCode: symbol.exchangeCode, - exchangeShortName: symbol.exchangeShortName, - countryCode: symbol.countryCode, - source: 'qm', - }); - } - } - await mongoClient.batchUpsert('qmExchanges', exchanges, ['exchange']); - session.successfulCalls++; - session.lastUsed = new Date(); - - logger.info( - `QM API returned ${symbols.length} symbols for query: ${query} with proxy ${session.proxy}` - ); - return symbols; - } catch (error) { - logger.error(`Error searching QM symbols for query "${query}":`, error); - if (session) { - session.failedCalls++; - session.lastUsed = new Date(); - } - throw error; - } -} - -export async function fetchSymbols(): Promise<string[] | null> { - try { - if (!isInitialized) { - await initializeQMResources(); - } - - logger.info('🔄 Starting QM spider-based symbol search...'); - - // Start the
spider process with root job - const rootJob: SymbolSpiderJob = { - prefix: null, // Root job creates A-Z jobs - depth: 0, - source: 'qm', - maxDepth: 4, - }; - - const result = await spiderSymbolSearch(rootJob); - - if (result.success) { - logger.info( - `QM spider search initiated successfully. Created ${result.jobsCreated} initial jobs` - ); - return [`Spider search initiated with ${result.jobsCreated} jobs`]; - } else { - logger.error('Failed to initiate QM spider search'); - return null; - } - } catch (error) { - logger.error('❌ Failed to start QM spider symbol search', { error }); - return null; - } -} - -export async function fetchExchanges(): Promise<Exchange[] | null> { - try { - if (!isInitialized) { - await initializeQMResources(); - } - - logger.info('🔄 QM exchanges fetch - not implemented yet'); - // TODO: Implement QM exchanges fetching logic - return null; - } catch (error) { - logger.error('❌ Failed to fetch QM exchanges', { error }); - return null; - } -} - -export const qmTasks = { - createSessions, - fetchSymbols, - fetchExchanges, - spiderSymbolSearch, -}; diff --git a/apps/data-service/src/handlers/webshare/operations/fetch.operations.ts b/apps/data-service/src/handlers/webshare/operations/fetch.operations.ts deleted file mode 100644 index bc43682..0000000 --- a/apps/data-service/src/handlers/webshare/operations/fetch.operations.ts +++ /dev/null @@ -1,85 +0,0 @@ -/** - * WebShare Fetch Operations - API integration - */ -import { type ProxyInfo } from '@stock-bot/http'; -import { OperationContext } from '@stock-bot/utils'; - -import { WEBSHARE_CONFIG } from '../shared/config'; - -/** - * Fetch proxies from WebShare API and convert to ProxyInfo format - */ -export async function fetchWebShareProxies(): Promise<ProxyInfo[]> { - const ctx = OperationContext.create('webshare', 'fetch-proxies'); - - try { - // Get configuration from config system - const { getConfig } = await import('@stock-bot/config'); - const config = getConfig(); - - const apiKey = config.webshare?.apiKey; -
const apiUrl = config.webshare?.apiUrl; - - if (!apiKey || !apiUrl) { - ctx.logger.error('Missing WebShare configuration', { - hasApiKey: !!apiKey, - hasApiUrl: !!apiUrl, - }); - return []; - } - - ctx.logger.info('Fetching proxies from WebShare API', { apiUrl }); - - const response = await fetch(`${apiUrl}proxy/list/?mode=${WEBSHARE_CONFIG.DEFAULT_MODE}&page=${WEBSHARE_CONFIG.DEFAULT_PAGE}&page_size=${WEBSHARE_CONFIG.DEFAULT_PAGE_SIZE}`, { - method: 'GET', - headers: { - Authorization: `Token ${apiKey}`, - 'Content-Type': 'application/json', - }, - signal: AbortSignal.timeout(WEBSHARE_CONFIG.TIMEOUT), - }); - - if (!response.ok) { - ctx.logger.error('WebShare API request failed', { - status: response.status, - statusText: response.statusText, - }); - return []; - } - - const data = await response.json(); - - if (!data.results || !Array.isArray(data.results)) { - ctx.logger.error('Invalid response format from WebShare API', { data }); - return []; - } - - // Transform proxy data to ProxyInfo format - const proxies: ProxyInfo[] = data.results.map((proxy: { - username: string; - password: string; - proxy_address: string; - port: number; - }) => ({ - source: 'webshare', - protocol: 'http' as const, - host: proxy.proxy_address, - port: proxy.port, - username: proxy.username, - password: proxy.password, - isWorking: true, // WebShare provides working proxies - firstSeen: new Date(), - lastChecked: new Date(), - })); - - ctx.logger.info('Successfully fetched proxies from WebShare', { - count: proxies.length, - total: data.count || proxies.length, - }); - - return proxies; - } catch (error) { - ctx.logger.error('Failed to fetch proxies from WebShare', { error }); - return []; - } -} \ No newline at end of file diff --git a/apps/data-service/src/handlers/webshare/webshare.handler.ts b/apps/data-service/src/handlers/webshare/webshare.handler.ts deleted file mode 100644 index fc6d650..0000000 --- a/apps/data-service/src/handlers/webshare/webshare.handler.ts +++ /dev/null @@ 
-1,81 +0,0 @@ -/** - * WebShare Provider for proxy management with scheduled updates - */ -import { getLogger } from '@stock-bot/logger'; -import { - createJobHandler, - handlerRegistry, - type HandlerConfigWithSchedule, -} from '@stock-bot/queue'; -import { updateProxies } from '@stock-bot/utils'; - -const logger = getLogger('webshare-provider'); - -// Initialize and register the WebShare provider -export function initializeWebShareProvider() { - logger.debug('Registering WebShare provider with scheduled jobs...'); - - const webShareProviderConfig: HandlerConfigWithSchedule = { - name: 'webshare', - - operations: { - 'fetch-proxies': createJobHandler(async () => { - logger.info('Fetching proxies from WebShare API'); - const { fetchWebShareProxies } = await import('./operations/fetch.operations'); - - try { - const proxies = await fetchWebShareProxies(); - - if (proxies.length > 0) { - // Update the centralized proxy manager - await updateProxies(proxies); - - logger.info('Updated proxy manager with WebShare proxies', { - count: proxies.length, - workingCount: proxies.filter(p => p.isWorking !== false).length, - }); - - return { - success: true, - proxiesUpdated: proxies.length, - workingProxies: proxies.filter(p => p.isWorking !== false).length, - }; - } else { - logger.warn('No proxies fetched from WebShare API'); - return { - success: false, - proxiesUpdated: 0, - error: 'No proxies returned from API', - }; - } - } catch (error) { - logger.error('Failed to fetch and update proxies', { error }); - return { - success: false, - proxiesUpdated: 0, - error: error instanceof Error ? 
error.message : 'Unknown error', - }; - } - }), - }, - - scheduledJobs: [ - { - type: 'webshare-fetch', - operation: 'fetch-proxies', - cronPattern: '0 */6 * * *', // Every 6 hours - priority: 3, - description: 'Fetch fresh proxies from WebShare API', - immediately: true, // Run on startup - }, - ], - }; - - handlerRegistry.registerWithSchedule(webShareProviderConfig); - logger.debug('WebShare provider registered successfully'); -} - -export const webShareProvider = { - initialize: initializeWebShareProvider, -}; - diff --git a/apps/data-service/src/index.ts b/apps/data-service/src/index.ts deleted file mode 100644 index 8062df7..0000000 --- a/apps/data-service/src/index.ts +++ /dev/null @@ -1,278 +0,0 @@ -// Framework imports -import { initializeServiceConfig } from '@stock-bot/config'; -import { Hono } from 'hono'; -import { cors } from 'hono/cors'; -// Library imports -import { getLogger, setLoggerConfig, shutdownLoggers } from '@stock-bot/logger'; -import { connectMongoDB } from '@stock-bot/mongodb-client'; -import { connectPostgreSQL } from '@stock-bot/postgres-client'; -import { QueueManager, type QueueManagerConfig } from '@stock-bot/queue'; -import { Shutdown } from '@stock-bot/shutdown'; -import { ProxyManager } from '@stock-bot/utils'; -// Local imports -import { exchangeRoutes, healthRoutes, queueRoutes } from './routes'; - -const config = initializeServiceConfig(); -console.log('Data Service Configuration:', JSON.stringify(config, null, 2)); -const serviceConfig = config.service; -const databaseConfig = config.database; -const queueConfig = config.queue; - -if (config.log) { - setLoggerConfig({ - logLevel: config.log.level, - logConsole: true, - logFile: false, - environment: config.environment, - hideObject: config.log.hideObject, - }); -} - -// Create logger AFTER config is set -const logger = getLogger('data-service'); - -const app = new Hono(); - -// Add CORS middleware -app.use( - '*', - cors({ - origin: '*', - allowMethods: ['GET', 'POST', 'PUT', 
'DELETE', 'OPTIONS', 'PATCH'], - allowHeaders: ['Content-Type', 'Authorization'], - credentials: false, - }) -); -const PORT = serviceConfig.port; -let server: ReturnType<typeof Bun.serve> | null = null; -// Singleton clients are managed in libraries -let queueManager: QueueManager | null = null; - -// Initialize shutdown manager -const shutdown = Shutdown.getInstance({ timeout: 15000 }); - -// Mount routes -app.route('/health', healthRoutes); -app.route('/api/exchanges', exchangeRoutes); -app.route('/api/queue', queueRoutes); - -// Initialize services -async function initializeServices() { - logger.info('Initializing data service...'); - - try { - // Initialize MongoDB client singleton - logger.debug('Connecting to MongoDB...'); - const mongoConfig = databaseConfig.mongodb; - await connectMongoDB({ - uri: mongoConfig.uri, - database: mongoConfig.database, - host: mongoConfig.host || 'localhost', - port: mongoConfig.port || 27017, - timeouts: { - connectTimeout: 30000, - socketTimeout: 30000, - serverSelectionTimeout: 5000, - }, - }); - logger.info('MongoDB connected'); - - // Initialize PostgreSQL client singleton - logger.debug('Connecting to PostgreSQL...'); - const pgConfig = databaseConfig.postgres; - await connectPostgreSQL({ - host: pgConfig.host, - port: pgConfig.port, - database: pgConfig.database, - username: pgConfig.user, - password: pgConfig.password, - poolSettings: { - min: 2, - max: pgConfig.poolSize || 10, - idleTimeoutMillis: pgConfig.idleTimeout || 30000, - }, - }); - logger.info('PostgreSQL connected'); - - // Initialize queue system (with delayed worker start) - logger.debug('Initializing queue system...'); - const queueManagerConfig: QueueManagerConfig = { - redis: queueConfig?.redis || { - host: 'localhost', - port: 6379, - db: 1, - }, - defaultQueueOptions: { - defaultJobOptions: queueConfig?.defaultJobOptions || { - attempts: 3, - backoff: { - type: 'exponential', - delay: 1000, - }, - removeOnComplete: 10, - removeOnFail: 5, - }, - workers: 2, -
concurrency: 1, - enableMetrics: true, - enableDLQ: true, - }, - enableScheduledJobs: true, - delayWorkerStart: true, // Prevent workers from starting until all singletons are ready - }; - - queueManager = QueueManager.getOrInitialize(queueManagerConfig); - logger.info('Queue system initialized'); - - // Initialize proxy manager - logger.debug('Initializing proxy manager...'); - await ProxyManager.initialize(); - logger.info('Proxy manager initialized'); - - // Initialize handlers (register handlers and scheduled jobs) - logger.debug('Initializing data handlers...'); - const { initializeWebShareProvider } = await import('./handlers/webshare/webshare.handler'); - const { initializeIBProvider } = await import('./handlers/ib/ib.handler'); - const { initializeProxyProvider } = await import('./handlers/proxy/proxy.handler'); - const { initializeQMProvider } = await import('./handlers/qm/qm.handler'); - - initializeWebShareProvider(); - initializeIBProvider(); - initializeProxyProvider(); - initializeQMProvider(); - - logger.info('Data handlers initialized'); - - // Create scheduled jobs from registered handlers - logger.debug('Creating scheduled jobs from registered handlers...'); - const { handlerRegistry } = await import('@stock-bot/queue'); - const allHandlers = handlerRegistry.getAllHandlers(); - - let totalScheduledJobs = 0; - for (const [handlerName, config] of allHandlers) { - if (config.scheduledJobs && config.scheduledJobs.length > 0) { - const queue = queueManager.getQueue(handlerName); - - for (const scheduledJob of config.scheduledJobs) { - // Include handler and operation info in job data - const jobData = { - handler: handlerName, - operation: scheduledJob.operation, - payload: scheduledJob.payload || {}, - }; - - // Build job options from scheduled job config - const jobOptions = { - priority: scheduledJob.priority, - delay: scheduledJob.delay, - repeat: { - immediately: scheduledJob.immediately, - }, - }; - - await queue.addScheduledJob( - 
scheduledJob.operation, - jobData, - scheduledJob.cronPattern, - jobOptions - ); - totalScheduledJobs++; - logger.debug('Scheduled job created', { - handler: handlerName, - operation: scheduledJob.operation, - cronPattern: scheduledJob.cronPattern, - immediately: scheduledJob.immediately, - priority: scheduledJob.priority, - }); - } - } - } - logger.info('Scheduled jobs created', { totalJobs: totalScheduledJobs }); - - // Now that all singletons are initialized and jobs are scheduled, start the workers - logger.debug('Starting queue workers...'); - queueManager.startAllWorkers(); - logger.info('Queue workers started'); - - logger.info('All services initialized successfully'); - } catch (error) { - logger.error('Failed to initialize services', { error }); - throw error; - } -} - -// Start server -async function startServer() { - await initializeServices(); - - server = Bun.serve({ - port: PORT, - fetch: app.fetch, - development: config.environment === 'development', - }); - - logger.info(`Data Service started on port ${PORT}`); -} - -// Register shutdown handlers with priorities -// Priority 1: Queue system (highest priority) -shutdown.onShutdownHigh(async () => { - logger.info('Shutting down queue system...'); - try { - if (queueManager) { - await queueManager.shutdown(); - } - logger.info('Queue system shut down'); - } catch (error) { - logger.error('Error shutting down queue system', { error }); - } -}, 'Queue System'); - -// Priority 1: HTTP Server (high priority) -shutdown.onShutdownHigh(async () => { - if (server) { - logger.info('Stopping HTTP server...'); - try { - server.stop(); - logger.info('HTTP server stopped'); - } catch (error) { - logger.error('Error stopping HTTP server', { error }); - } - } -}, 'HTTP Server'); - -// Priority 2: Database connections (medium priority) -shutdown.onShutdownMedium(async () => { - logger.info('Disconnecting from databases...'); - try { - const { disconnectMongoDB } = await import('@stock-bot/mongodb-client'); - const { 
disconnectPostgreSQL } = await import('@stock-bot/postgres-client'); - - await disconnectMongoDB(); - await disconnectPostgreSQL(); - logger.info('Database connections closed'); - } catch (error) { - logger.error('Error closing database connections', { error }); - } -}, 'Databases'); - -// Priority 3: Logger shutdown (lowest priority - runs last) -shutdown.onShutdownLow(async () => { - try { - logger.info('Shutting down loggers...'); - await shutdownLoggers(); - // Don't log after shutdown - } catch { - // Silently ignore logger shutdown errors - } -}, 'Loggers'); - -// Start the service -startServer().catch(error => { - logger.fatal('Failed to start data service', { error }); - process.exit(1); -}); - -logger.info('Data service startup initiated'); - -// ProxyManager class and singleton instance are available via @stock-bot/utils diff --git a/apps/data-service/src/routes/market-data.routes.ts b/apps/data-service/src/routes/market-data.routes.ts deleted file mode 100644 index 62bd74e..0000000 --- a/apps/data-service/src/routes/market-data.routes.ts +++ /dev/null @@ -1,121 +0,0 @@ -/** - * Market data routes - */ -import { Hono } from 'hono'; -import { getLogger } from '@stock-bot/logger'; -import { processItems, QueueManager } from '@stock-bot/queue'; - -const logger = getLogger('market-data-routes'); - -export const marketDataRoutes = new Hono(); - -// Market data endpoints -marketDataRoutes.get('/api/live/:symbol', async c => { - const symbol = c.req.param('symbol'); - logger.info('Live data request', { symbol }); - - try { - // Queue job for live data using Yahoo provider - const queueManager = QueueManager.getInstance(); - const queue = queueManager.getQueue('yahoo-finance'); - const job = await queue.add('live-data', { - handler: 'yahoo-finance', - operation: 'live-data', - payload: { symbol }, - }); - return c.json({ - status: 'success', - message: 'Live data job queued', - jobId: job.id, - symbol, - }); - } catch (error) { - logger.error('Failed to queue 
live data job', { symbol, error }); - return c.json({ status: 'error', message: 'Failed to queue live data job' }, 500); - } -}); - -marketDataRoutes.get('/api/historical/:symbol', async c => { - const symbol = c.req.param('symbol'); - const from = c.req.query('from'); - const to = c.req.query('to'); - - logger.info('Historical data request', { symbol, from, to }); - - try { - const fromDate = from ? new Date(from) : new Date(Date.now() - 30 * 24 * 60 * 60 * 1000); // 30 days ago - const toDate = to ? new Date(to) : new Date(); // Now - - // Queue job for historical data using Yahoo provider - const queueManager = QueueManager.getInstance(); - const queue = queueManager.getQueue('yahoo-finance'); - const job = await queue.add('historical-data', { - handler: 'yahoo-finance', - operation: 'historical-data', - payload: { - symbol, - from: fromDate.toISOString(), - to: toDate.toISOString(), - }, - }); - - return c.json({ - status: 'success', - message: 'Historical data job queued', - jobId: job.id, - symbol, - from: fromDate, - to: toDate, - }); - } catch (error) { - logger.error('Failed to queue historical data job', { symbol, from, to, error }); - return c.json({ status: 'error', message: 'Failed to queue historical data job' }, 500); - } -}); - -// Batch processing endpoint using new queue system -marketDataRoutes.post('/api/process-symbols', async c => { - try { - const { - symbols, - provider = 'ib', - operation = 'fetch-session', - useBatching = true, - totalDelayHours = 0.0083, // ~30 seconds (30/3600 hours) - batchSize = 10, - } = await c.req.json(); - - if (!symbols || !Array.isArray(symbols) || symbols.length === 0) { - return c.json({ status: 'error', message: 'Invalid symbols array' }, 400); - } - - logger.info('Batch processing symbols', { - count: symbols.length, - provider, - operation, - useBatching, - }); - - const result = await processItems(symbols, provider, { - handler: provider, - operation, - totalDelayHours, - useBatching, - batchSize, - 
priority: 2, - retries: 2, - removeOnComplete: 5, - removeOnFail: 10, - }); - - return c.json({ - status: 'success', - message: 'Batch processing initiated', - result, - symbols: symbols.length, - }); - } catch (error) { - logger.error('Failed to process symbols batch', { error }); - return c.json({ status: 'error', message: 'Failed to process symbols batch' }, 500); - } -}); diff --git a/apps/data-service/src/routes/queue.routes.ts b/apps/data-service/src/routes/queue.routes.ts deleted file mode 100644 index 20a8d4d..0000000 --- a/apps/data-service/src/routes/queue.routes.ts +++ /dev/null @@ -1,25 +0,0 @@ -import { Hono } from 'hono'; -import { getLogger } from '@stock-bot/logger'; -import { QueueManager } from '@stock-bot/queue'; - -const logger = getLogger('queue-routes'); -const queue = new Hono(); - -// Queue status endpoint -queue.get('/status', async c => { - try { - const queueManager = QueueManager.getInstance(); - const globalStats = await queueManager.getGlobalStats(); - - return c.json({ - status: 'success', - data: globalStats, - message: 'Queue status retrieved successfully' - }); - } catch (error) { - logger.error('Failed to get queue status', { error }); - return c.json({ status: 'error', message: 'Failed to get queue status' }, 500); - } -}); - -export { queue as queueRoutes }; \ No newline at end of file diff --git a/apps/data-service/tsconfig.json b/apps/data-service/tsconfig.json deleted file mode 100644 index d9f09df..0000000 --- a/apps/data-service/tsconfig.json +++ /dev/null @@ -1,14 +0,0 @@ -{ - "extends": "../../tsconfig.app.json", - "references": [ - { "path": "../../libs/types" }, - { "path": "../../libs/config" }, - { "path": "../../libs/logger" }, - { "path": "../../libs/cache" }, - { "path": "../../libs/queue" }, - { "path": "../../libs/mongodb-client" }, - { "path": "../../libs/postgres-client" }, - { "path": "../../libs/questdb-client" }, - { "path": "../../libs/shutdown" } - ] -} diff --git 
a/apps/data-sync-service/config/default.json b/apps/data-sync-service/config/default.json deleted file mode 100644 index fa994d5..0000000 --- a/apps/data-sync-service/config/default.json +++ /dev/null @@ -1,15 +0,0 @@ -{ - "service": { - "name": "data-sync-service", - "port": 3005, - "host": "0.0.0.0", - "healthCheckPath": "/health", - "metricsPath": "/metrics", - "shutdownTimeout": 30000, - "cors": { - "enabled": true, - "origin": "*", - "credentials": false - } - } -} \ No newline at end of file diff --git a/apps/data-sync-service/src/handlers/exchanges/exchanges.handler.ts b/apps/data-sync-service/src/handlers/exchanges/exchanges.handler.ts deleted file mode 100644 index 06aa283..0000000 --- a/apps/data-sync-service/src/handlers/exchanges/exchanges.handler.ts +++ /dev/null @@ -1,58 +0,0 @@ -import { getLogger } from '@stock-bot/logger'; -import { handlerRegistry, type HandlerConfig, type ScheduledJobConfig } from '@stock-bot/queue'; -import { exchangeOperations } from './operations'; - -const logger = getLogger('exchanges-handler'); - -const HANDLER_NAME = 'exchanges'; - -const exchangesHandlerConfig: HandlerConfig = { - concurrency: 1, - maxAttempts: 3, - scheduledJobs: [ - { - operation: 'sync-all-exchanges', - cronPattern: '0 0 * * 0', // Weekly on Sunday at midnight - payload: { clearFirst: true }, - priority: 10, - immediately: false, - } as ScheduledJobConfig, - { - operation: 'sync-qm-exchanges', - cronPattern: '0 1 * * *', // Daily at 1 AM - payload: {}, - priority: 5, - immediately: false, - } as ScheduledJobConfig, - { - operation: 'sync-ib-exchanges', - cronPattern: '0 3 * * *', // Daily at 3 AM - payload: {}, - priority: 3, - immediately: false, - } as ScheduledJobConfig, - { - operation: 'sync-qm-provider-mappings', - cronPattern: '0 3 * * *', // Daily at 3 AM - payload: {}, - priority: 7, - immediately: false, - } as ScheduledJobConfig, - ], - operations: { - 'sync-all-exchanges': exchangeOperations.syncAllExchanges, - 'sync-qm-exchanges': 
exchangeOperations.syncQMExchanges, - 'sync-ib-exchanges': exchangeOperations.syncIBExchanges, - 'sync-qm-provider-mappings': exchangeOperations.syncQMProviderMappings, - 'clear-postgresql-data': exchangeOperations.clearPostgreSQLData, - 'get-exchange-stats': exchangeOperations.getExchangeStats, - 'get-provider-mapping-stats': exchangeOperations.getProviderMappingStats, - 'enhanced-sync-status': exchangeOperations['enhanced-sync-status'], - }, -}; - -export function initializeExchangesHandler(): void { - logger.info('Registering exchanges handler...'); - handlerRegistry.registerHandler(HANDLER_NAME, exchangesHandlerConfig); - logger.info('Exchanges handler registered successfully'); -} \ No newline at end of file diff --git a/apps/data-sync-service/src/handlers/symbols/symbols.handler.ts b/apps/data-sync-service/src/handlers/symbols/symbols.handler.ts deleted file mode 100644 index 6fdd17f..0000000 --- a/apps/data-sync-service/src/handlers/symbols/symbols.handler.ts +++ /dev/null @@ -1,41 +0,0 @@ -import { getLogger } from '@stock-bot/logger'; -import { handlerRegistry, type HandlerConfig, type ScheduledJobConfig } from '@stock-bot/queue'; -import { symbolOperations } from './operations'; - -const logger = getLogger('symbols-handler'); - -const HANDLER_NAME = 'symbols'; - -const symbolsHandlerConfig: HandlerConfig = { - concurrency: 1, - maxAttempts: 3, - scheduledJobs: [ - { - operation: 'sync-qm-symbols', - cronPattern: '0 2 * * *', // Daily at 2 AM - payload: {}, - priority: 5, - immediately: false, - } as ScheduledJobConfig, - { - operation: 'sync-symbols-qm', - cronPattern: '0 4 * * *', // Daily at 4 AM - payload: { provider: 'qm', clearFirst: false }, - priority: 5, - immediately: false, - } as ScheduledJobConfig, - ], - operations: { - 'sync-qm-symbols': symbolOperations.syncQMSymbols, - 'sync-symbols-qm': symbolOperations.syncSymbolsFromProvider, - 'sync-symbols-eod': symbolOperations.syncSymbolsFromProvider, - 'sync-symbols-ib': 
symbolOperations.syncSymbolsFromProvider, - 'sync-status': symbolOperations.getSyncStatus, - }, -}; - -export function initializeSymbolsHandler(): void { - logger.info('Registering symbols handler...'); - handlerRegistry.registerHandler(HANDLER_NAME, symbolsHandlerConfig); - logger.info('Symbols handler registered successfully'); -} \ No newline at end of file diff --git a/apps/data-sync-service/src/index.ts b/apps/data-sync-service/src/index.ts deleted file mode 100644 index cf0ebaf..0000000 --- a/apps/data-sync-service/src/index.ts +++ /dev/null @@ -1,267 +0,0 @@ -// Framework imports -import { initializeServiceConfig } from '@stock-bot/config'; -import { Hono } from 'hono'; -import { cors } from 'hono/cors'; -// Library imports -import { getLogger, setLoggerConfig, shutdownLoggers } from '@stock-bot/logger'; -import { connectMongoDB } from '@stock-bot/mongodb-client'; -import { connectPostgreSQL } from '@stock-bot/postgres-client'; -import { QueueManager, type QueueManagerConfig } from '@stock-bot/queue'; -import { Shutdown } from '@stock-bot/shutdown'; -// Local imports -import { healthRoutes, enhancedSyncRoutes, statsRoutes, syncRoutes } from './routes'; - -const config = initializeServiceConfig(); -console.log('Data Sync Service Configuration:', JSON.stringify(config, null, 2)); -const serviceConfig = config.service; -const databaseConfig = config.database; -const queueConfig = config.queue; - -if (config.log) { - setLoggerConfig({ - logLevel: config.log.level, - logConsole: true, - logFile: false, - environment: config.environment, - hideObject: config.log.hideObject, - }); -} - -// Create logger AFTER config is set -const logger = getLogger('data-sync-service'); - -const app = new Hono(); - -// Add CORS middleware -app.use( - '*', - cors({ - origin: '*', - allowMethods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS', 'PATCH'], - allowHeaders: ['Content-Type', 'Authorization'], - credentials: false, - }) -); -const PORT = serviceConfig.port; -let server: 
ReturnType<typeof Bun.serve> | null = null; -// Singleton clients are managed in libraries -let queueManager: QueueManager | null = null; - -// Initialize shutdown manager -const shutdown = Shutdown.getInstance({ timeout: 15000 }); - -// Mount routes -app.route('/health', healthRoutes); -app.route('/sync', syncRoutes); -app.route('/sync', enhancedSyncRoutes); -app.route('/sync/stats', statsRoutes); - -// Initialize services -async function initializeServices() { - logger.info('Initializing data sync service...'); - - try { - // Initialize MongoDB client singleton - logger.debug('Connecting to MongoDB...'); - const mongoConfig = databaseConfig.mongodb; - await connectMongoDB({ - uri: mongoConfig.uri, - database: mongoConfig.database, - host: mongoConfig.host || 'localhost', - port: mongoConfig.port || 27017, - timeouts: { - connectTimeout: 30000, - socketTimeout: 30000, - serverSelectionTimeout: 5000, - }, - }); - logger.info('MongoDB connected'); - - // Initialize PostgreSQL client singleton - logger.debug('Connecting to PostgreSQL...'); - const pgConfig = databaseConfig.postgres; - await connectPostgreSQL({ - host: pgConfig.host, - port: pgConfig.port, - database: pgConfig.database, - username: pgConfig.user, - password: pgConfig.password, - poolSettings: { - min: 2, - max: pgConfig.poolSize || 10, - idleTimeoutMillis: pgConfig.idleTimeout || 30000, - }, - }); - logger.info('PostgreSQL connected'); - - // Initialize queue system (with delayed worker start) - logger.debug('Initializing queue system...'); - const queueManagerConfig: QueueManagerConfig = { - redis: queueConfig?.redis || { - host: 'localhost', - port: 6379, - db: 1, - }, - defaultQueueOptions: { - defaultJobOptions: queueConfig?.defaultJobOptions || { - attempts: 3, - backoff: { - type: 'exponential', - delay: 1000, - }, - removeOnComplete: 10, - removeOnFail: 5, - }, - workers: 2, - concurrency: 1, - enableMetrics: true, - enableDLQ: true, - }, - enableScheduledJobs: true, - delayWorkerStart: true, // Prevent workers
from starting until all singletons are ready - }; - - queueManager = QueueManager.getOrInitialize(queueManagerConfig); - logger.info('Queue system initialized'); - - // Initialize handlers (register handlers and scheduled jobs) - logger.debug('Initializing sync handlers...'); - const { initializeExchangesHandler } = await import('./handlers/exchanges/exchanges.handler'); - const { initializeSymbolsHandler } = await import('./handlers/symbols/symbols.handler'); - - initializeExchangesHandler(); - initializeSymbolsHandler(); - - logger.info('Sync handlers initialized'); - - // Create scheduled jobs from registered handlers - logger.debug('Creating scheduled jobs from registered handlers...'); - const { handlerRegistry } = await import('@stock-bot/queue'); - const allHandlers = handlerRegistry.getAllHandlers(); - - let totalScheduledJobs = 0; - for (const [handlerName, config] of allHandlers) { - if (config.scheduledJobs && config.scheduledJobs.length > 0) { - const queue = queueManager.getQueue(handlerName); - - for (const scheduledJob of config.scheduledJobs) { - // Include handler and operation info in job data - const jobData = { - handler: handlerName, - operation: scheduledJob.operation, - payload: scheduledJob.payload || {}, - }; - - // Build job options from scheduled job config - const jobOptions = { - priority: scheduledJob.priority, - delay: scheduledJob.delay, - repeat: { - immediately: scheduledJob.immediately, - }, - }; - - await queue.addScheduledJob( - scheduledJob.operation, - jobData, - scheduledJob.cronPattern, - jobOptions - ); - totalScheduledJobs++; - logger.debug('Scheduled job created', { - handler: handlerName, - operation: scheduledJob.operation, - cronPattern: scheduledJob.cronPattern, - immediately: scheduledJob.immediately, - priority: scheduledJob.priority, - }); - } - } - } - logger.info('Scheduled jobs created', { totalJobs: totalScheduledJobs }); - - // Now that all singletons are initialized and jobs are scheduled, start the workers - 
logger.debug('Starting queue workers...'); - queueManager.startAllWorkers(); - logger.info('Queue workers started'); - - logger.info('All services initialized successfully'); - } catch (error) { - logger.error('Failed to initialize services', { error }); - throw error; - } -} - -// Start server -async function startServer() { - await initializeServices(); - - server = Bun.serve({ - port: PORT, - fetch: app.fetch, - development: config.environment === 'development', - }); - - logger.info(`Data Sync Service started on port ${PORT}`); -} - -// Register shutdown handlers with priorities -// Priority 1: Queue system (highest priority) -shutdown.onShutdownHigh(async () => { - logger.info('Shutting down queue system...'); - try { - if (queueManager) { - await queueManager.shutdown(); - } - logger.info('Queue system shut down'); - } catch (error) { - logger.error('Error shutting down queue system', { error }); - } -}, 'Queue System'); - -// Priority 1: HTTP Server (high priority) -shutdown.onShutdownHigh(async () => { - if (server) { - logger.info('Stopping HTTP server...'); - try { - server.stop(); - logger.info('HTTP server stopped'); - } catch (error) { - logger.error('Error stopping HTTP server', { error }); - } - } -}, 'HTTP Server'); - -// Priority 2: Database connections (medium priority) -shutdown.onShutdownMedium(async () => { - logger.info('Disconnecting from databases...'); - try { - const { disconnectMongoDB } = await import('@stock-bot/mongodb-client'); - const { disconnectPostgreSQL } = await import('@stock-bot/postgres-client'); - - await disconnectMongoDB(); - await disconnectPostgreSQL(); - logger.info('Database connections closed'); - } catch (error) { - logger.error('Error closing database connections', { error }); - } -}, 'Databases'); - -// Priority 3: Logger shutdown (lowest priority - runs last) -shutdown.onShutdownLow(async () => { - try { - logger.info('Shutting down loggers...'); - await shutdownLoggers(); - // Don't log after shutdown - } catch { 
- // Silently ignore logger shutdown errors - } -}, 'Loggers'); - -// Start the service -startServer().catch(error => { - logger.fatal('Failed to start data sync service', { error }); - process.exit(1); -}); - -logger.info('Data sync service startup initiated'); \ No newline at end of file diff --git a/apps/data-sync-service/src/routes/enhanced-sync.routes.ts b/apps/data-sync-service/src/routes/enhanced-sync.routes.ts deleted file mode 100644 index ba17805..0000000 --- a/apps/data-sync-service/src/routes/enhanced-sync.routes.ts +++ /dev/null @@ -1,96 +0,0 @@ -import { Hono } from 'hono'; -import { getLogger } from '@stock-bot/logger'; -import { QueueManager } from '@stock-bot/queue'; - -const logger = getLogger('enhanced-sync-routes'); -const enhancedSync = new Hono(); - -// Enhanced sync endpoints -enhancedSync.post('/exchanges/all', async c => { - try { - const clearFirst = c.req.query('clear') === 'true'; - const queueManager = QueueManager.getInstance(); - const exchangesQueue = queueManager.getQueue('exchanges'); - - const job = await exchangesQueue.addJob('sync-all-exchanges', { - handler: 'exchanges', - operation: 'sync-all-exchanges', - payload: { clearFirst }, - }); - - return c.json({ success: true, jobId: job.id, message: 'Enhanced exchange sync job queued' }); - } catch (error) { - logger.error('Failed to queue enhanced exchange sync job', { error }); - return c.json( - { success: false, error: error instanceof Error ? 
error.message : 'Unknown error' }, - 500 - ); - } -}); - -enhancedSync.post('/provider-mappings/qm', async c => { - try { - const queueManager = QueueManager.getInstance(); - const exchangesQueue = queueManager.getQueue('exchanges'); - - const job = await exchangesQueue.addJob('sync-qm-provider-mappings', { - handler: 'exchanges', - operation: 'sync-qm-provider-mappings', - payload: {}, - }); - - return c.json({ success: true, jobId: job.id, message: 'QM provider mappings sync job queued' }); - } catch (error) { - logger.error('Failed to queue QM provider mappings sync job', { error }); - return c.json( - { success: false, error: error instanceof Error ? error.message : 'Unknown error' }, - 500 - ); - } -}); - -enhancedSync.post('/symbols/:provider', async c => { - try { - const provider = c.req.param('provider'); - const clearFirst = c.req.query('clear') === 'true'; - const queueManager = QueueManager.getInstance(); - const symbolsQueue = queueManager.getQueue('symbols'); - - const job = await symbolsQueue.addJob(`sync-symbols-${provider}`, { - handler: 'symbols', - operation: `sync-symbols-${provider}`, - payload: { provider, clearFirst }, - }); - - return c.json({ success: true, jobId: job.id, message: `${provider} symbols sync job queued` }); - } catch (error) { - logger.error('Failed to queue enhanced symbol sync job', { error }); - return c.json( - { success: false, error: error instanceof Error ? 
error.message : 'Unknown error' }, - 500 - ); - } -}); - -// Enhanced status endpoints -enhancedSync.get('/status/enhanced', async c => { - try { - const queueManager = QueueManager.getInstance(); - const exchangesQueue = queueManager.getQueue('exchanges'); - - const job = await exchangesQueue.addJob('enhanced-sync-status', { - handler: 'exchanges', - operation: 'enhanced-sync-status', - payload: {}, - }); - - // Wait for job to complete and return result - const result = await job.waitUntilFinished(); - return c.json(result); - } catch (error) { - logger.error('Failed to get enhanced sync status', { error }); - return c.json({ error: error instanceof Error ? error.message : 'Unknown error' }, 500); - } -}); - -export { enhancedSync as enhancedSyncRoutes }; \ No newline at end of file diff --git a/apps/data-sync-service/src/routes/stats.routes.ts b/apps/data-sync-service/src/routes/stats.routes.ts deleted file mode 100644 index 8112c9c..0000000 --- a/apps/data-sync-service/src/routes/stats.routes.ts +++ /dev/null @@ -1,49 +0,0 @@ -import { Hono } from 'hono'; -import { getLogger } from '@stock-bot/logger'; -import { QueueManager } from '@stock-bot/queue'; - -const logger = getLogger('stats-routes'); -const stats = new Hono(); - -// Statistics endpoints -stats.get('/exchanges', async c => { - try { - const queueManager = QueueManager.getInstance(); - const exchangesQueue = queueManager.getQueue('exchanges'); - - const job = await exchangesQueue.addJob('get-exchange-stats', { - handler: 'exchanges', - operation: 'get-exchange-stats', - payload: {}, - }); - - // Wait for job to complete and return result - const result = await job.waitUntilFinished(); - return c.json(result); - } catch (error) { - logger.error('Failed to get exchange stats', { error }); - return c.json({ error: error instanceof Error ? 
error.message : 'Unknown error' }, 500); - } -}); - -stats.get('/provider-mappings', async c => { - try { - const queueManager = QueueManager.getInstance(); - const exchangesQueue = queueManager.getQueue('exchanges'); - - const job = await exchangesQueue.addJob('get-provider-mapping-stats', { - handler: 'exchanges', - operation: 'get-provider-mapping-stats', - payload: {}, - }); - - // Wait for job to complete and return result - const result = await job.waitUntilFinished(); - return c.json(result); - } catch (error) { - logger.error('Failed to get provider mapping stats', { error }); - return c.json({ error: error instanceof Error ? error.message : 'Unknown error' }, 500); - } -}); - -export { stats as statsRoutes }; \ No newline at end of file diff --git a/apps/data-sync-service/src/routes/sync.routes.ts b/apps/data-sync-service/src/routes/sync.routes.ts deleted file mode 100644 index 487e31d..0000000 --- a/apps/data-sync-service/src/routes/sync.routes.ts +++ /dev/null @@ -1,96 +0,0 @@ -import { Hono } from 'hono'; -import { getLogger } from '@stock-bot/logger'; -import { QueueManager } from '@stock-bot/queue'; - -const logger = getLogger('sync-routes'); -const sync = new Hono(); - -// Manual sync trigger endpoints -sync.post('/symbols', async c => { - try { - const queueManager = QueueManager.getInstance(); - const symbolsQueue = queueManager.getQueue('symbols'); - - const job = await symbolsQueue.addJob('sync-qm-symbols', { - handler: 'symbols', - operation: 'sync-qm-symbols', - payload: {}, - }); - - return c.json({ success: true, jobId: job.id, message: 'QM symbols sync job queued' }); - } catch (error) { - logger.error('Failed to queue symbol sync job', { error }); - return c.json( - { success: false, error: error instanceof Error ? 
error.message : 'Unknown error' }, - 500 - ); - } -}); - -sync.post('/exchanges', async c => { - try { - const queueManager = QueueManager.getInstance(); - const exchangesQueue = queueManager.getQueue('exchanges'); - - const job = await exchangesQueue.addJob('sync-qm-exchanges', { - handler: 'exchanges', - operation: 'sync-qm-exchanges', - payload: {}, - }); - - return c.json({ success: true, jobId: job.id, message: 'QM exchanges sync job queued' }); - } catch (error) { - logger.error('Failed to queue exchange sync job', { error }); - return c.json( - { success: false, error: error instanceof Error ? error.message : 'Unknown error' }, - 500 - ); - } -}); - -// Get sync status -sync.get('/status', async c => { - try { - const queueManager = QueueManager.getInstance(); - const symbolsQueue = queueManager.getQueue('symbols'); - - const job = await symbolsQueue.addJob('sync-status', { - handler: 'symbols', - operation: 'sync-status', - payload: {}, - }); - - // Wait for job to complete and return result - const result = await job.waitUntilFinished(); - return c.json(result); - } catch (error) { - logger.error('Failed to get sync status', { error }); - return c.json({ error: error instanceof Error ? error.message : 'Unknown error' }, 500); - } -}); - -// Clear data endpoint -sync.post('/clear', async c => { - try { - const queueManager = QueueManager.getInstance(); - const exchangesQueue = queueManager.getQueue('exchanges'); - - const job = await exchangesQueue.addJob('clear-postgresql-data', { - handler: 'exchanges', - operation: 'clear-postgresql-data', - payload: {}, - }); - - // Wait for job to complete and return result - const result = await job.waitUntilFinished(); - return c.json({ success: true, result }); - } catch (error) { - logger.error('Failed to clear PostgreSQL data', { error }); - return c.json( - { success: false, error: error instanceof Error ? 
error.message : 'Unknown error' }, - 500 - ); - } -}); - -export { sync as syncRoutes }; \ No newline at end of file diff --git a/apps/data-sync-service/tsconfig.json b/apps/data-sync-service/tsconfig.json deleted file mode 100644 index d9f09df..0000000 --- a/apps/data-sync-service/tsconfig.json +++ /dev/null @@ -1,14 +0,0 @@ -{ - "extends": "../../tsconfig.app.json", - "references": [ - { "path": "../../libs/types" }, - { "path": "../../libs/config" }, - { "path": "../../libs/logger" }, - { "path": "../../libs/cache" }, - { "path": "../../libs/queue" }, - { "path": "../../libs/mongodb-client" }, - { "path": "../../libs/postgres-client" }, - { "path": "../../libs/questdb-client" }, - { "path": "../../libs/shutdown" } - ] -} diff --git a/apps/stock/README.md b/apps/stock/README.md new file mode 100644 index 0000000..a82978b --- /dev/null +++ b/apps/stock/README.md @@ -0,0 +1,124 @@ +# Stock Trading Bot Application + +A comprehensive stock trading bot application with multiple microservices for data ingestion, processing, and API access. 
+ +## Architecture + +The stock bot consists of the following services: + +- **Config**: Centralized configuration management +- **Data Ingestion**: Handles real-time and historical data collection +- **Data Pipeline**: Processes and transforms market data +- **Web API**: RESTful API for accessing stock data +- **Web App**: Frontend user interface + +## Quick Start + +### Prerequisites + +- Node.js >= 18.0.0 +- Bun >= 1.1.0 +- Turbo +- PostgreSQL, MongoDB, QuestDB, and Redis/Dragonfly running locally + +### Installation + +```bash +# Install all dependencies +bun install + +# Build the configuration package first +bun run build:config +``` + +### Development + +```bash +# Run all services in development mode (using Turbo) +bun run dev + +# Run only backend services +bun run dev:backend + +# Run only frontend +bun run dev:frontend + +# Run specific service +bun run dev:ingestion +bun run dev:pipeline +bun run dev:api +bun run dev:web +``` + +### Production + +```bash +# Build all services (using Turbo) +bun run build + +# Start with PM2 +bun run pm2:start + +# Check status +bun run pm2:status + +# View logs +bun run pm2:logs +``` + +### Configuration + +Configuration is managed centrally in the `config` package. 
+ +- Default config: `config/config/default.json` +- Environment-specific: `config/config/[environment].json` +- Environment variables: Can override any config value + +### Health Checks + +```bash +# Check all services health +bun run health:check +``` + +### Database Management + +```bash +# Run migrations +bun run db:migrate + +# Seed database +bun run db:seed +``` + +## Available Scripts + +| Script | Description | +|--------|-------------| +| `dev` | Run all services in development mode | +| `build` | Build all services | +| `start` | Start all backend services | +| `test` | Run tests for all services | +| `lint` | Lint all services | +| `clean` | Clean build artifacts and dependencies | +| `docker:build` | Build Docker images | +| `pm2:start` | Start services with PM2 | +| `health:check` | Check health of all services | + +## Service Ports + +- Data Ingestion: 2001 +- Data Pipeline: 2002 +- Web API: 2003 +- Web App: 3000 (or next available) + +## Environment Variables + +Key environment variables: + +- `NODE_ENV`: development, test, or production +- `PORT`: Override default service port +- Database connection strings +- API keys for data providers + +See `config/config/default.json` for full configuration options. 
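The override order described under Configuration (default config → environment-specific file → environment variables) can be sketched as a shallow merge where later sources win. This is an illustrative model only — the real loader lives in `@stock-bot/config`, and `mergeConfig` below is a hypothetical helper, not its actual API:

```typescript
// Illustrative precedence model: later sources override earlier ones.
// The real loader in @stock-bot/config may deep-merge nested sections;
// this sketch performs a shallow merge for clarity.
type ConfigSource = Record<string, unknown>;

function mergeConfig(...sources: ConfigSource[]): ConfigSource {
  // Object spread applies sources left to right, so later keys win.
  return sources.reduce((acc, src) => ({ ...acc, ...src }), {});
}

const merged = mergeConfig(
  { logLevel: 'info', port: 3000 }, // config/config/default.json
  { logLevel: 'debug' },            // config/config/development.json
  { port: 2003 }                    // environment variable override
);
// merged is { logLevel: 'debug', port: 2003 }
```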
\ No newline at end of file diff --git a/apps/stock/config/config/default.json b/apps/stock/config/config/default.json new file mode 100644 index 0000000..902d26b --- /dev/null +++ b/apps/stock/config/config/default.json @@ -0,0 +1,228 @@ +{ + "name": "stock-bot", + "version": "1.0.0", + "environment": "development", + "service": { + "name": "stock-bot", + "port": 3000, + "host": "0.0.0.0", + "healthCheckPath": "/health", + "metricsPath": "/metrics", + "shutdownTimeout": 30000, + "cors": { + "enabled": true, + "origin": "*", + "credentials": true + } + }, + "database": { + "postgres": { + "enabled": true, + "host": "localhost", + "port": 5432, + "database": "trading_bot", + "user": "trading_user", + "password": "trading_pass_dev", + "ssl": false, + "poolSize": 20, + "connectionTimeout": 30000, + "idleTimeout": 10000 + }, + "questdb": { + "host": "localhost", + "ilpPort": 9009, + "httpPort": 9000, + "pgPort": 8812, + "database": "questdb", + "user": "admin", + "password": "quest", + "bufferSize": 65536, + "flushInterval": 1000 + }, + "mongodb": { + "uri": "mongodb://trading_admin:trading_mongo_dev@localhost:27017/stock?authSource=admin", + "database": "stock", + "poolSize": 20 + }, + "dragonfly": { + "host": "localhost", + "port": 6379, + "db": 0, + "keyPrefix": "stock-bot:", + "maxRetries": 3, + "retryDelay": 100 + } + }, + "log": { + "level": "info", + "format": "json", + "hideObject": false, + "loki": { + "enabled": false, + "host": "localhost", + "port": 3100, + "labels": {} + } + }, + "redis": { + "enabled": true, + "host": "localhost", + "port": 6379, + "db": 0 + }, + "queue": { + "enabled": true, + "redis": { + "host": "localhost", + "port": 6379, + "db": 1 + }, + "workers": 1, + "concurrency": 1, + "enableScheduledJobs": true, + "delayWorkerStart": false, + "defaultJobOptions": { + "attempts": 3, + "backoff": { + "type": "exponential", + "delay": 1000 + }, + "removeOnComplete": 100, + "removeOnFail": 50, + "timeout": 300000 + } + }, + "http": { + "timeout": 
30000, + "retries": 3, + "retryDelay": 1000, + "userAgent": "StockBot/1.0", + "proxy": { + "enabled": false + } + }, + "webshare": { + "apiKey": "", + "apiUrl": "https://proxy.webshare.io/api/v2/", + "enabled": true + }, + "browser": { + "headless": true, + "timeout": 30000 + }, + "proxy": { + "enabled": true, + "cachePrefix": "proxy:", + "ttl": 3600, + "webshare": { + "apiKey": "y8ay534rcbybdkk3evnzmt640xxfhy7252ce2t98", + "apiUrl": "https://proxy.webshare.io/api/v2/" + } + }, + "providers": { + "yahoo": { + "name": "yahoo", + "enabled": true, + "priority": 1, + "rateLimit": { + "maxRequests": 5, + "windowMs": 60000 + }, + "timeout": 30000, + "baseUrl": "https://query1.finance.yahoo.com" + }, + "qm": { + "name": "qm", + "enabled": false, + "priority": 2, + "username": "", + "password": "", + "baseUrl": "https://app.quotemedia.com/quotetools", + "webmasterId": "" + }, + "ib": { + "name": "ib", + "enabled": false, + "priority": 3, + "gateway": { + "host": "localhost", + "port": 5000, + "clientId": 1 + }, + "marketDataType": "delayed" + }, + "eod": { + "name": "eod", + "enabled": false, + "priority": 4, + "apiKey": "", + "baseUrl": "https://eodhistoricaldata.com/api", + "tier": "free" + } + }, + "features": { + "realtime": true, + "backtesting": true, + "paperTrading": true, + "autoTrading": false, + "historicalData": true, + "realtimeData": true, + "fundamentalData": true, + "newsAnalysis": false, + "notifications": false, + "emailAlerts": false, + "smsAlerts": false, + "webhookAlerts": false, + "technicalAnalysis": true, + "sentimentAnalysis": false, + "patternRecognition": false, + "riskManagement": true, + "positionSizing": true, + "stopLoss": true, + "takeProfit": true + }, + "services": { + "dataIngestion": { + "port": 2001, + "workers": 4, + "queues": { + "ceo": { "concurrency": 2 }, + "webshare": { "concurrency": 1 }, + "qm": { "concurrency": 2 }, + "ib": { "concurrency": 1 }, + "proxy": { "concurrency": 1 } + }, + "rateLimit": { + "enabled": true, + 
"requestsPerSecond": 10 + } + }, + "dataPipeline": { + "port": 2002, + "workers": 2, + "batchSize": 1000, + "processingInterval": 60000, + "queues": { + "exchanges": { "concurrency": 1 }, + "symbols": { "concurrency": 2 } + }, + "syncOptions": { + "maxRetries": 3, + "retryDelay": 5000, + "timeout": 300000 + } + }, + "webApi": { + "port": 2003, + "rateLimitPerMinute": 60, + "cache": { + "ttl": 300, + "checkPeriod": 60 + }, + "cors": { + "origins": ["http://localhost:3000", "http://localhost:4200"], + "credentials": true + } + } + } +} \ No newline at end of file diff --git a/apps/stock/config/config/development.json b/apps/stock/config/config/development.json new file mode 100644 index 0000000..06bd8e9 --- /dev/null +++ b/apps/stock/config/config/development.json @@ -0,0 +1,11 @@ +{ + "environment": "development", + "log": { + "level": "debug", + "format": "pretty" + }, + "features": { + "autoTrading": false, + "paperTrading": true + } +} \ No newline at end of file diff --git a/apps/stock/config/config/production.json b/apps/stock/config/config/production.json new file mode 100644 index 0000000..dd7806e --- /dev/null +++ b/apps/stock/config/config/production.json @@ -0,0 +1,42 @@ +{ + "environment": "production", + "log": { + "level": "warn", + "format": "json", + "loki": { + "enabled": true, + "host": "loki.production.example.com", + "port": 3100 + } + }, + "database": { + "postgres": { + "host": "postgres.production.example.com", + "ssl": true, + "poolSize": 50 + }, + "questdb": { + "host": "questdb.production.example.com" + }, + "mongodb": { + "uri": "mongodb+srv://prod_user:prod_pass@cluster.mongodb.net/stock?retryWrites=true&w=majority", + "poolSize": 50 + }, + "dragonfly": { + "host": "redis.production.example.com", + "password": "production_redis_password" + } + }, + "queue": { + "redis": { + "host": "redis.production.example.com", + "password": "production_redis_password" + } + }, + "features": { + "autoTrading": true, + "notifications": true, + 
"emailAlerts": true, + "webhookAlerts": true + } +} \ No newline at end of file diff --git a/apps/stock/config/package.json b/apps/stock/config/package.json new file mode 100644 index 0000000..00abb97 --- /dev/null +++ b/apps/stock/config/package.json @@ -0,0 +1,22 @@ +{ + "name": "@stock-bot/stock-config", + "version": "1.0.0", + "description": "Stock trading bot configuration", + "main": "dist/index.js", + "types": "dist/index.d.ts", + "scripts": { + "build": "tsc", + "clean": "rm -rf dist", + "dev": "tsc --watch", + "test": "jest", + "lint": "eslint src --ext .ts" + }, + "dependencies": { + "@stock-bot/config": "*", + "zod": "^3.22.4" + }, + "devDependencies": { + "@types/node": "^20.11.0", + "typescript": "^5.3.3" + } +} \ No newline at end of file diff --git a/apps/stock/config/src/config-instance.ts b/apps/stock/config/src/config-instance.ts new file mode 100644 index 0000000..5c33725 --- /dev/null +++ b/apps/stock/config/src/config-instance.ts @@ -0,0 +1,87 @@ +import { ConfigManager, createAppConfig } from '@stock-bot/config'; +import { stockAppSchema, type StockAppConfig } from './schemas'; +import * as path from 'path'; +import { getLogger } from '@stock-bot/logger'; + +let configInstance: ConfigManager | null = null; + +/** + * Initialize the stock application configuration + * @param serviceName - Optional service name to override port configuration + */ +export function initializeStockConfig(serviceName?: 'dataIngestion' | 'dataPipeline' | 'webApi'): StockAppConfig { + try { + if (!configInstance) { + configInstance = createAppConfig(stockAppSchema, { + configPath: path.join(__dirname, '../config'), + }); + } + + const config = configInstance.initialize(stockAppSchema); + + // If a service name is provided, override the service port + if (serviceName && config.services?.[serviceName]) { + const kebabName = serviceName.replace(/([A-Z])/g, '-$1').toLowerCase().replace(/^-/, ''); + return { + ...config, + service: { + ...config.service, + port: 
config.services[serviceName].port, + name: serviceName, // Keep original for backward compatibility + serviceName: kebabName // Standard kebab-case name + } + }; + } + + return config; + } catch (error: any) { + const logger = getLogger('stock-config'); + logger.error('Failed to initialize stock configuration:', error.message); + if (error.errors) { + logger.error('Validation errors:', error.errors); + } + throw error; + } +} + +/** + * Get the current stock configuration + */ +export function getStockConfig(): StockAppConfig { + if (!configInstance) { + // Auto-initialize if not already done + return initializeStockConfig(); + } + return configInstance.get(); +} + +/** + * Get configuration for a specific service + */ +export function getServiceConfig(service: 'dataIngestion' | 'dataPipeline' | 'webApi') { + const config = getStockConfig(); + return config.services?.[service]; +} + +/** + * Get configuration for a specific provider + */ +export function getProviderConfig(provider: 'eod' | 'ib' | 'qm' | 'yahoo') { + const config = getStockConfig(); + return config.providers[provider]; +} + +/** + * Check if a feature is enabled + */ +export function isFeatureEnabled(feature: keyof StockAppConfig['features']): boolean { + const config = getStockConfig(); + return config.features[feature]; +} + +/** + * Reset configuration (useful for testing) + */ +export function resetStockConfig(): void { + configInstance = null; +} \ No newline at end of file diff --git a/apps/stock/config/src/index.ts b/apps/stock/config/src/index.ts new file mode 100644 index 0000000..2197dde --- /dev/null +++ b/apps/stock/config/src/index.ts @@ -0,0 +1,15 @@ +// Export schemas +export * from './schemas'; + +// Export config instance functions +export { + initializeStockConfig, + getStockConfig, + getServiceConfig, + getProviderConfig, + isFeatureEnabled, + resetStockConfig, +} from './config-instance'; + +// Re-export type for convenience +export type { StockAppConfig } from 
'./schemas/stock-app.schema'; \ No newline at end of file diff --git a/apps/stock/config/src/schemas/features.schema.ts b/apps/stock/config/src/schemas/features.schema.ts new file mode 100644 index 0000000..5946029 --- /dev/null +++ b/apps/stock/config/src/schemas/features.schema.ts @@ -0,0 +1,35 @@ +import { z } from 'zod'; + +/** + * Feature flags for the stock trading application + */ +export const featuresSchema = z.object({ + // Trading features + realtime: z.boolean().default(true), + backtesting: z.boolean().default(true), + paperTrading: z.boolean().default(true), + autoTrading: z.boolean().default(false), + + // Data features + historicalData: z.boolean().default(true), + realtimeData: z.boolean().default(true), + fundamentalData: z.boolean().default(true), + newsAnalysis: z.boolean().default(false), + + // Notification features + notifications: z.boolean().default(false), + emailAlerts: z.boolean().default(false), + smsAlerts: z.boolean().default(false), + webhookAlerts: z.boolean().default(false), + + // Analysis features + technicalAnalysis: z.boolean().default(true), + sentimentAnalysis: z.boolean().default(false), + patternRecognition: z.boolean().default(false), + + // Risk management + riskManagement: z.boolean().default(true), + positionSizing: z.boolean().default(true), + stopLoss: z.boolean().default(true), + takeProfit: z.boolean().default(true), +}); \ No newline at end of file diff --git a/apps/stock/config/src/schemas/index.ts b/apps/stock/config/src/schemas/index.ts new file mode 100644 index 0000000..6ab54d6 --- /dev/null +++ b/apps/stock/config/src/schemas/index.ts @@ -0,0 +1,3 @@ +export * from './stock-app.schema'; +export * from './providers.schema'; +export * from './features.schema'; \ No newline at end of file diff --git a/apps/stock/config/src/schemas/providers.schema.ts b/apps/stock/config/src/schemas/providers.schema.ts new file mode 100644 index 0000000..992da6a --- /dev/null +++ 
b/apps/stock/config/src/schemas/providers.schema.ts @@ -0,0 +1,67 @@ +import { z } from 'zod'; + +// Base provider configuration +export const baseProviderConfigSchema = z.object({ + name: z.string(), + enabled: z.boolean().default(true), + priority: z.number().default(0), + rateLimit: z + .object({ + maxRequests: z.number().default(100), + windowMs: z.number().default(60000), + }) + .optional(), + timeout: z.number().default(30000), + retries: z.number().default(3), +}); + +// EOD Historical Data provider +export const eodProviderConfigSchema = baseProviderConfigSchema.extend({ + apiKey: z.string(), + baseUrl: z.string().default('https://eodhistoricaldata.com/api'), + tier: z.enum(['free', 'fundamentals', 'all-in-one']).default('free'), +}); + +// Interactive Brokers provider +export const ibProviderConfigSchema = baseProviderConfigSchema.extend({ + gateway: z.object({ + host: z.string().default('localhost'), + port: z.number().default(5000), + clientId: z.number().default(1), + }), + account: z.string().optional(), + marketDataType: z.enum(['live', 'delayed', 'frozen']).default('delayed'), +}); + +// QuoteMedia provider +export const qmProviderConfigSchema = baseProviderConfigSchema.extend({ + username: z.string(), + password: z.string(), + baseUrl: z.string().default('https://app.quotemedia.com/quotetools'), + webmasterId: z.string(), +}); + +// Yahoo Finance provider +export const yahooProviderConfigSchema = baseProviderConfigSchema.extend({ + baseUrl: z.string().default('https://query1.finance.yahoo.com'), + cookieJar: z.boolean().default(true), + crumb: z.string().optional(), +}); + +// Combined provider configuration +export const providersSchema = z.object({ + eod: eodProviderConfigSchema.optional(), + ib: ibProviderConfigSchema.optional(), + qm: qmProviderConfigSchema.optional(), + yahoo: yahooProviderConfigSchema.optional(), +}); + +// Dynamic provider configuration type +export type ProviderName = 'eod' | 'ib' | 'qm' | 'yahoo'; + +export const 
providerSchemas = { + eod: eodProviderConfigSchema, + ib: ibProviderConfigSchema, + qm: qmProviderConfigSchema, + yahoo: yahooProviderConfigSchema, +} as const; \ No newline at end of file diff --git a/apps/stock/config/src/schemas/stock-app.schema.ts b/apps/stock/config/src/schemas/stock-app.schema.ts new file mode 100644 index 0000000..570971b --- /dev/null +++ b/apps/stock/config/src/schemas/stock-app.schema.ts @@ -0,0 +1,72 @@ +import { z } from 'zod'; +import { + baseAppSchema, + postgresConfigSchema, + mongodbConfigSchema, + questdbConfigSchema, + dragonflyConfigSchema +} from '@stock-bot/config'; +import { providersSchema } from './providers.schema'; +import { featuresSchema } from './features.schema'; + +/** + * Stock trading application configuration schema + */ +export const stockAppSchema = baseAppSchema.extend({ + // Stock app uses all databases + database: z.object({ + postgres: postgresConfigSchema, + mongodb: mongodbConfigSchema, + questdb: questdbConfigSchema, + dragonfly: dragonflyConfigSchema, + }), + + // Stock-specific providers + providers: providersSchema, + + // Feature flags + features: featuresSchema, + + // Service-specific configurations + services: z.object({ + dataIngestion: z.object({ + port: z.number().default(2001), + workers: z.number().default(4), + queues: z.record(z.object({ + concurrency: z.number().default(1), + })).optional(), + rateLimit: z.object({ + enabled: z.boolean().default(true), + requestsPerSecond: z.number().default(10), + }).optional(), + }).optional(), + dataPipeline: z.object({ + port: z.number().default(2002), + workers: z.number().default(2), + batchSize: z.number().default(1000), + processingInterval: z.number().default(60000), + queues: z.record(z.object({ + concurrency: z.number().default(1), + })).optional(), + syncOptions: z.object({ + maxRetries: z.number().default(3), + retryDelay: z.number().default(5000), + timeout: z.number().default(300000), + }).optional(), + }).optional(), + webApi: z.object({ + 
port: z.number().default(2003), + rateLimitPerMinute: z.number().default(60), + cache: z.object({ + ttl: z.number().default(300), + checkPeriod: z.number().default(60), + }).optional(), + cors: z.object({ + origins: z.array(z.string()).default(['http://localhost:3000']), + credentials: z.boolean().default(true), + }).optional(), + }).optional(), + }).optional(), +}); + +export type StockAppConfig = z.infer<typeof stockAppSchema>; \ No newline at end of file diff --git a/apps/stock/config/tsconfig.json b/apps/stock/config/tsconfig.json new file mode 100644 index 0000000..59ed31f --- /dev/null +++ b/apps/stock/config/tsconfig.json @@ -0,0 +1,15 @@ +{ + "extends": "../../../tsconfig.json", + "compilerOptions": { + "outDir": "./dist", + "rootDir": "./src", + "composite": true, + "declaration": true, + "declarationMap": true + }, + "include": ["src/**/*"], + "exclude": ["node_modules", "dist", "**/*.test.ts"], + "references": [ + { "path": "../../../libs/core/config" } + ] +} \ No newline at end of file diff --git a/apps/stock/data-ingestion/AWILIX-MIGRATION.md b/apps/stock/data-ingestion/AWILIX-MIGRATION.md new file mode 100644 index 0000000..e23d3d1 --- /dev/null +++ b/apps/stock/data-ingestion/AWILIX-MIGRATION.md @@ -0,0 +1,85 @@ +# Awilix DI Container Migration Guide + +This guide explains how to use the new Awilix dependency injection container in the data-ingestion service. + +## Overview + +The Awilix container provides proper dependency injection for decoupled libraries, allowing them to be reused in other projects without stock-bot-specific dependencies. + +## Current Implementation + +The data-ingestion service now uses a hybrid approach: +1. Awilix container for ProxyManager and other decoupled services +2.
Legacy service factory for backward compatibility + +## Usage Example + +```typescript +// Create Awilix container +const awilixConfig = { + redis: { + host: config.database.dragonfly.host, + port: config.database.dragonfly.port, + db: config.database.dragonfly.db, + }, + mongodb: { + uri: config.database.mongodb.uri, + database: config.database.mongodb.database, + }, + postgres: { + host: config.database.postgres.host, + port: config.database.postgres.port, + database: config.database.postgres.database, + user: config.database.postgres.user, + password: config.database.postgres.password, + }, + proxy: { + cachePrefix: 'proxy:', + ttl: 3600, + }, +}; + +const container = createServiceContainer(awilixConfig); +await initializeServices(container); + +// Access services from container +const proxyManager = container.resolve('proxyManager'); +const cache = container.resolve('cache'); +``` + +## Handler Integration + +Handlers receive services through the enhanced service container: + +```typescript +// Create service adapter with proxy from Awilix +const serviceContainerWithProxy = createServiceAdapter(services); +Object.defineProperty(serviceContainerWithProxy, 'proxy', { + get: () => container.resolve('proxyManager'), + enumerable: true, + configurable: true +}); + +// Handlers can now access proxy service +class MyHandler extends BaseHandler { + async myOperation() { + const proxy = this.proxy.getRandomProxy(); + // Use proxy... + } +} +``` + +## Benefits + +1. **Decoupled Libraries**: Libraries no longer depend on @stock-bot/config +2. **Reusability**: Libraries can be used in other projects +3. **Testability**: Easy to mock dependencies for testing +4. **Type Safety**: Full TypeScript support with Awilix + +## Next Steps + +To fully migrate to Awilix: +1. Update HTTP library to accept dependencies via constructor +2. Update Queue library to accept Redis config via constructor +3. Create actual MongoDB, PostgreSQL, and QuestDB clients in the container +4. 
Remove legacy service factory once all services are migrated \ No newline at end of file diff --git a/apps/data-sync-service/package.json b/apps/stock/data-ingestion/package.json similarity index 63% rename from apps/data-sync-service/package.json rename to apps/stock/data-ingestion/package.json index bdf6351..fd5e902 100644 --- a/apps/data-sync-service/package.json +++ b/apps/stock/data-ingestion/package.json @@ -1,7 +1,7 @@ { - "name": "@stock-bot/data-sync-service", + "name": "@stock-bot/data-ingestion", "version": "1.0.0", - "description": "Sync service from MongoDB raw data to PostgreSQL master records", + "description": "Market data ingestion from multiple providers with proxy support and rate limiting", "main": "dist/index.js", "type": "module", "scripts": { @@ -14,12 +14,16 @@ "dependencies": { "@stock-bot/cache": "*", "@stock-bot/config": "*", + "@stock-bot/stock-config": "*", + "@stock-bot/di": "*", + "@stock-bot/handlers": "*", "@stock-bot/logger": "*", - "@stock-bot/mongodb-client": "*", - "@stock-bot/postgres-client": "*", - "@stock-bot/questdb-client": "*", + "@stock-bot/mongodb": "*", + "@stock-bot/postgres": "*", + "@stock-bot/questdb": "*", "@stock-bot/queue": "*", "@stock-bot/shutdown": "*", + "@stock-bot/utils": "*", "hono": "^4.0.0" }, "devDependencies": { diff --git a/apps/stock/data-ingestion/src/handlers/ceo/actions/index.ts b/apps/stock/data-ingestion/src/handlers/ceo/actions/index.ts new file mode 100644 index 0000000..247112b --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/ceo/actions/index.ts @@ -0,0 +1,3 @@ +export { updateCeoChannels } from './update-ceo-channels.action'; +export { updateUniqueSymbols } from './update-unique-symbols.action'; +export { processIndividualSymbol } from './process-individual-symbol.action'; diff --git a/apps/stock/data-ingestion/src/handlers/ceo/actions/process-individual-symbol.action.ts b/apps/stock/data-ingestion/src/handlers/ceo/actions/process-individual-symbol.action.ts new file mode 100644 
index 0000000..58096c6 --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/ceo/actions/process-individual-symbol.action.ts @@ -0,0 +1,117 @@ +import { getRandomUserAgent } from '@stock-bot/utils'; +import type { CeoHandler } from '../ceo.handler'; + +export async function processIndividualSymbol( + this: CeoHandler, + payload: any, + _context: any +): Promise<any> { + const { ceoId, symbol, timestamp } = payload; + const proxy = this.proxy?.getProxy(); + if (!proxy) { + this.logger.warn('No proxy available for processing individual CEO symbol'); + return; + } + + this.logger.debug('Processing individual CEO symbol', { + ceoId, + timestamp, + }); + try { + // Fetch detailed information for the individual symbol + const response = await this.http.get( + `https://api.ceo.ca/api/get_spiels?channel=${ceoId}&load_more=top` + + (timestamp ? `&until=${timestamp}` : ''), + { + proxy: proxy, + headers: { + 'User-Agent': getRandomUserAgent(), + }, + } + ); + + if (!response.ok) { + throw new Error(`Failed to fetch details for ceoId ${ceoId}: ${response.statusText}`); + } + + const data = await response.json(); + + const spielCount = data.spiels.length; + if (spielCount === 0) { + this.logger.warn(`No spiels found for ceoId ${ceoId}`); + return null; // No data to process + } + const latestSpielTime = data.spiels[0]?.timestamp; + const posts = data.spiels.map((spiel: any) => ({ + ceoId, + spiel: spiel.spiel, + spielReplyToId: spiel.spiel_reply_to_id, + spielReplyTo: spiel.spiel_reply_to, + spielReplyToName: spiel.spiel_reply_to_name, + spielReplyToEdited: spiel.spiel_reply_to_edited, + userId: spiel.user_id, + name: spiel.name, + timestamp: spiel.timestamp, + spielId: spiel.spiel_id, + color: spiel.color, + parentId: spiel.parent_id, + publicId: spiel.public_id, + parentChannel: spiel.parent_channel, + parentTimestamp: spiel.parent_timestamp, + votes: spiel.votes, + editable: spiel.editable, + edited: spiel.edited, + featured: spiel.featured, + verified: spiel.verified, +
fake: spiel.fake, + bot: spiel.bot, + voted: spiel.voted, + flagged: spiel.flagged, + ownSpiel: spiel.own_spiel, + score: spiel.score, + savedId: spiel.saved_id, + savedTimestamp: spiel.saved_timestamp, + poll: spiel.poll, + votedInPoll: spiel.voted_in_poll, + })); + + await this.mongodb.batchUpsert('ceoPosts', posts, ['spielId']); + this.logger.info(`Fetched ${spielCount} spiels for ceoId ${ceoId}`); + + // Update Shorts + const shortRes = await this.http.get( + `https://api.ceo.ca/api/short_positions/one?symbol=${symbol}`, + { + proxy: proxy, + headers: { + 'User-Agent': getRandomUserAgent(), + }, + } + ); + + if (shortRes.ok) { + const shortData = await shortRes.json(); + if (shortData && shortData.positions) { + await this.mongodb.batchUpsert('ceoShorts', shortData.positions, ['id']); + } + + await this.scheduleOperation('process-individual-symbol', { + ceoId: ceoId, + timestamp: latestSpielTime, + }, { priority: 0 }); + } + + this.logger.info( + `Successfully processed channel ${ceoId} and added channel ${ceoId} at timestamp ${latestSpielTime}` + ); + + return { ceoId, spielCount, timestamp }; + } catch (error) { + this.logger.error(`Failed to process individual symbol ${symbol}`, { + error, + ceoId, + timestamp, + }); + throw error; + } +} diff --git a/apps/stock/data-ingestion/src/handlers/ceo/actions/update-ceo-channels.action.ts b/apps/stock/data-ingestion/src/handlers/ceo/actions/update-ceo-channels.action.ts new file mode 100644 index 0000000..d0be435 --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/ceo/actions/update-ceo-channels.action.ts @@ -0,0 +1,72 @@ +import { getRandomUserAgent } from '@stock-bot/utils'; +import type { CeoHandler } from '../ceo.handler'; + +export async function updateCeoChannels( + this: CeoHandler, + payload: number | undefined +): Promise<any> { + const proxy = this.proxy?.getProxy(); + if (!proxy) { + this.logger.warn('No proxy available for CEO channels update'); + return; + } + let page; + if (payload === undefined) { +
page = 1; + } else { + page = payload; + } + + this.logger.info(`Fetching CEO channels for page ${page} with proxy ${proxy}`); + const response = await this.http.get( + 'https://api.ceo.ca/api/home?exchange=all&sort_by=symbol&sector=All&tab=companies&page=' + page, + { + proxy: proxy, + headers: { + 'User-Agent': getRandomUserAgent(), + }, + } + ); + const results = await response.json(); + const channels = results.channel_categories[0].channels; + const totalChannels = results.channel_categories[0].total_channels; + const totalPages = Math.ceil(totalChannels / channels.length); + const exchanges: { exchange: string; countryCode: string }[] = []; + const symbols = channels.map((channel: any) => { + // check if exchange is in the exchanges array object + if (!exchanges.find((e: any) => e.exchange === channel.exchange)) { + exchanges.push({ + exchange: channel.exchange, + countryCode: 'CA', + }); + } + const details = channel.company_details || {}; + return { + symbol: channel.symbol, + exchange: channel.exchange, + name: channel.title, + type: channel.type, + ceoId: channel.channel, + marketCap: details.market_cap, + volumeRatio: details.volume_ratio, + avgVolume: details.avg_volume, + stockType: details.stock_type, + issueType: details.issue_type, + sharesOutstanding: details.shares_outstanding, + float: details.float, + }; + }); + + await this.mongodb.batchUpsert('ceoSymbols', symbols, ['symbol', 'exchange']); + await this.mongodb.batchUpsert('ceoExchanges', exchanges, ['exchange']); + + if (page === 1) { + for (let i = 2; i <= totalPages; i++) { + this.logger.info(`Scheduling page ${i} of ${totalPages} for CEO channels`); + await this.scheduleOperation('update-ceo-channels', i); + } + } + + this.logger.info(`Fetched CEO channels for page ${page}/${totalPages}`); + return { page, totalPages }; +} diff --git a/apps/stock/data-ingestion/src/handlers/ceo/actions/update-unique-symbols.action.ts
b/apps/stock/data-ingestion/src/handlers/ceo/actions/update-unique-symbols.action.ts new file mode 100644 index 0000000..024e104 --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/ceo/actions/update-unique-symbols.action.ts @@ -0,0 +1,71 @@ +import type { CeoHandler } from '../ceo.handler'; + +export async function updateUniqueSymbols( + this: CeoHandler, + _payload: unknown, + _context: any +): Promise<any> { + this.logger.info('Starting update to get unique CEO symbols by ceoId'); + + try { + // Get unique ceoId values from the ceoSymbols collection + const uniqueCeoIds = await this.mongodb.collection('ceoSymbols').distinct('ceoId'); + + this.logger.info(`Found ${uniqueCeoIds.length} unique CEO IDs`); + + // Get detailed records for each unique ceoId (latest/first record) + const uniqueSymbols = []; + for (const ceoId of uniqueCeoIds) { + const symbol = await this.mongodb + .collection('ceoSymbols') + .findOne({ ceoId }, { sort: { _id: -1 } }); // Get latest record + + if (symbol) { + uniqueSymbols.push(symbol); + } + } + + this.logger.info(`Retrieved ${uniqueSymbols.length} unique symbol records`); + + // Schedule individual jobs for each unique symbol + let scheduledJobs = 0; + for (const symbol of uniqueSymbols) { + // Schedule a job to process this individual symbol + await this.scheduleOperation('process-individual-symbol', { + ceoId: symbol.ceoId, + symbol: symbol.symbol, + }, { priority: 10 }); + scheduledJobs++; + + // Add small delay to avoid overwhelming the queue + if (scheduledJobs % 10 === 0) { + this.logger.debug(`Scheduled ${scheduledJobs} jobs so far`); + } + } + + this.logger.info(`Successfully scheduled ${scheduledJobs} individual symbol update jobs`); + + // Cache the results for monitoring + await this.cacheSet( + 'unique-symbols-last-run', + { + timestamp: new Date().toISOString(), + totalUniqueIds: uniqueCeoIds.length, + totalRecords: uniqueSymbols.length, + scheduledJobs, + }, + 1800 + ); // Cache for 30 minutes + + return { + success:
true, + uniqueCeoIds: uniqueCeoIds.length, + uniqueRecords: uniqueSymbols.length, + scheduledJobs, + timestamp: new Date().toISOString(), + }; + } catch (error) { + this.logger.error('Failed to update unique CEO symbols', { error }); + throw error; + } +} diff --git a/apps/stock/data-ingestion/src/handlers/ceo/ceo.handler.ts b/apps/stock/data-ingestion/src/handlers/ceo/ceo.handler.ts new file mode 100644 index 0000000..443f97e --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/ceo/ceo.handler.ts @@ -0,0 +1,34 @@ +import { + BaseHandler, + Handler, + Operation, + ScheduledOperation, + type IServiceContainer, +} from '@stock-bot/handlers'; +import { processIndividualSymbol, updateCeoChannels, updateUniqueSymbols } from './actions'; + +@Handler('ceo') +// @Disabled() +export class CeoHandler extends BaseHandler { + constructor(services: IServiceContainer) { + super(services); // Handler name read from @Handler decorator + } + + @ScheduledOperation('update-ceo-channels', '0 */15 * * *', { + priority: 7, + immediately: false, + description: 'Get all CEO symbols and exchanges', + }) + updateCeoChannels = updateCeoChannels; + + @Operation('update-unique-symbols') + @ScheduledOperation('process-unique-symbols', '0 0 1 * *', { + priority: 5, + immediately: false, + description: 'Process unique CEO symbols and schedule individual jobs', + }) + updateUniqueSymbols = updateUniqueSymbols; + + @Operation('process-individual-symbol') + processIndividualSymbol = processIndividualSymbol; +} diff --git a/apps/stock/data-ingestion/src/handlers/example/example.handler.ts b/apps/stock/data-ingestion/src/handlers/example/example.handler.ts new file mode 100644 index 0000000..5bb745a --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/example/example.handler.ts @@ -0,0 +1,96 @@ +/** + * Example Handler - Demonstrates ergonomic handler patterns + * Shows inline operations, service helpers, and scheduled operations + */ + +import { + BaseHandler, + Disabled, + Handler, + 
Operation, + ScheduledOperation, + type ExecutionContext, + type IServiceContainer, +} from '@stock-bot/handlers'; + +@Handler('example') +@Disabled() +export class ExampleHandler extends BaseHandler { + constructor(services: IServiceContainer) { + super(services); + } + + /** + * Simple inline operation - no separate action file needed + */ + @Operation('get-stats') + async getStats(): Promise<{ total: number; active: number; cached: boolean }> { + // Use collection helper for cleaner MongoDB access + const total = await this.collection('items').countDocuments(); + const active = await this.collection('items').countDocuments({ status: 'active' }); + + // Use cache helpers with automatic prefixing + const cached = await this.cacheGet('last-total'); + await this.cacheSet('last-total', total, 300); // 5 minutes + + // Use log helper with automatic handler context + this.log('info', 'Stats retrieved', { total, active }); + + return { total, active, cached: cached !== null }; + } + + /** + * Scheduled operation using combined decorator + */ + @ScheduledOperation('cleanup-old-items', '0 2 * * *', { + priority: 5, + description: 'Clean up items older than 30 days', + }) + async cleanupOldItems(): Promise<{ deleted: number }> { + const thirtyDaysAgo = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000); + + const result = await this.collection('items').deleteMany({ + createdAt: { $lt: thirtyDaysAgo }, + }); + + this.log('info', 'Cleanup completed', { deleted: result.deletedCount }); + + // Schedule a follow-up task + await this.scheduleIn('generate-report', { type: 'cleanup' }, 60); // 1 minute + + return { deleted: result.deletedCount }; + } + + /** + * Operation that uses proxy service + */ + @Operation('fetch-external-data') + async fetchExternalData(input: { url: string }): Promise<{ data: any }> { + const proxyUrl = this.proxy.getProxy(); + + if (!proxyUrl) { + throw new Error('No proxy available'); + } + + // Use HTTP client with proxy + const response = await 
this.http.get(input.url, { + proxy: proxyUrl, + timeout: 10000, + }); + + // Cache the result + await this.cacheSet(`external:${input.url}`, response.data, 3600); + + return { data: response.data }; + } + + /** + * Complex operation that still uses action file + */ + @Operation('process-batch') + async processBatch(input: any, _context: ExecutionContext): Promise<any> { + // For complex operations, still use action files + const { processBatch } = await import('./actions/batch.action'); + return processBatch(this, input); + } +} diff --git a/apps/stock/data-ingestion/src/handlers/ib/actions/fetch-exchanges-and-symbols.action.ts b/apps/stock/data-ingestion/src/handlers/ib/actions/fetch-exchanges-and-symbols.action.ts new file mode 100644 index 0000000..dc8d8ac --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/ib/actions/fetch-exchanges-and-symbols.action.ts @@ -0,0 +1,42 @@ +import type { IServiceContainer } from '@stock-bot/handlers'; +import { fetchSession } from './fetch-session.action'; +import { fetchExchanges } from './fetch-exchanges.action'; +import { fetchSymbols } from './fetch-symbols.action'; + +export async function fetchExchangesAndSymbols(services: IServiceContainer): Promise<any> { + services.logger.info('Starting IB exchanges and symbols fetch job'); + + try { + // Fetch session headers first + const sessionHeaders = await fetchSession(services); + if (!sessionHeaders) { + services.logger.error('Failed to get session headers for IB job'); + return { success: false, error: 'No session headers' }; + } + + services.logger.info('Session headers obtained, fetching exchanges...'); + + // Fetch exchanges + const exchanges = await fetchExchanges(services); + services.logger.info('Fetched exchanges from IB', { count: exchanges?.length || 0 }); + + // Fetch symbols + services.logger.info('Fetching symbols...'); + const symbols = await fetchSymbols(services); + services.logger.info('Fetched symbols from IB', { count: symbols?.length || 0 }); + + return { +
success: true, + exchangesCount: exchanges?.length || 0, + symbolsCount: symbols?.length || 0, + }; + } catch (error) { + services.logger.error('Failed to fetch IB exchanges and symbols', { error }); + return { + success: false, + error: error instanceof Error ? error.message : 'Unknown error', + }; + } +} + + diff --git a/apps/data-service/src/handlers/ib/operations/exchanges.operations.ts b/apps/stock/data-ingestion/src/handlers/ib/actions/fetch-exchanges.action.ts similarity index 55% rename from apps/data-service/src/handlers/ib/operations/exchanges.operations.ts rename to apps/stock/data-ingestion/src/handlers/ib/actions/fetch-exchanges.action.ts index 4260442..9a8916c 100644 --- a/apps/data-service/src/handlers/ib/operations/exchanges.operations.ts +++ b/apps/stock/data-ingestion/src/handlers/ib/actions/fetch-exchanges.action.ts @@ -1,16 +1,16 @@ -/** - * IB Exchanges Operations - Fetching exchange data from IB API - */ -import { getMongoDBClient } from '@stock-bot/mongodb-client'; -import { OperationContext } from '@stock-bot/utils'; - +import type { IServiceContainer } from '@stock-bot/handlers'; import { IB_CONFIG } from '../shared/config'; +import { fetchSession } from './fetch-session.action'; -export async function fetchExchanges(sessionHeaders: Record<string, string>): Promise<any> { - const ctx = OperationContext.create('ib', 'exchanges'); - +export async function fetchExchanges(services: IServiceContainer): Promise<any> { try { - ctx.logger.info('🔍 Fetching exchanges with session headers...'); + // First get session headers + const sessionHeaders = await fetchSession(services); + if (!sessionHeaders) { + throw new Error('Failed to get session headers'); + } + + services.logger.info('🔍 Fetching exchanges with session headers...'); // The URL for the exchange data API const exchangeUrl = IB_CONFIG.BASE_URL + IB_CONFIG.EXCHANGE_API; @@ -28,7 +28,7 @@ export async function fetchExchanges(sessionHeaders: Record<string, string>): Pr 'X-Requested-With': 'XMLHttpRequest', }; - ctx.logger.info('📤
Making request to exchange API...', { + services.logger.info('📤 Making request to exchange API...', { url: exchangeUrl, headerCount: Object.keys(requestHeaders).length, }); @@ -41,7 +41,7 @@ export async function fetchExchanges(sessionHeaders: Record<string, string>): Pr }); if (!response.ok) { - ctx.logger.error('❌ Exchange API request failed', { + services.logger.error('❌ Exchange API request failed', { status: response.status, statusText: response.statusText, }); @@ -50,18 +50,19 @@ export async function fetchExchanges(sessionHeaders: Record<string, string>): Pr const data = await response.json(); const exchanges = data?.exchanges || []; - ctx.logger.info('✅ Exchange data fetched successfully'); + services.logger.info('✅ Exchange data fetched successfully'); - ctx.logger.info('Saving IB exchanges to MongoDB...'); - const client = getMongoDBClient(); - await client.batchUpsert('ibExchanges', exchanges, ['id', 'country_code']); - ctx.logger.info('✅ Exchange IB data saved to MongoDB:', { + services.logger.info('Saving IB exchanges to MongoDB...'); + await services.mongodb.batchUpsert('ibExchanges', exchanges, ['id', 'country_code']); + services.logger.info('✅ Exchange IB data saved to MongoDB:', { count: exchanges.length, }); return exchanges; } catch (error) { - ctx.logger.error('❌ Failed to fetch exchanges', { error }); + services.logger.error('❌ Failed to fetch exchanges', { error }); return null; } -} \ No newline at end of file +} + + diff --git a/apps/stock/data-ingestion/src/handlers/ib/actions/fetch-session.action.ts b/apps/stock/data-ingestion/src/handlers/ib/actions/fetch-session.action.ts new file mode 100644 index 0000000..1560c53 --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/ib/actions/fetch-session.action.ts @@ -0,0 +1,84 @@ +import { Browser } from '@stock-bot/browser'; +import type { IServiceContainer } from '@stock-bot/handlers'; +import { IB_CONFIG } from '../shared/config'; + +export async function fetchSession(services: IServiceContainer): Promise<Record<string, string> | undefined> {
try { + await Browser.initialize({ + headless: true, + timeout: IB_CONFIG.BROWSER_TIMEOUT, + blockResources: false, + }); + services.logger.info('✅ Browser initialized'); + + const { page } = await Browser.createPageWithProxy( + IB_CONFIG.BASE_URL + IB_CONFIG.PRODUCTS_PAGE, + IB_CONFIG.DEFAULT_PROXY + ); + services.logger.info('✅ Page created with proxy'); + + const headersPromise = new Promise<Record<string, string> | undefined>(resolve => { + let resolved = false; + + page.onNetworkEvent(event => { + if (event.url.includes('/webrest/search/product-types/summary')) { + if (event.type === 'request' && !resolved) { + resolved = true; + try { + resolve(event.headers); + } catch (e) { + resolve(undefined); + services.logger.debug('Raw Summary Response error', { error: (e as Error).message }); + } + } + } + }); + + // Timeout fallback + setTimeout(() => { + if (!resolved) { + resolved = true; + services.logger.warn('Timeout waiting for headers'); + resolve(undefined); + } + }, IB_CONFIG.HEADERS_TIMEOUT); + }); + + services.logger.info('⏳ Waiting for page load...'); + await page.waitForLoadState('domcontentloaded', { timeout: IB_CONFIG.PAGE_LOAD_TIMEOUT }); + services.logger.info('✅ Page loaded'); + + // Products tabs + services.logger.info('🔍 Looking for Products tab...'); + const productsTab = page.locator('#productSearchTab[role="tab"][href="#products"]'); + await productsTab.waitFor({ timeout: IB_CONFIG.ELEMENT_TIMEOUT }); + services.logger.info('✅ Found Products tab'); + services.logger.info('🖱️ Clicking Products tab...'); + await productsTab.click(); + services.logger.info('✅ Products tab clicked'); + + // New Products Checkbox + services.logger.info('🔍 Looking for "New Products Only" radio button...'); + const radioButton = page.locator('span.checkbox-text:has-text("New Products Only")'); + await radioButton.waitFor({ timeout: IB_CONFIG.ELEMENT_TIMEOUT }); + services.logger.info(`🎯 Found "New Products Only" radio button`); + await radioButton.first().click(); + services.logger.info('✅ "New Products Only" radio button
clicked'); + + // Wait for and return headers immediately when captured + services.logger.info('⏳ Waiting for headers to be captured...'); + const headers = await headersPromise; + page.close(); + if (headers) { + services.logger.info('✅ Headers captured successfully'); + } else { + services.logger.warn('⚠️ No headers were captured'); + } + + return headers; + } catch (error) { + services.logger.error('Failed to fetch IB symbol summary', { error }); + return; + } +} + + diff --git a/apps/data-service/src/handlers/ib/operations/symbols.operations.ts b/apps/stock/data-ingestion/src/handlers/ib/actions/fetch-symbols.action.ts similarity index 59% rename from apps/data-service/src/handlers/ib/operations/symbols.operations.ts rename to apps/stock/data-ingestion/src/handlers/ib/actions/fetch-symbols.action.ts index 94653df..90d097f 100644 --- a/apps/data-service/src/handlers/ib/operations/symbols.operations.ts +++ b/apps/stock/data-ingestion/src/handlers/ib/actions/fetch-symbols.action.ts @@ -1,18 +1,17 @@ -/** - * IB Symbols Operations - Fetching symbol data from IB API - */ -import { getMongoDBClient } from '@stock-bot/mongodb-client'; -import { OperationContext } from '@stock-bot/utils'; - +import type { IServiceContainer } from '@stock-bot/handlers'; import { IB_CONFIG } from '../shared/config'; +import { fetchSession } from './fetch-session.action'; -// Fetch symbols from IB using the session headers -export async function fetchSymbols(sessionHeaders: Record<string, string>): Promise<any> { - const ctx = OperationContext.create('ib', 'symbols'); - +export async function fetchSymbols(services: IServiceContainer): Promise<any> { try { - ctx.logger.info('🔍 Fetching symbols with session headers...'); - + // First get session headers + const sessionHeaders = await fetchSession(services); + if (!sessionHeaders) { + throw new Error('Failed to get session headers'); + } + + services.logger.info('🔍 Fetching symbols with session headers...'); + // Prepare headers - include all session headers plus any
additional ones const requestHeaders = { ...sessionHeaders, @@ -39,18 +38,15 @@ export async function fetchSymbols(sessionHeaders: Record<string, string>): Prom }; // Get Summary - const summaryResponse = await fetch( - IB_CONFIG.BASE_URL + IB_CONFIG.SUMMARY_API, - { - method: 'POST', - headers: requestHeaders, - proxy: IB_CONFIG.DEFAULT_PROXY, - body: JSON.stringify(requestBody), - } - ); + const summaryResponse = await fetch(IB_CONFIG.BASE_URL + IB_CONFIG.SUMMARY_API, { + method: 'POST', + headers: requestHeaders, + proxy: IB_CONFIG.DEFAULT_PROXY, + body: JSON.stringify(requestBody), + }); if (!summaryResponse.ok) { - ctx.logger.error('❌ Summary API request failed', { + services.logger.error('❌ Summary API request failed', { status: summaryResponse.status, statusText: summaryResponse.statusText, }); @@ -58,36 +54,33 @@ export async function fetchSymbols(sessionHeaders: Record<string, string>): Prom } const summaryData = await summaryResponse.json(); - ctx.logger.info('✅ IB Summary data fetched successfully', { + services.logger.info('✅ IB Summary data fetched successfully', { totalCount: summaryData[0].totalCount, }); const symbols = []; requestBody.pageSize = IB_CONFIG.PAGE_SIZE; const pageCount = Math.ceil(summaryData[0].totalCount / IB_CONFIG.PAGE_SIZE) || 0; - ctx.logger.info('Fetching Symbols for IB', { pageCount }); - + services.logger.info('Fetching Symbols for IB', { pageCount }); + const symbolPromises = []; for (let page = 1; page <= pageCount; page++) { requestBody.pageNumber = page; // Fetch symbols for the current page - const symbolsResponse = fetch( - IB_CONFIG.BASE_URL + IB_CONFIG.PRODUCTS_API, - { - method: 'POST', - headers: requestHeaders, - proxy: IB_CONFIG.DEFAULT_PROXY, - body: JSON.stringify(requestBody), - } - ); + const symbolsResponse = fetch(IB_CONFIG.BASE_URL + IB_CONFIG.PRODUCTS_API, { + method: 'POST', + headers: requestHeaders, + proxy: IB_CONFIG.DEFAULT_PROXY, + body: JSON.stringify(requestBody), + }); symbolPromises.push(symbolsResponse); } - + const responses =
await Promise.all(symbolPromises); for (const response of responses) { if (!response.ok) { - ctx.logger.error('❌ Symbols API request failed', { + services.logger.error('❌ Symbols API request failed', { status: response.status, statusText: response.statusText, }); @@ -98,28 +91,29 @@ export async function fetchSymbols(sessionHeaders: Record): Prom if (symJson && symJson.length > 0) { symbols.push(...symJson); } else { - ctx.logger.warn('⚠️ No symbols found in response'); + services.logger.warn('⚠️ No symbols found in response'); continue; } } - + if (symbols.length === 0) { - ctx.logger.warn('⚠️ No symbols fetched from IB'); + services.logger.warn('⚠️ No symbols fetched from IB'); return null; } - ctx.logger.info('✅ IB symbols fetched successfully, saving to DB...', { + services.logger.info('✅ IB symbols fetched successfully, saving to DB...', { totalSymbols: symbols.length, }); - const client = getMongoDBClient(); - await client.batchUpsert('ib_symbols', symbols, ['symbol', 'exchangeId']); - ctx.logger.info('Saved IB symbols to DB', { + await services.mongodb.batchUpsert('ib_symbols', symbols, ['symbol', 'exchangeId']); + services.logger.info('Saved IB symbols to DB', { totalSymbols: symbols.length, }); return symbols; } catch (error) { - ctx.logger.error('❌ Failed to fetch symbols', { error }); + services.logger.error('❌ Failed to fetch symbols', { error }); return null; } -} \ No newline at end of file +} + + diff --git a/apps/stock/data-ingestion/src/handlers/ib/actions/index.ts b/apps/stock/data-ingestion/src/handlers/ib/actions/index.ts new file mode 100644 index 0000000..04dde8e --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/ib/actions/index.ts @@ -0,0 +1,5 @@ +export { fetchSession } from './fetch-session.action'; +export { fetchExchanges } from './fetch-exchanges.action'; +export { fetchSymbols } from './fetch-symbols.action'; +export { fetchExchangesAndSymbols } from './fetch-exchanges-and-symbols.action'; + diff --git 
a/apps/stock/data-ingestion/src/handlers/ib/ib.handler.ts b/apps/stock/data-ingestion/src/handlers/ib/ib.handler.ts new file mode 100644 index 0000000..0748dbb --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/ib/ib.handler.ts @@ -0,0 +1,42 @@ +import { + BaseHandler, + Handler, + Operation, + ScheduledOperation, + type IServiceContainer, +} from '@stock-bot/handlers'; +import { fetchExchanges, fetchExchangesAndSymbols, fetchSession, fetchSymbols } from './actions'; + +@Handler('ib') +export class IbHandler extends BaseHandler { + constructor(services: IServiceContainer) { + super(services); + } + + @Operation('fetch-session') + async fetchSession(): Promise<Record<string, string> | undefined> { + return fetchSession(this); + } + + @Operation('fetch-exchanges') + async fetchExchanges(): Promise { + return fetchExchanges(this); + } + + @Operation('fetch-symbols') + async fetchSymbols(): Promise { + return fetchSymbols(this); + } + + @Operation('ib-exchanges-and-symbols') + @ScheduledOperation('ib-exchanges-and-symbols', '0 0 * * 0', { + priority: 5, + description: 'Fetch and update IB exchanges and symbols data', + immediately: false, + }) + async fetchExchangesAndSymbols(): Promise { + return fetchExchangesAndSymbols(this); + } +} + + diff --git a/apps/data-service/src/handlers/ib/shared/config.ts b/apps/stock/data-ingestion/src/handlers/ib/shared/config.ts similarity index 98% rename from apps/data-service/src/handlers/ib/shared/config.ts rename to apps/stock/data-ingestion/src/handlers/ib/shared/config.ts index 1f09326..91bf09c 100644 --- a/apps/data-service/src/handlers/ib/shared/config.ts +++ b/apps/stock/data-ingestion/src/handlers/ib/shared/config.ts @@ -8,16 +8,17 @@ export const IB_CONFIG = { EXCHANGE_API: '/webrest/exchanges', SUMMARY_API: '/webrest/search/product-types/summary', PRODUCTS_API: '/webrest/search/products-by-filters', - + // Browser configuration BROWSER_TIMEOUT: 10000, PAGE_LOAD_TIMEOUT: 20000, ELEMENT_TIMEOUT: 5000, HEADERS_TIMEOUT: 30000, - + // API
configuration DEFAULT_PROXY: 'http://doimvbnb-US-rotate:w5fpiwrb9895@p.webshare.io:80', PAGE_SIZE: 500, PRODUCT_COUNTRIES: ['CA', 'US'], PRODUCT_TYPES: ['STK'], -}; \ No newline at end of file +}; + diff --git a/apps/stock/data-ingestion/src/handlers/index.ts b/apps/stock/data-ingestion/src/handlers/index.ts new file mode 100644 index 0000000..e572141 --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/index.ts @@ -0,0 +1,61 @@ +/** + * Handler auto-registration + * Automatically discovers and registers all handlers + */ + +import type { IServiceContainer } from '@stock-bot/handlers'; +import { autoRegisterHandlers } from '@stock-bot/handlers'; +import { getLogger } from '@stock-bot/logger'; +// Import handlers for bundling (ensures they're included in the build) +import './ceo/ceo.handler'; +import './ib/ib.handler'; +import './proxy/proxy.handler'; +import './qm/qm.handler'; +import './webshare/webshare.handler'; + +// Add more handler imports as needed + +const logger = getLogger('handler-init'); + +/** + * Initialize and register all handlers automatically + */ +export async function initializeAllHandlers(serviceContainer: IServiceContainer): Promise<void> { + try { + // Auto-register all handlers in this directory + const result = await autoRegisterHandlers(__dirname, serviceContainer, { + pattern: '.handler.', + exclude: ['test', 'spec'], + dryRun: false, + serviceName: 'data-ingestion', + }); + + logger.info('Handler auto-registration complete', { + registered: result.registered, + failed: result.failed, + }); + + if (result.failed.length > 0) { + logger.error('Some handlers failed to register', { failed: result.failed }); + } + } catch (error) { + logger.error('Handler auto-registration failed', { error }); + // Fall back to manual registration + await manualHandlerRegistration(serviceContainer); + } +} + +/** + * Manual fallback registration + */ +async function manualHandlerRegistration(_serviceContainer: any): Promise<void> { + logger.warn('Falling back to
manual handler registration'); + + try { + + logger.info('Manual handler registration complete'); + } catch (error) { + logger.error('Manual handler registration failed', { error }); + throw error; + } +} diff --git a/apps/data-service/src/handlers/proxy/operations/check.operations.ts b/apps/stock/data-ingestion/src/handlers/proxy/operations/check.operations.ts similarity index 68% rename from apps/data-service/src/handlers/proxy/operations/check.operations.ts rename to apps/stock/data-ingestion/src/handlers/proxy/operations/check.operations.ts index d49ead3..9a6cbc3 100644 --- a/apps/data-service/src/handlers/proxy/operations/check.operations.ts +++ b/apps/stock/data-ingestion/src/handlers/proxy/operations/check.operations.ts @@ -1,28 +1,23 @@ /** * Proxy Check Operations - Checking proxy functionality */ -import { HttpClient, ProxyInfo } from '@stock-bot/http'; -import { OperationContext } from '@stock-bot/utils'; - +import type { OperationContext } from '@stock-bot/di'; +import { getLogger } from '@stock-bot/logger'; +import type { ProxyInfo } from '@stock-bot/proxy'; +import { fetch } from '@stock-bot/utils'; import { PROXY_CONFIG } from '../shared/config'; -import { ProxyStatsManager } from '../shared/proxy-manager'; - -// Shared HTTP client -let httpClient: HttpClient; - -function getHttpClient(ctx: OperationContext): HttpClient { - if (!httpClient) { - httpClient = new HttpClient({ timeout: 10000 }, ctx.logger); - } - return httpClient; -} /** * Check if a proxy is working */ export async function checkProxy(proxy: ProxyInfo): Promise { - const ctx = OperationContext.create('proxy', 'check'); - + const ctx = { + logger: getLogger('proxy-check'), + resolve: (_name: string) => { + throw new Error(`Service container not available for proxy operations`); + }, + } as any; + let success = false; ctx.logger.debug(`Checking Proxy:`, { protocol: proxy.protocol, @@ -31,22 +26,28 @@ export async function checkProxy(proxy: ProxyInfo): Promise { }); try { - // Test the 
proxy - const client = getHttpClient(ctx); - const response = await client.get(PROXY_CONFIG.CHECK_URL, { - proxy, - timeout: PROXY_CONFIG.CHECK_TIMEOUT, - }); + // Test the proxy using fetch with proxy support + const proxyUrl = + proxy.username && proxy.password + ? `${proxy.protocol}://${encodeURIComponent(proxy.username)}:${encodeURIComponent(proxy.password)}@${proxy.host}:${proxy.port}` + : `${proxy.protocol}://${proxy.host}:${proxy.port}`; - const isWorking = response.status >= 200 && response.status < 300; + const response = await fetch(PROXY_CONFIG.CHECK_URL, { + proxy: proxyUrl, + signal: AbortSignal.timeout(PROXY_CONFIG.CHECK_TIMEOUT), + logger: ctx.logger, + } as any); + + const data = await response.text(); + + const isWorking = response.ok; const result: ProxyInfo = { ...proxy, isWorking, lastChecked: new Date(), - responseTime: response.responseTime, }; - if (isWorking && !JSON.stringify(response.data).includes(PROXY_CONFIG.CHECK_IP)) { + if (isWorking && !data.includes(PROXY_CONFIG.CHECK_IP)) { success = true; await updateProxyInCache(result, true, ctx); } else { @@ -93,11 +94,17 @@ export async function checkProxy(proxy: ProxyInfo): Promise { /** * Update proxy data in cache with working/total stats and average response time */ -async function updateProxyInCache(proxy: ProxyInfo, isWorking: boolean, ctx: OperationContext): Promise<void> { - const cacheKey = `${PROXY_CONFIG.CACHE_KEY}:${proxy.protocol}://${proxy.host}:${proxy.port}`; +async function updateProxyInCache( + proxy: ProxyInfo, + isWorking: boolean, + ctx: OperationContext +): Promise<void> { + // const _cacheKey = `${PROXY_CONFIG.CACHE_KEY}:${proxy.protocol}://${proxy.host}:${proxy.port}`; try { - const existing: ProxyInfo | null = await ctx.cache.get(cacheKey); + // For now, skip cache operations without service container + // TODO: Pass service container to operations + const existing: ProxyInfo | null = null; // For failed proxies, only update if they already exist if (!isWorking && !existing) { @@
-140,8 +147,9 @@ async function updateProxyInCache(proxy: ProxyInfo, isWorking: boolean, ctx: Ope updated.successRate = updated.total > 0 ? (updated.working / updated.total) * 100 : 0; // Save to cache: reset TTL for working proxies, keep existing TTL for failed ones - const cacheOptions = isWorking ? { ttl: PROXY_CONFIG.CACHE_TTL } : undefined; - await ctx.cache.set(cacheKey, updated, cacheOptions); + // const _cacheOptions = isWorking ? { ttl: PROXY_CONFIG.CACHE_TTL } : undefined; + // Skip cache operations without service container + // TODO: Pass service container to operations ctx.logger.debug(`Updated ${isWorking ? 'working' : 'failed'} proxy in cache`, { proxy: `${proxy.host}:${proxy.port}`, @@ -161,15 +169,8 @@ async function updateProxyInCache(proxy: ProxyInfo, isWorking: boolean, ctx: Ope } function updateProxyStats(sourceId: string, success: boolean, ctx: OperationContext) { - const statsManager = ProxyStatsManager.getInstance(); - const source = statsManager.updateSourceStats(sourceId, success); - - if (!source) { - ctx.logger.warn(`Unknown proxy source: ${sourceId}`); - return; - } + // Stats are now handled by the global ProxyManager + ctx.logger.debug('Proxy check result', { sourceId, success }); - // Cache the updated stats - ctx.cache.set(`${PROXY_CONFIG.CACHE_STATS_KEY}:${source.id}`, source, { ttl: PROXY_CONFIG.CACHE_TTL }) - .catch(error => ctx.logger.debug('Failed to cache proxy stats', { error })); -} \ No newline at end of file + // TODO: Integrate with global ProxyManager stats if needed +} diff --git a/apps/data-service/src/handlers/proxy/operations/fetch.operations.ts b/apps/stock/data-ingestion/src/handlers/proxy/operations/fetch.operations.ts similarity index 74% rename from apps/data-service/src/handlers/proxy/operations/fetch.operations.ts rename to apps/stock/data-ingestion/src/handlers/proxy/operations/fetch.operations.ts index ed6910c..335cc9d 100644 --- a/apps/data-service/src/handlers/proxy/operations/fetch.operations.ts +++ 
b/apps/stock/data-ingestion/src/handlers/proxy/operations/fetch.operations.ts @@ -1,28 +1,20 @@ /** * Proxy Fetch Operations - Fetching proxies from sources */ -import { HttpClient, ProxyInfo } from '@stock-bot/http'; -import { OperationContext } from '@stock-bot/utils'; +import type { ProxyInfo } from '@stock-bot/proxy'; +import { OperationContext } from '@stock-bot/di'; +import { getLogger } from '@stock-bot/logger'; +import { fetch } from '@stock-bot/utils'; import { PROXY_CONFIG } from '../shared/config'; -import { ProxyStatsManager } from '../shared/proxy-manager'; import type { ProxySource } from '../shared/types'; -// Shared HTTP client -let httpClient: HttpClient; - -function getHttpClient(ctx: OperationContext): HttpClient { - if (!httpClient) { - httpClient = new HttpClient({ timeout: 10000 }, ctx.logger); - } - return httpClient; -} - export async function fetchProxiesFromSources(): Promise<ProxyInfo[]> { - const ctx = OperationContext.create('proxy', 'fetch-sources'); + const ctx = { + logger: getLogger('proxy-fetch') + } as any; - const statsManager = ProxyStatsManager.getInstance(); - statsManager.resetStats(); + ctx.logger.info('Starting proxy fetch from sources'); const fetchPromises = PROXY_CONFIG.PROXY_SOURCES.map(source => fetchProxiesFromSource(source, ctx)); const results = await Promise.all(fetchPromises); @@ -43,17 +35,17 @@ export async function fetchProxiesFromSource(source: ProxySource, ctx?: Operatio try { ctx.logger.info(`Fetching proxies from ${source.url}`); - const client = getHttpClient(ctx); - const response = await client.get(source.url, { - timeout: 10000, - }); + const response = await fetch(source.url, { + signal: AbortSignal.timeout(10000), + logger: ctx.logger + } as any); - if (response.status !== 200) { + if (!response.ok) { ctx.logger.warn(`Failed to fetch from ${source.url}: ${response.status}`); return []; } - const text = response.data; + const text = await response.text(); const lines = text.split('\n').filter((line: string) =>
line.trim()); for (const line of lines) { @@ -68,7 +60,7 @@ export async function fetchProxiesFromSource(source: ProxySource, ctx?: Operatio if (parts.length >= 2) { const proxy: ProxyInfo = { source: source.id, - protocol: source.protocol as 'http' | 'https' | 'socks4' | 'socks5', + protocol: source.protocol as 'http' | 'https', host: parts[0], port: parseInt(parts[1]), }; diff --git a/apps/data-service/src/handlers/proxy/operations/query.operations.ts b/apps/stock/data-ingestion/src/handlers/proxy/operations/query.operations.ts similarity index 90% rename from apps/data-service/src/handlers/proxy/operations/query.operations.ts rename to apps/stock/data-ingestion/src/handlers/proxy/operations/query.operations.ts index 87165fd..5778af3 100644 --- a/apps/data-service/src/handlers/proxy/operations/query.operations.ts +++ b/apps/stock/data-ingestion/src/handlers/proxy/operations/query.operations.ts @@ -1,9 +1,8 @@ /** * Proxy Query Operations - Getting active proxies from cache */ -import { ProxyInfo } from '@stock-bot/http'; -import { OperationContext } from '@stock-bot/utils'; - +import { OperationContext } from '@stock-bot/di'; +import type { ProxyInfo } from '@stock-bot/proxy'; import { PROXY_CONFIG } from '../shared/config'; /** @@ -17,7 +16,7 @@ export async function getRandomActiveProxy( minSuccessRate: number = 50 ): Promise<ProxyInfo | null> { const ctx = OperationContext.create('proxy', 'get-random'); - + try { // Get all active proxy keys from cache const pattern = protocol @@ -56,7 +55,10 @@ export async function getRandomActiveProxy( return proxyData; } } catch (error) { - ctx.logger.debug('Error reading proxy from cache', { key, error: (error as Error).message }); + ctx.logger.debug('Error reading proxy from cache', { + key, + error: (error as Error).message, + }); continue; } } @@ -76,4 +78,4 @@ export async function getRandomActiveProxy( }); return null; } -} \ No newline at end of file +} diff --git a/apps/data-service/src/handlers/proxy/operations/queue.operations.ts
b/apps/stock/data-ingestion/src/handlers/proxy/operations/queue.operations.ts similarity index 58% rename from apps/data-service/src/handlers/proxy/operations/queue.operations.ts rename to apps/stock/data-ingestion/src/handlers/proxy/operations/queue.operations.ts index 22b23a8..54114b5 100644 --- a/apps/data-service/src/handlers/proxy/operations/queue.operations.ts +++ b/apps/stock/data-ingestion/src/handlers/proxy/operations/queue.operations.ts @@ -1,14 +1,18 @@ /** * Proxy Queue Operations - Queueing proxy operations */ -import { ProxyInfo } from '@stock-bot/http'; -import { QueueManager } from '@stock-bot/queue'; -import { OperationContext } from '@stock-bot/utils'; +import { OperationContext } from '@stock-bot/di'; +import type { ProxyInfo } from '@stock-bot/proxy'; +import type { IServiceContainer } from '@stock-bot/handlers'; -export async function queueProxyFetch(): Promise { +export async function queueProxyFetch(container: IServiceContainer): Promise { const ctx = OperationContext.create('proxy', 'queue-fetch'); + + const queueManager = container.queue; + if (!queueManager) { + throw new Error('Queue manager not available'); + } - const queueManager = QueueManager.getInstance(); const queue = queueManager.getQueue('proxy'); const job = await queue.add('proxy-fetch', { handler: 'proxy', @@ -22,10 +26,14 @@ export async function queueProxyFetch(): Promise { return jobId; } -export async function queueProxyCheck(proxies: ProxyInfo[]): Promise { +export async function queueProxyCheck(proxies: ProxyInfo[], container: IServiceContainer): Promise { const ctx = OperationContext.create('proxy', 'queue-check'); + + const queueManager = container.queue; + if (!queueManager) { + throw new Error('Queue manager not available'); + } - const queueManager = QueueManager.getInstance(); const queue = queueManager.getQueue('proxy'); const job = await queue.add('proxy-check', { handler: 'proxy', diff --git a/apps/stock/data-ingestion/src/handlers/proxy/proxy.handler.ts 
b/apps/stock/data-ingestion/src/handlers/proxy/proxy.handler.ts new file mode 100644 index 0000000..b64f1d6 --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/proxy/proxy.handler.ts @@ -0,0 +1,86 @@ +import { + BaseHandler, + Handler, + Operation, + ScheduledOperation, + type IServiceContainer, +} from '@stock-bot/handlers'; +import type { ProxyInfo } from '@stock-bot/proxy'; +import { processItems } from '@stock-bot/queue'; +import { fetchProxiesFromSources } from './operations/fetch.operations'; +import { checkProxy } from './operations/check.operations'; + +@Handler('proxy') +export class ProxyHandler extends BaseHandler { + constructor(services: IServiceContainer) { + super(services); + } + + @Operation('fetch-from-sources') + @ScheduledOperation('proxy-fetch-and-check', '0 0 * * 0', { + priority: 0, + description: 'Fetch and validate proxy list from sources', + // immediately: true, // Don't run immediately during startup to avoid conflicts + }) + async fetchFromSources(): Promise<{ + processed: number; + jobsCreated: number; + batchesCreated?: number; + mode: string; + }> { + // Fetch proxies from all configured sources + this.logger.info('Processing fetch proxies from sources request'); + + const proxies = await fetchProxiesFromSources(); + this.logger.info('Fetched proxies from sources', { count: proxies.length }); + + if (proxies.length === 0) { + this.logger.warn('No proxies fetched from sources'); + return { processed: 0, jobsCreated: 0, mode: 'direct' }; + } + + // Get QueueManager from service container + const queueManager = this.queue; + if (!queueManager) { + throw new Error('Queue manager not available'); + } + + // Batch process the proxies through check-proxy operation + const batchResult = await processItems(proxies, 'proxy', { + handler: 'proxy', + operation: 'check-proxy', + totalDelayHours: 0.083, // 5 minutes (5/60 hours) + batchSize: 50, // Process 50 proxies per batch + priority: 3, + useBatching: true, + retries: 1, + ttl: 30000, 
// 30 second timeout per proxy check + removeOnComplete: 5, + removeOnFail: 3, + }, queueManager); + + this.logger.info('Batch proxy validation completed', { + totalProxies: proxies.length, + jobsCreated: batchResult.jobsCreated, + mode: batchResult.mode, + batchesCreated: batchResult.batchesCreated, + duration: `${batchResult.duration}ms`, + }); + + return { + processed: proxies.length, + jobsCreated: batchResult.jobsCreated, + batchesCreated: batchResult.batchesCreated, + mode: batchResult.mode, + }; + } + + @Operation('check-proxy') + async checkProxyOperation(payload: ProxyInfo): Promise { + // payload is now the raw proxy info object + this.logger.debug('Processing proxy check request', { + proxy: `${payload.host}:${payload.port}`, + }); + return checkProxy(payload); + } +} \ No newline at end of file diff --git a/apps/data-service/src/handlers/proxy/shared/config.ts b/apps/stock/data-ingestion/src/handlers/proxy/shared/config.ts similarity index 99% rename from apps/data-service/src/handlers/proxy/shared/config.ts rename to apps/stock/data-ingestion/src/handlers/proxy/shared/config.ts index 260605b..06481bb 100644 --- a/apps/data-service/src/handlers/proxy/shared/config.ts +++ b/apps/stock/data-ingestion/src/handlers/proxy/shared/config.ts @@ -137,4 +137,4 @@ export const PROXY_CONFIG = { protocol: 'https', }, ], -}; \ No newline at end of file +}; diff --git a/apps/data-service/src/handlers/proxy/shared/types.ts b/apps/stock/data-ingestion/src/handlers/proxy/shared/types.ts similarity index 99% rename from apps/data-service/src/handlers/proxy/shared/types.ts rename to apps/stock/data-ingestion/src/handlers/proxy/shared/types.ts index b28c618..331cb3f 100644 --- a/apps/data-service/src/handlers/proxy/shared/types.ts +++ b/apps/stock/data-ingestion/src/handlers/proxy/shared/types.ts @@ -10,4 +10,4 @@ export interface ProxySource { total?: number; // Optional, used for stats percentWorking?: number; // Optional, used for stats lastChecked?: Date; // Optional, 
used for stats -} \ No newline at end of file +} diff --git a/apps/stock/data-ingestion/src/handlers/qm/actions/exchanges.action.ts b/apps/stock/data-ingestion/src/handlers/qm/actions/exchanges.action.ts new file mode 100644 index 0000000..8e7a04b --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/qm/actions/exchanges.action.ts @@ -0,0 +1,19 @@ +/** + * QM Exchanges Operations - Simple exchange data fetching + */ + +import type { IServiceContainer } from '@stock-bot/handlers'; + +export async function fetchExchanges(services: IServiceContainer): Promise { + // Get exchanges from MongoDB + const exchanges = await services.mongodb.collection('qm_exchanges').find({}).toArray(); + + return exchanges; +} + +export async function getExchangeByCode(services: IServiceContainer, code: string): Promise { + // Get specific exchange by code + const exchange = await services.mongodb.collection('qm_exchanges').findOne({ code }); + + return exchange; +} diff --git a/apps/stock/data-ingestion/src/handlers/qm/actions/session.action.ts b/apps/stock/data-ingestion/src/handlers/qm/actions/session.action.ts new file mode 100644 index 0000000..99fc9e7 --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/qm/actions/session.action.ts @@ -0,0 +1,72 @@ +/** + * QM Session Actions - Session management and creation + */ + +import { BaseHandler } from '@stock-bot/core/handlers'; +import { QM_SESSION_IDS, SESSION_CONFIG } from '../shared/config'; +import { QMSessionManager } from '../shared/session-manager'; + +/** + * Check existing sessions and queue creation jobs for needed sessions + */ +export async function checkSessions(handler: BaseHandler): Promise<{ + cleaned: number; + queued: number; + message: string; +}> { + const sessionManager = QMSessionManager.getInstance(); + const cleanedCount = sessionManager.cleanupFailedSessions(); + // Check which session IDs need more sessions and queue creation jobs + let queuedCount = 0; + for (const [sessionType, sessionId] of 
Object.entries(QM_SESSION_IDS)) { + handler.logger.debug(`Checking session ID: ${sessionId}`); + if (sessionManager.needsMoreSessions(sessionId)) { + const currentCount = sessionManager.getSessions(sessionId).length; + const neededSessions = SESSION_CONFIG.MAX_SESSIONS - currentCount; + for (let i = 0; i < neededSessions; i++) { + await handler.scheduleOperation('create-session', { sessionId, sessionType }); + handler.logger.info(`Queued job to create session for ${sessionType}`); + queuedCount++; + } + } + } + + return { + cleaned: cleanedCount, + queued: queuedCount, + message: `Session check completed: cleaned ${cleanedCount}, queued ${queuedCount}`, + }; +} + +/** + * Create a single session for a specific session ID + */ +export async function createSingleSession( + handler: BaseHandler, + input: any +): Promise<{ sessionId: string; status: string; sessionType: string }> { + const { sessionId: _sessionId, sessionType } = input || {}; + const _sessionManager = QMSessionManager.getInstance(); + + // Get proxy from proxy service + const _proxyString = handler.proxy.getProxy(); + + // const session = { + // proxy: proxyString || 'http://proxy:8080', + // headers: sessionManager.getQmHeaders(), + // successfulCalls: 0, + // failedCalls: 0, + // lastUsed: new Date() + // }; + + handler.logger.info(`Creating session for ${sessionType}`); + + // Add session to manager + // sessionManager.addSession(sessionType, session); + + return { + sessionId: sessionType, + status: 'created', + sessionType, + }; +} diff --git a/apps/stock/data-ingestion/src/handlers/qm/actions/spider.action.ts b/apps/stock/data-ingestion/src/handlers/qm/actions/spider.action.ts new file mode 100644 index 0000000..2e694c5 --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/qm/actions/spider.action.ts @@ -0,0 +1,33 @@ +/** + * QM Spider Operations - Simple symbol discovery + */ + +import type { IServiceContainer } from '@stock-bot/handlers'; +import type { SymbolSpiderJob } from 
'../shared/types'; + +export async function spiderSymbolSearch( + services: IServiceContainer, + config: SymbolSpiderJob +): Promise<{ foundSymbols: number; depth: number }> { + // Simple spider implementation + // TODO: Implement actual API calls to discover symbols + + // For now, just return mock results + const foundSymbols = Math.floor(Math.random() * 10) + 1; + + return { + foundSymbols, + depth: config.depth, + }; +} + +export async function queueSymbolDiscovery( + services: IServiceContainer, + searchTerms: string[] +): Promise { + // Queue symbol discovery jobs + for (const term of searchTerms) { + // TODO: Queue actual discovery jobs + await services.cache.set(`discovery:${term}`, { queued: true }, 3600); + } +} diff --git a/apps/stock/data-ingestion/src/handlers/qm/actions/symbols.action.ts b/apps/stock/data-ingestion/src/handlers/qm/actions/symbols.action.ts new file mode 100644 index 0000000..54f004c --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/qm/actions/symbols.action.ts @@ -0,0 +1,19 @@ +/** + * QM Symbols Operations - Simple symbol fetching + */ + +import type { IServiceContainer } from '@stock-bot/handlers'; + +export async function searchSymbols(services: IServiceContainer): Promise { + // Get symbols from MongoDB + const symbols = await services.mongodb.collection('qm_symbols').find({}).limit(50).toArray(); + + return symbols; +} + +export async function fetchSymbolData(services: IServiceContainer, symbol: string): Promise { + // Fetch data for a specific symbol + const symbolData = await services.mongodb.collection('qm_symbols').findOne({ symbol }); + + return symbolData; +} diff --git a/apps/stock/data-ingestion/src/handlers/qm/qm.handler.ts b/apps/stock/data-ingestion/src/handlers/qm/qm.handler.ts new file mode 100644 index 0000000..6433566 --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/qm/qm.handler.ts @@ -0,0 +1,103 @@ +import { BaseHandler, Handler, type IServiceContainer } from '@stock-bot/handlers'; + 
+@Handler('qm') +export class QMHandler extends BaseHandler { + constructor(services: IServiceContainer) { + super(services); // Handler name read from @Handler decorator + } + + // @Operation('check-sessions') + // @QueueSchedule('0 */15 * * *', { + // priority: 7, + // immediately: true, + // description: 'Check and maintain QM sessions' + // }) + // async checkSessions(input: unknown, context: ExecutionContext): Promise { + // // Call the session maintenance action + // const { checkSessions } = await import('./actions/session.action'); + // return await checkSessions(this); + // } + + // @Operation('create-session') + // async createSession(input: unknown, context: ExecutionContext): Promise { + // // Call the individual session creation action + // const { createSingleSession } = await import('./actions/session.action'); + // return await createSingleSession(this, input); + // } + + // @Operation('search-symbols') + // async searchSymbols(_input: unknown, _context: ExecutionContext): Promise { + // this.logger.info('Searching QM symbols with new DI pattern...'); + // try { + // // Check existing symbols in MongoDB + // const symbolsCollection = this.mongodb.collection('qm_symbols'); + // const symbols = await symbolsCollection.find({}).limit(100).toArray(); + + // this.logger.info('QM symbol search completed', { count: symbols.length }); + + // if (symbols && symbols.length > 0) { + // // Cache result for performance + // await this.cache.set('qm-symbols-sample', symbols.slice(0, 10), 1800); + + // return { + // success: true, + // message: 'QM symbol search completed successfully', + // count: symbols.length, + // symbols: symbols.slice(0, 10), // Return first 10 symbols as sample + // }; + // } else { + // // No symbols found - this is expected initially + // this.logger.info('No QM symbols found in database yet'); + // return { + // success: true, + // message: 'No symbols found yet - database is empty', + // count: 0, + // }; + // } + + // } catch (error) 
{ + // this.logger.error('Failed to search QM symbols', { error }); + // throw error; + // } + // } + + // @Operation('spider-symbol-search') + // @QueueSchedule('0 0 * * 0', { + // priority: 10, + // immediately: false, + // description: 'Comprehensive symbol search using QM API' + // }) + // async spiderSymbolSearch(payload: SymbolSpiderJob | undefined, context: ExecutionContext): Promise { + // // Set default payload for scheduled runs + // const jobPayload: SymbolSpiderJob = payload || { + // prefix: null, + // depth: 1, + // source: 'qm', + // maxDepth: 4 + // }; + + // this.logger.info('Starting QM spider symbol search', { payload: jobPayload }); + + // // Store spider job info in cache (temporary data) + // const spiderJobId = `spider:qm:${Date.now()}:${Math.random().toString(36).substr(2, 9)}`; + // const spiderResult = { + // payload: jobPayload, + // startTime: new Date().toISOString(), + // status: 'started', + // jobId: spiderJobId + // }; + + // // Store in cache with 1 hour TTL (temporary data) + // await this.cache.set(spiderJobId, spiderResult, 3600); + // this.logger.debug('Spider job stored in cache', { spiderJobId, ttl: 3600 }); + + // // Schedule follow-up processing if needed + // await this.scheduleOperation('search-symbols', { source: 'spider', spiderJobId }, { delay: 5000 }); + + // return { + // success: true, + // message: 'QM spider search initiated', + // spiderJobId + // }; + // } +} diff --git a/apps/data-service/src/handlers/qm/shared/config.ts b/apps/stock/data-ingestion/src/handlers/qm/shared/config.ts similarity index 80% rename from apps/data-service/src/handlers/qm/shared/config.ts rename to apps/stock/data-ingestion/src/handlers/qm/shared/config.ts index f54deda..9964359 100644 --- a/apps/data-service/src/handlers/qm/shared/config.ts +++ b/apps/stock/data-ingestion/src/handlers/qm/shared/config.ts @@ -2,12 +2,10 @@ * Shared configuration for QM operations */ -import { getRandomUserAgent } from '@stock-bot/http'; - // QM Session 
IDs for different endpoints export const QM_SESSION_IDS = { LOOKUP: 'dc8c9930437f65d30f6597768800957017bac203a0a50342932757c8dfa158d6', // lookup endpoint - // '5ad521e05faf5778d567f6d0012ec34d6cdbaeb2462f41568f66558bc7b4ced9': [], //4488d072b + // '5ad521e05faf5778d567f6d0012ec34d6cdbaeb2462f41568f66558bc7b4ced9': [], //4488d072b // cc1cbdaf040f76db8f4c94f7d156b9b9b716e1a7509ec9c74a48a47f6b6b9f87: [], //97ff00cf3 // getQuotes // '74963ff42f1db2320d051762b5d3950ff9eab23f9d5c5b592551b4ca0441d086': [], //32ca24e394b // getSplitsBySymbol getBrokerRatingsBySymbol getDividendsBySymbol getEarningsSurprisesBySymbol getEarningsEventsBySymbol // '1e1d7cb1de1fd2fe52684abdea41a446919a5fe12776dfab88615ac1ce1ec2f6': [], //fb5721812d2c // getEnhancedQuotes getProfiles @@ -28,8 +26,6 @@ export const QM_CONFIG = { BASE_URL: 'https://app.quotemedia.com', AUTH_PATH: '/auth/g/authenticate/dataTool/v0/500', LOOKUP_URL: 'https://app.quotemedia.com/datatool/lookup.json', - ORIGIN: 'https://www.quotemedia.com', - REFERER: 'https://www.quotemedia.com/', } as const; // Session management settings @@ -40,17 +36,3 @@ export const SESSION_CONFIG = { SESSION_TIMEOUT: 10000, // 10 seconds API_TIMEOUT: 15000, // 15 seconds } as const; - -/** - * Generate standard QM headers - */ -export function getQmHeaders(): Record<string, string> { - return { - 'User-Agent': getRandomUserAgent(), - Accept: '*/*', - 'Accept-Language': 'en', - 'Sec-Fetch-Mode': 'cors', - Origin: QM_CONFIG.ORIGIN, - Referer: QM_CONFIG.REFERER, - }; -} \ No newline at end of file diff --git a/apps/data-service/src/handlers/qm/shared/session-manager.ts b/apps/stock/data-ingestion/src/handlers/qm/shared/session-manager.ts similarity index 78% rename from apps/data-service/src/handlers/qm/shared/session-manager.ts rename to apps/stock/data-ingestion/src/handlers/qm/shared/session-manager.ts index b274e7c..ce8d464 100644 --- a/apps/data-service/src/handlers/qm/shared/session-manager.ts +++ 
b/apps/stock/data-ingestion/src/handlers/qm/shared/session-manager.ts @@ -2,8 +2,9 @@ * QM Session Manager - Centralized session state management */ -import type { QMSession } from './types'; +import { getRandomUserAgent } from '@stock-bot/utils'; import { QM_SESSION_IDS, SESSION_CONFIG } from './config'; +import type { QMSession } from './types'; export class QMSessionManager { private static instance: QMSessionManager | null = null; @@ -32,13 +33,15 @@ export class QMSessionManager { if (!sessions || sessions.length === 0) { return null; } - + // Filter out sessions with excessive failures - const validSessions = sessions.filter(session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS); + const validSessions = sessions.filter( + session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS + ); if (validSessions.length === 0) { return null; } - + return validSessions[Math.floor(Math.random() * validSessions.length)]; } @@ -71,7 +74,7 @@ export class QMSessionManager { */ cleanupFailedSessions(): number { let removedCount = 0; - + Object.keys(this.sessionCache).forEach(sessionId => { const initialCount = this.sessionCache[sessionId].length; this.sessionCache[sessionId] = this.sessionCache[sessionId].filter( @@ -79,16 +82,29 @@ export class QMSessionManager { ); removedCount += initialCount - this.sessionCache[sessionId].length; }); - + return removedCount; } + getQmHeaders(): Record<string, string> { + return { + 'User-Agent': getRandomUserAgent(), + Accept: '*/*', + 'Accept-Language': 'en', + 'Sec-Fetch-Mode': 'cors', + Origin: 'https://www.quotemedia.com', + Referer: 'https://www.quotemedia.com/', + }; + } + /** * Check if more sessions are needed for a session ID */ needsMoreSessions(sessionId: string): boolean { const sessions = this.sessionCache[sessionId] || []; - const validSessions = sessions.filter(session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS); + const validSessions = sessions.filter( + session => session.failedCalls <= 
SESSION_CONFIG.MAX_FAILED_CALLS + ); return validSessions.length < SESSION_CONFIG.MIN_SESSIONS; } @@ -105,18 +121,22 @@ export class QMSessionManager { */ getStats() { const stats: Record = {}; - + Object.entries(this.sessionCache).forEach(([sessionId, sessions]) => { - const validSessions = sessions.filter(session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS); - const failedSessions = sessions.filter(session => session.failedCalls > SESSION_CONFIG.MAX_FAILED_CALLS); - + const validSessions = sessions.filter( + session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS + ); + const failedSessions = sessions.filter( + session => session.failedCalls > SESSION_CONFIG.MAX_FAILED_CALLS + ); + stats[sessionId] = { total: sessions.length, valid: validSessions.length, - failed: failedSessions.length + failed: failedSessions.length, }; }); - + return stats; } @@ -133,4 +153,4 @@ export class QMSessionManager { getInitialized(): boolean { return this.isInitialized; } -} \ No newline at end of file +} diff --git a/apps/data-service/src/handlers/qm/shared/types.ts b/apps/stock/data-ingestion/src/handlers/qm/shared/types.ts similarity index 99% rename from apps/data-service/src/handlers/qm/shared/types.ts rename to apps/stock/data-ingestion/src/handlers/qm/shared/types.ts index 9897459..1855a0c 100644 --- a/apps/data-service/src/handlers/qm/shared/types.ts +++ b/apps/stock/data-ingestion/src/handlers/qm/shared/types.ts @@ -29,4 +29,4 @@ export interface SpiderResult { success: boolean; symbolsFound: number; jobsCreated: number; -} \ No newline at end of file +} diff --git a/apps/data-service/src/handlers/webshare/shared/config.ts b/apps/stock/data-ingestion/src/handlers/webshare/shared/config.ts similarity index 98% rename from apps/data-service/src/handlers/webshare/shared/config.ts rename to apps/stock/data-ingestion/src/handlers/webshare/shared/config.ts index f34aa79..4d3ba82 100644 --- a/apps/data-service/src/handlers/webshare/shared/config.ts +++ 
b/apps/stock/data-ingestion/src/handlers/webshare/shared/config.ts @@ -7,4 +7,4 @@ export const WEBSHARE_CONFIG = { DEFAULT_MODE: 'direct', DEFAULT_PAGE: 1, TIMEOUT: 10000, -}; \ No newline at end of file +}; diff --git a/apps/stock/data-ingestion/src/handlers/webshare/webshare.handler.ts b/apps/stock/data-ingestion/src/handlers/webshare/webshare.handler.ts new file mode 100644 index 0000000..19cf5c0 --- /dev/null +++ b/apps/stock/data-ingestion/src/handlers/webshare/webshare.handler.ts @@ -0,0 +1,66 @@ +import { + BaseHandler, + Handler, + Operation, + QueueSchedule, + type ExecutionContext, + type IServiceContainer +} from '@stock-bot/handlers'; + +@Handler('webshare') +export class WebShareHandler extends BaseHandler { + constructor(services: IServiceContainer) { + super(services); + } + + @Operation('fetch-proxies') + @QueueSchedule('0 */6 * * *', { // every 6 hours + priority: 3, + immediately: false, // Don't run immediately since ProxyManager fetches on startup + description: 'Refresh proxies from WebShare API', + }) + async fetchProxies(_input: unknown, _context: ExecutionContext): Promise { + this.logger.info('Refreshing proxies from WebShare API'); + + try { + // Check if proxy manager is available + if (!this.proxy) { + this.logger.warn('Proxy manager is not initialized, cannot refresh proxies'); + return { + success: false, + error: 'Proxy manager not initialized', + }; + } + + // Use the proxy manager's refresh method + await this.proxy.refreshProxies(); + + // Get stats after refresh + const stats = this.proxy.getStats(); + const lastFetchTime = this.proxy.getLastFetchTime(); + + this.logger.info('Successfully refreshed proxies', { + total: stats.total, + working: stats.working, + failed: stats.failed, + lastFetchTime, + }); + + // Cache proxy stats for monitoring using handler's cache methods + await this.cacheSet('proxy-count', stats.total, 3600); + await this.cacheSet('working-count', stats.working, 3600); + await this.cacheSet('last-fetch', 
lastFetchTime?.toISOString() || 'unknown', 1800); + + return { + success: true, + proxiesUpdated: stats.total, + workingProxies: stats.working, + failedProxies: stats.failed, + lastFetchTime, + }; + } catch (error) { + this.logger.error('Failed to refresh proxies', { error }); + throw error; + } + } +} diff --git a/apps/stock/data-ingestion/src/index.ts b/apps/stock/data-ingestion/src/index.ts new file mode 100644 index 0000000..c43f97a --- /dev/null +++ b/apps/stock/data-ingestion/src/index.ts @@ -0,0 +1,80 @@ +/** + * Data Ingestion Service + * Simplified entry point using ServiceApplication framework + */ + +import { initializeStockConfig } from '@stock-bot/stock-config'; +import { + ServiceApplication, +} from '@stock-bot/di'; +import { getLogger } from '@stock-bot/logger'; + +// Local imports +import { initializeAllHandlers } from './handlers'; +import { createRoutes } from './routes/create-routes'; + +// Initialize configuration with service-specific overrides +const config = initializeStockConfig('dataIngestion'); + +// Log the full configuration +const logger = getLogger('data-ingestion'); +logger.info('Service configuration:', config); + +// Create service application +const app = new ServiceApplication( + config, + { + serviceName: 'data-ingestion', + enableHandlers: true, + enableScheduledJobs: true, + corsConfig: { + origin: '*', + allowMethods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS', 'PATCH'], + allowHeaders: ['Content-Type', 'Authorization'], + credentials: false, + }, + serviceMetadata: { + version: '1.0.0', + description: 'Market data ingestion from multiple providers', + endpoints: { + health: '/health', + handlers: '/api/handlers', + }, + }, + }, + { + // Lifecycle hooks if needed + onStarted: (_port) => { + const logger = getLogger('data-ingestion'); + logger.info('Data ingestion service startup initiated with ServiceApplication framework'); + }, + } +); + +// Container factory function +async function createContainer(config: any) { + const { 
ServiceContainerBuilder } = await import('@stock-bot/di'); + + const container = await new ServiceContainerBuilder() + .withConfig(config) + .withOptions({ + enableQuestDB: false, // Data ingestion doesn't need QuestDB yet + enableMongoDB: true, + enablePostgres: config.database?.postgres?.enabled ?? false, + enableCache: true, + enableQueue: true, + enableBrowser: true, // Data ingestion needs browser for web scraping + enableProxy: true, // Data ingestion needs proxy for rate limiting + }) + .build(); // This automatically initializes services + + return container; +} + + +// Start the service +app.start(createContainer, createRoutes, initializeAllHandlers).catch(error => { + const logger = getLogger('data-ingestion'); + logger.fatal('Failed to start data-ingestion service', { error }); + process.exit(1); +}); \ No newline at end of file diff --git a/apps/stock/data-ingestion/src/routes/create-routes.ts b/apps/stock/data-ingestion/src/routes/create-routes.ts new file mode 100644 index 0000000..f6c29cd --- /dev/null +++ b/apps/stock/data-ingestion/src/routes/create-routes.ts @@ -0,0 +1,74 @@ +/** + * Routes creation with improved DI pattern + */ + +import { Hono } from 'hono'; +import type { IServiceContainer } from '@stock-bot/handlers'; +import { exchangeRoutes } from './exchange.routes'; +import { healthRoutes } from './health.routes'; +import { createQueueRoutes } from './queue.routes'; + +/** + * Creates all routes with access to type-safe services + */ +export function createRoutes(services: IServiceContainer): Hono { + const app = new Hono(); + + // Mount routes that don't need services + app.route('/health', healthRoutes); + + // Mount routes that need services + app.route('/api/exchanges', exchangeRoutes); + app.route('/api/queue', createQueueRoutes(services)); + + // Store services in app context; this middleware only applies to routes registered after it + app.use('*', async (c, next) => { + c.set('services', services); + await next(); + }); + + // Add a new endpoint to test the improved DI + 
app.get('/api/di-test', async c => { + try { + const services = c.get('services') as IServiceContainer; + + // Test MongoDB connection + const mongoStats = services.mongodb?.getPoolMetrics?.() || { + status: services.mongodb ? 'connected' : 'disabled', + }; + + // Test PostgreSQL connection + const pgConnected = services.postgres?.connected || false; + + // Test cache + const cacheReady = services.cache?.isReady() || false; + + // Test queue + const queueStats = services.queue?.getGlobalStats() || { status: 'disabled' }; + + return c.json({ + success: true, + message: 'Improved DI pattern is working!', + services: { + mongodb: mongoStats, + postgres: { connected: pgConnected }, + cache: { ready: cacheReady }, + queue: queueStats, + }, + timestamp: new Date().toISOString(), + }); + } catch (error) { + const services = c.get('services') as IServiceContainer; + services.logger.error('DI test endpoint failed', { error }); + return c.json( + { + success: false, + error: error instanceof Error ? 
error.message : String(error), + }, + 500 + ); + } + }); + + return app; +} diff --git a/apps/data-service/src/routes/exchange.routes.ts b/apps/stock/data-ingestion/src/routes/exchange.routes.ts similarity index 90% rename from apps/data-service/src/routes/exchange.routes.ts rename to apps/stock/data-ingestion/src/routes/exchange.routes.ts index 1752270..4083fd5 100644 --- a/apps/data-service/src/routes/exchange.routes.ts +++ b/apps/stock/data-ingestion/src/routes/exchange.routes.ts @@ -11,7 +11,7 @@ exchange.get('/', async c => { return c.json({ status: 'success', data: [], - message: 'Exchange endpoints will be implemented with database integration' + message: 'Exchange endpoints will be implemented with database integration', }); } catch (error) { logger.error('Failed to get exchanges', { error }); @@ -19,4 +19,4 @@ exchange.get('/', async c => { } }); -export { exchange as exchangeRoutes }; \ No newline at end of file +export { exchange as exchangeRoutes }; diff --git a/apps/data-service/src/routes/health.routes.ts b/apps/stock/data-ingestion/src/routes/health.routes.ts similarity index 88% rename from apps/data-service/src/routes/health.routes.ts rename to apps/stock/data-ingestion/src/routes/health.routes.ts index 0490543..dd3e9b9 100644 --- a/apps/data-service/src/routes/health.routes.ts +++ b/apps/stock/data-ingestion/src/routes/health.routes.ts @@ -6,7 +6,7 @@ const health = new Hono(); health.get('/', c => { return c.json({ status: 'healthy', - service: 'data-service', + service: 'data-ingestion', timestamp: new Date().toISOString(), }); }); diff --git a/apps/data-service/src/routes/index.ts b/apps/stock/data-ingestion/src/routes/index.ts similarity index 100% rename from apps/data-service/src/routes/index.ts rename to apps/stock/data-ingestion/src/routes/index.ts diff --git a/apps/stock/data-ingestion/src/routes/market-data.routes.ts b/apps/stock/data-ingestion/src/routes/market-data.routes.ts new file mode 100644 index 0000000..562ccbe --- /dev/null +++ 
b/apps/stock/data-ingestion/src/routes/market-data.routes.ts @@ -0,0 +1,142 @@ +/** + * Market data routes + */ +import { Hono } from 'hono'; +import { getLogger } from '@stock-bot/logger'; +import { processItems } from '@stock-bot/queue'; +import type { IServiceContainer } from '@stock-bot/handlers'; + +const logger = getLogger('market-data-routes'); + +export function createMarketDataRoutes(container: IServiceContainer) { + const marketDataRoutes = new Hono(); + + // Market data endpoints + marketDataRoutes.get('/api/live/:symbol', async c => { + const symbol = c.req.param('symbol'); + logger.info('Live data request', { symbol }); + + try { + // Queue job for live data using Yahoo provider + const queueManager = container.queue; + if (!queueManager) { + return c.json({ status: 'error', message: 'Queue manager not available' }, 503); + } + + const queue = queueManager.getQueue('yahoo-finance'); + const job = await queue.add('live-data', { + handler: 'yahoo-finance', + operation: 'live-data', + payload: { symbol }, + }); + return c.json({ + status: 'success', + message: 'Live data job queued', + jobId: job.id, + symbol, + }); + } catch (error) { + logger.error('Failed to queue live data job', { symbol, error }); + return c.json({ status: 'error', message: 'Failed to queue live data job' }, 500); + } + }); + + marketDataRoutes.get('/api/historical/:symbol', async c => { + const symbol = c.req.param('symbol'); + const from = c.req.query('from'); + const to = c.req.query('to'); + + logger.info('Historical data request', { symbol, from, to }); + + try { + const fromDate = from ? new Date(from) : new Date(Date.now() - 30 * 24 * 60 * 60 * 1000); // 30 days ago + const toDate = to ? 
new Date(to) : new Date(); // Now + + // Queue job for historical data using Yahoo provider + const queueManager = container.queue; + if (!queueManager) { + return c.json({ status: 'error', message: 'Queue manager not available' }, 503); + } + + const queue = queueManager.getQueue('yahoo-finance'); + const job = await queue.add('historical-data', { + handler: 'yahoo-finance', + operation: 'historical-data', + payload: { + symbol, + from: fromDate.toISOString(), + to: toDate.toISOString(), + }, + }); + + return c.json({ + status: 'success', + message: 'Historical data job queued', + jobId: job.id, + symbol, + from: fromDate, + to: toDate, + }); + } catch (error) { + logger.error('Failed to queue historical data job', { symbol, from, to, error }); + return c.json({ status: 'error', message: 'Failed to queue historical data job' }, 500); + } + }); + + // Batch processing endpoint using new queue system + marketDataRoutes.post('/api/process-symbols', async c => { + try { + const { + symbols, + provider = 'ib', + operation = 'fetch-session', + useBatching = true, + totalDelayHours = 0.0083, // ~30 seconds (30/3600 hours) + batchSize = 10, + } = await c.req.json(); + + if (!symbols || !Array.isArray(symbols) || symbols.length === 0) { + return c.json({ status: 'error', message: 'Invalid symbols array' }, 400); + } + + logger.info('Batch processing symbols', { + count: symbols.length, + provider, + operation, + useBatching, + }); + + const queueManager = container.queue; + if (!queueManager) { + return c.json({ status: 'error', message: 'Queue manager not available' }, 503); + } + + const result = await processItems(symbols, provider, { + handler: provider, + operation, + totalDelayHours, + useBatching, + batchSize, + priority: 2, + retries: 2, + removeOnComplete: 5, + removeOnFail: 10, + }, queueManager); + + return c.json({ + status: 'success', + message: 'Batch processing initiated', + result, + symbols: symbols.length, + }); + } catch (error) { + logger.error('Failed 
to process symbols batch', { error }); + return c.json({ status: 'error', message: 'Failed to process symbols batch' }, 500); + } + }); + + return marketDataRoutes; +} + +// Legacy export for backward compatibility +export const marketDataRoutes = createMarketDataRoutes({} as IServiceContainer); \ No newline at end of file diff --git a/apps/stock/data-ingestion/src/routes/queue.routes.ts b/apps/stock/data-ingestion/src/routes/queue.routes.ts new file mode 100644 index 0000000..d3cd595 --- /dev/null +++ b/apps/stock/data-ingestion/src/routes/queue.routes.ts @@ -0,0 +1,35 @@ +import { Hono } from 'hono'; +import { getLogger } from '@stock-bot/logger'; +import type { IServiceContainer } from '@stock-bot/handlers'; + +const logger = getLogger('queue-routes'); + +export function createQueueRoutes(container: IServiceContainer) { + const queue = new Hono(); + + // Queue status endpoint + queue.get('/status', async c => { + try { + const queueManager = container.queue; + if (!queueManager) { + return c.json({ status: 'error', message: 'Queue manager not available' }, 503); + } + + const globalStats = await queueManager.getGlobalStats(); + + return c.json({ + status: 'success', + data: globalStats, + message: 'Queue status retrieved successfully', + }); + } catch (error) { + logger.error('Failed to get queue status', { error }); + return c.json({ status: 'error', message: 'Failed to get queue status' }, 500); + } + }); + + return queue; +} + +// Legacy export for backward compatibility +export const queueRoutes = createQueueRoutes({} as IServiceContainer); \ No newline at end of file diff --git a/apps/data-service/src/types/exchange.types.ts b/apps/stock/data-ingestion/src/types/exchange.types.ts similarity index 99% rename from apps/data-service/src/types/exchange.types.ts rename to apps/stock/data-ingestion/src/types/exchange.types.ts index 13a10b3..6fcbdd8 100644 --- a/apps/data-service/src/types/exchange.types.ts +++ 
b/apps/stock/data-ingestion/src/types/exchange.types.ts @@ -37,4 +37,4 @@ export interface IBSymbol { name?: string; currency?: string; // Add other properties as needed -} \ No newline at end of file +} diff --git a/apps/data-service/src/types/job-payloads.ts b/apps/stock/data-ingestion/src/types/job-payloads.ts similarity index 99% rename from apps/data-service/src/types/job-payloads.ts rename to apps/stock/data-ingestion/src/types/job-payloads.ts index af2f82c..05144b8 100644 --- a/apps/data-service/src/types/job-payloads.ts +++ b/apps/stock/data-ingestion/src/types/job-payloads.ts @@ -90,4 +90,4 @@ export interface FetchWebShareProxiesResult extends CountableJobResult { // No payload job types (for operations that don't need input) export interface NoPayload { // Empty interface for operations that don't need payload -} \ No newline at end of file +} diff --git a/apps/data-service/src/utils/symbol-search.util.ts b/apps/stock/data-ingestion/src/utils/symbol-search.util.ts similarity index 98% rename from apps/data-service/src/utils/symbol-search.util.ts rename to apps/stock/data-ingestion/src/utils/symbol-search.util.ts index a84e68f..fd40f1c 100644 --- a/apps/data-service/src/utils/symbol-search.util.ts +++ b/apps/stock/data-ingestion/src/utils/symbol-search.util.ts @@ -1,5 +1,5 @@ +import { sleep } from '@stock-bot/di'; import { getLogger } from '@stock-bot/logger'; -import { sleep } from '@stock-bot/utils'; const logger = getLogger('symbol-search-util'); diff --git a/apps/stock/data-ingestion/tsconfig.json b/apps/stock/data-ingestion/tsconfig.json new file mode 100644 index 0000000..0ac761f --- /dev/null +++ b/apps/stock/data-ingestion/tsconfig.json @@ -0,0 +1,18 @@ +{ + "extends": "../../tsconfig.app.json", + "references": [ + { "path": "../../libs/core/types" }, + { "path": "../../libs/core/config" }, + { "path": "../../libs/core/logger" }, + { "path": "../../libs/core/di" }, + { "path": "../../libs/core/handlers" }, + { "path": "../../libs/data/cache" }, + 
{ "path": "../../libs/data/mongodb" }, + { "path": "../../libs/data/postgres" }, + { "path": "../../libs/data/questdb" }, + { "path": "../../libs/services/queue" }, + { "path": "../../libs/services/shutdown" }, + { "path": "../../libs/utils" }, + { "path": "../config" } + ] +} diff --git a/apps/data-sync-service/README.md b/apps/stock/data-pipeline/README.md similarity index 100% rename from apps/data-sync-service/README.md rename to apps/stock/data-pipeline/README.md diff --git a/apps/data-service/package.json b/apps/stock/data-pipeline/package.json similarity index 69% rename from apps/data-service/package.json rename to apps/stock/data-pipeline/package.json index 22f0f8d..c10b086 100644 --- a/apps/data-service/package.json +++ b/apps/stock/data-pipeline/package.json @@ -1,7 +1,7 @@ { - "name": "@stock-bot/data-service", + "name": "@stock-bot/data-pipeline", "version": "1.0.0", - "description": "Combined data ingestion and historical data service", + "description": "Data processing pipeline for syncing and transforming raw data to normalized records", "main": "dist/index.js", "type": "module", "scripts": { @@ -14,10 +14,11 @@ "dependencies": { "@stock-bot/cache": "*", "@stock-bot/config": "*", + "@stock-bot/stock-config": "*", "@stock-bot/logger": "*", - "@stock-bot/mongodb-client": "*", - "@stock-bot/postgres-client": "*", - "@stock-bot/questdb-client": "*", + "@stock-bot/mongodb": "*", + "@stock-bot/postgres": "*", + "@stock-bot/questdb": "*", "@stock-bot/queue": "*", "@stock-bot/shutdown": "*", "hono": "^4.0.0" diff --git a/apps/stock/data-pipeline/src/container-setup.ts b/apps/stock/data-pipeline/src/container-setup.ts new file mode 100644 index 0000000..1482d04 --- /dev/null +++ b/apps/stock/data-pipeline/src/container-setup.ts @@ -0,0 +1,34 @@ +/** + * Service Container Setup for Data Pipeline + * Configures dependency injection for the data pipeline service + */ + +import type { IServiceContainer } from '@stock-bot/handlers'; +import { getLogger } from 
'@stock-bot/logger'; +import type { AppConfig } from '@stock-bot/config'; + +const logger = getLogger('data-pipeline-container'); + +/** + * Configure the service container for data pipeline workloads + */ +export function setupServiceContainer( + config: AppConfig, + container: IServiceContainer +): IServiceContainer { + logger.info('Configuring data pipeline service container...'); + + // Data pipeline specific configuration + // This service does more complex queries and transformations + const poolSizes = { + mongodb: config.environment === 'production' ? 40 : 20, + postgres: config.environment === 'production' ? 50 : 25, + cache: config.environment === 'production' ? 30 : 15, + }; + + logger.info('Data pipeline pool sizes configured', poolSizes); + + // The container is already configured with connections + // Just return it with our logging + return container; +} \ No newline at end of file diff --git a/apps/stock/data-pipeline/src/handlers/exchanges/exchanges.handler.ts b/apps/stock/data-pipeline/src/handlers/exchanges/exchanges.handler.ts new file mode 100644 index 0000000..c1f7800 --- /dev/null +++ b/apps/stock/data-pipeline/src/handlers/exchanges/exchanges.handler.ts @@ -0,0 +1,111 @@ +import { + BaseHandler, + Handler, + Operation, + ScheduledOperation, + type IServiceContainer, +} from '@stock-bot/handlers'; +import { clearPostgreSQLData } from './operations/clear-postgresql-data.operations'; +import { getSyncStatus } from './operations/enhanced-sync-status.operations'; +import { getExchangeStats } from './operations/exchange-stats.operations'; +import { getProviderMappingStats } from './operations/provider-mapping-stats.operations'; +import { syncQMExchanges } from './operations/qm-exchanges.operations'; +import { syncAllExchanges } from './operations/sync-all-exchanges.operations'; +import { syncIBExchanges } from './operations/sync-ib-exchanges.operations'; +import { syncQMProviderMappings } from './operations/sync-qm-provider-mappings.operations'; + 
+@Handler('exchanges') +export class ExchangesHandler extends BaseHandler { + constructor(services: IServiceContainer) { + super(services); + } + + /** + * Sync all exchanges - weekly full sync + */ + @Operation('sync-all-exchanges') + @ScheduledOperation('sync-all-exchanges', '0 0 * * 0', { + priority: 10, + description: 'Weekly full exchange sync on Sunday at midnight', + }) + async syncAllExchanges(payload?: { clearFirst?: boolean }): Promise { + const finalPayload = payload || { clearFirst: true }; + this.log('info', 'Starting sync of all exchanges', finalPayload); + return syncAllExchanges(finalPayload, this.services); + } + + /** + * Sync exchanges from QuoteMedia (QM) + */ + @Operation('sync-qm-exchanges') + @ScheduledOperation('sync-qm-exchanges', '0 1 * * *', { + priority: 5, + description: 'Daily sync of QM exchanges at 1 AM', + }) + async syncQMExchanges(): Promise { + this.log('info', 'Starting QM exchanges sync...'); + return syncQMExchanges({}, this.services); + } + + /** + * Sync exchanges from Interactive Brokers + */ + @Operation('sync-ib-exchanges') + @ScheduledOperation('sync-ib-exchanges', '0 3 * * *', { + priority: 3, + description: 'Daily sync of IB exchanges at 3 AM', + }) + async syncIBExchanges(): Promise { + this.log('info', 'Starting IB exchanges sync...'); + return syncIBExchanges({}, this.services); + } + + /** + * Sync provider mappings from QuoteMedia (QM) + */ + @Operation('sync-qm-provider-mappings') + @ScheduledOperation('sync-qm-provider-mappings', '0 3 * * *', { + priority: 7, + description: 'Daily sync of QM provider mappings at 3 AM', + }) + async syncQMProviderMappings(): Promise { + this.log('info', 'Starting QM provider mappings sync...'); + return syncQMProviderMappings({}, this.services); + } + + /** + * Clear PostgreSQL data - maintenance operation + */ + @Operation('clear-postgresql-data') + async clearPostgreSQLData(payload: { type?: 'exchanges' | 'provider_mappings' | 'all' }): Promise { + this.log('warn', 
'Clearing PostgreSQL data', payload); + return clearPostgreSQLData(payload, this.services); + } + + /** + * Get exchange statistics + */ + @Operation('get-exchange-stats') + async getExchangeStats(): Promise { + this.log('info', 'Getting exchange statistics...'); + return getExchangeStats({}, this.services); + } + + /** + * Get provider mapping statistics + */ + @Operation('get-provider-mapping-stats') + async getProviderMappingStats(): Promise { + this.log('info', 'Getting provider mapping statistics...'); + return getProviderMappingStats({}, this.services); + } + + /** + * Get enhanced sync status + */ + @Operation('enhanced-sync-status') + async getEnhancedSyncStatus(): Promise { + this.log('info', 'Getting enhanced sync status...'); + return getSyncStatus({}, this.services); + } +} \ No newline at end of file diff --git a/apps/data-sync-service/src/handlers/exchanges/operations/clear-postgresql-data.operations.ts b/apps/stock/data-pipeline/src/handlers/exchanges/operations/clear-postgresql-data.operations.ts similarity index 82% rename from apps/data-sync-service/src/handlers/exchanges/operations/clear-postgresql-data.operations.ts rename to apps/stock/data-pipeline/src/handlers/exchanges/operations/clear-postgresql-data.operations.ts index 8880530..733cd35 100644 --- a/apps/data-sync-service/src/handlers/exchanges/operations/clear-postgresql-data.operations.ts +++ b/apps/stock/data-pipeline/src/handlers/exchanges/operations/clear-postgresql-data.operations.ts @@ -1,10 +1,13 @@ import { getLogger } from '@stock-bot/logger'; -import { getPostgreSQLClient } from '@stock-bot/postgres-client'; +import type { IServiceContainer } from '@stock-bot/handlers'; import type { JobPayload } from '../../../types/job-payloads'; const logger = getLogger('enhanced-sync-clear-postgresql-data'); -export async function clearPostgreSQLData(payload: JobPayload): Promise<{ +export async function clearPostgreSQLData( + payload: JobPayload, + container: IServiceContainer +): Promise<{ 
exchangesCleared: number; symbolsCleared: number; mappingsCleared: number; @@ -12,8 +15,8 @@ export async function clearPostgreSQLData(payload: JobPayload): Promise<{ logger.info('Clearing existing PostgreSQL data...'); try { - const postgresClient = getPostgreSQLClient(); - + const postgresClient = container.postgres; + // Start transaction for atomic operations await postgresClient.query('BEGIN'); @@ -21,9 +24,7 @@ export async function clearPostgreSQLData(payload: JobPayload): Promise<{ const exchangeCountResult = await postgresClient.query( 'SELECT COUNT(*) as count FROM exchanges' ); - const symbolCountResult = await postgresClient.query( - 'SELECT COUNT(*) as count FROM symbols' - ); + const symbolCountResult = await postgresClient.query('SELECT COUNT(*) as count FROM symbols'); const mappingCountResult = await postgresClient.query( 'SELECT COUNT(*) as count FROM provider_mappings' ); @@ -52,9 +53,9 @@ export async function clearPostgreSQLData(payload: JobPayload): Promise<{ return { exchangesCleared, symbolsCleared, mappingsCleared }; } catch (error) { - const postgresClient = getPostgreSQLClient(); + const postgresClient = container.postgres; await postgresClient.query('ROLLBACK'); logger.error('Failed to clear PostgreSQL data', { error }); throw error; } -} \ No newline at end of file +} diff --git a/apps/data-sync-service/src/handlers/exchanges/operations/enhanced-sync-status.operations.ts b/apps/stock/data-pipeline/src/handlers/exchanges/operations/enhanced-sync-status.operations.ts similarity index 76% rename from apps/data-sync-service/src/handlers/exchanges/operations/enhanced-sync-status.operations.ts rename to apps/stock/data-pipeline/src/handlers/exchanges/operations/enhanced-sync-status.operations.ts index 2a42451..96e5ad1 100644 --- a/apps/data-sync-service/src/handlers/exchanges/operations/enhanced-sync-status.operations.ts +++ b/apps/stock/data-pipeline/src/handlers/exchanges/operations/enhanced-sync-status.operations.ts @@ -1,14 +1,17 @@ 
import { getLogger } from '@stock-bot/logger'; -import { getPostgreSQLClient } from '@stock-bot/postgres-client'; +import type { IServiceContainer } from '@stock-bot/handlers'; import type { JobPayload, SyncStatus } from '../../../types/job-payloads'; const logger = getLogger('enhanced-sync-status'); -export async function getSyncStatus(payload: JobPayload): Promise { +export async function getSyncStatus( + payload: JobPayload, + container: IServiceContainer +): Promise { logger.info('Getting comprehensive sync status...'); try { - const postgresClient = getPostgreSQLClient(); + const postgresClient = container.postgres; const query = ` SELECT provider, data_type as "dataType", last_sync_at as "lastSyncAt", last_sync_count as "lastSyncCount", sync_errors as "syncErrors" @@ -16,11 +19,11 @@ export async function getSyncStatus(payload: JobPayload): Promise ORDER BY provider, data_type `; const result = await postgresClient.query(query); - + logger.info(`Retrieved sync status for ${result.rows.length} entries`); return result.rows; } catch (error) { logger.error('Failed to get sync status', { error }); throw error; } -} \ No newline at end of file +} diff --git a/apps/data-sync-service/src/handlers/exchanges/operations/exchange-stats.operations.ts b/apps/stock/data-pipeline/src/handlers/exchanges/operations/exchange-stats.operations.ts similarity index 76% rename from apps/data-sync-service/src/handlers/exchanges/operations/exchange-stats.operations.ts rename to apps/stock/data-pipeline/src/handlers/exchanges/operations/exchange-stats.operations.ts index 74806d3..eeb6c59 100644 --- a/apps/data-sync-service/src/handlers/exchanges/operations/exchange-stats.operations.ts +++ b/apps/stock/data-pipeline/src/handlers/exchanges/operations/exchange-stats.operations.ts @@ -1,14 +1,17 @@ import { getLogger } from '@stock-bot/logger'; -import { getPostgreSQLClient } from '@stock-bot/postgres-client'; +import type { IServiceContainer } from '@stock-bot/handlers'; import type { 
JobPayload } from '../../../types/job-payloads'; const logger = getLogger('enhanced-sync-exchange-stats'); -export async function getExchangeStats(payload: JobPayload): Promise { +export async function getExchangeStats( + payload: JobPayload, + container: IServiceContainer +): Promise { logger.info('Getting exchange statistics...'); try { - const postgresClient = getPostgreSQLClient(); + const postgresClient = container.postgres; const query = ` SELECT COUNT(*) as total_exchanges, @@ -18,11 +21,11 @@ export async function getExchangeStats(payload: JobPayload): Promise { FROM exchanges `; const result = await postgresClient.query(query); - + logger.info('Retrieved exchange statistics'); return result.rows[0]; } catch (error) { logger.error('Failed to get exchange statistics', { error }); throw error; } -} \ No newline at end of file +} diff --git a/apps/data-sync-service/src/handlers/exchanges/operations/index.ts b/apps/stock/data-pipeline/src/handlers/exchanges/operations/index.ts similarity index 97% rename from apps/data-sync-service/src/handlers/exchanges/operations/index.ts rename to apps/stock/data-pipeline/src/handlers/exchanges/operations/index.ts index b5157d7..b798ee5 100644 --- a/apps/data-sync-service/src/handlers/exchanges/operations/index.ts +++ b/apps/stock/data-pipeline/src/handlers/exchanges/operations/index.ts @@ -1,19 +1,19 @@ -import { syncAllExchanges } from './sync-all-exchanges.operations'; -import { syncQMExchanges } from './qm-exchanges.operations'; -import { syncIBExchanges } from './sync-ib-exchanges.operations'; -import { syncQMProviderMappings } from './sync-qm-provider-mappings.operations'; -import { clearPostgreSQLData } from './clear-postgresql-data.operations'; -import { getExchangeStats } from './exchange-stats.operations'; -import { getProviderMappingStats } from './provider-mapping-stats.operations'; -import { getSyncStatus } from './enhanced-sync-status.operations'; - -export const exchangeOperations = { - syncAllExchanges, - 
syncQMExchanges, - syncIBExchanges, - syncQMProviderMappings, - clearPostgreSQLData, - getExchangeStats, - getProviderMappingStats, - 'enhanced-sync-status': getSyncStatus, -}; \ No newline at end of file +import { clearPostgreSQLData } from './clear-postgresql-data.operations'; +import { getSyncStatus } from './enhanced-sync-status.operations'; +import { getExchangeStats } from './exchange-stats.operations'; +import { getProviderMappingStats } from './provider-mapping-stats.operations'; +import { syncQMExchanges } from './qm-exchanges.operations'; +import { syncAllExchanges } from './sync-all-exchanges.operations'; +import { syncIBExchanges } from './sync-ib-exchanges.operations'; +import { syncQMProviderMappings } from './sync-qm-provider-mappings.operations'; + +export const exchangeOperations = { + syncAllExchanges, + syncQMExchanges, + syncIBExchanges, + syncQMProviderMappings, + clearPostgreSQLData, + getExchangeStats, + getProviderMappingStats, + 'enhanced-sync-status': getSyncStatus, +}; diff --git a/apps/data-sync-service/src/handlers/exchanges/operations/provider-mapping-stats.operations.ts b/apps/stock/data-pipeline/src/handlers/exchanges/operations/provider-mapping-stats.operations.ts similarity index 80% rename from apps/data-sync-service/src/handlers/exchanges/operations/provider-mapping-stats.operations.ts rename to apps/stock/data-pipeline/src/handlers/exchanges/operations/provider-mapping-stats.operations.ts index 416f8dc..3a1381a 100644 --- a/apps/data-sync-service/src/handlers/exchanges/operations/provider-mapping-stats.operations.ts +++ b/apps/stock/data-pipeline/src/handlers/exchanges/operations/provider-mapping-stats.operations.ts @@ -1,14 +1,17 @@ import { getLogger } from '@stock-bot/logger'; -import { getPostgreSQLClient } from '@stock-bot/postgres-client'; +import type { IServiceContainer } from '@stock-bot/handlers'; import type { JobPayload } from '../../../types/job-payloads'; const logger = 
getLogger('enhanced-sync-provider-mapping-stats'); -export async function getProviderMappingStats(payload: JobPayload): Promise { +export async function getProviderMappingStats( + payload: JobPayload, + container: IServiceContainer +): Promise { logger.info('Getting provider mapping statistics...'); try { - const postgresClient = getPostgreSQLClient(); + const postgresClient = container.postgres; const query = ` SELECT provider, @@ -22,11 +25,11 @@ export async function getProviderMappingStats(payload: JobPayload): Promise ORDER BY provider `; const result = await postgresClient.query(query); - + logger.info('Retrieved provider mapping statistics'); return result.rows; } catch (error) { logger.error('Failed to get provider mapping statistics', { error }); throw error; } -} \ No newline at end of file +} diff --git a/apps/data-sync-service/src/handlers/exchanges/operations/qm-exchanges.operations.ts b/apps/stock/data-pipeline/src/handlers/exchanges/operations/qm-exchanges.operations.ts similarity index 80% rename from apps/data-sync-service/src/handlers/exchanges/operations/qm-exchanges.operations.ts rename to apps/stock/data-pipeline/src/handlers/exchanges/operations/qm-exchanges.operations.ts index 5646854..0715ab5 100644 --- a/apps/data-sync-service/src/handlers/exchanges/operations/qm-exchanges.operations.ts +++ b/apps/stock/data-pipeline/src/handlers/exchanges/operations/qm-exchanges.operations.ts @@ -1,103 +1,114 @@ -import { getLogger } from '@stock-bot/logger'; -import { getMongoDBClient } from '@stock-bot/mongodb-client'; -import { getPostgreSQLClient } from '@stock-bot/postgres-client'; -import type { JobPayload } from '../../../types/job-payloads'; - -const logger = getLogger('sync-qm-exchanges'); - -export async function syncQMExchanges(payload: JobPayload): Promise<{ processed: number; created: number; updated: number }> { - logger.info('Starting QM exchanges sync...'); - - try { - const mongoClient = getMongoDBClient(); - const postgresClient = 
getPostgreSQLClient(); - - // 1. Get all QM exchanges from MongoDB - const qmExchanges = await mongoClient.find('qmExchanges', {}); - logger.info(`Found ${qmExchanges.length} QM exchanges to process`); - - let created = 0; - let updated = 0; - - for (const exchange of qmExchanges) { - try { - // 2. Check if exchange exists - const existingExchange = await findExchange(exchange.exchangeCode, postgresClient); - - if (existingExchange) { - // Update existing - await updateExchange(existingExchange.id, exchange, postgresClient); - updated++; - } else { - // Create new - await createExchange(exchange, postgresClient); - created++; - } - } catch (error) { - logger.error('Failed to process exchange', { error, exchange: exchange.exchangeCode }); - } - } - - // 3. Update sync status - await updateSyncStatus('qm', 'exchanges', qmExchanges.length, postgresClient); - - const result = { processed: qmExchanges.length, created, updated }; - logger.info('QM exchanges sync completed', result); - return result; - } catch (error) { - logger.error('QM exchanges sync failed', { error }); - throw error; - } -} - -// Helper functions -async function findExchange(exchangeCode: string, postgresClient: any): Promise { - const query = 'SELECT * FROM exchanges WHERE code = $1'; - const result = await postgresClient.query(query, [exchangeCode]); - return result.rows[0] || null; -} - -async function createExchange(qmExchange: any, postgresClient: any): Promise { - const query = ` - INSERT INTO exchanges (code, name, country, currency, visible) - VALUES ($1, $2, $3, $4, $5) - ON CONFLICT (code) DO NOTHING - `; - - await postgresClient.query(query, [ - qmExchange.exchangeCode || qmExchange.exchange, - qmExchange.exchangeShortName || qmExchange.name, - qmExchange.countryCode || 'US', - 'USD', // Default currency, can be improved - true, // New exchanges are visible by default - ]); -} - -async function updateExchange(exchangeId: string, qmExchange: any, postgresClient: any): Promise { - const 
query = ` - UPDATE exchanges - SET name = COALESCE($2, name), - country = COALESCE($3, country), - updated_at = NOW() - WHERE id = $1 - `; - - await postgresClient.query(query, [ - exchangeId, - qmExchange.exchangeShortName || qmExchange.name, - qmExchange.countryCode, - ]); -} - -async function updateSyncStatus(provider: string, dataType: string, count: number, postgresClient: any): Promise { - const query = ` - UPDATE sync_status - SET last_sync_at = NOW(), - last_sync_count = $3, - sync_errors = NULL, - updated_at = NOW() - WHERE provider = $1 AND data_type = $2 - `; - - await postgresClient.query(query, [provider, dataType, count]); -} \ No newline at end of file +import { getLogger } from '@stock-bot/logger'; +import type { IServiceContainer } from '@stock-bot/handlers'; +import type { JobPayload } from '../../../types/job-payloads'; + +const logger = getLogger('sync-qm-exchanges'); + +export async function syncQMExchanges( + payload: JobPayload, + container: IServiceContainer +): Promise<{ processed: number; created: number; updated: number }> { + logger.info('Starting QM exchanges sync...'); + + try { + const mongoClient = container.mongodb; + const postgresClient = container.postgres; + + // 1. Get all QM exchanges from MongoDB + const qmExchanges = await mongoClient.find('qmExchanges', {}); + logger.info(`Found ${qmExchanges.length} QM exchanges to process`); + + let created = 0; + let updated = 0; + + for (const exchange of qmExchanges) { + try { + // 2. Check if exchange exists + const existingExchange = await findExchange(exchange.exchangeCode, postgresClient); + + if (existingExchange) { + // Update existing + await updateExchange(existingExchange.id, exchange, postgresClient); + updated++; + } else { + // Create new + await createExchange(exchange, postgresClient); + created++; + } + } catch (error) { + logger.error('Failed to process exchange', { error, exchange: exchange.exchangeCode }); + } + } + + // 3. 
Update sync status + await updateSyncStatus('qm', 'exchanges', qmExchanges.length, postgresClient); + + const result = { processed: qmExchanges.length, created, updated }; + logger.info('QM exchanges sync completed', result); + return result; + } catch (error) { + logger.error('QM exchanges sync failed', { error }); + throw error; + } +} + +// Helper functions +async function findExchange(exchangeCode: string, postgresClient: any): Promise<any> { + const query = 'SELECT * FROM exchanges WHERE code = $1'; + const result = await postgresClient.query(query, [exchangeCode]); + return result.rows[0] || null; +} + +async function createExchange(qmExchange: any, postgresClient: any): Promise<void> { + const query = ` + INSERT INTO exchanges (code, name, country, currency, visible) + VALUES ($1, $2, $3, $4, $5) + ON CONFLICT (code) DO NOTHING + `; + + await postgresClient.query(query, [ + qmExchange.exchangeCode || qmExchange.exchange, + qmExchange.exchangeShortName || qmExchange.name, + qmExchange.countryCode || 'US', + 'USD', // Default currency, can be improved + true, // New exchanges are visible by default + ]); +} + +async function updateExchange( + exchangeId: string, + qmExchange: any, + postgresClient: any +): Promise<void> { + const query = ` + UPDATE exchanges + SET name = COALESCE($2, name), + country = COALESCE($3, country), + updated_at = NOW() + WHERE id = $1 + `; + + await postgresClient.query(query, [ + exchangeId, + qmExchange.exchangeShortName || qmExchange.name, + qmExchange.countryCode, + ]); +} + +async function updateSyncStatus( + provider: string, + dataType: string, + count: number, + postgresClient: any +): Promise<void> { + const query = ` + UPDATE sync_status + SET last_sync_at = NOW(), + last_sync_count = $3, + sync_errors = NULL, + updated_at = NOW() + WHERE provider = $1 AND data_type = $2 + `; + + await postgresClient.query(query, [provider, dataType, count]); +} diff --git a/apps/data-sync-service/src/handlers/exchanges/operations/sync-all-exchanges.operations.ts
b/apps/stock/data-pipeline/src/handlers/exchanges/operations/sync-all-exchanges.operations.ts similarity index 78% rename from apps/data-sync-service/src/handlers/exchanges/operations/sync-all-exchanges.operations.ts rename to apps/stock/data-pipeline/src/handlers/exchanges/operations/sync-all-exchanges.operations.ts index 5c289dd..9dbfd57 100644 --- a/apps/data-sync-service/src/handlers/exchanges/operations/sync-all-exchanges.operations.ts +++ b/apps/stock/data-pipeline/src/handlers/exchanges/operations/sync-all-exchanges.operations.ts @@ -1,267 +1,282 @@ -import { getLogger } from '@stock-bot/logger'; -import { getMongoDBClient } from "@stock-bot/mongodb-client"; -import { getPostgreSQLClient } from '@stock-bot/postgres-client'; -import type { JobPayload, SyncResult } from '../../../types/job-payloads'; - -const logger = getLogger('enhanced-sync-all-exchanges'); - -export async function syncAllExchanges(payload: JobPayload): Promise { - const clearFirst = payload.clearFirst || true; - logger.info('Starting comprehensive exchange sync...', { clearFirst }); - - const result: SyncResult = { - processed: 0, - created: 0, - updated: 0, - skipped: 0, - errors: 0, - }; - - try { - const postgresClient = getPostgreSQLClient(); - - // Clear existing data if requested - if (clearFirst) { - await clearPostgreSQLData(postgresClient); - } - - // Start transaction for atomic operations - await postgresClient.query('BEGIN'); - - // 1. Sync from EOD exchanges (comprehensive global data) - const eodResult = await syncEODExchanges(); - mergeResults(result, eodResult); - - // 2. Sync from IB exchanges (detailed asset information) - const ibResult = await syncIBExchanges(); - mergeResults(result, ibResult); - - // 3. 
Update sync status - await updateSyncStatus('all', 'exchanges', result.processed, postgresClient); - - await postgresClient.query('COMMIT'); - - logger.info('Comprehensive exchange sync completed', result); - return result; - } catch (error) { - const postgresClient = getPostgreSQLClient(); - await postgresClient.query('ROLLBACK'); - logger.error('Comprehensive exchange sync failed', { error }); - throw error; - } -} - -async function clearPostgreSQLData(postgresClient: any): Promise { - logger.info('Clearing existing PostgreSQL data...'); - - // Clear data in correct order (respect foreign keys) - await postgresClient.query('DELETE FROM provider_mappings'); - await postgresClient.query('DELETE FROM symbols'); - await postgresClient.query('DELETE FROM exchanges'); - - // Reset sync status - await postgresClient.query( - 'UPDATE sync_status SET last_sync_at = NULL, last_sync_count = 0, sync_errors = NULL' - ); - - logger.info('PostgreSQL data cleared successfully'); -} - -async function syncEODExchanges(): Promise { - const mongoClient = getMongoDBClient(); - const exchanges = await mongoClient.find('eodExchanges', { active: true }); - const result: SyncResult = { processed: 0, created: 0, updated: 0, skipped: 0, errors: 0 }; - - for (const exchange of exchanges) { - try { - // Create provider exchange mapping for EOD - await createProviderExchangeMapping( - 'eod', // provider - exchange.Code, - exchange.Name, - exchange.CountryISO2, - exchange.Currency, - 0.95 // very high confidence for EOD data - ); - - result.processed++; - result.created++; // Count as created mapping - } catch (error) { - logger.error('Failed to process EOD exchange', { error, exchange }); - result.errors++; - } - } - - return result; -} - -async function syncIBExchanges(): Promise { - const mongoClient = getMongoDBClient(); - const exchanges = await mongoClient.find('ibExchanges', {}); - const result: SyncResult = { processed: 0, created: 0, updated: 0, skipped: 0, errors: 0 }; - - for (const 
exchange of exchanges) { - try { - // Create provider exchange mapping for IB - await createProviderExchangeMapping( - 'ib', // provider - exchange.exchange_id, - exchange.name, - exchange.country_code, - 'USD', // IB doesn't specify currency, default to USD - 0.85 // good confidence for IB data - ); - - result.processed++; - result.created++; // Count as created mapping - } catch (error) { - logger.error('Failed to process IB exchange', { error, exchange }); - result.errors++; - } - } - - return result; -} - -async function createProviderExchangeMapping( - provider: string, - providerExchangeCode: string, - providerExchangeName: string, - countryCode: string | null, - currency: string | null, - confidence: number -): Promise { - if (!providerExchangeCode) { - return; - } - - const postgresClient = getPostgreSQLClient(); - - // Check if mapping already exists - const existingMapping = await findProviderExchangeMapping(provider, providerExchangeCode); - if (existingMapping) { - // Don't override existing mappings to preserve manual work - return; - } - - // Find or create master exchange - const masterExchange = await findOrCreateMasterExchange( - providerExchangeCode, - providerExchangeName, - countryCode, - currency - ); - - // Create the provider exchange mapping - const query = ` - INSERT INTO provider_exchange_mappings - (provider, provider_exchange_code, provider_exchange_name, master_exchange_id, - country_code, currency, confidence, active, auto_mapped) - VALUES ($1, $2, $3, $4, $5, $6, $7, false, true) - ON CONFLICT (provider, provider_exchange_code) DO NOTHING - `; - - await postgresClient.query(query, [ - provider, - providerExchangeCode, - providerExchangeName, - masterExchange.id, - countryCode, - currency, - confidence, - ]); -} - -async function findOrCreateMasterExchange( - providerCode: string, - providerName: string, - countryCode: string | null, - currency: string | null -): Promise { - const postgresClient = getPostgreSQLClient(); - - // First, 
try to find exact match - let masterExchange = await findExchangeByCode(providerCode); - - if (masterExchange) { - return masterExchange; - } - - // Try to find by similar codes (basic mapping) - const basicMapping = getBasicExchangeMapping(providerCode); - if (basicMapping) { - masterExchange = await findExchangeByCode(basicMapping); - if (masterExchange) { - return masterExchange; - } - } - - // Create new master exchange (inactive by default) - const query = ` - INSERT INTO exchanges (code, name, country, currency, active) - VALUES ($1, $2, $3, $4, false) - ON CONFLICT (code) DO UPDATE SET - name = COALESCE(EXCLUDED.name, exchanges.name), - country = COALESCE(EXCLUDED.country, exchanges.country), - currency = COALESCE(EXCLUDED.currency, exchanges.currency) - RETURNING id, code, name, country, currency - `; - - const result = await postgresClient.query(query, [ - providerCode, - providerName || providerCode, - countryCode || 'US', - currency || 'USD', - ]); - - return result.rows[0]; -} - -function getBasicExchangeMapping(providerCode: string): string | null { - const mappings: Record = { - NYE: 'NYSE', - NAS: 'NASDAQ', - TO: 'TSX', - LN: 'LSE', - LON: 'LSE', - }; - - return mappings[providerCode.toUpperCase()] || null; -} - -async function findProviderExchangeMapping(provider: string, providerExchangeCode: string): Promise { - const postgresClient = getPostgreSQLClient(); - const query = 'SELECT * FROM provider_exchange_mappings WHERE provider = $1 AND provider_exchange_code = $2'; - const result = await postgresClient.query(query, [provider, providerExchangeCode]); - return result.rows[0] || null; -} - -async function findExchangeByCode(code: string): Promise { - const postgresClient = getPostgreSQLClient(); - const query = 'SELECT * FROM exchanges WHERE code = $1'; - const result = await postgresClient.query(query, [code]); - return result.rows[0] || null; -} - -async function updateSyncStatus(provider: string, dataType: string, count: number, postgresClient: 
any): Promise { - const query = ` - INSERT INTO sync_status (provider, data_type, last_sync_at, last_sync_count, sync_errors) - VALUES ($1, $2, NOW(), $3, NULL) - ON CONFLICT (provider, data_type) - DO UPDATE SET - last_sync_at = NOW(), - last_sync_count = EXCLUDED.last_sync_count, - sync_errors = NULL, - updated_at = NOW() - `; - - await postgresClient.query(query, [provider, dataType, count]); -} - -function mergeResults(target: SyncResult, source: SyncResult): void { - target.processed += source.processed; - target.created += source.created; - target.updated += source.updated; - target.skipped += source.skipped; - target.errors += source.errors; -} \ No newline at end of file +import { getLogger } from '@stock-bot/logger'; +import type { IServiceContainer } from '@stock-bot/handlers'; +import type { JobPayload, SyncResult } from '../../../types/job-payloads'; + +const logger = getLogger('enhanced-sync-all-exchanges'); + +export async function syncAllExchanges(payload: JobPayload, container: IServiceContainer): Promise { + const clearFirst = payload.clearFirst || true; + logger.info('Starting comprehensive exchange sync...', { clearFirst }); + + const result: SyncResult = { + processed: 0, + created: 0, + updated: 0, + skipped: 0, + errors: 0, + }; + + try { + const postgresClient = container.postgres; + + // Clear existing data if requested + if (clearFirst) { + await clearPostgreSQLData(postgresClient); + } + + // Start transaction for atomic operations + await postgresClient.query('BEGIN'); + + // 1. Sync from EOD exchanges (comprehensive global data) + const eodResult = await syncEODExchanges(container); + mergeResults(result, eodResult); + + // 2. Sync from IB exchanges (detailed asset information) + const ibResult = await syncIBExchanges(container); + mergeResults(result, ibResult); + + // 3. 
Update sync status + await updateSyncStatus('all', 'exchanges', result.processed, postgresClient); + + await postgresClient.query('COMMIT'); + + logger.info('Comprehensive exchange sync completed', result); + return result; + } catch (error) { + const postgresClient = container.postgres; + await postgresClient.query('ROLLBACK'); + logger.error('Comprehensive exchange sync failed', { error }); + throw error; + } +} + + +async function clearPostgreSQLData(postgresClient: any): Promise { + logger.info('Clearing existing PostgreSQL data...'); + + // Clear data in correct order (respect foreign keys) + await postgresClient.query('DELETE FROM provider_mappings'); + await postgresClient.query('DELETE FROM symbols'); + await postgresClient.query('DELETE FROM exchanges'); + + // Reset sync status + await postgresClient.query( + 'UPDATE sync_status SET last_sync_at = NULL, last_sync_count = 0, sync_errors = NULL' + ); + + logger.info('PostgreSQL data cleared successfully'); +} + +async function syncEODExchanges(container: IServiceContainer): Promise { + const mongoClient = container.mongodb; + const exchanges = await mongoClient.find('eodExchanges', { active: true }); + const result: SyncResult = { processed: 0, created: 0, updated: 0, skipped: 0, errors: 0 }; + + for (const exchange of exchanges) { + try { + // Create provider exchange mapping for EOD + await createProviderExchangeMapping( + 'eod', // provider + exchange.Code, + exchange.Name, + exchange.CountryISO2, + exchange.Currency, + 0.95, // very high confidence for EOD data + container + ); + + result.processed++; + result.created++; // Count as created mapping + } catch (error) { + logger.error('Failed to process EOD exchange', { error, exchange }); + result.errors++; + } + } + + return result; +} + +async function syncIBExchanges(container: IServiceContainer): Promise { + const mongoClient = container.mongodb; + const exchanges = await mongoClient.find('ibExchanges', {}); + const result: SyncResult = { processed: 
0, created: 0, updated: 0, skipped: 0, errors: 0 }; + + for (const exchange of exchanges) { + try { + // Create provider exchange mapping for IB + await createProviderExchangeMapping( + 'ib', // provider + exchange.exchange_id, + exchange.name, + exchange.country_code, + 'USD', // IB doesn't specify currency, default to USD + 0.85, // good confidence for IB data + container + ); + + result.processed++; + result.created++; // Count as created mapping + } catch (error) { + logger.error('Failed to process IB exchange', { error, exchange }); + result.errors++; + } + } + + return result; +} + +async function createProviderExchangeMapping( + provider: string, + providerExchangeCode: string, + providerExchangeName: string, + countryCode: string | null, + currency: string | null, + confidence: number, + container: IServiceContainer +): Promise { + if (!providerExchangeCode) { + return; + } + + const postgresClient = container.postgres; + + // Check if mapping already exists + const existingMapping = await findProviderExchangeMapping(provider, providerExchangeCode, container); + if (existingMapping) { + // Don't override existing mappings to preserve manual work + return; + } + + // Find or create master exchange + const masterExchange = await findOrCreateMasterExchange( + providerExchangeCode, + providerExchangeName, + countryCode, + currency, + container + ); + + // Create the provider exchange mapping + const query = ` + INSERT INTO provider_exchange_mappings + (provider, provider_exchange_code, provider_exchange_name, master_exchange_id, + country_code, currency, confidence, active, auto_mapped) + VALUES ($1, $2, $3, $4, $5, $6, $7, false, true) + ON CONFLICT (provider, provider_exchange_code) DO NOTHING + `; + + await postgresClient.query(query, [ + provider, + providerExchangeCode, + providerExchangeName, + masterExchange.id, + countryCode, + currency, + confidence, + ]); +} + +async function findOrCreateMasterExchange( + providerCode: string, + providerName: string, 
+ countryCode: string | null, + currency: string | null, + container: IServiceContainer +): Promise<any> { + const postgresClient = container.postgres; + + // First, try to find exact match + let masterExchange = await findExchangeByCode(providerCode, container); + + if (masterExchange) { + return masterExchange; + } + + // Try to find by similar codes (basic mapping) + const basicMapping = getBasicExchangeMapping(providerCode); + if (basicMapping) { + masterExchange = await findExchangeByCode(basicMapping, container); + if (masterExchange) { + return masterExchange; + } + } + + // Create new master exchange (inactive by default) + const query = ` + INSERT INTO exchanges (code, name, country, currency, active) + VALUES ($1, $2, $3, $4, false) + ON CONFLICT (code) DO UPDATE SET + name = COALESCE(EXCLUDED.name, exchanges.name), + country = COALESCE(EXCLUDED.country, exchanges.country), + currency = COALESCE(EXCLUDED.currency, exchanges.currency) + RETURNING id, code, name, country, currency + `; + + const result = await postgresClient.query(query, [ + providerCode, + providerName || providerCode, + countryCode || 'US', + currency || 'USD', + ]); + + return result.rows[0]; +} + +function getBasicExchangeMapping(providerCode: string): string | null { + const mappings: Record<string, string> = { + NYE: 'NYSE', + NAS: 'NASDAQ', + TO: 'TSX', + LN: 'LSE', + LON: 'LSE', + }; + + return mappings[providerCode.toUpperCase()] || null; +} + +async function findProviderExchangeMapping( + provider: string, + providerExchangeCode: string, + container: IServiceContainer +): Promise<any> { + const postgresClient = container.postgres; + const query = + 'SELECT * FROM provider_exchange_mappings WHERE provider = $1 AND provider_exchange_code = $2'; + const result = await postgresClient.query(query, [provider, providerExchangeCode]); + return result.rows[0] || null; +} + +async function findExchangeByCode(code: string, container: IServiceContainer): Promise<any> { + const postgresClient = container.postgres; + const
query = 'SELECT * FROM exchanges WHERE code = $1'; + const result = await postgresClient.query(query, [code]); + return result.rows[0] || null; +} + +async function updateSyncStatus( + provider: string, + dataType: string, + count: number, + postgresClient: any +): Promise { + const query = ` + INSERT INTO sync_status (provider, data_type, last_sync_at, last_sync_count, sync_errors) + VALUES ($1, $2, NOW(), $3, NULL) + ON CONFLICT (provider, data_type) + DO UPDATE SET + last_sync_at = NOW(), + last_sync_count = EXCLUDED.last_sync_count, + sync_errors = NULL, + updated_at = NOW() + `; + + await postgresClient.query(query, [provider, dataType, count]); +} + +function mergeResults(target: SyncResult, source: SyncResult): void { + target.processed += source.processed; + target.created += source.created; + target.updated += source.updated; + target.skipped += source.skipped; + target.errors += source.errors; +} diff --git a/apps/data-sync-service/src/handlers/exchanges/operations/sync-ib-exchanges.operations.ts b/apps/stock/data-pipeline/src/handlers/exchanges/operations/sync-ib-exchanges.operations.ts similarity index 89% rename from apps/data-sync-service/src/handlers/exchanges/operations/sync-ib-exchanges.operations.ts rename to apps/stock/data-pipeline/src/handlers/exchanges/operations/sync-ib-exchanges.operations.ts index 3f924e6..909a939 100644 --- a/apps/data-sync-service/src/handlers/exchanges/operations/sync-ib-exchanges.operations.ts +++ b/apps/stock/data-pipeline/src/handlers/exchanges/operations/sync-ib-exchanges.operations.ts @@ -1,206 +1,209 @@ -import { getLogger } from '@stock-bot/logger'; -import { getMongoDBClient } from '@stock-bot/mongodb-client'; -import type { JobPayload } from '../../../types/job-payloads'; -import type { MasterExchange } from '@stock-bot/mongodb-client'; - -const logger = getLogger('sync-ib-exchanges'); - -interface IBExchange { - id?: string; - _id?: any; - name?: string; - code?: string; - country_code?: string; - currency?: 
string; -} - -export async function syncIBExchanges(payload: JobPayload): Promise<{ syncedCount: number; totalExchanges: number }> { - logger.info('Syncing IB exchanges from database...'); - - try { - const mongoClient = getMongoDBClient(); - const db = mongoClient.getDatabase(); - - // Filter by country code US and CA - const ibExchanges = await db - .collection('ibExchanges') - .find({ - country_code: { $in: ['US', 'CA'] }, - }) - .toArray(); - - logger.info('Found IB exchanges in database', { count: ibExchanges.length }); - - let syncedCount = 0; - - for (const exchange of ibExchanges) { - try { - await createOrUpdateMasterExchange(exchange); - syncedCount++; - - logger.debug('Synced IB exchange', { - ibId: exchange.id, - country: exchange.country_code, - }); - } catch (error) { - logger.error('Failed to sync IB exchange', { exchange: exchange.id, error }); - } - } - - logger.info('IB exchange sync completed', { - syncedCount, - totalExchanges: ibExchanges.length, - }); - - return { syncedCount, totalExchanges: ibExchanges.length }; - } catch (error) { - logger.error('Failed to fetch IB exchanges from database', { error }); - return { syncedCount: 0, totalExchanges: 0 }; - } -} - -/** - * Create or update master exchange record 1:1 from IB exchange - */ -async function createOrUpdateMasterExchange(ibExchange: IBExchange): Promise { - const mongoClient = getMongoDBClient(); - const db = mongoClient.getDatabase(); - const collection = db.collection('masterExchanges'); - - const masterExchangeId = generateMasterExchangeId(ibExchange); - const now = new Date(); - - // Check if master exchange already exists - const existing = await collection.findOne({ masterExchangeId }); - - if (existing) { - // Update existing record - await collection.updateOne( - { masterExchangeId }, - { - $set: { - officialName: ibExchange.name || `Exchange ${ibExchange.id}`, - country: ibExchange.country_code || 'UNKNOWN', - currency: ibExchange.currency || 'USD', - timezone: 
inferTimezone(ibExchange), - updated_at: now, - }, - } - ); - - logger.debug('Updated existing master exchange', { masterExchangeId }); - } else { - // Create new master exchange - const masterExchange: MasterExchange = { - masterExchangeId, - shortName: masterExchangeId, // Set shortName to masterExchangeId on creation - officialName: ibExchange.name || `Exchange ${ibExchange.id}`, - country: ibExchange.country_code || 'UNKNOWN', - currency: ibExchange.currency || 'USD', - timezone: inferTimezone(ibExchange), - active: false, // Set active to false only on creation - - sourceMappings: { - ib: { - id: ibExchange.id || ibExchange._id?.toString() || 'unknown', - name: ibExchange.name || `Exchange ${ibExchange.id}`, - code: ibExchange.code || ibExchange.id || '', - aliases: generateAliases(ibExchange), - lastUpdated: now, - }, - }, - - confidence: 1.0, // High confidence for direct IB mapping - verified: true, // Mark as verified since it's direct from IB - - // DocumentBase fields - source: 'ib-exchange-sync', - created_at: now, - updated_at: now, - }; - - await collection.insertOne(masterExchange); - logger.debug('Created new master exchange', { masterExchangeId }); - } -} - -/** - * Generate master exchange ID from IB exchange - */ -function generateMasterExchangeId(ibExchange: IBExchange): string { - // Use code if available, otherwise use ID, otherwise generate from name - if (ibExchange.code) { - return ibExchange.code.toUpperCase().replace(/[^A-Z0-9]/g, ''); - } - - if (ibExchange.id) { - return ibExchange.id.toUpperCase().replace(/[^A-Z0-9]/g, ''); - } - - if (ibExchange.name) { - return ibExchange.name - .toUpperCase() - .split(' ') - .slice(0, 2) - .join('_') - .replace(/[^A-Z0-9_]/g, ''); - } - - return 'UNKNOWN_EXCHANGE'; -} - -/** - * Generate aliases for the exchange - */ -function generateAliases(ibExchange: IBExchange): string[] { - const aliases: string[] = []; - - if (ibExchange.name && ibExchange.name.includes(' ')) { - // Add abbreviated version - 
aliases.push( - ibExchange.name - .split(' ') - .map(w => w[0]) - .join('') - .toUpperCase() - ); - } - - if (ibExchange.code) { - aliases.push(ibExchange.code.toUpperCase()); - } - - return aliases; -} - -/** - * Infer timezone from exchange name/location - */ -function inferTimezone(ibExchange: IBExchange): string { - if (!ibExchange.name) { - return 'UTC'; - } - - const name = ibExchange.name.toUpperCase(); - - if (name.includes('NEW YORK') || name.includes('NYSE') || name.includes('NASDAQ')) { - return 'America/New_York'; - } - if (name.includes('LONDON')) { - return 'Europe/London'; - } - if (name.includes('TOKYO')) { - return 'Asia/Tokyo'; - } - if (name.includes('SHANGHAI')) { - return 'Asia/Shanghai'; - } - if (name.includes('TORONTO')) { - return 'America/Toronto'; - } - if (name.includes('FRANKFURT')) { - return 'Europe/Berlin'; - } - - return 'UTC'; // Default -} \ No newline at end of file +import { getLogger } from '@stock-bot/logger'; +import type { MasterExchange } from '@stock-bot/mongodb'; +import type { IServiceContainer } from '@stock-bot/handlers'; +import type { JobPayload } from '../../../types/job-payloads'; + +const logger = getLogger('sync-ib-exchanges'); + +interface IBExchange { + id?: string; + _id?: any; + name?: string; + code?: string; + country_code?: string; + currency?: string; +} + +export async function syncIBExchanges( + payload: JobPayload, + container: IServiceContainer +): Promise<{ syncedCount: number; totalExchanges: number }> { + logger.info('Syncing IB exchanges from database...'); + + try { + const mongoClient = container.mongodb; + const db = mongoClient.getDatabase(); + + // Filter by country code US and CA + const ibExchanges = await db + .collection('ibExchanges') + .find({ + country_code: { $in: ['US', 'CA'] }, + }) + .toArray(); + + logger.info('Found IB exchanges in database', { count: ibExchanges.length }); + + let syncedCount = 0; + + for (const exchange of ibExchanges) { + try { + await 
createOrUpdateMasterExchange(exchange, container); + syncedCount++; + + logger.debug('Synced IB exchange', { + ibId: exchange.id, + country: exchange.country_code, + }); + } catch (error) { + logger.error('Failed to sync IB exchange', { exchange: exchange.id, error }); + } + } + + logger.info('IB exchange sync completed', { + syncedCount, + totalExchanges: ibExchanges.length, + }); + + return { syncedCount, totalExchanges: ibExchanges.length }; + } catch (error) { + logger.error('Failed to fetch IB exchanges from database', { error }); + return { syncedCount: 0, totalExchanges: 0 }; + } +} + +/** + * Create or update master exchange record 1:1 from IB exchange + */ +async function createOrUpdateMasterExchange(ibExchange: IBExchange, container: IServiceContainer): Promise<void> { + const mongoClient = container.mongodb; + const db = mongoClient.getDatabase(); + const collection = db.collection('masterExchanges'); + + const masterExchangeId = generateMasterExchangeId(ibExchange); + const now = new Date(); + + // Check if master exchange already exists + const existing = await collection.findOne({ masterExchangeId }); + + if (existing) { + // Update existing record + await collection.updateOne( + { masterExchangeId }, + { + $set: { + officialName: ibExchange.name || `Exchange ${ibExchange.id}`, + country: ibExchange.country_code || 'UNKNOWN', + currency: ibExchange.currency || 'USD', + timezone: inferTimezone(ibExchange), + updated_at: now, + }, + } + ); + + logger.debug('Updated existing master exchange', { masterExchangeId }); + } else { + // Create new master exchange + const masterExchange: MasterExchange = { + masterExchangeId, + shortName: masterExchangeId, // Set shortName to masterExchangeId on creation + officialName: ibExchange.name || `Exchange ${ibExchange.id}`, + country: ibExchange.country_code || 'UNKNOWN', + currency: ibExchange.currency || 'USD', + timezone: inferTimezone(ibExchange), + active: false, // Set active to false only on creation + + 
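`createOrUpdateMasterExchange` splits the upsert into an update path (refresh mutable fields) and a create path where creation-only defaults (`shortName`, `active: false`) are set once and never touched again. A minimal in-memory sketch of that split, with illustrative names:

```typescript
interface ExchangeDoc {
  masterExchangeId: string;
  shortName: string;
  officialName: string;
  active: boolean;
  updated_at: Date;
}

// In-memory stand-in for the masterExchanges collection.
const store = new Map<string, ExchangeDoc>();

function upsertExchange(id: string, officialName: string): ExchangeDoc {
  const existing = store.get(id);
  if (existing) {
    // Update path: refresh mutable fields only.
    existing.officialName = officialName;
    existing.updated_at = new Date();
    return existing;
  }
  // Create path: creation-only defaults are set here and never again.
  const doc: ExchangeDoc = {
    masterExchangeId: id,
    shortName: id, // creation-only
    officialName,
    active: false, // creation-only
    updated_at: new Date(),
  };
  store.set(id, doc);
  return doc;
}

upsertExchange('NYSE', 'Exchange NYSE');
store.get('NYSE')!.active = true; // simulate a manual activation
const after = upsertExchange('NYSE', 'New York Stock Exchange');
// 'active' survives the re-sync; officialName is refreshed
```

With MongoDB this could also collapse into a single `updateOne` with `upsert: true` plus `$setOnInsert` for the creation-only fields, at the cost of losing the separate create/update debug logs.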
sourceMappings: { + ib: { + id: ibExchange.id || ibExchange._id?.toString() || 'unknown', + name: ibExchange.name || `Exchange ${ibExchange.id}`, + code: ibExchange.code || ibExchange.id || '', + aliases: generateAliases(ibExchange), + lastUpdated: now, + }, + }, + + confidence: 1.0, // High confidence for direct IB mapping + verified: true, // Mark as verified since it's direct from IB + + // DocumentBase fields + source: 'ib-exchange-sync', + created_at: now, + updated_at: now, + }; + + await collection.insertOne(masterExchange); + logger.debug('Created new master exchange', { masterExchangeId }); + } +} + +/** + * Generate master exchange ID from IB exchange + */ +function generateMasterExchangeId(ibExchange: IBExchange): string { + // Use code if available, otherwise use ID, otherwise generate from name + if (ibExchange.code) { + return ibExchange.code.toUpperCase().replace(/[^A-Z0-9]/g, ''); + } + + if (ibExchange.id) { + return ibExchange.id.toUpperCase().replace(/[^A-Z0-9]/g, ''); + } + + if (ibExchange.name) { + return ibExchange.name + .toUpperCase() + .split(' ') + .slice(0, 2) + .join('_') + .replace(/[^A-Z0-9_]/g, ''); + } + + return 'UNKNOWN_EXCHANGE'; +} + +/** + * Generate aliases for the exchange + */ +function generateAliases(ibExchange: IBExchange): string[] { + const aliases: string[] = []; + + if (ibExchange.name && ibExchange.name.includes(' ')) { + // Add abbreviated version + aliases.push( + ibExchange.name + .split(' ') + .map(w => w[0]) + .join('') + .toUpperCase() + ); + } + + if (ibExchange.code) { + aliases.push(ibExchange.code.toUpperCase()); + } + + return aliases; +} + +/** + * Infer timezone from exchange name/location + */ +function inferTimezone(ibExchange: IBExchange): string { + if (!ibExchange.name) { + return 'UTC'; + } + + const name = ibExchange.name.toUpperCase(); + + if (name.includes('NEW YORK') || name.includes('NYSE') || name.includes('NASDAQ')) { + return 'America/New_York'; + } + if (name.includes('LONDON')) { + return 
'Europe/London'; + } + if (name.includes('TOKYO')) { + return 'Asia/Tokyo'; + } + if (name.includes('SHANGHAI')) { + return 'Asia/Shanghai'; + } + if (name.includes('TORONTO')) { + return 'America/Toronto'; + } + if (name.includes('FRANKFURT')) { + return 'Europe/Berlin'; + } + + return 'UTC'; // Default +} diff --git a/apps/data-sync-service/src/handlers/exchanges/operations/sync-qm-provider-mappings.operations.ts b/apps/stock/data-pipeline/src/handlers/exchanges/operations/sync-qm-provider-mappings.operations.ts similarity index 79% rename from apps/data-sync-service/src/handlers/exchanges/operations/sync-qm-provider-mappings.operations.ts rename to apps/stock/data-pipeline/src/handlers/exchanges/operations/sync-qm-provider-mappings.operations.ts index 73a7107..d37f41b 100644 --- a/apps/data-sync-service/src/handlers/exchanges/operations/sync-qm-provider-mappings.operations.ts +++ b/apps/stock/data-pipeline/src/handlers/exchanges/operations/sync-qm-provider-mappings.operations.ts @@ -1,204 +1,216 @@ -import { getLogger } from '@stock-bot/logger'; -import { getMongoDBClient } from "@stock-bot/mongodb-client"; -import { getPostgreSQLClient } from '@stock-bot/postgres-client'; -import type { JobPayload, SyncResult } from '../../../types/job-payloads'; - -const logger = getLogger('enhanced-sync-qm-provider-mappings'); - -export async function syncQMProviderMappings(payload: JobPayload): Promise { - logger.info('Starting QM provider exchange mappings sync...'); - - const result: SyncResult = { - processed: 0, - created: 0, - updated: 0, - skipped: 0, - errors: 0, - }; - - try { - const mongoClient = getMongoDBClient(); - const postgresClient = getPostgreSQLClient(); - - // Start transaction - await postgresClient.query('BEGIN'); - - // Get unique exchange combinations from QM symbols - const db = mongoClient.getDatabase(); - const pipeline = [ - { - $group: { - _id: { - exchangeCode: '$exchangeCode', - exchange: '$exchange', - countryCode: '$countryCode', - }, - 
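The aggregation pipeline here groups `qmSymbols` by the (exchangeCode, exchange, countryCode) triple, counting members and keeping a sample exchange name. A plain-TS emulation of that `$group` stage over invented sample documents:

```typescript
// Emulates the $group stage above: collapse qmSymbols documents to unique
// (exchangeCode, exchange, countryCode) combinations. Sample data is invented.
interface QMSymbolDoc {
  exchangeCode: string;
  exchange: string;
  countryCode: string;
}

function groupExchanges(docs: QMSymbolDoc[]) {
  const groups = new Map<
    string,
    { exchangeCode: string; countryCode: string; sampleExchange: string; count: number }
  >();
  for (const d of docs) {
    const key = `${d.exchangeCode}|${d.exchange}|${d.countryCode}`;
    const g = groups.get(key);
    if (g) {
      g.count++; // $sum: 1
    } else {
      groups.set(key, {
        exchangeCode: d.exchangeCode,
        countryCode: d.countryCode,
        sampleExchange: d.exchange, // $first: '$exchange'
        count: 1,
      });
    }
  }
  return [...groups.values()];
}

const grouped = groupExchanges([
  { exchangeCode: 'NAS', exchange: 'NASDAQ', countryCode: 'US' },
  { exchangeCode: 'NAS', exchange: 'NASDAQ', countryCode: 'US' },
  { exchangeCode: 'TO', exchange: 'TSX', countryCode: 'CA' },
]);
```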
count: { $sum: 1 }, - sampleExchange: { $first: '$exchange' }, - }, - }, - { - $project: { - exchangeCode: '$_id.exchangeCode', - exchange: '$_id.exchange', - countryCode: '$_id.countryCode', - count: 1, - sampleExchange: 1, - }, - }, - ]; - - const qmExchanges = await db.collection('qmSymbols').aggregate(pipeline).toArray(); - logger.info(`Found ${qmExchanges.length} unique QM exchange combinations`); - - for (const exchange of qmExchanges) { - try { - // Create provider exchange mapping for QM - await createProviderExchangeMapping( - 'qm', // provider - exchange.exchangeCode, - exchange.sampleExchange || exchange.exchangeCode, - exchange.countryCode, - exchange.countryCode === 'CA' ? 'CAD' : 'USD', // Simple currency mapping - 0.8 // good confidence for QM data - ); - - result.processed++; - result.created++; - } catch (error) { - logger.error('Failed to process QM exchange mapping', { error, exchange }); - result.errors++; - } - } - - await postgresClient.query('COMMIT'); - - logger.info('QM provider exchange mappings sync completed', result); - return result; - } catch (error) { - const postgresClient = getPostgreSQLClient(); - await postgresClient.query('ROLLBACK'); - logger.error('QM provider exchange mappings sync failed', { error }); - throw error; - } -} - -async function createProviderExchangeMapping( - provider: string, - providerExchangeCode: string, - providerExchangeName: string, - countryCode: string | null, - currency: string | null, - confidence: number -): Promise { - if (!providerExchangeCode) { - return; - } - - const postgresClient = getPostgreSQLClient(); - - // Check if mapping already exists - const existingMapping = await findProviderExchangeMapping(provider, providerExchangeCode); - if (existingMapping) { - // Don't override existing mappings to preserve manual work - return; - } - - // Find or create master exchange - const masterExchange = await findOrCreateMasterExchange( - providerExchangeCode, - providerExchangeName, - countryCode, - 
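The early return above ("Don't override existing mappings to preserve manual work") makes the sync idempotent with respect to manual curation: a row keyed by (provider, provider_exchange_code) is only inserted if absent, matching the `ON CONFLICT ... DO NOTHING` in the SQL. A sketch against an in-memory table (names are illustrative):

```typescript
// In-memory stand-in for provider_exchange_mappings, keyed by
// (provider, provider_exchange_code) like the table's unique constraint.
const mappings = new Map<string, { masterExchangeId: string; autoMapped: boolean }>();

function upsertMappingPreservingManual(
  provider: string,
  code: string,
  masterExchangeId: string
): boolean {
  const key = `${provider}:${code}`;
  if (mappings.has(key)) {
    return false; // existing mapping (possibly manual) is left untouched
  }
  mappings.set(key, { masterExchangeId, autoMapped: true });
  return true; // behaves like INSERT ... ON CONFLICT DO NOTHING
}

// A manually curated row survives a later auto-sync pass.
mappings.set('qm:NYE', { masterExchangeId: 'NYSE', autoMapped: false });
const insertedOverManual = upsertMappingPreservingManual('qm', 'NYE', 'SOMETHING_ELSE');
const insertedFresh = upsertMappingPreservingManual('qm', 'NAS', 'NASDAQ');
```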
currency - ); - - // Create the provider exchange mapping - const query = ` - INSERT INTO provider_exchange_mappings - (provider, provider_exchange_code, provider_exchange_name, master_exchange_id, - country_code, currency, confidence, active, auto_mapped) - VALUES ($1, $2, $3, $4, $5, $6, $7, false, true) - ON CONFLICT (provider, provider_exchange_code) DO NOTHING - `; - - await postgresClient.query(query, [ - provider, - providerExchangeCode, - providerExchangeName, - masterExchange.id, - countryCode, - currency, - confidence, - ]); -} - -async function findProviderExchangeMapping(provider: string, providerExchangeCode: string): Promise { - const postgresClient = getPostgreSQLClient(); - const query = 'SELECT * FROM provider_exchange_mappings WHERE provider = $1 AND provider_exchange_code = $2'; - const result = await postgresClient.query(query, [provider, providerExchangeCode]); - return result.rows[0] || null; -} - -async function findOrCreateMasterExchange( - providerCode: string, - providerName: string, - countryCode: string | null, - currency: string | null -): Promise { - const postgresClient = getPostgreSQLClient(); - - // First, try to find exact match - let masterExchange = await findExchangeByCode(providerCode); - - if (masterExchange) { - return masterExchange; - } - - // Try to find by similar codes (basic mapping) - const basicMapping = getBasicExchangeMapping(providerCode); - if (basicMapping) { - masterExchange = await findExchangeByCode(basicMapping); - if (masterExchange) { - return masterExchange; - } - } - - // Create new master exchange (inactive by default) - const query = ` - INSERT INTO exchanges (code, name, country, currency, active) - VALUES ($1, $2, $3, $4, false) - ON CONFLICT (code) DO UPDATE SET - name = COALESCE(EXCLUDED.name, exchanges.name), - country = COALESCE(EXCLUDED.country, exchanges.country), - currency = COALESCE(EXCLUDED.currency, exchanges.currency) - RETURNING id, code, name, country, currency - `; - - const result = 
await postgresClient.query(query, [ - providerCode, - providerName || providerCode, - countryCode || 'US', - currency || 'USD', - ]); - - return result.rows[0]; -} - -function getBasicExchangeMapping(providerCode: string): string | null { - const mappings: Record<string, string> = { - NYE: 'NYSE', - NAS: 'NASDAQ', - TO: 'TSX', - LN: 'LSE', - LON: 'LSE', - }; - - return mappings[providerCode.toUpperCase()] || null; -} - -async function findExchangeByCode(code: string): Promise<any> { - const postgresClient = getPostgreSQLClient(); - const query = 'SELECT * FROM exchanges WHERE code = $1'; - const result = await postgresClient.query(query, [code]); - return result.rows[0] || null; -} \ No newline at end of file +import { getLogger } from '@stock-bot/logger'; +import type { IServiceContainer } from '@stock-bot/handlers'; +import type { JobPayload, SyncResult } from '../../../types/job-payloads'; + +const logger = getLogger('enhanced-sync-qm-provider-mappings'); + +export async function syncQMProviderMappings( + payload: JobPayload, + container: IServiceContainer +): Promise<SyncResult> { + logger.info('Starting QM provider exchange mappings sync...'); + + const result: SyncResult = { + processed: 0, + created: 0, + updated: 0, + skipped: 0, + errors: 0, + }; + + try { + const mongoClient = container.mongodb; + const postgresClient = container.postgres; + + // Start transaction + await postgresClient.query('BEGIN'); + + // Get unique exchange combinations from QM symbols + const db = mongoClient.getDatabase(); + const pipeline = [ + { + $group: { + _id: { + exchangeCode: '$exchangeCode', + exchange: '$exchange', + countryCode: '$countryCode', + }, + count: { $sum: 1 }, + sampleExchange: { $first: '$exchange' }, + }, + }, + { + $project: { + exchangeCode: '$_id.exchangeCode', + exchange: '$_id.exchange', + countryCode: '$_id.countryCode', + count: 1, + sampleExchange: 1, + }, + }, + ]; + + const qmExchanges = await db.collection('qmSymbols').aggregate(pipeline).toArray(); + logger.info(`Found 
${qmExchanges.length} unique QM exchange combinations`); + + for (const exchange of qmExchanges) { + try { + // Create provider exchange mapping for QM + await createProviderExchangeMapping( + 'qm', // provider + exchange.exchangeCode, + exchange.sampleExchange || exchange.exchangeCode, + exchange.countryCode, + exchange.countryCode === 'CA' ? 'CAD' : 'USD', // Simple currency mapping + 0.8, // good confidence for QM data + container + ); + + result.processed++; + result.created++; + } catch (error) { + logger.error('Failed to process QM exchange mapping', { error, exchange }); + result.errors++; + } + } + + await postgresClient.query('COMMIT'); + + logger.info('QM provider exchange mappings sync completed', result); + return result; + } catch (error) { + const postgresClient = container.postgres; + await postgresClient.query('ROLLBACK'); + logger.error('QM provider exchange mappings sync failed', { error }); + throw error; + } +} + + +async function createProviderExchangeMapping( + provider: string, + providerExchangeCode: string, + providerExchangeName: string, + countryCode: string | null, + currency: string | null, + confidence: number, + container: IServiceContainer +): Promise<void> { + if (!providerExchangeCode) { + return; + } + + const postgresClient = container.postgres; + + // Check if mapping already exists + const existingMapping = await findProviderExchangeMapping(provider, providerExchangeCode, container); + if (existingMapping) { + // Don't override existing mappings to preserve manual work + return; + } + + // Find or create master exchange + const masterExchange = await findOrCreateMasterExchange( + providerExchangeCode, + providerExchangeName, + countryCode, + currency, + container + ); + + // Create the provider exchange mapping + const query = ` + INSERT INTO provider_exchange_mappings + (provider, provider_exchange_code, provider_exchange_name, master_exchange_id, + country_code, currency, confidence, active, auto_mapped) + VALUES ($1, $2, $3, $4, 
$5, $6, $7, false, true) + ON CONFLICT (provider, provider_exchange_code) DO NOTHING + `; + + await postgresClient.query(query, [ + provider, + providerExchangeCode, + providerExchangeName, + masterExchange.id, + countryCode, + currency, + confidence, + ]); +} + +async function findProviderExchangeMapping( + provider: string, + providerExchangeCode: string, + container: IServiceContainer +): Promise<any> { + const postgresClient = container.postgres; + const query = + 'SELECT * FROM provider_exchange_mappings WHERE provider = $1 AND provider_exchange_code = $2'; + const result = await postgresClient.query(query, [provider, providerExchangeCode]); + return result.rows[0] || null; +} + +async function findOrCreateMasterExchange( + providerCode: string, + providerName: string, + countryCode: string | null, + currency: string | null, + container: IServiceContainer +): Promise<any> { + const postgresClient = container.postgres; + + // First, try to find exact match + let masterExchange = await findExchangeByCode(providerCode, container); + + if (masterExchange) { + return masterExchange; + } + + // Try to find by similar codes (basic mapping) + const basicMapping = getBasicExchangeMapping(providerCode); + if (basicMapping) { + masterExchange = await findExchangeByCode(basicMapping, container); + if (masterExchange) { + return masterExchange; + } + } + + // Create new master exchange (inactive by default) + const query = ` + INSERT INTO exchanges (code, name, country, currency, active) + VALUES ($1, $2, $3, $4, false) + ON CONFLICT (code) DO UPDATE SET + name = COALESCE(EXCLUDED.name, exchanges.name), + country = COALESCE(EXCLUDED.country, exchanges.country), + currency = COALESCE(EXCLUDED.currency, exchanges.currency) + RETURNING id, code, name, country, currency + `; + + const result = await postgresClient.query(query, [ + providerCode, + providerName || providerCode, + countryCode || 'US', + currency || 'USD', + ]); + + return result.rows[0]; +} + +function 
getBasicExchangeMapping(providerCode: string): string | null { + const mappings: Record<string, string> = { + NYE: 'NYSE', + NAS: 'NASDAQ', + TO: 'TSX', + LN: 'LSE', + LON: 'LSE', + }; + + return mappings[providerCode.toUpperCase()] || null; +} + +async function findExchangeByCode(code: string, container: IServiceContainer): Promise<any> { + const postgresClient = container.postgres; + const query = 'SELECT * FROM exchanges WHERE code = $1'; + const result = await postgresClient.query(query, [code]); + return result.rows[0] || null; +} diff --git a/apps/stock/data-pipeline/src/handlers/index.ts b/apps/stock/data-pipeline/src/handlers/index.ts new file mode 100644 index 0000000..754e8d2 --- /dev/null +++ b/apps/stock/data-pipeline/src/handlers/index.ts @@ -0,0 +1,42 @@ +/** + * Handler auto-registration for data pipeline service + * Automatically discovers and registers all handlers + */ + +import type { IServiceContainer } from '@stock-bot/handlers'; +import { autoRegisterHandlers } from '@stock-bot/handlers'; +import { getLogger } from '@stock-bot/logger'; + +// Import handlers for bundling (ensures they're included in the build) +import './exchanges/exchanges.handler'; +import './symbols/symbols.handler'; + +const logger = getLogger('pipeline-handler-init'); + +/** + * Initialize and register all handlers automatically + */ +export async function initializeAllHandlers(container: IServiceContainer): Promise<void> { + logger.info('Initializing data pipeline handlers...'); + + try { + // Auto-register all handlers in this directory + const result = await autoRegisterHandlers(__dirname, container, { + pattern: '.handler.', + exclude: ['test', 'spec', '.old'], + dryRun: false, + }); + + logger.info('Handler auto-registration complete', { + registered: result.registered, + failed: result.failed, + }); + + if (result.failed.length > 0) { + logger.error('Some handlers failed to register', { failed: result.failed }); + } + } catch (error) { + logger.error('Handler auto-registration failed', { error 
}); + throw error; + } +} \ No newline at end of file diff --git a/apps/data-sync-service/src/handlers/symbols/operations/index.ts b/apps/stock/data-pipeline/src/handlers/symbols/operations/index.ts similarity index 96% rename from apps/data-sync-service/src/handlers/symbols/operations/index.ts rename to apps/stock/data-pipeline/src/handlers/symbols/operations/index.ts index 378fdd1..b4b431d 100644 --- a/apps/data-sync-service/src/handlers/symbols/operations/index.ts +++ b/apps/stock/data-pipeline/src/handlers/symbols/operations/index.ts @@ -1,9 +1,9 @@ -import { syncQMSymbols } from './qm-symbols.operations'; -import { syncSymbolsFromProvider } from './sync-symbols-from-provider.operations'; -import { getSyncStatus } from './sync-status.operations'; - -export const symbolOperations = { - syncQMSymbols, - syncSymbolsFromProvider, - getSyncStatus, -}; \ No newline at end of file +import { syncQMSymbols } from './qm-symbols.operations'; +import { getSyncStatus } from './sync-status.operations'; +import { syncSymbolsFromProvider } from './sync-symbols-from-provider.operations'; + +export const symbolOperations = { + syncQMSymbols, + syncSymbolsFromProvider, + getSyncStatus, +}; diff --git a/apps/data-sync-service/src/handlers/symbols/operations/qm-symbols.operations.ts b/apps/stock/data-pipeline/src/handlers/symbols/operations/qm-symbols.operations.ts similarity index 83% rename from apps/data-sync-service/src/handlers/symbols/operations/qm-symbols.operations.ts rename to apps/stock/data-pipeline/src/handlers/symbols/operations/qm-symbols.operations.ts index 6f86841..eedfb21 100644 --- a/apps/data-sync-service/src/handlers/symbols/operations/qm-symbols.operations.ts +++ b/apps/stock/data-pipeline/src/handlers/symbols/operations/qm-symbols.operations.ts @@ -1,168 +1,184 @@ -import { getLogger } from '@stock-bot/logger'; -import { getMongoDBClient } from '@stock-bot/mongodb-client'; -import { getPostgreSQLClient } from '@stock-bot/postgres-client'; -import type { 
JobPayload } from '../../../types/job-payloads'; - -const logger = getLogger('sync-qm-symbols'); - -export async function syncQMSymbols(payload: JobPayload): Promise<{ processed: number; created: number; updated: number }> { - logger.info('Starting QM symbols sync...'); - - try { - const mongoClient = getMongoDBClient(); - const postgresClient = getPostgreSQLClient(); - - // 1. Get all QM symbols from MongoDB - const qmSymbols = await mongoClient.find('qmSymbols', {}); - logger.info(`Found ${qmSymbols.length} QM symbols to process`); - - let created = 0; - let updated = 0; - - for (const symbol of qmSymbols) { - try { - // 2. Resolve exchange - const exchangeId = await resolveExchange(symbol.exchangeCode || symbol.exchange, postgresClient); - - if (!exchangeId) { - logger.warn('Unknown exchange, skipping symbol', { - symbol: symbol.symbol, - exchange: symbol.exchangeCode || symbol.exchange, - }); - continue; - } - - // 3. Check if symbol exists - const existingSymbol = await findSymbol(symbol.symbol, exchangeId, postgresClient); - - if (existingSymbol) { - // Update existing - await updateSymbol(existingSymbol.id, symbol, postgresClient); - await upsertProviderMapping(existingSymbol.id, 'qm', symbol, postgresClient); - updated++; - } else { - // Create new - const newSymbolId = await createSymbol(symbol, exchangeId, postgresClient); - await upsertProviderMapping(newSymbolId, 'qm', symbol, postgresClient); - created++; - } - } catch (error) { - logger.error('Failed to process symbol', { error, symbol: symbol.symbol }); - } - } - - // 4. 
Update sync status - await updateSyncStatus('qm', 'symbols', qmSymbols.length, postgresClient); - - const result = { processed: qmSymbols.length, created, updated }; - logger.info('QM symbols sync completed', result); - return result; - } catch (error) { - logger.error('QM symbols sync failed', { error }); - throw error; - } -} - -// Helper functions -async function resolveExchange(exchangeCode: string, postgresClient: any): Promise { - if (!exchangeCode) return null; - - // Simple mapping - expand this as needed - const exchangeMap: Record = { - NASDAQ: 'NASDAQ', - NYSE: 'NYSE', - TSX: 'TSX', - TSE: 'TSX', // TSE maps to TSX - LSE: 'LSE', - CME: 'CME', - }; - - const normalizedCode = exchangeMap[exchangeCode.toUpperCase()]; - if (!normalizedCode) { - return null; - } - - const query = 'SELECT id FROM exchanges WHERE code = $1'; - const result = await postgresClient.query(query, [normalizedCode]); - return result.rows[0]?.id || null; -} - -async function findSymbol(symbol: string, exchangeId: string, postgresClient: any): Promise { - const query = 'SELECT * FROM symbols WHERE symbol = $1 AND exchange_id = $2'; - const result = await postgresClient.query(query, [symbol, exchangeId]); - return result.rows[0] || null; -} - -async function createSymbol(qmSymbol: any, exchangeId: string, postgresClient: any): Promise { - const query = ` - INSERT INTO symbols (symbol, exchange_id, company_name, country, currency) - VALUES ($1, $2, $3, $4, $5) - RETURNING id - `; - - const result = await postgresClient.query(query, [ - qmSymbol.symbol, - exchangeId, - qmSymbol.companyName || qmSymbol.name, - qmSymbol.countryCode || 'US', - qmSymbol.currency || 'USD', - ]); - - return result.rows[0].id; -} - -async function updateSymbol(symbolId: string, qmSymbol: any, postgresClient: any): Promise { - const query = ` - UPDATE symbols - SET company_name = COALESCE($2, company_name), - country = COALESCE($3, country), - currency = COALESCE($4, currency), - updated_at = NOW() - WHERE id = $1 
- `; - - await postgresClient.query(query, [ - symbolId, - qmSymbol.companyName || qmSymbol.name, - qmSymbol.countryCode, - qmSymbol.currency, - ]); -} - -async function upsertProviderMapping( - symbolId: string, - provider: string, - qmSymbol: any, - postgresClient: any -): Promise { - const query = ` - INSERT INTO provider_mappings - (symbol_id, provider, provider_symbol, provider_exchange, last_seen) - VALUES ($1, $2, $3, $4, NOW()) - ON CONFLICT (provider, provider_symbol) - DO UPDATE SET - symbol_id = EXCLUDED.symbol_id, - provider_exchange = EXCLUDED.provider_exchange, - last_seen = NOW() - `; - - await postgresClient.query(query, [ - symbolId, - provider, - qmSymbol.qmSearchCode || qmSymbol.symbol, - qmSymbol.exchangeCode || qmSymbol.exchange, - ]); -} - -async function updateSyncStatus(provider: string, dataType: string, count: number, postgresClient: any): Promise { - const query = ` - UPDATE sync_status - SET last_sync_at = NOW(), - last_sync_count = $3, - sync_errors = NULL, - updated_at = NOW() - WHERE provider = $1 AND data_type = $2 - `; - - await postgresClient.query(query, [provider, dataType, count]); -} \ No newline at end of file +import { getLogger } from '@stock-bot/logger'; +import type { IServiceContainer } from '@stock-bot/handlers'; +import type { JobPayload } from '../../../types/job-payloads'; + +const logger = getLogger('sync-qm-symbols'); + +export async function syncQMSymbols( + payload: JobPayload, + container: IServiceContainer +): Promise<{ processed: number; created: number; updated: number }> { + logger.info('Starting QM symbols sync...'); + + try { + const mongoClient = container.mongodb; + const postgresClient = container.postgres; + + // 1. Get all QM symbols from MongoDB + const qmSymbols = await mongoClient.find('qmSymbols', {}); + logger.info(`Found ${qmSymbols.length} QM symbols to process`); + + let created = 0; + let updated = 0; + + for (const symbol of qmSymbols) { + try { + // 2. 
Resolve exchange + const exchangeId = await resolveExchange( + symbol.exchangeCode || symbol.exchange, + postgresClient + ); + + if (!exchangeId) { + logger.warn('Unknown exchange, skipping symbol', { + symbol: symbol.symbol, + exchange: symbol.exchangeCode || symbol.exchange, + }); + continue; + } + + // 3. Check if symbol exists + const existingSymbol = await findSymbol(symbol.symbol, exchangeId, postgresClient); + + if (existingSymbol) { + // Update existing + await updateSymbol(existingSymbol.id, symbol, postgresClient); + await upsertProviderMapping(existingSymbol.id, 'qm', symbol, postgresClient); + updated++; + } else { + // Create new + const newSymbolId = await createSymbol(symbol, exchangeId, postgresClient); + await upsertProviderMapping(newSymbolId, 'qm', symbol, postgresClient); + created++; + } + } catch (error) { + logger.error('Failed to process symbol', { error, symbol: symbol.symbol }); + } + } + + // 4. Update sync status + await updateSyncStatus('qm', 'symbols', qmSymbols.length, postgresClient); + + const result = { processed: qmSymbols.length, created, updated }; + logger.info('QM symbols sync completed', result); + return result; + } catch (error) { + logger.error('QM symbols sync failed', { error }); + throw error; + } +} + +// Helper functions +async function resolveExchange(exchangeCode: string, postgresClient: any): Promise<string | null> { + if (!exchangeCode) { + return null; + } + + // Simple mapping - expand this as needed + const exchangeMap: Record<string, string> = { + NASDAQ: 'NASDAQ', + NYSE: 'NYSE', + TSX: 'TSX', + TSE: 'TSX', // TSE maps to TSX + LSE: 'LSE', + CME: 'CME', + }; + + const normalizedCode = exchangeMap[exchangeCode.toUpperCase()]; + if (!normalizedCode) { + return null; + } + + const query = 'SELECT id FROM exchanges WHERE code = $1'; + const result = await postgresClient.query(query, [normalizedCode]); + return result.rows[0]?.id || null; +} + +async function findSymbol(symbol: string, exchangeId: string, postgresClient: any): Promise<any> { + const 
query = 'SELECT * FROM symbols WHERE symbol = $1 AND exchange_id = $2'; + const result = await postgresClient.query(query, [symbol, exchangeId]); + return result.rows[0] || null; +} + +async function createSymbol( + qmSymbol: any, + exchangeId: string, + postgresClient: any +): Promise<string> { + const query = ` + INSERT INTO symbols (symbol, exchange_id, company_name, country, currency) + VALUES ($1, $2, $3, $4, $5) + RETURNING id + `; + + const result = await postgresClient.query(query, [ + qmSymbol.symbol, + exchangeId, + qmSymbol.companyName || qmSymbol.name, + qmSymbol.countryCode || 'US', + qmSymbol.currency || 'USD', + ]); + + return result.rows[0].id; +} + +async function updateSymbol(symbolId: string, qmSymbol: any, postgresClient: any): Promise<void> { + const query = ` + UPDATE symbols + SET company_name = COALESCE($2, company_name), + country = COALESCE($3, country), + currency = COALESCE($4, currency), + updated_at = NOW() + WHERE id = $1 + `; + + await postgresClient.query(query, [ + symbolId, + qmSymbol.companyName || qmSymbol.name, + qmSymbol.countryCode, + qmSymbol.currency, + ]); +} + +async function upsertProviderMapping( + symbolId: string, + provider: string, + qmSymbol: any, + postgresClient: any +): Promise<void> { + const query = ` + INSERT INTO provider_mappings + (symbol_id, provider, provider_symbol, provider_exchange, last_seen) + VALUES ($1, $2, $3, $4, NOW()) + ON CONFLICT (provider, provider_symbol) + DO UPDATE SET + symbol_id = EXCLUDED.symbol_id, + provider_exchange = EXCLUDED.provider_exchange, + last_seen = NOW() + `; + + await postgresClient.query(query, [ + symbolId, + provider, + qmSymbol.qmSearchCode || qmSymbol.symbol, + qmSymbol.exchangeCode || qmSymbol.exchange, + ]); +} + +async function updateSyncStatus( + provider: string, + dataType: string, + count: number, + postgresClient: any +): Promise<void> { + const query = ` + UPDATE sync_status + SET last_sync_at = NOW(), + last_sync_count = $3, + sync_errors = NULL, + updated_at = NOW() + WHERE 
provider = $1 AND data_type = $2 + `; + + await postgresClient.query(query, [provider, dataType, count]); +} diff --git a/apps/data-sync-service/src/handlers/symbols/operations/sync-status.operations.ts b/apps/stock/data-pipeline/src/handlers/symbols/operations/sync-status.operations.ts similarity index 68% rename from apps/data-sync-service/src/handlers/symbols/operations/sync-status.operations.ts rename to apps/stock/data-pipeline/src/handlers/symbols/operations/sync-status.operations.ts index 768b199..d9b0719 100644 --- a/apps/data-sync-service/src/handlers/symbols/operations/sync-status.operations.ts +++ b/apps/stock/data-pipeline/src/handlers/symbols/operations/sync-status.operations.ts @@ -1,21 +1,24 @@ -import { getLogger } from '@stock-bot/logger'; -import { getPostgreSQLClient } from '@stock-bot/postgres-client'; -import type { JobPayload } from '../../../types/job-payloads'; - -const logger = getLogger('sync-status'); - -export async function getSyncStatus(payload: JobPayload): Promise<Record<string, unknown>[]> { - logger.info('Getting sync status...'); - - try { - const postgresClient = getPostgreSQLClient(); - const query = 'SELECT * FROM sync_status ORDER BY provider, data_type'; - const result = await postgresClient.query(query); - - logger.info(`Retrieved sync status for ${result.rows.length} entries`); - return result.rows; - } catch (error) { - logger.error('Failed to get sync status', { error }); - throw error; - } -} \ No newline at end of file +import { getLogger } from '@stock-bot/logger'; +import type { IServiceContainer } from '@stock-bot/handlers'; +import type { JobPayload } from '../../../types/job-payloads'; + +const logger = getLogger('sync-status'); + +export async function getSyncStatus( + payload: JobPayload, + container: IServiceContainer +): Promise<Record<string, unknown>[]> { + logger.info('Getting sync status...'); + + try { + const postgresClient = container.postgres; + const query = 'SELECT * FROM sync_status ORDER BY provider, data_type'; + const result = await 
postgresClient.query(query); + + logger.info(`Retrieved sync status for ${result.rows.length} entries`); + return result.rows; + } catch (error) { + logger.error('Failed to get sync status', { error }); + throw error; + } +} diff --git a/apps/data-sync-service/src/handlers/symbols/operations/sync-symbols-from-provider.operations.ts b/apps/stock/data-pipeline/src/handlers/symbols/operations/sync-symbols-from-provider.operations.ts similarity index 75% rename from apps/data-sync-service/src/handlers/symbols/operations/sync-symbols-from-provider.operations.ts rename to apps/stock/data-pipeline/src/handlers/symbols/operations/sync-symbols-from-provider.operations.ts index 7a8b9a6..7dbba35 100644 --- a/apps/data-sync-service/src/handlers/symbols/operations/sync-symbols-from-provider.operations.ts +++ b/apps/stock/data-pipeline/src/handlers/symbols/operations/sync-symbols-from-provider.operations.ts @@ -1,216 +1,237 @@ -import { getLogger } from '@stock-bot/logger'; -import { getMongoDBClient } from "@stock-bot/mongodb-client"; -import { getPostgreSQLClient } from '@stock-bot/postgres-client'; -import type { JobPayload, SyncResult } from '../../../types/job-payloads'; - -const logger = getLogger('enhanced-sync-symbols-from-provider'); - -export async function syncSymbolsFromProvider(payload: JobPayload): Promise { - const provider = payload.provider; - const clearFirst = payload.clearFirst || false; - - if (!provider) { - throw new Error('Provider is required in payload'); - } - - logger.info(`Starting ${provider} symbols sync...`, { clearFirst }); - - const result: SyncResult = { - processed: 0, - created: 0, - updated: 0, - skipped: 0, - errors: 0, - }; - - try { - const mongoClient = getMongoDBClient(); - const postgresClient = getPostgreSQLClient(); - - // Clear existing data if requested (only symbols and mappings, keep exchanges) - if (clearFirst) { - await postgresClient.query('BEGIN'); - await postgresClient.query('DELETE FROM provider_mappings'); - await 
postgresClient.query('DELETE FROM symbols'); - await postgresClient.query('COMMIT'); - logger.info('Cleared existing symbols and mappings before sync'); - } - - // Start transaction - await postgresClient.query('BEGIN'); - - let symbols: Record[] = []; - - // Get symbols based on provider - const db = mongoClient.getDatabase(); - switch (provider.toLowerCase()) { - case 'qm': - symbols = await db.collection('qmSymbols').find({}).toArray(); - break; - case 'eod': - symbols = await db.collection('eodSymbols').find({}).toArray(); - break; - case 'ib': - symbols = await db.collection('ibSymbols').find({}).toArray(); - break; - default: - throw new Error(`Unsupported provider: ${provider}`); - } - - logger.info(`Found ${symbols.length} ${provider} symbols to process`); - result.processed = symbols.length; - - for (const symbol of symbols) { - try { - await processSingleSymbol(symbol, provider, result); - } catch (error) { - logger.error('Failed to process symbol', { - error, - symbol: symbol.symbol || symbol.code, - provider, - }); - result.errors++; - } - } - - // Update sync status - await updateSyncStatus(provider, 'symbols', result.processed, postgresClient); - - await postgresClient.query('COMMIT'); - - logger.info(`${provider} symbols sync completed`, result); - return result; - } catch (error) { - const postgresClient = getPostgreSQLClient(); - await postgresClient.query('ROLLBACK'); - logger.error(`${provider} symbols sync failed`, { error }); - throw error; - } -} - -async function processSingleSymbol(symbol: any, provider: string, result: SyncResult): Promise { - const symbolCode = symbol.symbol || symbol.code; - const exchangeCode = symbol.exchangeCode || symbol.exchange || symbol.exchange_id; - - if (!symbolCode || !exchangeCode) { - result.skipped++; - return; - } - - // Find active provider exchange mapping - const providerMapping = await findActiveProviderExchangeMapping(provider, exchangeCode); - - if (!providerMapping) { - result.skipped++; - return; - 
} - - // Check if symbol exists - const existingSymbol = await findSymbolByCodeAndExchange( - symbolCode, - providerMapping.master_exchange_id - ); - - if (existingSymbol) { - await updateSymbol(existingSymbol.id, symbol); - await upsertProviderMapping(existingSymbol.id, provider, symbol); - result.updated++; - } else { - const newSymbolId = await createSymbol(symbol, providerMapping.master_exchange_id); - await upsertProviderMapping(newSymbolId, provider, symbol); - result.created++; - } -} - -async function findActiveProviderExchangeMapping(provider: string, providerExchangeCode: string): Promise { - const postgresClient = getPostgreSQLClient(); - const query = ` - SELECT pem.*, e.code as master_exchange_code - FROM provider_exchange_mappings pem - JOIN exchanges e ON pem.master_exchange_id = e.id - WHERE pem.provider = $1 AND pem.provider_exchange_code = $2 AND pem.active = true - `; - const result = await postgresClient.query(query, [provider, providerExchangeCode]); - return result.rows[0] || null; -} - -async function findSymbolByCodeAndExchange(symbol: string, exchangeId: string): Promise { - const postgresClient = getPostgreSQLClient(); - const query = 'SELECT * FROM symbols WHERE symbol = $1 AND exchange_id = $2'; - const result = await postgresClient.query(query, [symbol, exchangeId]); - return result.rows[0] || null; -} - -async function createSymbol(symbol: any, exchangeId: string): Promise { - const postgresClient = getPostgreSQLClient(); - const query = ` - INSERT INTO symbols (symbol, exchange_id, company_name, country, currency) - VALUES ($1, $2, $3, $4, $5) - RETURNING id - `; - - const result = await postgresClient.query(query, [ - symbol.symbol || symbol.code, - exchangeId, - symbol.companyName || symbol.name || symbol.company_name, - symbol.countryCode || symbol.country_code || 'US', - symbol.currency || 'USD', - ]); - - return result.rows[0].id; -} - -async function updateSymbol(symbolId: string, symbol: any): Promise { - const postgresClient = 
getPostgreSQLClient(); - const query = ` - UPDATE symbols - SET company_name = COALESCE($2, company_name), - country = COALESCE($3, country), - currency = COALESCE($4, currency), - updated_at = NOW() - WHERE id = $1 - `; - - await postgresClient.query(query, [ - symbolId, - symbol.companyName || symbol.name || symbol.company_name, - symbol.countryCode || symbol.country_code, - symbol.currency, - ]); -} - -async function upsertProviderMapping(symbolId: string, provider: string, symbol: any): Promise { - const postgresClient = getPostgreSQLClient(); - const query = ` - INSERT INTO provider_mappings - (symbol_id, provider, provider_symbol, provider_exchange, last_seen) - VALUES ($1, $2, $3, $4, NOW()) - ON CONFLICT (provider, provider_symbol) - DO UPDATE SET - symbol_id = EXCLUDED.symbol_id, - provider_exchange = EXCLUDED.provider_exchange, - last_seen = NOW() - `; - - await postgresClient.query(query, [ - symbolId, - provider, - symbol.qmSearchCode || symbol.symbol || symbol.code, - symbol.exchangeCode || symbol.exchange || symbol.exchange_id, - ]); -} - -async function updateSyncStatus(provider: string, dataType: string, count: number, postgresClient: any): Promise { - const query = ` - INSERT INTO sync_status (provider, data_type, last_sync_at, last_sync_count, sync_errors) - VALUES ($1, $2, NOW(), $3, NULL) - ON CONFLICT (provider, data_type) - DO UPDATE SET - last_sync_at = NOW(), - last_sync_count = EXCLUDED.last_sync_count, - sync_errors = NULL, - updated_at = NOW() - `; - - await postgresClient.query(query, [provider, dataType, count]); -} \ No newline at end of file +import { getLogger } from '@stock-bot/logger'; +import type { IServiceContainer } from '@stock-bot/handlers'; +import type { JobPayload, SyncResult } from '../../../types/job-payloads'; + +const logger = getLogger('enhanced-sync-symbols-from-provider'); + +export async function syncSymbolsFromProvider( + payload: JobPayload, + container: IServiceContainer +): Promise { + const provider = 
payload.provider; + const clearFirst = payload.clearFirst || false; + + if (!provider) { + throw new Error('Provider is required in payload'); + } + + logger.info(`Starting ${provider} symbols sync...`, { clearFirst }); + + const result: SyncResult = { + processed: 0, + created: 0, + updated: 0, + skipped: 0, + errors: 0, + }; + + try { + const mongoClient = container.mongodb; + const postgresClient = container.postgres; + + // Clear existing data if requested (only symbols and mappings, keep exchanges) + if (clearFirst) { + await postgresClient.query('BEGIN'); + await postgresClient.query('DELETE FROM provider_mappings'); + await postgresClient.query('DELETE FROM symbols'); + await postgresClient.query('COMMIT'); + logger.info('Cleared existing symbols and mappings before sync'); + } + + // Start transaction + await postgresClient.query('BEGIN'); + + let symbols: Record[] = []; + + // Get symbols based on provider + const db = mongoClient.getDatabase(); + switch (provider.toLowerCase()) { + case 'qm': + symbols = await db.collection('qmSymbols').find({}).toArray(); + break; + case 'eod': + symbols = await db.collection('eodSymbols').find({}).toArray(); + break; + case 'ib': + symbols = await db.collection('ibSymbols').find({}).toArray(); + break; + default: + throw new Error(`Unsupported provider: ${provider}`); + } + + logger.info(`Found ${symbols.length} ${provider} symbols to process`); + result.processed = symbols.length; + + for (const symbol of symbols) { + try { + await processSingleSymbol(symbol, provider, result, container); + } catch (error) { + logger.error('Failed to process symbol', { + error, + symbol: symbol.symbol || symbol.code, + provider, + }); + result.errors++; + } + } + + // Update sync status + await updateSyncStatus(provider, 'symbols', result.processed, container.postgres); + + await postgresClient.query('COMMIT'); + + logger.info(`${provider} symbols sync completed`, result); + return result; + } catch (error) { + await 
container.postgres.query('ROLLBACK'); + logger.error(`${provider} symbols sync failed`, { error }); + throw error; + } +} + +async function processSingleSymbol( + symbol: any, + provider: string, + result: SyncResult, + container: IServiceContainer +): Promise { + const symbolCode = symbol.symbol || symbol.code; + const exchangeCode = symbol.exchangeCode || symbol.exchange || symbol.exchange_id; + + if (!symbolCode || !exchangeCode) { + result.skipped++; + return; + } + + // Find active provider exchange mapping + const providerMapping = await findActiveProviderExchangeMapping(provider, exchangeCode, container); + + if (!providerMapping) { + result.skipped++; + return; + } + + // Check if symbol exists + const existingSymbol = await findSymbolByCodeAndExchange( + symbolCode, + providerMapping.master_exchange_id, + container + ); + + if (existingSymbol) { + await updateSymbol(existingSymbol.id, symbol, container); + await upsertProviderMapping(existingSymbol.id, provider, symbol, container); + result.updated++; + } else { + const newSymbolId = await createSymbol(symbol, providerMapping.master_exchange_id, container); + await upsertProviderMapping(newSymbolId, provider, symbol, container); + result.created++; + } +} + +async function findActiveProviderExchangeMapping( + provider: string, + providerExchangeCode: string, + container: IServiceContainer +): Promise { + const postgresClient = container.postgres; + const query = ` + SELECT pem.*, e.code as master_exchange_code + FROM provider_exchange_mappings pem + JOIN exchanges e ON pem.master_exchange_id = e.id + WHERE pem.provider = $1 AND pem.provider_exchange_code = $2 AND pem.active = true + `; + const result = await postgresClient.query(query, [provider, providerExchangeCode]); + return result.rows[0] || null; +} + +async function findSymbolByCodeAndExchange(symbol: string, exchangeId: string, container: IServiceContainer): Promise { + const postgresClient = container.postgres; + const query = 'SELECT * FROM 
symbols WHERE symbol = $1 AND exchange_id = $2'; + const result = await postgresClient.query(query, [symbol, exchangeId]); + return result.rows[0] || null; +} + +async function createSymbol(symbol: any, exchangeId: string, container: IServiceContainer): Promise { + const postgresClient = container.postgres; + const query = ` + INSERT INTO symbols (symbol, exchange_id, company_name, country, currency) + VALUES ($1, $2, $3, $4, $5) + RETURNING id + `; + + const result = await postgresClient.query(query, [ + symbol.symbol || symbol.code, + exchangeId, + symbol.companyName || symbol.name || symbol.company_name, + symbol.countryCode || symbol.country_code || 'US', + symbol.currency || 'USD', + ]); + + return result.rows[0].id; +} + +async function updateSymbol(symbolId: string, symbol: any, container: IServiceContainer): Promise { + const postgresClient = container.postgres; + const query = ` + UPDATE symbols + SET company_name = COALESCE($2, company_name), + country = COALESCE($3, country), + currency = COALESCE($4, currency), + updated_at = NOW() + WHERE id = $1 + `; + + await postgresClient.query(query, [ + symbolId, + symbol.companyName || symbol.name || symbol.company_name, + symbol.countryCode || symbol.country_code, + symbol.currency, + ]); +} + +async function upsertProviderMapping( + symbolId: string, + provider: string, + symbol: any, + container: IServiceContainer +): Promise { + const postgresClient = container.postgres; + const query = ` + INSERT INTO provider_mappings + (symbol_id, provider, provider_symbol, provider_exchange, last_seen) + VALUES ($1, $2, $3, $4, NOW()) + ON CONFLICT (provider, provider_symbol) + DO UPDATE SET + symbol_id = EXCLUDED.symbol_id, + provider_exchange = EXCLUDED.provider_exchange, + last_seen = NOW() + `; + + await postgresClient.query(query, [ + symbolId, + provider, + symbol.qmSearchCode || symbol.symbol || symbol.code, + symbol.exchangeCode || symbol.exchange || symbol.exchange_id, + ]); +} + +async function 
updateSyncStatus( + provider: string, + dataType: string, + count: number, + postgresClient: any +): Promise { + const query = ` + INSERT INTO sync_status (provider, data_type, last_sync_at, last_sync_count, sync_errors) + VALUES ($1, $2, NOW(), $3, NULL) + ON CONFLICT (provider, data_type) + DO UPDATE SET + last_sync_at = NOW(), + last_sync_count = EXCLUDED.last_sync_count, + sync_errors = NULL, + updated_at = NOW() + `; + + await postgresClient.query(query, [provider, dataType, count]); +} diff --git a/apps/stock/data-pipeline/src/handlers/symbols/symbols.handler.ts b/apps/stock/data-pipeline/src/handlers/symbols/symbols.handler.ts new file mode 100644 index 0000000..5ab6d0a --- /dev/null +++ b/apps/stock/data-pipeline/src/handlers/symbols/symbols.handler.ts @@ -0,0 +1,68 @@ +import { + BaseHandler, + Handler, + Operation, + ScheduledOperation, + type IServiceContainer, +} from '@stock-bot/handlers'; +import { syncQMSymbols } from './operations/qm-symbols.operations'; +import { syncSymbolsFromProvider } from './operations/sync-symbols-from-provider.operations'; +import { getSyncStatus } from './operations/sync-status.operations'; + +@Handler('symbols') +export class SymbolsHandler extends BaseHandler { + constructor(services: IServiceContainer) { + super(services); + } + + /** + * Sync symbols from QuestionsAndMethods API + */ + @ScheduledOperation('sync-qm-symbols', '0 2 * * *', { + priority: 5, + description: 'Daily sync of QM symbols at 2 AM', + }) + async syncQMSymbols(): Promise<{ processed: number; created: number; updated: number }> { + this.log('info', 'Starting QM symbols sync...'); + return syncQMSymbols({}, this.services); + } + + /** + * Sync symbols from specific provider + */ + @Operation('sync-symbols-qm') + @ScheduledOperation('sync-symbols-qm', '0 4 * * *', { + priority: 5, + description: 'Daily sync of symbols from QM provider at 4 AM', + }) + async syncSymbolsQM(): Promise { + return this.syncSymbolsFromProvider({ provider: 'qm', clearFirst: 
false }); + } + + @Operation('sync-symbols-eod') + async syncSymbolsEOD(payload: { provider: string; clearFirst?: boolean }): Promise { + return this.syncSymbolsFromProvider({ ...payload, provider: 'eod' }); + } + + @Operation('sync-symbols-ib') + async syncSymbolsIB(payload: { provider: string; clearFirst?: boolean }): Promise { + return this.syncSymbolsFromProvider({ ...payload, provider: 'ib' }); + } + + /** + * Get sync status for symbols + */ + @Operation('sync-status') + async getSyncStatus(): Promise { + this.log('info', 'Getting symbol sync status...'); + return getSyncStatus({}, this.services); + } + + /** + * Internal method to sync symbols from a provider + */ + private async syncSymbolsFromProvider(payload: { provider: string; clearFirst?: boolean }): Promise { + this.log('info', 'Syncing symbols from provider', { provider: payload.provider }); + return syncSymbolsFromProvider(payload, this.services); + } +} \ No newline at end of file diff --git a/apps/stock/data-pipeline/src/index.ts b/apps/stock/data-pipeline/src/index.ts new file mode 100644 index 0000000..36df5e0 --- /dev/null +++ b/apps/stock/data-pipeline/src/index.ts @@ -0,0 +1,86 @@ +/** + * Data Pipeline Service + * Simplified entry point using ServiceApplication framework + */ + +import { initializeStockConfig } from '@stock-bot/stock-config'; +import { ServiceApplication } from '@stock-bot/di'; +import { getLogger } from '@stock-bot/logger'; + +// Local imports +import { initializeAllHandlers } from './handlers'; +import { createRoutes } from './routes/create-routes'; +import { setupServiceContainer } from './container-setup'; + +// Initialize configuration with service-specific overrides +const config = initializeStockConfig('dataPipeline'); + +// Log the full configuration +const logger = getLogger('data-pipeline'); +logger.info('Service configuration:', config); + +// Create service application +const app = new ServiceApplication( + config, + { + serviceName: 'data-pipeline', + 
enableHandlers: true, + enableScheduledJobs: true, + corsConfig: { + origin: '*', + allowMethods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS', 'PATCH'], + allowHeaders: ['Content-Type', 'Authorization'], + credentials: false, + }, + serviceMetadata: { + version: '1.0.0', + description: 'Data processing and transformation pipeline', + endpoints: { + health: '/health', + operations: '/api/operations', + }, + }, + }, + { + // Custom lifecycle hooks + onContainerReady: (container) => { + // Setup service-specific configuration + const enhancedContainer = setupServiceContainer(config, container); + return enhancedContainer; + }, + onStarted: (_port) => { + const logger = getLogger('data-pipeline'); + logger.info('Data pipeline service startup initiated with ServiceApplication framework'); + }, + } +); + +// Container factory function +async function createContainer(config: any) { + const { ServiceContainerBuilder } = await import('@stock-bot/di'); + const builder = new ServiceContainerBuilder(); + + const container = await builder + .withConfig(config) + .withOptions({ + enableQuestDB: false, // Disabled for now due to auth issues + // Data pipeline needs all databases + enableMongoDB: true, + enablePostgres: true, + enableCache: true, + enableQueue: true, + enableBrowser: false, // Data pipeline doesn't need browser + enableProxy: false, // Data pipeline doesn't need proxy + skipInitialization: false, // Let builder handle initialization + }) + .build(); + + return container; +} + +// Start the service +app.start(createContainer, createRoutes, initializeAllHandlers).catch(error => { + const logger = getLogger('data-pipeline'); + logger.fatal('Failed to start data pipeline service', { error }); + process.exit(1); +}); \ No newline at end of file diff --git a/apps/stock/data-pipeline/src/routes/create-routes.ts b/apps/stock/data-pipeline/src/routes/create-routes.ts new file mode 100644 index 0000000..13bf479 --- /dev/null +++ 
b/apps/stock/data-pipeline/src/routes/create-routes.ts @@ -0,0 +1,29 @@ +/** + * Route factory for data pipeline service + * Creates routes with access to the service container + */ + +import { Hono } from 'hono'; +import type { IServiceContainer } from '@stock-bot/handlers'; +import { healthRoutes } from './health.routes'; +import { createSyncRoutes } from './sync.routes'; +import { createEnhancedSyncRoutes } from './enhanced-sync.routes'; +import { createStatsRoutes } from './stats.routes'; + +export function createRoutes(container: IServiceContainer): Hono { + const app = new Hono(); + + // Add container to context for all routes + app.use('*', async (c, next) => { + c.set('container', container); + await next(); + }); + + // Mount routes + app.route('/health', healthRoutes); + app.route('/sync', createSyncRoutes(container)); + app.route('/sync', createEnhancedSyncRoutes(container)); + app.route('/sync/stats', createStatsRoutes(container)); + + return app; +} \ No newline at end of file diff --git a/apps/stock/data-pipeline/src/routes/enhanced-sync.routes.ts b/apps/stock/data-pipeline/src/routes/enhanced-sync.routes.ts new file mode 100644 index 0000000..a70a126 --- /dev/null +++ b/apps/stock/data-pipeline/src/routes/enhanced-sync.routes.ts @@ -0,0 +1,154 @@ +import { Hono } from 'hono'; +import { getLogger } from '@stock-bot/logger'; +import type { IServiceContainer } from '@stock-bot/handlers'; + +const logger = getLogger('enhanced-sync-routes'); + +export function createEnhancedSyncRoutes(container: IServiceContainer) { + const enhancedSync = new Hono(); + + // Enhanced sync endpoints + enhancedSync.post('/exchanges/all', async c => { + try { + const clearFirst = c.req.query('clear') === 'true'; + const queueManager = container.queue; + if (!queueManager) { + return c.json({ success: false, error: 'Queue manager not available' }, 503); + } + + const exchangesQueue = queueManager.getQueue('exchanges'); + + const job = await 
exchangesQueue.addJob('sync-all-exchanges', { + handler: 'exchanges', + operation: 'sync-all-exchanges', + payload: { clearFirst }, + }); + + return c.json({ success: true, jobId: job.id, message: 'Enhanced exchange sync job queued' }); + } catch (error) { + logger.error('Failed to queue enhanced exchange sync job', { error }); + return c.json( + { success: false, error: error instanceof Error ? error.message : 'Unknown error' }, + 500 + ); + } + }); + + enhancedSync.post('/provider-mappings/qm', async c => { + try { + const queueManager = container.queue; + if (!queueManager) { + return c.json({ success: false, error: 'Queue manager not available' }, 503); + } + + const exchangesQueue = queueManager.getQueue('exchanges'); + + const job = await exchangesQueue.addJob('sync-qm-provider-mappings', { + handler: 'exchanges', + operation: 'sync-qm-provider-mappings', + payload: {}, + }); + + return c.json({ + success: true, + jobId: job.id, + message: 'QM provider mappings sync job queued', + }); + } catch (error) { + logger.error('Failed to queue QM provider mappings sync job', { error }); + return c.json( + { success: false, error: error instanceof Error ? error.message : 'Unknown error' }, + 500 + ); + } + }); + + enhancedSync.post('/provider-mappings/ib', async c => { + try { + const queueManager = container.queue; + if (!queueManager) { + return c.json({ success: false, error: 'Queue manager not available' }, 503); + } + + const exchangesQueue = queueManager.getQueue('exchanges'); + + const job = await exchangesQueue.addJob('sync-ib-exchanges', { + handler: 'exchanges', + operation: 'sync-ib-exchanges', + payload: {}, + }); + + return c.json({ + success: true, + jobId: job.id, + message: 'IB exchanges sync job queued', + }); + } catch (error) { + logger.error('Failed to queue IB exchanges sync job', { error }); + return c.json( + { success: false, error: error instanceof Error ? 
error.message : 'Unknown error' }, + 500 + ); + } + }); + + enhancedSync.get('/status', async c => { + try { + const queueManager = container.queue; + if (!queueManager) { + return c.json({ success: false, error: 'Queue manager not available' }, 503); + } + + const symbolsQueue = queueManager.getQueue('symbols'); + + const job = await symbolsQueue.addJob('sync-status', { + handler: 'symbols', + operation: 'sync-status', + payload: {}, + }); + + return c.json({ success: true, jobId: job.id, message: 'Sync status job queued' }); + } catch (error) { + logger.error('Failed to queue sync status job', { error }); + return c.json( + { success: false, error: error instanceof Error ? error.message : 'Unknown error' }, + 500 + ); + } + }); + + enhancedSync.post('/clear/postgresql', async c => { + try { + const dataType = c.req.query('type') as 'exchanges' | 'provider_mappings' | 'all'; + const queueManager = container.queue; + if (!queueManager) { + return c.json({ success: false, error: 'Queue manager not available' }, 503); + } + + const exchangesQueue = queueManager.getQueue('exchanges'); + + const job = await exchangesQueue.addJob('clear-postgresql-data', { + handler: 'exchanges', + operation: 'clear-postgresql-data', + payload: { dataType: dataType || 'all' }, + }); + + return c.json({ + success: true, + jobId: job.id, + message: 'PostgreSQL data clear job queued', + }); + } catch (error) { + logger.error('Failed to queue PostgreSQL clear job', { error }); + return c.json( + { success: false, error: error instanceof Error ? 
error.message : 'Unknown error' }, + 500 + ); + } + }); + + return enhancedSync; +} + +// Legacy export for backward compatibility +export const enhancedSyncRoutes = createEnhancedSyncRoutes({} as IServiceContainer); \ No newline at end of file diff --git a/apps/data-sync-service/src/routes/health.routes.ts b/apps/stock/data-pipeline/src/routes/health.routes.ts similarity index 75% rename from apps/data-sync-service/src/routes/health.routes.ts rename to apps/stock/data-pipeline/src/routes/health.routes.ts index 68d8afd..092481b 100644 --- a/apps/data-sync-service/src/routes/health.routes.ts +++ b/apps/stock/data-pipeline/src/routes/health.routes.ts @@ -6,9 +6,9 @@ const health = new Hono(); health.get('/', c => { return c.json({ status: 'healthy', - service: 'data-sync-service', + service: 'data-pipeline', timestamp: new Date().toISOString(), }); }); -export { health as healthRoutes }; \ No newline at end of file +export { health as healthRoutes }; diff --git a/apps/data-sync-service/src/routes/index.ts b/apps/stock/data-pipeline/src/routes/index.ts similarity index 79% rename from apps/data-sync-service/src/routes/index.ts rename to apps/stock/data-pipeline/src/routes/index.ts index b106768..143fac2 100644 --- a/apps/data-sync-service/src/routes/index.ts +++ b/apps/stock/data-pipeline/src/routes/index.ts @@ -2,4 +2,4 @@ export { healthRoutes } from './health.routes'; export { syncRoutes } from './sync.routes'; export { enhancedSyncRoutes } from './enhanced-sync.routes'; -export { statsRoutes } from './stats.routes'; \ No newline at end of file +export { statsRoutes } from './stats.routes'; diff --git a/apps/stock/data-pipeline/src/routes/stats.routes.ts b/apps/stock/data-pipeline/src/routes/stats.routes.ts new file mode 100644 index 0000000..d11ae55 --- /dev/null +++ b/apps/stock/data-pipeline/src/routes/stats.routes.ts @@ -0,0 +1,63 @@ +import { Hono } from 'hono'; +import { getLogger } from '@stock-bot/logger'; +import type { IServiceContainer } from 
'@stock-bot/handlers'; + +const logger = getLogger('stats-routes'); + +export function createStatsRoutes(container: IServiceContainer) { + const stats = new Hono(); + + // Statistics endpoints + stats.get('/exchanges', async c => { + try { + const queueManager = container.queue; + if (!queueManager) { + return c.json({ error: 'Queue manager not available' }, 503); + } + + const exchangesQueue = queueManager.getQueue('exchanges'); + + const job = await exchangesQueue.addJob('get-exchange-stats', { + handler: 'exchanges', + operation: 'get-exchange-stats', + payload: {}, + }); + + // Wait for job to complete and return result + const result = await job.waitUntilFinished(); + return c.json(result); + } catch (error) { + logger.error('Failed to get exchange stats', { error }); + return c.json({ error: error instanceof Error ? error.message : 'Unknown error' }, 500); + } + }); + + stats.get('/provider-mappings', async c => { + try { + const queueManager = container.queue; + if (!queueManager) { + return c.json({ error: 'Queue manager not available' }, 503); + } + + const exchangesQueue = queueManager.getQueue('exchanges'); + + const job = await exchangesQueue.addJob('get-provider-mapping-stats', { + handler: 'exchanges', + operation: 'get-provider-mapping-stats', + payload: {}, + }); + + // Wait for job to complete and return result + const result = await job.waitUntilFinished(); + return c.json(result); + } catch (error) { + logger.error('Failed to get provider mapping stats', { error }); + return c.json({ error: error instanceof Error ? 
error.message : 'Unknown error' }, 500); + } + }); + + return stats; +} + +// Legacy export for backward compatibility +export const statsRoutes = createStatsRoutes({} as IServiceContainer); \ No newline at end of file diff --git a/apps/stock/data-pipeline/src/routes/sync.routes.ts b/apps/stock/data-pipeline/src/routes/sync.routes.ts new file mode 100644 index 0000000..7d753ca --- /dev/null +++ b/apps/stock/data-pipeline/src/routes/sync.routes.ts @@ -0,0 +1,95 @@ +import { Hono } from 'hono'; +import { getLogger } from '@stock-bot/logger'; +import type { IServiceContainer } from '@stock-bot/handlers'; + +const logger = getLogger('sync-routes'); + +export function createSyncRoutes(container: IServiceContainer) { + const sync = new Hono(); + + // Manual sync trigger endpoints + sync.post('/symbols', async c => { + try { + const queueManager = container.queue; + if (!queueManager) { + return c.json({ success: false, error: 'Queue manager not available' }, 503); + } + + const symbolsQueue = queueManager.getQueue('symbols'); + + const job = await symbolsQueue.addJob('sync-qm-symbols', { + handler: 'symbols', + operation: 'sync-qm-symbols', + payload: {}, + }); + + return c.json({ success: true, jobId: job.id, message: 'QM symbols sync job queued' }); + } catch (error) { + logger.error('Failed to queue symbol sync job', { error }); + return c.json( + { success: false, error: error instanceof Error ? 
error.message : 'Unknown error' }, + 500 + ); + } + }); + + sync.post('/exchanges', async c => { + try { + const queueManager = container.queue; + if (!queueManager) { + return c.json({ success: false, error: 'Queue manager not available' }, 503); + } + + const exchangesQueue = queueManager.getQueue('exchanges'); + + const job = await exchangesQueue.addJob('sync-qm-exchanges', { + handler: 'exchanges', + operation: 'sync-qm-exchanges', + payload: {}, + }); + + return c.json({ success: true, jobId: job.id, message: 'QM exchanges sync job queued' }); + } catch (error) { + logger.error('Failed to queue exchange sync job', { error }); + return c.json( + { success: false, error: error instanceof Error ? error.message : 'Unknown error' }, + 500 + ); + } + }); + + sync.post('/symbols/:provider', async c => { + try { + const provider = c.req.param('provider'); + const queueManager = container.queue; + if (!queueManager) { + return c.json({ success: false, error: 'Queue manager not available' }, 503); + } + + const symbolsQueue = queueManager.getQueue('symbols'); + + const job = await symbolsQueue.addJob('sync-symbols-from-provider', { + handler: 'symbols', + operation: 'sync-symbols-from-provider', + payload: { provider }, + }); + + return c.json({ + success: true, + jobId: job.id, + message: `${provider} symbols sync job queued`, + }); + } catch (error) { + logger.error('Failed to queue provider symbol sync job', { error }); + return c.json( + { success: false, error: error instanceof Error ? 
error.message : 'Unknown error' }, + 500 + ); + } + }); + + return sync; +} + +// Legacy export for backward compatibility +export const syncRoutes = createSyncRoutes({} as IServiceContainer); \ No newline at end of file diff --git a/apps/data-sync-service/src/types/job-payloads.ts b/apps/stock/data-pipeline/src/types/job-payloads.ts similarity index 94% rename from apps/data-sync-service/src/types/job-payloads.ts rename to apps/stock/data-pipeline/src/types/job-payloads.ts index 6d53852..6c5f9de 100644 --- a/apps/data-sync-service/src/types/job-payloads.ts +++ b/apps/stock/data-pipeline/src/types/job-payloads.ts @@ -1,27 +1,27 @@ -export interface JobPayload { - [key: string]: any; -} - -export interface SyncResult { - processed: number; - created: number; - updated: number; - skipped: number; - errors: number; -} - -export interface SyncStatus { - provider: string; - dataType: string; - lastSyncAt?: Date; - lastSyncCount: number; - syncErrors?: string; -} - -export interface ExchangeMapping { - id: string; - code: string; - name: string; - country: string; - currency: string; -} \ No newline at end of file +export interface JobPayload { + [key: string]: any; +} + +export interface SyncResult { + processed: number; + created: number; + updated: number; + skipped: number; + errors: number; +} + +export interface SyncStatus { + provider: string; + dataType: string; + lastSyncAt?: Date; + lastSyncCount: number; + syncErrors?: string; +} + +export interface ExchangeMapping { + id: string; + code: string; + name: string; + country: string; + currency: string; +} diff --git a/apps/stock/data-pipeline/tsconfig.json b/apps/stock/data-pipeline/tsconfig.json new file mode 100644 index 0000000..cb4b0dc --- /dev/null +++ b/apps/stock/data-pipeline/tsconfig.json @@ -0,0 +1,14 @@ +{ + "extends": "../../tsconfig.app.json", + "references": [ + { "path": "../../libs/core/types" }, + { "path": "../../libs/core/config" }, + { "path": "../../libs/core/logger" }, + { "path": 
"../../libs/data/cache" }, + { "path": "../../libs/services/queue" }, + { "path": "../../libs/data/mongodb" }, + { "path": "../../libs/data/postgres" }, + { "path": "../../libs/data/questdb" }, + { "path": "../../libs/services/shutdown" } + ] +} diff --git a/apps/stock/ecosystem.config.js b/apps/stock/ecosystem.config.js new file mode 100644 index 0000000..a94d2a4 --- /dev/null +++ b/apps/stock/ecosystem.config.js @@ -0,0 +1,72 @@ +module.exports = { + apps: [ + { + name: 'stock-ingestion', + script: './data-ingestion/dist/index.js', + instances: 1, + autorestart: true, + watch: false, + max_memory_restart: '1G', + env: { + NODE_ENV: 'production', + PORT: 2001 + }, + env_development: { + NODE_ENV: 'development', + PORT: 2001 + } + }, + { + name: 'stock-pipeline', + script: './data-pipeline/dist/index.js', + instances: 1, + autorestart: true, + watch: false, + max_memory_restart: '1G', + env: { + NODE_ENV: 'production', + PORT: 2002 + }, + env_development: { + NODE_ENV: 'development', + PORT: 2002 + } + }, + { + name: 'stock-api', + script: './web-api/dist/index.js', + instances: 2, + autorestart: true, + watch: false, + max_memory_restart: '1G', + exec_mode: 'cluster', + env: { + NODE_ENV: 'production', + PORT: 2003 + }, + env_development: { + NODE_ENV: 'development', + PORT: 2003 + } + } + ], + + deploy: { + production: { + user: 'deploy', + host: 'production-server', + ref: 'origin/master', + repo: 'git@github.com:username/stock-bot.git', + path: '/var/www/stock-bot', + 'post-deploy': 'cd apps/stock && npm install && npm run build && pm2 reload ecosystem.config.js --env production' + }, + staging: { + user: 'deploy', + host: 'staging-server', + ref: 'origin/develop', + repo: 'git@github.com:username/stock-bot.git', + path: '/var/www/stock-bot-staging', + 'post-deploy': 'cd apps/stock && npm install && npm run build && pm2 reload ecosystem.config.js --env development' + } + } +}; \ No newline at end of file diff --git a/apps/stock/package.json 
b/apps/stock/package.json new file mode 100644 index 0000000..cd8c82c --- /dev/null +++ b/apps/stock/package.json @@ -0,0 +1,91 @@ +{ + "name": "@stock-bot/stock-app", + "version": "1.0.0", + "private": true, + "description": "Stock trading bot application", + "scripts": { + "dev": "turbo run dev", + "dev:ingestion": "cd data-ingestion && bun run dev", + "dev:pipeline": "cd data-pipeline && bun run dev", + "dev:api": "cd web-api && bun run dev", + "dev:web": "cd web-app && bun run dev", + "dev:backend": "turbo run dev --filter=\"@stock-bot/data-*\" --filter=\"@stock-bot/web-api\"", + "dev:frontend": "turbo run dev --filter=\"@stock-bot/web-app\"", + + "build": "turbo run build", + "build:config": "cd config && bun run build", + "build:services": "turbo run build --filter=\"@stock-bot/data-*\" --filter=\"@stock-bot/web-*\"", + "build:ingestion": "cd data-ingestion && bun run build", + "build:pipeline": "cd data-pipeline && bun run build", + "build:api": "cd web-api && bun run build", + "build:web": "cd web-app && bun run build", + + "start": "turbo run start --filter=\"@stock-bot/data-*\" --filter=\"@stock-bot/web-api\"", + "start:all": "turbo run start", + "start:ingestion": "cd data-ingestion && bun start", + "start:pipeline": "cd data-pipeline && bun start", + "start:api": "cd web-api && bun start", + + "clean": "turbo run clean", + "clean:all": "turbo run clean && rm -rf node_modules", + "clean:ingestion": "cd data-ingestion && rm -rf dist node_modules", + "clean:pipeline": "cd data-pipeline && rm -rf dist node_modules", + "clean:api": "cd web-api && rm -rf dist node_modules", + "clean:web": "cd web-app && rm -rf dist node_modules", + "clean:config": "cd config && rm -rf dist node_modules", + + "test": "turbo run test", + "test:all": "turbo run test", + "test:config": "cd config && bun test", + "test:services": "turbo run test --filter=\"@stock-bot/data-*\" --filter=\"@stock-bot/web-*\"", + "test:ingestion": "cd data-ingestion && bun test", + "test:pipeline": 
"cd data-pipeline && bun test", + "test:api": "cd web-api && bun test", + + "lint": "turbo run lint", + "lint:all": "turbo run lint", + "lint:config": "cd config && bun run lint", + "lint:services": "turbo run lint --filter=\"@stock-bot/data-*\" --filter=\"@stock-bot/web-*\"", + "lint:ingestion": "cd data-ingestion && bun run lint", + "lint:pipeline": "cd data-pipeline && bun run lint", + "lint:api": "cd web-api && bun run lint", + "lint:web": "cd web-app && bun run lint", + + "install:all": "bun install", + + "docker:build": "docker-compose build", + "docker:up": "docker-compose up", + "docker:down": "docker-compose down", + + "pm2:start": "pm2 start ecosystem.config.js", + "pm2:stop": "pm2 stop all", + "pm2:restart": "pm2 restart all", + "pm2:logs": "pm2 logs", + "pm2:status": "pm2 status", + + "db:migrate": "cd data-ingestion && bun run db:migrate", + "db:seed": "cd data-ingestion && bun run db:seed", + + "health:check": "bun scripts/health-check.js", + "monitor": "bun run pm2:logs", + "status": "bun run pm2:status" + }, + "devDependencies": { + "pm2": "^5.3.0", + "@types/node": "^20.11.0", + "typescript": "^5.3.3", + "turbo": "^2.5.4" + }, + "workspaces": [ + "config", + "data-ingestion", + "data-pipeline", + "web-api", + "web-app" + ], + "engines": { + "node": ">=18.0.0", + "bun": ">=1.1.0" + }, + "packageManager": "bun@1.1.12" +} \ No newline at end of file diff --git a/apps/stock/scripts/health-check.js b/apps/stock/scripts/health-check.js new file mode 100755 index 0000000..62c05e9 --- /dev/null +++ b/apps/stock/scripts/health-check.js @@ -0,0 +1,60 @@ +#!/usr/bin/env node + +const http = require('http'); +const services = [ + { name: 'Data Ingestion', port: 2001 }, + { name: 'Data Pipeline', port: 2002 }, + { name: 'Web API', port: 2003 }, +]; + +console.log('🏥 Stock Bot Health Check\n'); + +async function checkService(service) { + return new Promise((resolve) => { + const options = { + hostname: 'localhost', + port: service.port, + path: '/health', + 
method: 'GET', + timeout: 5000, + }; + + const req = http.request(options, (res) => { + if (res.statusCode === 200) { + resolve({ ...service, status: '✅ Healthy', code: res.statusCode }); + } else { + resolve({ ...service, status: '⚠️ Unhealthy', code: res.statusCode }); + } + }); + + req.on('error', (err) => { + resolve({ ...service, status: '❌ Offline', error: err.message }); + }); + + req.on('timeout', () => { + req.destroy(); + resolve({ ...service, status: '⏱️ Timeout', error: 'Request timed out' }); + }); + + req.end(); + }); +} + +async function checkAllServices() { + const results = await Promise.all(services.map(checkService)); + + results.forEach((result) => { + console.log(`${result.name.padEnd(15)} ${result.status}`); + if (result.error) { + console.log(` ${result.error}`); + } + }); + + const allHealthy = results.every(r => r.status === '✅ Healthy'); + + console.log('\n' + (allHealthy ? '✅ All services are healthy!' : '⚠️ Some services need attention')); + + process.exit(allHealthy ? 
0 : 1); +} + +checkAllServices(); \ No newline at end of file diff --git a/apps/stock/tsconfig.json b/apps/stock/tsconfig.json new file mode 100644 index 0000000..5d62c04 --- /dev/null +++ b/apps/stock/tsconfig.json @@ -0,0 +1,18 @@ +{ + "extends": "../../tsconfig.json", + "compilerOptions": { + "baseUrl": "../..", + "paths": { + "@stock-bot/*": ["libs/*/src"], + "@stock-bot/stock-config": ["apps/stock/config/src"], + "@stock-bot/stock-config/*": ["apps/stock/config/src/*"] + } + }, + "references": [ + { "path": "./config" }, + { "path": "./data-ingestion" }, + { "path": "./data-pipeline" }, + { "path": "./web-api" }, + { "path": "./web-app" } + ] +} \ No newline at end of file diff --git a/apps/web-api/package.json b/apps/stock/web-api/package.json similarity index 87% rename from apps/web-api/package.json rename to apps/stock/web-api/package.json index ad90669..28e3701 100644 --- a/apps/web-api/package.json +++ b/apps/stock/web-api/package.json @@ -13,9 +13,10 @@ }, "dependencies": { "@stock-bot/config": "*", + "@stock-bot/stock-config": "*", "@stock-bot/logger": "*", - "@stock-bot/mongodb-client": "*", - "@stock-bot/postgres-client": "*", + "@stock-bot/mongodb": "*", + "@stock-bot/postgres": "*", "@stock-bot/shutdown": "*", "hono": "^4.0.0" }, diff --git a/apps/stock/web-api/src/container-setup.ts b/apps/stock/web-api/src/container-setup.ts new file mode 100644 index 0000000..2cec315 --- /dev/null +++ b/apps/stock/web-api/src/container-setup.ts @@ -0,0 +1,34 @@ +/** + * Service Container Setup for Web API + * Configures dependency injection for the web API service + */ + +import type { IServiceContainer } from '@stock-bot/handlers'; +import { getLogger } from '@stock-bot/logger'; +import type { AppConfig } from '@stock-bot/config'; + +const logger = getLogger('web-api-container'); + +/** + * Configure the service container for web API workloads + */ +export function setupServiceContainer( + config: AppConfig, + container: IServiceContainer +): IServiceContainer 
{ + logger.info('Configuring web API service container...'); + + // Web API specific configuration + // This service mainly reads data, so smaller pool sizes are fine + const poolSizes = { + mongodb: config.environment === 'production' ? 20 : 10, + postgres: config.environment === 'production' ? 30 : 15, + cache: config.environment === 'production' ? 20 : 10, + }; + + logger.info('Web API pool sizes configured', poolSizes); + + // The container is already configured with connections + // Just return it with our logging + return container; +} \ No newline at end of file diff --git a/apps/stock/web-api/src/index.ts b/apps/stock/web-api/src/index.ts new file mode 100644 index 0000000..e4a4957 --- /dev/null +++ b/apps/stock/web-api/src/index.ts @@ -0,0 +1,84 @@ +/** + * Stock Bot Web API + * Simplified entry point using ServiceApplication framework + */ + +import { initializeStockConfig } from '@stock-bot/stock-config'; +import { ServiceApplication } from '@stock-bot/di'; +import { getLogger } from '@stock-bot/logger'; + +// Local imports +import { createRoutes } from './routes/create-routes'; + +// Initialize configuration with service-specific overrides +const config = initializeStockConfig('webApi'); + +// Override queue settings for web-api (no workers needed) +if (config.queue) { + config.queue.workers = 0; + config.queue.concurrency = 0; + config.queue.enableScheduledJobs = false; + config.queue.delayWorkerStart = true; +} + +// Log the full configuration +const logger = getLogger('web-api'); +logger.info('Service configuration:', config); + +// Create service application +const app = new ServiceApplication( + config, + { + serviceName: 'web-api', + enableHandlers: false, // Web API doesn't use handlers + enableScheduledJobs: false, // Web API doesn't use scheduled jobs + corsConfig: { + origin: ['http://localhost:4200', 'http://localhost:3000', 'http://localhost:3002'], + allowMethods: ['GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'OPTIONS'], + allowHeaders: 
['Content-Type', 'Authorization'], + credentials: true, + }, + serviceMetadata: { + version: '1.0.0', + description: 'Stock Bot REST API', + endpoints: { + health: '/health', + exchanges: '/api/exchanges', + }, + }, + }, + { + // Custom lifecycle hooks + onStarted: (_port) => { + const logger = getLogger('web-api'); + logger.info('Web API service startup initiated with ServiceApplication framework'); + }, + } +); + +// Container factory function +async function createContainer(config: any) { + const { ServiceContainerBuilder } = await import('@stock-bot/di'); + + const container = await new ServiceContainerBuilder() + .withConfig(config) + .withOptions({ + enableQuestDB: false, // Disable QuestDB for now + enableMongoDB: true, + enablePostgres: true, + enableCache: true, + enableQueue: true, // Enable for pipeline operations + enableBrowser: false, // Web API doesn't need browser + enableProxy: false, // Web API doesn't need proxy + }) + .build(); // This automatically initializes services + + return container; +} + +// Start the service +app.start(createContainer, createRoutes).catch(error => { + const logger = getLogger('web-api'); + logger.fatal('Failed to start web API service', { error }); + process.exit(1); +}); \ No newline at end of file diff --git a/apps/stock/web-api/src/routes/create-routes.ts b/apps/stock/web-api/src/routes/create-routes.ts new file mode 100644 index 0000000..6f6eee3 --- /dev/null +++ b/apps/stock/web-api/src/routes/create-routes.ts @@ -0,0 +1,29 @@ +/** + * Route factory for web API service + * Creates routes with access to the service container + */ + +import { Hono } from 'hono'; +import type { IServiceContainer } from '@stock-bot/handlers'; +import { createHealthRoutes } from './health.routes'; +import { createExchangeRoutes } from './exchange.routes'; +import { createMonitoringRoutes } from './monitoring.routes'; +import { createPipelineRoutes } from './pipeline.routes'; + +export function createRoutes(container: 
IServiceContainer): Hono { + const app = new Hono(); + + // Create routes with container + const healthRoutes = createHealthRoutes(container); + const exchangeRoutes = createExchangeRoutes(container); + const monitoringRoutes = createMonitoringRoutes(container); + const pipelineRoutes = createPipelineRoutes(container); + + // Mount routes + app.route('/health', healthRoutes); + app.route('/api/exchanges', exchangeRoutes); + app.route('/api/system/monitoring', monitoringRoutes); + app.route('/api/pipeline', pipelineRoutes); + + return app; +} \ No newline at end of file diff --git a/apps/stock/web-api/src/routes/exchange.routes.ts b/apps/stock/web-api/src/routes/exchange.routes.ts new file mode 100644 index 0000000..fd33cf6 --- /dev/null +++ b/apps/stock/web-api/src/routes/exchange.routes.ts @@ -0,0 +1,262 @@ +/** + * Exchange management routes - Refactored + */ +import { Hono } from 'hono'; +import { getLogger } from '@stock-bot/logger'; +import type { IServiceContainer } from '@stock-bot/handlers'; +import { createExchangeService } from '../services/exchange.service'; +import { createSuccessResponse, handleError } from '../utils/error-handler'; +import { + validateCreateExchange, + validateCreateProviderMapping, + validateUpdateExchange, + validateUpdateProviderMapping, +} from '../utils/validation'; + +const logger = getLogger('exchange-routes'); + +export function createExchangeRoutes(container: IServiceContainer) { + const exchangeRoutes = new Hono(); + const exchangeService = createExchangeService(container); + + // Get all exchanges with provider mapping counts and mappings + exchangeRoutes.get('/', async c => { + logger.debug('Getting all exchanges'); + try { + const exchanges = await exchangeService.getAllExchanges(); + logger.info('Successfully retrieved exchanges', { count: exchanges.length }); + return c.json(createSuccessResponse(exchanges, undefined, exchanges.length)); + } catch (error) { + logger.error('Failed to get exchanges', { error }); + return 
handleError(c, error, 'to get exchanges'); + } + }); + + // Get exchange by ID with detailed provider mappings + exchangeRoutes.get('/:id', async c => { + const exchangeId = c.req.param('id'); + logger.debug('Getting exchange by ID', { exchangeId }); + + try { + const result = await exchangeService.getExchangeById(exchangeId); + + if (!result) { + logger.warn('Exchange not found', { exchangeId }); + return c.json(createSuccessResponse(null, 'Exchange not found'), 404); + } + + logger.info('Successfully retrieved exchange details', { + exchangeId, + exchangeCode: result.exchange.code, + mappingCount: result.provider_mappings.length, + }); + return c.json(createSuccessResponse(result)); + } catch (error) { + logger.error('Failed to get exchange details', { error, exchangeId }); + return handleError(c, error, 'to get exchange details'); + } + }); + + // Create new exchange + exchangeRoutes.post('/', async c => { + logger.debug('Creating new exchange'); + + try { + const body = await c.req.json(); + logger.debug('Received exchange creation request', { requestBody: body }); + + const validatedData = validateCreateExchange(body); + logger.debug('Exchange data validated successfully', { validatedData }); + + const exchange = await exchangeService.createExchange(validatedData); + logger.info('Exchange created successfully', { + exchangeId: exchange.id, + code: exchange.code, + name: exchange.name, + }); + + return c.json(createSuccessResponse(exchange, 'Exchange created successfully'), 201); + } catch (error) { + logger.error('Failed to create exchange', { error }); + return handleError(c, error, 'to create exchange'); + } + }); + + // Update exchange (activate/deactivate, rename, etc.) 
+ exchangeRoutes.patch('/:id', async c => { + const exchangeId = c.req.param('id'); + logger.debug('Updating exchange', { exchangeId }); + + try { + const body = await c.req.json(); + logger.debug('Received exchange update request', { exchangeId, updates: body }); + + const validatedUpdates = validateUpdateExchange(body); + logger.debug('Exchange update data validated', { exchangeId, validatedUpdates }); + + const exchange = await exchangeService.updateExchange(exchangeId, validatedUpdates); + + if (!exchange) { + logger.warn('Exchange not found for update', { exchangeId }); + return c.json(createSuccessResponse(null, 'Exchange not found'), 404); + } + + logger.info('Exchange updated successfully', { + exchangeId, + code: exchange.code, + updates: validatedUpdates, + }); + + // Log special actions + if (validatedUpdates.visible === false) { + logger.warn('Exchange marked as hidden - provider mappings will be deleted', { + exchangeId, + code: exchange.code, + }); + } + + return c.json(createSuccessResponse(exchange, 'Exchange updated successfully')); + } catch (error) { + logger.error('Failed to update exchange', { error, exchangeId }); + return handleError(c, error, 'to update exchange'); + } + }); + + // Get all provider mappings + exchangeRoutes.get('/provider-mappings/all', async c => { + logger.debug('Getting all provider mappings'); + + try { + const mappings = await exchangeService.getAllProviderMappings(); + logger.info('Successfully retrieved all provider mappings', { count: mappings.length }); + return c.json(createSuccessResponse(mappings, undefined, mappings.length)); + } catch (error) { + logger.error('Failed to get provider mappings', { error }); + return handleError(c, error, 'to get provider mappings'); + } + }); + + // Get provider mappings by provider + exchangeRoutes.get('/provider-mappings/:provider', async c => { + const provider = c.req.param('provider'); + logger.debug('Getting provider mappings by provider', { provider }); + + try { + const 
mappings = await exchangeService.getProviderMappingsByProvider(provider); + logger.info('Successfully retrieved provider mappings', { provider, count: mappings.length }); + + return c.json(createSuccessResponse(mappings, undefined, mappings.length)); + } catch (error) { + logger.error('Failed to get provider mappings', { error, provider }); + return handleError(c, error, 'to get provider mappings'); + } + }); + + // Update provider mapping (activate/deactivate, verify, change confidence) + exchangeRoutes.patch('/provider-mappings/:id', async c => { + const mappingId = c.req.param('id'); + logger.debug('Updating provider mapping', { mappingId }); + + try { + const body = await c.req.json(); + logger.debug('Received provider mapping update request', { mappingId, updates: body }); + + const validatedUpdates = validateUpdateProviderMapping(body); + logger.debug('Provider mapping update data validated', { mappingId, validatedUpdates }); + + const mapping = await exchangeService.updateProviderMapping(mappingId, validatedUpdates); + + if (!mapping) { + logger.warn('Provider mapping not found for update', { mappingId }); + return c.json(createSuccessResponse(null, 'Provider mapping not found'), 404); + } + + logger.info('Provider mapping updated successfully', { + mappingId, + provider: mapping.provider, + providerExchangeCode: mapping.provider_exchange_code, + updates: validatedUpdates, + }); + + return c.json(createSuccessResponse(mapping, 'Provider mapping updated successfully')); + } catch (error) { + logger.error('Failed to update provider mapping', { error, mappingId }); + return handleError(c, error, 'to update provider mapping'); + } + }); + + // Create new provider mapping + exchangeRoutes.post('/provider-mappings', async c => { + logger.debug('Creating new provider mapping'); + + try { + const body = await c.req.json(); + logger.debug('Received provider mapping creation request', { requestBody: body }); + + const validatedData = 
validateCreateProviderMapping(body); + logger.debug('Provider mapping data validated successfully', { validatedData }); + + const mapping = await exchangeService.createProviderMapping(validatedData); + logger.info('Provider mapping created successfully', { + mappingId: mapping.id, + provider: mapping.provider, + providerExchangeCode: mapping.provider_exchange_code, + masterExchangeId: mapping.master_exchange_id, + }); + + return c.json(createSuccessResponse(mapping, 'Provider mapping created successfully'), 201); + } catch (error) { + logger.error('Failed to create provider mapping', { error }); + return handleError(c, error, 'to create provider mapping'); + } + }); + + // Get all available providers + exchangeRoutes.get('/providers/list', async c => { + logger.debug('Getting providers list'); + + try { + const providers = await exchangeService.getProviders(); + logger.info('Successfully retrieved providers list', { count: providers.length, providers }); + return c.json(createSuccessResponse(providers)); + } catch (error) { + logger.error('Failed to get providers list', { error }); + return handleError(c, error, 'to get providers list'); + } + }); + + // Get unmapped provider exchanges by provider + exchangeRoutes.get('/provider-exchanges/unmapped/:provider', async c => { + const provider = c.req.param('provider'); + logger.debug('Getting unmapped provider exchanges', { provider }); + + try { + const exchanges = await exchangeService.getUnmappedProviderExchanges(provider); + logger.info('Successfully retrieved unmapped provider exchanges', { + provider, + count: exchanges.length, + }); + + return c.json(createSuccessResponse(exchanges, undefined, exchanges.length)); + } catch (error) { + logger.error('Failed to get unmapped provider exchanges', { error, provider }); + return handleError(c, error, 'to get unmapped provider exchanges'); + } + }); + + // Get exchange statistics + exchangeRoutes.get('/stats/summary', async c => { + logger.debug('Getting exchange 
statistics'); + + try { + const stats = await exchangeService.getExchangeStats(); + logger.info('Successfully retrieved exchange statistics', { stats }); + return c.json(createSuccessResponse(stats)); + } catch (error) { + logger.error('Failed to get exchange statistics', { error }); + return handleError(c, error, 'to get exchange statistics'); + } + }); + + return exchangeRoutes; +} \ No newline at end of file diff --git a/apps/stock/web-api/src/routes/health.routes.ts b/apps/stock/web-api/src/routes/health.routes.ts new file mode 100644 index 0000000..3398c21 --- /dev/null +++ b/apps/stock/web-api/src/routes/health.routes.ts @@ -0,0 +1,111 @@ +/** + * Health check routes factory + */ +import { Hono } from 'hono'; +import { getLogger } from '@stock-bot/logger'; +import type { IServiceContainer } from '@stock-bot/handlers'; + +const logger = getLogger('health-routes'); + +export function createHealthRoutes(container: IServiceContainer) { + const healthRoutes = new Hono(); + + // Basic health check + healthRoutes.get('/', c => { + logger.debug('Basic health check requested'); + + const response = { + status: 'healthy', + service: 'web-api', + timestamp: new Date().toISOString(), + }; + + logger.info('Basic health check successful', { status: response.status }); + return c.json(response); + }); + + // Detailed health check with database connectivity + healthRoutes.get('/detailed', async c => { + logger.debug('Detailed health check requested'); + + const health = { + status: 'healthy', + service: 'web-api', + timestamp: new Date().toISOString(), + checks: { + mongodb: { status: 'unknown', message: '' }, + postgresql: { status: 'unknown', message: '' }, + }, + }; + + // Check MongoDB + logger.debug('Checking MongoDB connectivity'); + try { + const mongoClient = container.mongodb; + if (mongoClient && mongoClient.connected) { + // Try a simple operation + const db = mongoClient.getDatabase(); + await db.admin().ping(); + health.checks.mongodb = { status: 'healthy', 
message: 'Connected and responsive' }; + logger.debug('MongoDB health check passed'); + } else { + health.checks.mongodb = { status: 'unhealthy', message: 'Not connected' }; + logger.warn('MongoDB health check failed - not connected'); + } + } catch (error) { + const errorMessage = error instanceof Error ? error.message : 'Unknown error'; + health.checks.mongodb = { + status: 'unhealthy', + message: errorMessage, + }; + logger.error('MongoDB health check failed', { error: errorMessage }); + } + + // Check PostgreSQL + logger.debug('Checking PostgreSQL connectivity'); + try { + const postgresClient = container.postgres; + if (postgresClient) { + await postgresClient.query('SELECT 1'); + health.checks.postgresql = { status: 'healthy', message: 'Connected and responsive' }; + logger.debug('PostgreSQL health check passed'); + } else { + health.checks.postgresql = { status: 'unhealthy', message: 'PostgreSQL client not available' }; + logger.warn('PostgreSQL health check failed - client not available'); + } + } catch (error) { + const errorMessage = error instanceof Error ? error.message : 'Unknown error'; + health.checks.postgresql = { + status: 'unhealthy', + message: errorMessage, + }; + logger.error('PostgreSQL health check failed', { error: errorMessage }); + } + + // Overall status + const allHealthy = Object.values(health.checks).every(check => check.status === 'healthy'); + health.status = allHealthy ? 'healthy' : 'unhealthy'; + + const statusCode = allHealthy ? 
200 : 503; + + if (allHealthy) { + logger.info('Detailed health check successful - all systems healthy', { + mongodb: health.checks.mongodb.status, + postgresql: health.checks.postgresql.status, + }); + } else { + logger.warn('Detailed health check failed - some systems unhealthy', { + mongodb: health.checks.mongodb.status, + postgresql: health.checks.postgresql.status, + overallStatus: health.status, + }); + } + + return c.json(health, statusCode); + }); + + return healthRoutes; +} + +// Legacy export for backward compatibility during migration; built with an empty container, so '/detailed' reports both databases as unavailable until callers switch to createHealthRoutes() +export const healthRoutes = createHealthRoutes({} as IServiceContainer); \ No newline at end of file diff --git a/apps/web-api/src/routes/index.ts b/apps/stock/web-api/src/routes/index.ts similarity index 63% rename from apps/web-api/src/routes/index.ts rename to apps/stock/web-api/src/routes/index.ts index 8a1e802..61bb46e 100644 --- a/apps/web-api/src/routes/index.ts +++ b/apps/stock/web-api/src/routes/index.ts @@ -1,5 +1,5 @@ /** * Routes index - exports all route modules */ -export { exchangeRoutes } from './exchange.routes'; +export { createExchangeRoutes } from './exchange.routes'; export { healthRoutes } from './health.routes'; diff --git a/apps/stock/web-api/src/routes/monitoring.routes.ts b/apps/stock/web-api/src/routes/monitoring.routes.ts new file mode 100644 index 0000000..89be314 --- /dev/null +++ b/apps/stock/web-api/src/routes/monitoring.routes.ts @@ -0,0 +1,259 @@ +/** + * Monitoring routes for system health and metrics + */ + +import { Hono } from 'hono'; +import type { IServiceContainer } from '@stock-bot/handlers'; +import { MonitoringService } from '../services/monitoring.service'; + +export function createMonitoringRoutes(container: IServiceContainer) { + const monitoring = new Hono(); + const monitoringService = new MonitoringService(container); + + /** + * Get overall system health + */ + monitoring.get('/', async (c) => { + try { + const health = await monitoringService.getSystemHealth(); + 
// Set appropriate status code based on health + const statusCode = health.status === 'healthy' ? 200 : + health.status === 'degraded' ? 503 : 500; + + return c.json(health, statusCode); + } catch (error) { + return c.json({ + status: 'error', + message: 'Failed to retrieve system health', + error: error instanceof Error ? error.message : 'Unknown error', + }, 500); + } + }); + + /** + * Get cache/Dragonfly statistics + */ + monitoring.get('/cache', async (c) => { + try { + const stats = await monitoringService.getCacheStats(); + return c.json(stats); + } catch (error) { + return c.json({ + error: 'Failed to retrieve cache statistics', + message: error instanceof Error ? error.message : 'Unknown error', + }, 500); + } + }); + + /** + * Get queue statistics + */ + monitoring.get('/queues', async (c) => { + try { + const stats = await monitoringService.getQueueStats(); + return c.json({ queues: stats }); + } catch (error) { + return c.json({ + error: 'Failed to retrieve queue statistics', + message: error instanceof Error ? error.message : 'Unknown error', + }, 500); + } + }); + + /** + * Get specific queue statistics + */ + monitoring.get('/queues/:name', async (c) => { + try { + const queueName = c.req.param('name'); + const stats = await monitoringService.getQueueStats(); + const queueStats = stats.find(q => q.name === queueName); + + if (!queueStats) { + return c.json({ + error: 'Queue not found', + message: `Queue '${queueName}' does not exist`, + }, 404); + } + + return c.json(queueStats); + } catch (error) { + return c.json({ + error: 'Failed to retrieve queue statistics', + message: error instanceof Error ? 
error.message : 'Unknown error', + }, 500); + } + }); + + /** + * Get database statistics + */ + monitoring.get('/databases', async (c) => { + try { + const stats = await monitoringService.getDatabaseStats(); + return c.json({ databases: stats }); + } catch (error) { + return c.json({ + error: 'Failed to retrieve database statistics', + message: error instanceof Error ? error.message : 'Unknown error', + }, 500); + } + }); + + /** + * Get specific database statistics + */ + monitoring.get('/databases/:type', async (c) => { + try { + const dbType = c.req.param('type') as 'postgres' | 'mongodb' | 'questdb'; + const stats = await monitoringService.getDatabaseStats(); + const dbStats = stats.find(db => db.type === dbType); + + if (!dbStats) { + return c.json({ + error: 'Database not found', + message: `Database type '${dbType}' not found or not enabled`, + }, 404); + } + + return c.json(dbStats); + } catch (error) { + return c.json({ + error: 'Failed to retrieve database statistics', + message: error instanceof Error ? error.message : 'Unknown error', + }, 500); + } + }); + + /** + * Get service metrics + */ + monitoring.get('/metrics', async (c) => { + try { + const metrics = await monitoringService.getServiceMetrics(); + return c.json(metrics); + } catch (error) { + return c.json({ + error: 'Failed to retrieve service metrics', + message: error instanceof Error ? error.message : 'Unknown error', + }, 500); + } + }); + + /** + * Get detailed cache info (Redis INFO command output) + */ + monitoring.get('/cache/info', async (c) => { + try { + if (!container.cache) { + return c.json({ + error: 'Cache not available', + message: 'Cache service is not enabled', + }, 503); + } + + const info = await container.cache.info(); + const stats = await monitoringService.getCacheStats(); + + return c.json({ + parsed: stats, + raw: info, + }); + } catch (error) { + return c.json({ + error: 'Failed to retrieve cache info', + message: error instanceof Error ? 
error.message : 'Unknown error', + }, 500); + } + }); + + /** + * Health check endpoint for monitoring + */ + monitoring.get('/ping', (c) => { + return c.json({ + status: 'ok', + timestamp: new Date().toISOString(), + service: 'monitoring', + }); + }); + + /** + * Get service status for all microservices + */ + monitoring.get('/services', async (c) => { + try { + const services = await monitoringService.getServiceStatus(); + return c.json({ services }); + } catch (error) { + return c.json({ + error: 'Failed to retrieve service status', + message: error instanceof Error ? error.message : 'Unknown error', + }, 500); + } + }); + + /** + * Get proxy statistics + */ + monitoring.get('/proxies', async (c) => { + try { + const stats = await monitoringService.getProxyStats(); + return c.json(stats || { enabled: false }); + } catch (error) { + return c.json({ + error: 'Failed to retrieve proxy statistics', + message: error instanceof Error ? error.message : 'Unknown error', + }, 500); + } + }); + + /** + * Get comprehensive system overview + */ + monitoring.get('/overview', async (c) => { + try { + const overview = await monitoringService.getSystemOverview(); + return c.json(overview); + } catch (error) { + return c.json({ + error: 'Failed to retrieve system overview', + message: error instanceof Error ? 
error.message : 'Unknown error', + }, 500); + } + }); + + /** + * Test direct BullMQ queue access + */ + monitoring.get('/test/queue/:name', async (c) => { + const queueName = c.req.param('name'); + const { Queue } = await import('bullmq'); + + const connection = { + host: 'localhost', + port: 6379, + db: 0, // All queues in DB 0 + }; + + const queue = new Queue(queueName, { connection }); + + try { + const counts = await queue.getJobCounts(); + await queue.close(); + return c.json({ + queueName, + counts + }); + } catch (error) { + await queue.close(); + return c.json({ + queueName, + error: error instanceof Error ? error.message : 'Unknown error' + }, 500); + } + }); + + return monitoring; +} \ No newline at end of file diff --git a/apps/stock/web-api/src/routes/pipeline.routes.ts b/apps/stock/web-api/src/routes/pipeline.routes.ts new file mode 100644 index 0000000..1e19fc2 --- /dev/null +++ b/apps/stock/web-api/src/routes/pipeline.routes.ts @@ -0,0 +1,135 @@ +/** + * Pipeline Routes + * API endpoints for data pipeline operations + */ + +import { Hono } from 'hono'; +import type { IServiceContainer } from '@stock-bot/handlers'; +import { getLogger } from '@stock-bot/logger'; +import { PipelineService } from '../services/pipeline.service'; + +const logger = getLogger('pipeline-routes'); + +export function createPipelineRoutes(container: IServiceContainer) { + const pipeline = new Hono(); + const pipelineService = new PipelineService(container); + + // Symbol sync endpoints + pipeline.post('/symbols', async c => { + try { + const result = await pipelineService.syncQMSymbols(); + return c.json(result, result.success ? 200 : 503); + } catch (error) { + logger.error('Error in POST /symbols', { error }); + return c.json({ success: false, error: 'Internal server error' }, 500); + } + }); + + pipeline.post('/symbols/:provider', async c => { + try { + const provider = c.req.param('provider'); + const result = await pipelineService.syncProviderSymbols(provider); + return c.json(result, result.success ?
200 : 503); + } catch (error) { + logger.error('Error in POST /symbols/:provider', { error }); + return c.json({ success: false, error: 'Internal server error' }, 500); + } + }); + + // Exchange sync endpoints + pipeline.post('/exchanges', async c => { + try { + const result = await pipelineService.syncQMExchanges(); + return c.json(result, result.success ? 200 : 503); + } catch (error) { + logger.error('Error in POST /exchanges', { error }); + return c.json({ success: false, error: 'Internal server error' }, 500); + } + }); + + pipeline.post('/exchanges/all', async c => { + try { + const clearFirst = c.req.query('clear') === 'true'; + const result = await pipelineService.syncAllExchanges(clearFirst); + return c.json(result, result.success ? 200 : 503); + } catch (error) { + logger.error('Error in POST /exchanges/all', { error }); + return c.json({ success: false, error: 'Internal server error' }, 500); + } + }); + + // Provider mapping sync endpoints + pipeline.post('/provider-mappings/qm', async c => { + try { + const result = await pipelineService.syncQMProviderMappings(); + return c.json(result, result.success ? 200 : 503); + } catch (error) { + logger.error('Error in POST /provider-mappings/qm', { error }); + return c.json({ success: false, error: 'Internal server error' }, 500); + } + }); + + pipeline.post('/provider-mappings/ib', async c => { + try { + const result = await pipelineService.syncIBExchanges(); + return c.json(result, result.success ? 200 : 503); + } catch (error) { + logger.error('Error in POST /provider-mappings/ib', { error }); + return c.json({ success: false, error: 'Internal server error' }, 500); + } + }); + + // Status endpoint + pipeline.get('/status', async c => { + try { + const result = await pipelineService.getSyncStatus(); + return c.json(result, result.success ? 
200 : 503); + } catch (error) { + logger.error('Error in GET /status', { error }); + return c.json({ success: false, error: 'Internal server error' }, 500); + } + }); + + // Clear data endpoint + pipeline.post('/clear/postgresql', async c => { + try { + const dataType = c.req.query('type') as 'exchanges' | 'provider_mappings' | 'all'; + const result = await pipelineService.clearPostgreSQLData(dataType || 'all'); + return c.json(result, result.success ? 200 : 503); + } catch (error) { + logger.error('Error in POST /clear/postgresql', { error }); + return c.json({ success: false, error: 'Internal server error' }, 500); + } + }); + + // Statistics endpoints + pipeline.get('/stats/exchanges', async c => { + try { + const result = await pipelineService.getExchangeStats(); + if (result.success) { + return c.json(result.data); + } else { + return c.json({ error: result.error }, 503); + } + } catch (error) { + logger.error('Error in GET /stats/exchanges', { error }); + return c.json({ error: 'Internal server error' }, 500); + } + }); + + pipeline.get('/stats/provider-mappings', async c => { + try { + const result = await pipelineService.getProviderMappingStats(); + if (result.success) { + return c.json(result.data); + } else { + return c.json({ error: result.error }, 503); + } + } catch (error) { + logger.error('Error in GET /stats/provider-mappings', { error }); + return c.json({ error: 'Internal server error' }, 500); + } + }); + + return pipeline; +} \ No newline at end of file diff --git a/apps/web-api/src/services/exchange.service.ts b/apps/stock/web-api/src/services/exchange.service.ts similarity index 92% rename from apps/web-api/src/services/exchange.service.ts rename to apps/stock/web-api/src/services/exchange.service.ts index 8acbb0e..cb48694 100644 --- a/apps/web-api/src/services/exchange.service.ts +++ b/apps/stock/web-api/src/services/exchange.service.ts @@ -1,26 +1,28 @@ import { getLogger } from '@stock-bot/logger'; -import { getPostgreSQLClient, 
getMongoDBClient } from '../clients'; +import type { IServiceContainer } from '@stock-bot/handlers'; import { - Exchange, - ExchangeWithMappings, - ProviderMapping, CreateExchangeRequest, - UpdateExchangeRequest, CreateProviderMappingRequest, - UpdateProviderMappingRequest, - ProviderExchange, + Exchange, ExchangeStats, + ExchangeWithMappings, + ProviderExchange, + ProviderMapping, + UpdateExchangeRequest, + UpdateProviderMappingRequest, } from '../types/exchange.types'; const logger = getLogger('exchange-service'); export class ExchangeService { + constructor(private container: IServiceContainer) {} + private get postgresClient() { - return getPostgreSQLClient(); + return this.container.postgres; } - + private get mongoClient() { - return getMongoDBClient(); + return this.container.mongodb; } // Exchanges @@ -63,14 +65,17 @@ export class ExchangeService { const mappingsResult = await this.postgresClient.query(mappingsQuery); // Group mappings by exchange ID - const mappingsByExchange = mappingsResult.rows.reduce((acc, mapping) => { - const exchangeId = mapping.master_exchange_id; - if (!acc[exchangeId]) { - acc[exchangeId] = []; - } - acc[exchangeId].push(mapping); - return acc; - }, {} as Record); + const mappingsByExchange = mappingsResult.rows.reduce( + (acc, mapping) => { + const exchangeId = mapping.master_exchange_id; + if (!acc[exchangeId]) { + acc[exchangeId] = []; + } + acc[exchangeId].push(mapping); + return acc; + }, + {} as Record + ); // Attach mappings to exchanges return exchangesResult.rows.map(exchange => ({ @@ -79,7 +84,9 @@ export class ExchangeService { })); } - async getExchangeById(id: string): Promise<{ exchange: Exchange; provider_mappings: ProviderMapping[] } | null> { + async getExchangeById( + id: string + ): Promise<{ exchange: Exchange; provider_mappings: ProviderMapping[] } | null> { const exchangeQuery = 'SELECT * FROM exchanges WHERE id = $1 AND visible = true'; const exchangeResult = await this.postgresClient.query(exchangeQuery, 
[id]); @@ -230,7 +237,10 @@ export class ExchangeService { return result.rows[0]; } - async updateProviderMapping(id: string, updates: UpdateProviderMappingRequest): Promise<ProviderMapping | null> { + async updateProviderMapping( + id: string, + updates: UpdateProviderMappingRequest + ): Promise<ProviderMapping | null> { const updateFields = []; const values = []; let paramIndex = 1; @@ -359,7 +369,6 @@ export class ExchangeService { break; } - default: throw new Error(`Unknown provider: ${provider}`); } @@ -368,5 +377,7 @@ } } -// Export singleton instance -export const exchangeService = new ExchangeService(); \ No newline at end of file +// Export function to create service instance with container +export function createExchangeService(container: IServiceContainer): ExchangeService { + return new ExchangeService(container); +} \ No newline at end of file diff --git a/apps/stock/web-api/src/services/monitoring.service.ts b/apps/stock/web-api/src/services/monitoring.service.ts new file mode 100644 index 0000000..bf42e57 --- /dev/null +++ b/apps/stock/web-api/src/services/monitoring.service.ts @@ -0,0 +1,787 @@ +/** + * Monitoring Service + * Collects health and performance metrics from all system components + */ + +import type { IServiceContainer } from '@stock-bot/handlers'; +import { getLogger } from '@stock-bot/logger'; +import type { + CacheStats, + QueueStats, + DatabaseStats, + SystemHealth, + ServiceMetrics, + ServiceStatus, + ProxyStats, + SystemOverview +} from '../types/monitoring.types'; +import * as os from 'os'; + +export class MonitoringService { + private readonly logger = getLogger('monitoring-service'); + private startTime = Date.now(); + + constructor(private readonly container: IServiceContainer) {} + + /** + * Get cache/Dragonfly statistics + */ + async getCacheStats(): Promise<CacheStats> { + try { + if (!this.container.cache) { + return { + provider: 'dragonfly', + connected: false, + }; + } + + // Check if cache is connected using the isReady method + const isConnected
= this.container.cache.isReady(); + if (!isConnected) { + return { + provider: 'dragonfly', + connected: false, + }; + } + + // Get cache stats from the provider + const cacheStats = this.container.cache.getStats(); + + // Since we can't access Redis info directly, we'll use what's available + return { + provider: 'dragonfly', + connected: true, + uptime: cacheStats.uptime, + stats: { + hits: cacheStats.hits, + misses: cacheStats.misses, + keys: 0, // We can't get total keys without direct Redis access + evictedKeys: 0, + expiredKeys: 0, + }, + }; + } catch (error) { + this.logger.error('Failed to get cache stats', { error }); + return { + provider: 'dragonfly', + connected: false, + }; + } + } + + /** + * Get queue statistics + */ + async getQueueStats(): Promise<QueueStats[]> { + const stats: QueueStats[] = []; + + try { + if (!this.container.queue) { + this.logger.warn('No queue manager available'); + return stats; + } + + // Get all queue names from the SmartQueueManager + const queueManager = this.container.queue as any; + this.logger.debug('Queue manager type', { + type: queueManager.constructor.name, + hasGetAllQueues: typeof queueManager.getAllQueues === 'function', + hasQueues: !!queueManager.queues, + hasGetQueue: typeof queueManager.getQueue === 'function' + }); + + // Always use the known queue names since web-api doesn't create worker queues + const handlerMapping = { + 'proxy': 'data-ingestion', + 'qm': 'data-ingestion', + 'ib': 'data-ingestion', + 'ceo': 'data-ingestion', + 'webshare': 'data-ingestion', + 'exchanges': 'data-pipeline', + 'symbols': 'data-pipeline', + }; + + const queueNames = Object.keys(handlerMapping); + this.logger.debug('Using known queue names', { count: queueNames.length, names: queueNames }); + + // Create BullMQ queues directly with the correct format + for (const handlerName of queueNames) { + try { + // Import BullMQ directly to create queue instances + const { Queue: BullMQQueue } = await import('bullmq'); + const connection = { + host:
'localhost', + port: 6379, + db: 0, // All queues now in DB 0 + }; + + // Get the service that owns this handler + const serviceName = handlerMapping[handlerName as keyof typeof handlerMapping]; + + // Create BullMQ queue with the new naming format {service_handler} + const fullQueueName = `{${serviceName}_${handlerName}}`; + const bullQueue = new BullMQQueue(fullQueueName, { connection }); + + // Get stats directly from BullMQ + const queueStats = await this.getQueueStatsForBullQueue(bullQueue, handlerName); + + stats.push({ + name: handlerName, + connected: true, + jobs: queueStats, + workers: { + count: 0, + concurrency: 1, + }, + }); + + // Close the queue connection after getting stats + await bullQueue.close(); + } catch (error) { + this.logger.warn(`Failed to get stats for queue ${handlerName}`, { error }); + stats.push({ + name: handlerName, + connected: false, + jobs: { + waiting: 0, + active: 0, + completed: 0, + failed: 0, + delayed: 0, + paused: 0, + }, + }); + } + } + } catch (error) { + this.logger.error('Failed to get queue stats', { error }); + } + + return stats; + } + + /** + * Get stats for a BullMQ queue directly + */ + private async getQueueStatsForBullQueue(bullQueue: any, queueName: string) { + try { + // BullMQ provides getJobCounts which returns all counts at once + const counts = await bullQueue.getJobCounts(); + + return { + waiting: counts.waiting || 0, + active: counts.active || 0, + completed: counts.completed || 0, + failed: counts.failed || 0, + delayed: counts.delayed || 0, + paused: counts.paused || 0, + prioritized: counts.prioritized || 0, + 'waiting-children': counts['waiting-children'] || 0, + }; + } catch (error) { + this.logger.error(`Failed to get stats for BullMQ queue ${queueName}`, { error }); + // Fallback to individual methods + try { + const [waiting, active, completed, failed, delayed, paused] = await Promise.all([ + bullQueue.getWaitingCount(), + bullQueue.getActiveCount(), + bullQueue.getCompletedCount(), + 
bullQueue.getFailedCount(), + bullQueue.getDelayedCount(), + bullQueue.getPausedCount ? bullQueue.getPausedCount() : 0 + ]); + + return { + waiting, + active, + completed, + failed, + delayed, + paused, + }; + } catch (fallbackError) { + this.logger.error(`Fallback also failed for queue ${queueName}`, { fallbackError }); + return this.getQueueStatsForQueue(bullQueue, queueName); + } + } + } + + /** + * Get stats for a specific queue + */ + private async getQueueStatsForQueue(queue: any, _queueName: string) { + // Check if it has the getStats method + if (queue.getStats && typeof queue.getStats === 'function') { + const stats = await queue.getStats(); + return { + waiting: stats.waiting || 0, + active: stats.active || 0, + completed: stats.completed || 0, + failed: stats.failed || 0, + delayed: stats.delayed || 0, + paused: stats.paused || 0, + }; + } + + // Try individual count methods + const [waiting, active, completed, failed, delayed] = await Promise.all([ + this.safeGetCount(queue, 'getWaitingCount', 'getWaiting'), + this.safeGetCount(queue, 'getActiveCount', 'getActive'), + this.safeGetCount(queue, 'getCompletedCount', 'getCompleted'), + this.safeGetCount(queue, 'getFailedCount', 'getFailed'), + this.safeGetCount(queue, 'getDelayedCount', 'getDelayed'), + ]); + + const paused = await this.safeGetPausedStatus(queue); + + return { + waiting, + active, + completed, + failed, + delayed, + paused, + }; + } + + /** + * Safely get count from queue + */ + private async safeGetCount(queue: any, ...methodNames: string[]): Promise { + for (const methodName of methodNames) { + if (queue[methodName] && typeof queue[methodName] === 'function') { + try { + const result = await queue[methodName](); + return Array.isArray(result) ? 
result.length : (result || 0); + } catch (_e) { + // Continue to next method + } + } + } + return 0; + } + + /** + * Get paused status + */ + private async safeGetPausedStatus(queue: any): Promise { + try { + if (queue.isPaused && typeof queue.isPaused === 'function') { + const isPaused = await queue.isPaused(); + return isPaused ? 1 : 0; + } + if (queue.getPausedCount && typeof queue.getPausedCount === 'function') { + return await queue.getPausedCount(); + } + } catch (_e) { + // Ignore + } + return 0; + } + + /** + * Get worker info for a queue + */ + private getWorkerInfo(queue: any, queueManager: any, _queueName: string) { + try { + // Check queue itself for worker info + if (queue.workers && Array.isArray(queue.workers)) { + return { + count: queue.workers.length, + concurrency: queue.workers[0]?.concurrency || 1, + }; + } + + // Check queue manager for worker config + if (queueManager.config?.defaultQueueOptions) { + const options = queueManager.config.defaultQueueOptions; + return { + count: options.workers || 1, + concurrency: options.concurrency || 1, + }; + } + + // Check for getWorkerCount method + if (queue.getWorkerCount && typeof queue.getWorkerCount === 'function') { + const count = queue.getWorkerCount(); + return { + count, + concurrency: 1, + }; + } + } catch (_e) { + // Ignore + } + + return undefined; + } + + /** + * Get database statistics + */ + async getDatabaseStats(): Promise { + const stats: DatabaseStats[] = []; + + // PostgreSQL stats + if (this.container.postgres) { + try { + const startTime = Date.now(); + const _result = await this.container.postgres.query('SELECT 1'); + const latency = Date.now() - startTime; + + // Get pool stats + const pool = (this.container.postgres as any).pool; + const poolStats = pool ? 
{ + size: pool.totalCount || 0, + // pg pool exposes totalCount/idleCount/waitingCount; active = total - idle + active: (pool.totalCount || 0) - (pool.idleCount || 0), + idle: pool.idleCount || 0, + max: pool.options?.max || 0, + } : undefined; + + stats.push({ + type: 'postgres', + name: 'PostgreSQL', + connected: true, + latency, + pool: poolStats, + }); + } catch (error) { + this.logger.error('Failed to get PostgreSQL stats', { error }); + stats.push({ + type: 'postgres', + name: 'PostgreSQL', + connected: false, + }); + } + } + + // MongoDB stats + if (this.container.mongodb) { + try { + const startTime = Date.now(); + const mongoClient = this.container.mongodb as any; // This is MongoDBClient + const db = mongoClient.getDatabase(); + await db.admin().ping(); + const latency = Date.now() - startTime; + + const serverStatus = await db.admin().serverStatus(); + + stats.push({ + type: 'mongodb', + name: 'MongoDB', + connected: true, + latency, + stats: { + version: serverStatus.version, + uptime: serverStatus.uptime, + connections: serverStatus.connections, + opcounters: serverStatus.opcounters, + }, + }); + } catch (error) { + this.logger.error('Failed to get MongoDB stats', { error }); + stats.push({ + type: 'mongodb', + name: 'MongoDB', + connected: false, + }); + } + } + + // QuestDB stats + if (this.container.questdb) { + try { + const startTime = Date.now(); + // QuestDB health check + const response = await fetch(`http://${process.env.QUESTDB_HOST || 'localhost'}:9000/exec?query=SELECT%201`); + const latency = Date.now() - startTime; + + stats.push({ + type: 'questdb', + name: 'QuestDB', + connected: response.ok, + latency, + }); + } catch (error) { + this.logger.error('Failed to get QuestDB stats', { error }); + stats.push({ + type: 'questdb', + name: 'QuestDB', + connected: false, + }); + } + } + + return stats; + } + + /** + * Get system health summary + */ + async getSystemHealth(): Promise<SystemHealth> { + const [cacheStats, queueStats, databaseStats] = await Promise.all([ + this.getCacheStats(), + this.getQueueStats(), + this.getDatabaseStats(), + ]); + + const
processMemory = process.memoryUsage(); + const systemMemory = this.getSystemMemory(); + const uptime = Date.now() - this.startTime; + const cpuInfo = this.getCpuUsage(); + + // Determine overall health status + const errors: string[] = []; + + if (!cacheStats.connected) { + errors.push('Cache service is disconnected'); + } + + const disconnectedQueues = queueStats.filter(q => !q.connected); + if (disconnectedQueues.length > 0) { + errors.push(`${disconnectedQueues.length} queue(s) are disconnected`); + } + + const disconnectedDbs = databaseStats.filter(db => !db.connected); + if (disconnectedDbs.length > 0) { + errors.push(`${disconnectedDbs.length} database(s) are disconnected`); + } + + const status = errors.length === 0 ? 'healthy' : + errors.length < 3 ? 'degraded' : 'unhealthy'; + + return { + status, + timestamp: new Date().toISOString(), + uptime, + memory: { + used: systemMemory.used, + total: systemMemory.total, + percentage: systemMemory.percentage, + heap: { + used: processMemory.heapUsed, + total: processMemory.heapTotal, + }, + }, + cpu: cpuInfo, + services: { + cache: cacheStats, + queues: queueStats, + databases: databaseStats, + }, + errors: errors.length > 0 ? errors : undefined, + }; + } + + /** + * Get service metrics (placeholder for future implementation) + */ + async getServiceMetrics(): Promise { + const now = new Date().toISOString(); + + return { + requestsPerSecond: { + timestamp: now, + value: 0, + unit: 'req/s', + }, + averageResponseTime: { + timestamp: now, + value: 0, + unit: 'ms', + }, + errorRate: { + timestamp: now, + value: 0, + unit: '%', + }, + activeConnections: { + timestamp: now, + value: 0, + unit: 'connections', + }, + }; + } + + /** + * Parse value from Redis INFO output + */ + private parseInfoValue(info: string, key: string): number { + const match = info.match(new RegExp(`${key}:(\\d+)`)); + return match ? 
parseInt(match[1], 10) : 0; + } + + /** + * Parse Redis INFO into structured object + */ + private parseRedisInfo(info: string): Record<string, Record<string, string>> { + const result: Record<string, Record<string, string>> = {}; + const sections = info.split('\r\n\r\n'); + + for (const section of sections) { + const lines = section.split('\r\n'); + const sectionName = lines[0]?.replace('# ', '') || 'general'; + result[sectionName] = {}; + + for (let i = 1; i < lines.length; i++) { + const [key, value] = lines[i].split(':'); + if (key && value) { + result[sectionName][key] = value; + } + } + } + + return result; + } + + /** + * Get service status for all microservices + */ + async getServiceStatus(): Promise<ServiceStatus[]> { + const services: ServiceStatus[] = []; + + // Define service endpoints + const serviceEndpoints = [ + { name: 'data-ingestion', port: 2001, path: '/health' }, + { name: 'data-pipeline', port: 2002, path: '/health' }, + { name: 'web-api', port: 2003, path: '/health' }, + ]; + + for (const service of serviceEndpoints) { + try { + // For the current service (web-api), add it directly without health check + if (service.name === 'web-api') { + services.push({ + name: 'web-api', + version: '1.0.0', + status: 'running', + port: process.env.PORT ?
parseInt(process.env.PORT) : 2003, + uptime: Date.now() - this.startTime, + lastCheck: new Date().toISOString(), + healthy: true, + }); + continue; + } + + const startTime = Date.now(); + const response = await fetch(`http://localhost:${service.port}${service.path}`, { + signal: AbortSignal.timeout(5000), // 5 second timeout + }); + const _latency = Date.now() - startTime; + + if (response.ok) { + const data = await response.json(); + services.push({ + name: service.name, + version: data.version || '1.0.0', + status: 'running', + port: service.port, + uptime: data.uptime || 0, + lastCheck: new Date().toISOString(), + healthy: true, + }); + } else { + services.push({ + name: service.name, + version: 'unknown', + status: 'error', + port: service.port, + uptime: 0, + lastCheck: new Date().toISOString(), + healthy: false, + error: `HTTP ${response.status}`, + }); + } + } catch (error) { + services.push({ + name: service.name, + version: 'unknown', + status: 'stopped', + port: service.port, + uptime: 0, + lastCheck: new Date().toISOString(), + healthy: false, + error: error instanceof Error ? 
error.message : 'Connection failed', + }); + } + } + + return services; + } + + /** + * Get proxy statistics + */ + async getProxyStats(): Promise { + try { + // Since web-api doesn't have proxy manager, query the cache directly + // The proxy manager stores data with cache:proxy: prefix + if (!this.container.cache) { + return { + enabled: false, + totalProxies: 0, + workingProxies: 0, + failedProxies: 0, + }; + } + + try { + // Get proxy data from cache using getRaw method + // The proxy manager uses cache:proxy: prefix, but web-api cache uses cache:api: + const cacheProvider = this.container.cache; + + if (cacheProvider.getRaw) { + // Use getRaw to access data with different cache prefix + // The proxy manager now uses a global cache:proxy: prefix + this.logger.debug('Attempting to fetch proxy data from cache'); + + const [cachedProxies, lastUpdateStr] = await Promise.all([ + cacheProvider.getRaw('cache:proxy:active'), + cacheProvider.getRaw('cache:proxy:last-update') + ]); + + this.logger.debug('Proxy cache data retrieved', { + hasProxies: !!cachedProxies, + isArray: Array.isArray(cachedProxies), + proxyCount: cachedProxies ? 
cachedProxies.length : 0, + lastUpdate: lastUpdateStr + }); + + if (cachedProxies && Array.isArray(cachedProxies)) { + const workingCount = cachedProxies.filter((p: any) => p.isWorking !== false).length; + const failedCount = cachedProxies.filter((p: any) => p.isWorking === false).length; + + return { + enabled: true, + totalProxies: cachedProxies.length, + workingProxies: workingCount, + failedProxies: failedCount, + lastUpdate: lastUpdateStr || undefined, + }; + } + } else { + this.logger.debug('Cache provider does not support getRaw method'); + } + + // No cached data found - proxies might not be initialized yet + return { + enabled: true, + totalProxies: 0, + workingProxies: 0, + failedProxies: 0, + }; + } catch (cacheError) { + this.logger.debug('Could not retrieve proxy data from cache', { error: cacheError }); + + // Return basic stats if cache query fails + return { + enabled: true, + totalProxies: 0, + workingProxies: 0, + failedProxies: 0, + }; + } + } catch (error) { + this.logger.error('Failed to get proxy stats', { error }); + return null; + } + } + + /** + * Get comprehensive system overview + */ + async getSystemOverview(): Promise { + const [services, health, proxies] = await Promise.all([ + this.getServiceStatus(), + this.getSystemHealth(), + this.getProxyStats(), + ]); + + return { + services, + health, + proxies: proxies || undefined, + environment: { + nodeVersion: process.version, + platform: os.platform(), + architecture: os.arch(), + hostname: os.hostname(), + }, + }; + } + + /** + * Get detailed CPU usage + */ + private getCpuUsage() { + const cpus = os.cpus(); + let totalIdle = 0; + let totalTick = 0; + + cpus.forEach(cpu => { + for (const type in cpu.times) { + totalTick += cpu.times[type as keyof typeof cpu.times]; + } + totalIdle += cpu.times.idle; + }); + + const idle = totalIdle / cpus.length; + const total = totalTick / cpus.length; + const usage = 100 - ~~(100 * idle / total); + + return { + usage, + loadAverage: os.loadavg(), + 
cores: cpus.length, + }; + } + + /** + * Get system memory info + */ + private getSystemMemory() { + const totalMem = os.totalmem(); + const freeMem = os.freemem(); + + // On Linux, freeMem includes buffers/cache, but we want "available" memory + // which better represents memory that can be used by applications + let availableMem = freeMem; + + // Try to read from /proc/meminfo for more accurate memory stats on Linux + if (os.platform() === 'linux') { + try { + const fs = require('fs'); + const meminfo = fs.readFileSync('/proc/meminfo', 'utf8'); + const lines = meminfo.split('\n'); + + let memAvailable = 0; + let _memTotal = 0; + + for (const line of lines) { + if (line.startsWith('MemAvailable:')) { + memAvailable = parseInt(line.split(/\s+/)[1], 10) * 1024; // Convert from KB to bytes + } else if (line.startsWith('MemTotal:')) { + _memTotal = parseInt(line.split(/\s+/)[1], 10) * 1024; + } + } + + if (memAvailable > 0) { + availableMem = memAvailable; + } + } catch (error) { + // Fallback to os.freemem() if we can't read /proc/meminfo + this.logger.debug('Could not read /proc/meminfo', { error }); + } + } + + const usedMem = totalMem - availableMem; + + return { + total: totalMem, + used: usedMem, + free: freeMem, + available: availableMem, + percentage: (usedMem / totalMem) * 100, + }; + } +} \ No newline at end of file diff --git a/apps/stock/web-api/src/services/pipeline.service.ts b/apps/stock/web-api/src/services/pipeline.service.ts new file mode 100644 index 0000000..f95906f --- /dev/null +++ b/apps/stock/web-api/src/services/pipeline.service.ts @@ -0,0 +1,335 @@ +/** + * Pipeline Service + * Manages data pipeline operations by queuing jobs for the data-pipeline service + */ + +import type { IServiceContainer } from '@stock-bot/handlers'; +import { getLogger } from '@stock-bot/logger'; + +const logger = getLogger('pipeline-service'); + +export interface PipelineJobResult { + success: boolean; + jobId?: string; + message?: string; + error?: string; + data?: 
any; +} + +export interface PipelineStatsResult { + success: boolean; + data?: any; + error?: string; +} + +export class PipelineService { + constructor(private container: IServiceContainer) {} + + /** + * Queue a job to sync symbols from QuestionsAndMethods + */ + async syncQMSymbols(): Promise<PipelineJobResult> { + try { + const queueManager = this.container.queue; + if (!queueManager) { + return { success: false, error: 'Queue manager not available' }; + } + + const symbolsQueue = queueManager.getQueue('symbols'); + const job = await symbolsQueue.addJob('sync-qm-symbols', { + handler: 'symbols', + operation: 'sync-qm-symbols', + payload: {}, + }); + + logger.info('QM symbols sync job queued', { jobId: job.id }); + return { success: true, jobId: job.id, message: 'QM symbols sync job queued' }; + } catch (error) { + logger.error('Failed to queue QM symbols sync job', { error }); + return { + success: false, + error: error instanceof Error ? error.message : 'Failed to queue sync job', + }; + } + } + + /** + * Queue a job to sync exchanges from QuestionsAndMethods + */ + async syncQMExchanges(): Promise<PipelineJobResult> { + try { + const queueManager = this.container.queue; + if (!queueManager) { + return { success: false, error: 'Queue manager not available' }; + } + + const exchangesQueue = queueManager.getQueue('exchanges'); + const job = await exchangesQueue.addJob('sync-qm-exchanges', { + handler: 'exchanges', + operation: 'sync-qm-exchanges', + payload: {}, + }); + + logger.info('QM exchanges sync job queued', { jobId: job.id }); + return { success: true, jobId: job.id, message: 'QM exchanges sync job queued' }; + } catch (error) { + logger.error('Failed to queue QM exchanges sync job', { error }); + return { + success: false, + error: error instanceof Error ?
error.message : 'Failed to queue sync job', + }; + } + } + + /** + * Queue a job to sync symbols from a specific provider + */ + async syncProviderSymbols(provider: string): Promise<PipelineJobResult> { + try { + const queueManager = this.container.queue; + if (!queueManager) { + return { success: false, error: 'Queue manager not available' }; + } + + const symbolsQueue = queueManager.getQueue('symbols'); + const job = await symbolsQueue.addJob('sync-symbols-from-provider', { + handler: 'symbols', + operation: 'sync-symbols-from-provider', + payload: { provider }, + }); + + logger.info(`${provider} symbols sync job queued`, { jobId: job.id, provider }); + return { + success: true, + jobId: job.id, + message: `${provider} symbols sync job queued`, + }; + } catch (error) { + logger.error('Failed to queue provider symbols sync job', { error, provider }); + return { + success: false, + error: error instanceof Error ? error.message : 'Failed to queue sync job', + }; + } + } + + /** + * Queue a job to sync all exchanges + */ + async syncAllExchanges(clearFirst: boolean = false): Promise<PipelineJobResult> { + try { + const queueManager = this.container.queue; + if (!queueManager) { + return { success: false, error: 'Queue manager not available' }; + } + + const exchangesQueue = queueManager.getQueue('exchanges'); + const job = await exchangesQueue.addJob('sync-all-exchanges', { + handler: 'exchanges', + operation: 'sync-all-exchanges', + payload: { clearFirst }, + }); + + logger.info('Enhanced exchanges sync job queued', { jobId: job.id, clearFirst }); + return { + success: true, + jobId: job.id, + message: 'Enhanced exchanges sync job queued', + }; + } catch (error) { + logger.error('Failed to queue enhanced exchanges sync job', { error }); + return { + success: false, + error: error instanceof Error ?
error.message : 'Failed to queue sync job', + }; + } + } + + /** + * Queue a job to sync QM provider mappings + */ + async syncQMProviderMappings(): Promise { + try { + const queueManager = this.container.queue; + if (!queueManager) { + return { success: false, error: 'Queue manager not available' }; + } + + const exchangesQueue = queueManager.getQueue('exchanges'); + const job = await exchangesQueue.addJob('sync-qm-provider-mappings', { + handler: 'exchanges', + operation: 'sync-qm-provider-mappings', + payload: {}, + }); + + logger.info('QM provider mappings sync job queued', { jobId: job.id }); + return { + success: true, + jobId: job.id, + message: 'QM provider mappings sync job queued', + }; + } catch (error) { + logger.error('Failed to queue QM provider mappings sync job', { error }); + return { + success: false, + error: error instanceof Error ? error.message : 'Failed to queue sync job', + }; + } + } + + /** + * Queue a job to sync IB exchanges + */ + async syncIBExchanges(): Promise { + try { + const queueManager = this.container.queue; + if (!queueManager) { + return { success: false, error: 'Queue manager not available' }; + } + + const exchangesQueue = queueManager.getQueue('exchanges'); + const job = await exchangesQueue.addJob('sync-ib-exchanges', { + handler: 'exchanges', + operation: 'sync-ib-exchanges', + payload: {}, + }); + + logger.info('IB exchanges sync job queued', { jobId: job.id }); + return { + success: true, + jobId: job.id, + message: 'IB exchanges sync job queued', + }; + } catch (error) { + logger.error('Failed to queue IB exchanges sync job', { error }); + return { + success: false, + error: error instanceof Error ? 
error.message : 'Failed to queue sync job', + }; + } + } + + /** + * Get sync status + */ + async getSyncStatus(): Promise { + try { + const queueManager = this.container.queue; + if (!queueManager) { + return { success: false, error: 'Queue manager not available' }; + } + + const symbolsQueue = queueManager.getQueue('symbols'); + const job = await symbolsQueue.addJob('sync-status', { + handler: 'symbols', + operation: 'sync-status', + payload: {}, + }); + + logger.info('Sync status job queued', { jobId: job.id }); + return { + success: true, + jobId: job.id, + message: 'Sync status job queued', + }; + } catch (error) { + logger.error('Failed to queue sync status job', { error }); + return { + success: false, + error: error instanceof Error ? error.message : 'Failed to queue status job', + }; + } + } + + /** + * Clear PostgreSQL data + */ + async clearPostgreSQLData( + dataType: 'exchanges' | 'provider_mappings' | 'all' = 'all' + ): Promise { + try { + const queueManager = this.container.queue; + if (!queueManager) { + return { success: false, error: 'Queue manager not available' }; + } + + const exchangesQueue = queueManager.getQueue('exchanges'); + const job = await exchangesQueue.addJob('clear-postgresql-data', { + handler: 'exchanges', + operation: 'clear-postgresql-data', + payload: { dataType }, + }); + + logger.info('PostgreSQL data clear job queued', { jobId: job.id, dataType }); + return { + success: true, + jobId: job.id, + message: 'PostgreSQL data clear job queued', + }; + } catch (error) { + logger.error('Failed to queue PostgreSQL clear job', { error }); + return { + success: false, + error: error instanceof Error ? 
error.message : 'Failed to queue clear job', + }; + } + } + + /** + * Get exchange statistics (waits for result) + */ + async getExchangeStats(): Promise { + try { + const queueManager = this.container.queue; + if (!queueManager) { + return { success: false, error: 'Queue manager not available' }; + } + + const exchangesQueue = queueManager.getQueue('exchanges'); + const job = await exchangesQueue.addJob('get-exchange-stats', { + handler: 'exchanges', + operation: 'get-exchange-stats', + payload: {}, + }); + + // Wait for job to complete and return result + const result = await job.waitUntilFinished(); + return { success: true, data: result }; + } catch (error) { + logger.error('Failed to get exchange stats', { error }); + return { + success: false, + error: error instanceof Error ? error.message : 'Failed to get stats', + }; + } + } + + /** + * Get provider mapping statistics (waits for result) + */ + async getProviderMappingStats(): Promise { + try { + const queueManager = this.container.queue; + if (!queueManager) { + return { success: false, error: 'Queue manager not available' }; + } + + const exchangesQueue = queueManager.getQueue('exchanges'); + const job = await exchangesQueue.addJob('get-provider-mapping-stats', { + handler: 'exchanges', + operation: 'get-provider-mapping-stats', + payload: {}, + }); + + // Wait for job to complete and return result + const result = await job.waitUntilFinished(); + return { success: true, data: result }; + } catch (error) { + logger.error('Failed to get provider mapping stats', { error }); + return { + success: false, + error: error instanceof Error ? 
error.message : 'Failed to get stats', + }; + } + } +} \ No newline at end of file diff --git a/apps/web-api/src/types/exchange.types.ts b/apps/stock/web-api/src/types/exchange.types.ts similarity index 99% rename from apps/web-api/src/types/exchange.types.ts rename to apps/stock/web-api/src/types/exchange.types.ts index d755efe..a367db7 100644 --- a/apps/web-api/src/types/exchange.types.ts +++ b/apps/stock/web-api/src/types/exchange.types.ts @@ -100,4 +100,4 @@ export interface ApiResponse { error?: string; message?: string; total?: number; -} \ No newline at end of file +} diff --git a/apps/stock/web-api/src/types/monitoring.types.ts b/apps/stock/web-api/src/types/monitoring.types.ts new file mode 100644 index 0000000..8d41532 --- /dev/null +++ b/apps/stock/web-api/src/types/monitoring.types.ts @@ -0,0 +1,127 @@ +/** + * Monitoring types for system health and metrics + */ + +export interface CacheStats { + provider: string; + connected: boolean; + uptime?: number; + memoryUsage?: { + used: number; + peak: number; + total?: number; + }; + stats?: { + hits: number; + misses: number; + keys: number; + evictedKeys?: number; + expiredKeys?: number; + }; + info?: Record; +} + +export interface QueueStats { + name: string; + connected: boolean; + jobs: { + waiting: number; + active: number; + completed: number; + failed: number; + delayed: number; + paused: number; + prioritized?: number; + 'waiting-children'?: number; + }; + workers?: { + count: number; + concurrency: number; + }; + throughput?: { + processed: number; + failed: number; + avgProcessingTime?: number; + }; +} + +export interface DatabaseStats { + type: 'postgres' | 'mongodb' | 'questdb'; + name: string; + connected: boolean; + latency?: number; + pool?: { + size: number; + active: number; + idle: number; + waiting?: number; + max: number; + }; + stats?: Record; +} + +export interface SystemHealth { + status: 'healthy' | 'degraded' | 'unhealthy'; + timestamp: string; + uptime: number; + memory: { + used: 
number; + total: number; + percentage: number; + }; + cpu?: { + usage: number; + loadAverage?: number[]; + }; + services: { + cache: CacheStats; + queues: QueueStats[]; + databases: DatabaseStats[]; + }; + errors?: string[]; +} + +export interface MetricSnapshot { + timestamp: string; + value: number; + unit?: string; +} + +export interface ServiceMetrics { + requestsPerSecond: MetricSnapshot; + averageResponseTime: MetricSnapshot; + errorRate: MetricSnapshot; + activeConnections: MetricSnapshot; +} + +export interface ServiceStatus { + name: string; + version: string; + status: 'running' | 'stopped' | 'error'; + port?: number; + uptime: number; + lastCheck: string; + healthy: boolean; + error?: string; +} + +export interface ProxyStats { + enabled: boolean; + totalProxies: number; + workingProxies: number; + failedProxies: number; + lastUpdate?: string; + lastFetchTime?: string; +} + +export interface SystemOverview { + services: ServiceStatus[]; + health: SystemHealth; + proxies?: ProxyStats; + environment: { + nodeVersion: string; + platform: string; + architecture: string; + hostname: string; + }; +} \ No newline at end of file diff --git a/apps/web-api/src/utils/error-handler.ts b/apps/stock/web-api/src/utils/error-handler.ts similarity index 99% rename from apps/web-api/src/utils/error-handler.ts rename to apps/stock/web-api/src/utils/error-handler.ts index 77787e0..eba907d 100644 --- a/apps/web-api/src/utils/error-handler.ts +++ b/apps/stock/web-api/src/utils/error-handler.ts @@ -1,7 +1,7 @@ import { Context } from 'hono'; import { getLogger } from '@stock-bot/logger'; -import { ValidationError } from './validation'; import { ApiResponse } from '../types/exchange.types'; +import { ValidationError } from './validation'; const logger = getLogger('error-handler'); @@ -61,4 +61,4 @@ export function createSuccessResponse( } return response; -} \ No newline at end of file +} diff --git a/apps/web-api/src/utils/validation.ts 
b/apps/stock/web-api/src/utils/validation.ts
similarity index 96%
rename from apps/web-api/src/utils/validation.ts
rename to apps/stock/web-api/src/utils/validation.ts
index 47f4dfe..59441ec 100644
--- a/apps/web-api/src/utils/validation.ts
+++ b/apps/stock/web-api/src/utils/validation.ts
@@ -1,7 +1,10 @@
 import { CreateExchangeRequest, CreateProviderMappingRequest } from '../types/exchange.types';
 
 export class ValidationError extends Error {
-  constructor(message: string, public field?: string) {
+  constructor(
+    message: string,
+    public field?: string
+  ) {
     super(message);
     this.name = 'ValidationError';
   }
@@ -38,7 +41,10 @@ export function validateCreateExchange(data: unknown): CreateExchangeRequest {
   }
 
   if (currency.length !== 3) {
-    throw new ValidationError('Currency must be exactly 3 characters (e.g., USD, EUR, CAD)', 'currency');
+    throw new ValidationError(
+      'Currency must be exactly 3 characters (e.g., USD, EUR, CAD)',
+      'currency'
+    );
   }
 
   return {
@@ -172,4 +178,4 @@ export function validateUpdateProviderMapping(data: unknown): Record
Analytics Page - Coming Soon} />
 Settings Page - Coming Soon} />
+ } />
+ } />
diff --git a/apps/web-app/src/app/index.ts b/apps/stock/web-app/src/app/index.ts
similarity index 100%
rename from apps/web-app/src/app/index.ts
rename to apps/stock/web-app/src/app/index.ts
diff --git a/apps/web-app/src/components/index.ts b/apps/stock/web-app/src/components/index.ts
similarity index 100%
rename from apps/web-app/src/components/index.ts
rename to apps/stock/web-app/src/components/index.ts
diff --git a/apps/web-app/src/components/layout/Header.tsx b/apps/stock/web-app/src/components/layout/Header.tsx
similarity index 100%
rename from apps/web-app/src/components/layout/Header.tsx
rename to apps/stock/web-app/src/components/layout/Header.tsx
diff --git a/apps/web-app/src/components/layout/Layout.tsx b/apps/stock/web-app/src/components/layout/Layout.tsx
similarity index 66%
rename from apps/web-app/src/components/layout/Layout.tsx
rename to apps/stock/web-app/src/components/layout/Layout.tsx index 6f50405..c20fe04 100644 --- a/apps/web-app/src/components/layout/Layout.tsx +++ b/apps/stock/web-app/src/components/layout/Layout.tsx @@ -11,6 +11,16 @@ export function Layout() { const getTitle = () => { const path = location.pathname.replace('/', ''); if (!path || path === 'dashboard') {return 'Dashboard';} + + // Handle nested routes + if (path.includes('/')) { + const parts = path.split('/'); + // For system routes, show the sub-page name + if (parts[0] === 'system' && parts[1]) { + return parts[1].charAt(0).toUpperCase() + parts[1].slice(1); + } + } + return path.charAt(0).toUpperCase() + path.slice(1); }; @@ -19,8 +29,8 @@ export function Layout() {
-
-
+
+
diff --git a/apps/stock/web-app/src/components/layout/Sidebar.tsx b/apps/stock/web-app/src/components/layout/Sidebar.tsx new file mode 100644 index 0000000..e31466e --- /dev/null +++ b/apps/stock/web-app/src/components/layout/Sidebar.tsx @@ -0,0 +1,218 @@ +import { navigation } from '@/lib/constants'; +import type { NavigationItem } from '@/lib/constants'; +import { cn } from '@/lib/utils'; +import { Dialog, Transition } from '@headlessui/react'; +import { XMarkIcon, ChevronDownIcon, ChevronRightIcon } from '@heroicons/react/24/outline'; +import { Fragment, useState } from 'react'; +import { NavLink, useLocation } from 'react-router-dom'; + +interface SidebarProps { + sidebarOpen: boolean; + setSidebarOpen: (open: boolean) => void; +} + +export function Sidebar({ sidebarOpen, setSidebarOpen }: SidebarProps) { + return ( + <> + {/* Mobile sidebar */} + + + +
+ + +
+ + + +
+ +
+
+ + +
+
+
+
+
+ + {/* Static sidebar for desktop */} +
+ +
+ + ); +} + +function SidebarContent() { + const location = useLocation(); + + // Auto-expand items that have active children + const getInitialExpanded = () => { + const expanded = new Set(); + navigation.forEach(item => { + if (item.children && item.children.some(child => location.pathname === child.href)) { + expanded.add(item.name); + } + }); + return expanded; + }; + + const [expandedItems, setExpandedItems] = useState>(getInitialExpanded()); + + const toggleExpanded = (name: string) => { + const newExpanded = new Set(expandedItems); + if (newExpanded.has(name)) { + newExpanded.delete(name); + } else { + newExpanded.add(name); + } + setExpandedItems(newExpanded); + }; + + const isChildActive = (children: NavigationItem[]) => { + return children.some(child => location.pathname === child.href); + }; + + return ( +
+
+

Stock Bot

+
+ +
+ ); +} diff --git a/apps/web-app/src/components/layout/index.ts b/apps/stock/web-app/src/components/layout/index.ts similarity index 100% rename from apps/web-app/src/components/layout/index.ts rename to apps/stock/web-app/src/components/layout/index.ts diff --git a/apps/web-app/src/components/ui/button.tsx b/apps/stock/web-app/src/components/ui/Button.tsx similarity index 100% rename from apps/web-app/src/components/ui/button.tsx rename to apps/stock/web-app/src/components/ui/Button.tsx diff --git a/apps/web-app/src/components/ui/Card.tsx b/apps/stock/web-app/src/components/ui/Card.tsx similarity index 88% rename from apps/web-app/src/components/ui/Card.tsx rename to apps/stock/web-app/src/components/ui/Card.tsx index 57dadda..599a6ed 100644 --- a/apps/web-app/src/components/ui/Card.tsx +++ b/apps/stock/web-app/src/components/ui/Card.tsx @@ -1,4 +1,4 @@ -import { ReactNode } from 'react'; +import type { ReactNode } from 'react'; import { cn } from '@/lib/utils'; interface CardProps { @@ -11,7 +11,7 @@ export function Card({ children, className, hover = false }: CardProps) { return (
+
{icon}
diff --git a/apps/web-app/src/components/ui/index.ts b/apps/stock/web-app/src/components/ui/index.ts similarity index 57% rename from apps/web-app/src/components/ui/index.ts rename to apps/stock/web-app/src/components/ui/index.ts index 30766a7..e065fee 100644 --- a/apps/web-app/src/components/ui/index.ts +++ b/apps/stock/web-app/src/components/ui/index.ts @@ -1,5 +1,6 @@ -export { Card, CardHeader, CardContent } from './Card'; -export { StatCard } from './StatCard'; +export { Button } from './Button'; +export { Card, CardContent, CardHeader } from './Card'; export { DataTable } from './DataTable'; -export { Dialog, DialogContent, DialogHeader, DialogTitle } from './dialog'; -export { Button } from './button'; +export { Dialog, DialogContent, DialogHeader, DialogTitle } from './Dialog'; +export { StatCard } from './StatCard'; + diff --git a/apps/web-app/src/features/dashboard/DashboardPage.tsx b/apps/stock/web-app/src/features/dashboard/DashboardPage.tsx similarity index 100% rename from apps/web-app/src/features/dashboard/DashboardPage.tsx rename to apps/stock/web-app/src/features/dashboard/DashboardPage.tsx diff --git a/apps/web-app/src/features/dashboard/components/DashboardActivity.tsx b/apps/stock/web-app/src/features/dashboard/components/DashboardActivity.tsx similarity index 100% rename from apps/web-app/src/features/dashboard/components/DashboardActivity.tsx rename to apps/stock/web-app/src/features/dashboard/components/DashboardActivity.tsx diff --git a/apps/web-app/src/features/dashboard/components/DashboardStats.tsx b/apps/stock/web-app/src/features/dashboard/components/DashboardStats.tsx similarity index 100% rename from apps/web-app/src/features/dashboard/components/DashboardStats.tsx rename to apps/stock/web-app/src/features/dashboard/components/DashboardStats.tsx diff --git a/apps/web-app/src/features/dashboard/components/PortfolioTable.tsx b/apps/stock/web-app/src/features/dashboard/components/PortfolioTable.tsx similarity index 99% rename from 
apps/web-app/src/features/dashboard/components/PortfolioTable.tsx
rename to apps/stock/web-app/src/features/dashboard/components/PortfolioTable.tsx
index 0132578..b31f574 100644
--- a/apps/web-app/src/features/dashboard/components/PortfolioTable.tsx
+++ b/apps/stock/web-app/src/features/dashboard/components/PortfolioTable.tsx
@@ -1,5 +1,5 @@
 import { DataTable } from '@/components/ui';
-import { ColumnDef } from '@tanstack/react-table';
+import type { ColumnDef } from '@tanstack/react-table';
 import React from 'react';
 
 interface PortfolioItem {
diff --git a/apps/web-app/src/features/dashboard/components/index.ts b/apps/stock/web-app/src/features/dashboard/components/index.ts
similarity index 100%
rename from apps/web-app/src/features/dashboard/components/index.ts
rename to apps/stock/web-app/src/features/dashboard/components/index.ts
diff --git a/apps/web-app/src/features/dashboard/index.ts b/apps/stock/web-app/src/features/dashboard/index.ts
similarity index 100%
rename from apps/web-app/src/features/dashboard/index.ts
rename to apps/stock/web-app/src/features/dashboard/index.ts
diff --git a/apps/web-app/src/features/exchanges/ExchangesPage.tsx b/apps/stock/web-app/src/features/exchanges/ExchangesPage.tsx
similarity index 100%
rename from apps/web-app/src/features/exchanges/ExchangesPage.tsx
rename to apps/stock/web-app/src/features/exchanges/ExchangesPage.tsx
diff --git a/apps/web-app/src/features/exchanges/components/AddExchangeDialog.tsx b/apps/stock/web-app/src/features/exchanges/components/AddExchangeDialog.tsx
similarity index 97%
rename from apps/web-app/src/features/exchanges/components/AddExchangeDialog.tsx
rename to apps/stock/web-app/src/features/exchanges/components/AddExchangeDialog.tsx
index 3c7ca75..0e7a0ff 100644
--- a/apps/web-app/src/features/exchanges/components/AddExchangeDialog.tsx
+++ b/apps/stock/web-app/src/features/exchanges/components/AddExchangeDialog.tsx
@@ -1,8 +1,8 @@
-import { Dialog, DialogContent, DialogHeader, DialogTitle, Button } from '@/components/ui';
+import { Button, Dialog, DialogContent, DialogHeader, DialogTitle } from '@/components/ui';
 import { useCallback } from 'react';
-import { CreateExchangeRequest, AddExchangeDialogProps } from '../types';
-import { validateExchangeForm } from '../utils/validation';
 import { useFormValidation } from '../hooks/useFormValidation';
+import type { AddExchangeDialogProps, CreateExchangeRequest } from '../types';
+import { validateExchangeForm } from '../utils/validation';
 
 const initialFormData: CreateExchangeRequest = {
   code: '',
diff --git a/apps/web-app/src/features/exchanges/components/AddProviderMappingDialog.tsx b/apps/stock/web-app/src/features/exchanges/components/AddProviderMappingDialog.tsx
similarity index 96%
rename from apps/web-app/src/features/exchanges/components/AddProviderMappingDialog.tsx
rename to apps/stock/web-app/src/features/exchanges/components/AddProviderMappingDialog.tsx
index f571c15..17b56cd 100644
--- a/apps/web-app/src/features/exchanges/components/AddProviderMappingDialog.tsx
+++ b/apps/stock/web-app/src/features/exchanges/components/AddProviderMappingDialog.tsx
@@ -1,14 +1,22 @@
 import { Dialog, DialogContent, DialogHeader, DialogTitle, Button } from '@/components/ui';
 import { useCallback, useEffect, useState } from 'react';
 import { useExchanges } from '../hooks/useExchanges';
-import { CreateProviderMappingRequest } from '../types';
+import type { CreateProviderMappingRequest } from '../types/index';
 
 interface AddProviderMappingDialogProps {
   isOpen: boolean;
   exchangeId: string;
   exchangeName: string;
   onClose: () => void;
-  onCreateMapping: (request: CreateProviderMappingRequest) => Promise;
+  onCreateMapping: (request: CreateProviderMappingRequest) => Promise;
+}
+
+interface UnmappedExchange {
+  provider_exchange_code: string;
+  provider_exchange_name: string;
+  country_code?: string;
+  currency?: string;
+  symbol_count?: number;
 }
 
 export function AddProviderMappingDialog({
@@ -21,29 +29,12 @@ export function AddProviderMappingDialog({
   const { fetchProviders, fetchUnmappedProviderExchanges } = useExchanges();
   const [providers, setProviders] = useState([]);
   const [selectedProvider, setSelectedProvider] = useState('');
-  const [unmappedExchanges, setUnmappedExchanges] = useState([]);
+  const [unmappedExchanges, setUnmappedExchanges] = useState([]);
   const [selectedProviderExchange, setSelectedProviderExchange] = useState('');
   const [loading, setLoading] = useState(false);
   const [providersLoading, setProvidersLoading] = useState(false);
   const [exchangesLoading, setExchangesLoading] = useState(false);
 
-  // Load providers on mount
-  useEffect(() => {
-    if (isOpen) {
-      loadProviders();
-    }
-  }, [isOpen, loadProviders]);
-
-  // Load unmapped exchanges when provider changes
-  useEffect(() => {
-    if (selectedProvider) {
-      loadUnmappedExchanges(selectedProvider);
-    } else {
-      setUnmappedExchanges([]);
-      setSelectedProviderExchange('');
-    }
-  }, [selectedProvider, loadUnmappedExchanges]);
-
   const loadProviders = useCallback(async () => {
     setProvidersLoading(true);
     try {
@@ -71,6 +62,23 @@ export function AddProviderMappingDialog({
     [fetchUnmappedProviderExchanges]
   );
 
+  // Load providers on mount
+  useEffect(() => {
+    if (isOpen) {
+      loadProviders();
+    }
+  }, [isOpen, loadProviders]);
+
+  // Load unmapped exchanges when provider changes
+  useEffect(() => {
+    if (selectedProvider) {
+      loadUnmappedExchanges(selectedProvider);
+    } else {
+      setUnmappedExchanges([]);
+      setSelectedProviderExchange('');
+    }
+  }, [selectedProvider, loadUnmappedExchanges]);
+
   const handleSubmit = useCallback(
     async (e: React.FormEvent) => {
       e.preventDefault();
diff --git a/apps/web-app/src/features/exchanges/components/DeleteExchangeDialog.tsx b/apps/stock/web-app/src/features/exchanges/components/DeleteExchangeDialog.tsx
similarity index 100%
rename from apps/web-app/src/features/exchanges/components/DeleteExchangeDialog.tsx
rename to
apps/stock/web-app/src/features/exchanges/components/DeleteExchangeDialog.tsx diff --git a/apps/web-app/src/features/exchanges/components/ExchangesTable.tsx b/apps/stock/web-app/src/features/exchanges/components/ExchangesTable.tsx similarity index 97% rename from apps/web-app/src/features/exchanges/components/ExchangesTable.tsx rename to apps/stock/web-app/src/features/exchanges/components/ExchangesTable.tsx index 52133f0..8fe0b24 100644 --- a/apps/web-app/src/features/exchanges/components/ExchangesTable.tsx +++ b/apps/stock/web-app/src/features/exchanges/components/ExchangesTable.tsx @@ -1,12 +1,12 @@ import { DataTable } from '@/components/ui'; import { PlusIcon, TrashIcon } from '@heroicons/react/24/outline'; -import { ColumnDef } from '@tanstack/react-table'; +import type { ColumnDef, Row } from '@tanstack/react-table'; import { useCallback, useMemo, useState } from 'react'; import { useExchanges } from '../hooks/useExchanges'; -import { Exchange, EditingCell, AddProviderMappingDialogState, DeleteDialogState } from '../types'; +import type { AddProviderMappingDialogState, DeleteDialogState, EditingCell, Exchange } from '../types'; +import { formatDate, formatProviderMapping, getProviderMappingColor, sortProviderMappings } from '../utils/formatters'; import { AddProviderMappingDialog } from './AddProviderMappingDialog'; import { DeleteExchangeDialog } from './DeleteExchangeDialog'; -import { sortProviderMappings, getProviderMappingColor, formatProviderMapping, formatDate } from '../utils/formatters'; export function ExchangesTable() { const { @@ -70,7 +70,7 @@ export function ExchangesTable() { ); const handleRowExpand = useCallback( - async (_row: any) => { + (_row: Row) => { // Row expansion is now handled automatically by TanStack Table // No need to fetch data since all mappings are already loaded }, @@ -235,7 +235,7 @@ export function ExchangesTable() { cell: ({ getValue, row }) => { const totalMappings = parseInt(getValue() as string) || 0; const 
activeMappings = parseInt(row.original.active_mapping_count) || 0; - const _verifiedMappings = parseInt(row.original.verified_mapping_count) || 0; + // const _verifiedMappings = parseInt(row.original.verified_mapping_count) || 0; // Get provider mappings directly from the exchange data const mappings = row.original.provider_mappings || []; @@ -319,7 +319,6 @@ export function ExchangesTable() { handleToggleActive, handleAddProviderMapping, handleDeleteExchange, - handleConfirmDelete, handleRowExpand, ]); @@ -329,13 +328,13 @@ export function ExchangesTable() {

Error Loading Exchanges

{error}

- Make sure the web-api service is running on localhost:4000 + Make sure the web-api service is running on localhost:2003

); } - const renderSubComponent = ({ row }: { row: any }) => { + const renderSubComponent = ({ row }: { row: Row }) => { const exchange = row.original as Exchange; const mappings = exchange.provider_mappings || []; diff --git a/apps/web-app/src/features/exchanges/components/index.ts b/apps/stock/web-app/src/features/exchanges/components/index.ts similarity index 82% rename from apps/web-app/src/features/exchanges/components/index.ts rename to apps/stock/web-app/src/features/exchanges/components/index.ts index fd15d0d..2f6b5a7 100644 --- a/apps/web-app/src/features/exchanges/components/index.ts +++ b/apps/stock/web-app/src/features/exchanges/components/index.ts @@ -1,4 +1,4 @@ -export { AddSourceDialog } from './AddSourceDialog'; + export { AddProviderMappingDialog } from './AddProviderMappingDialog'; export { AddExchangeDialog } from './AddExchangeDialog'; export { DeleteExchangeDialog } from './DeleteExchangeDialog'; diff --git a/apps/web-app/src/features/exchanges/hooks/index.ts b/apps/stock/web-app/src/features/exchanges/hooks/index.ts similarity index 100% rename from apps/web-app/src/features/exchanges/hooks/index.ts rename to apps/stock/web-app/src/features/exchanges/hooks/index.ts diff --git a/apps/web-app/src/features/exchanges/hooks/useExchanges.ts b/apps/stock/web-app/src/features/exchanges/hooks/useExchanges.ts similarity index 91% rename from apps/web-app/src/features/exchanges/hooks/useExchanges.ts rename to apps/stock/web-app/src/features/exchanges/hooks/useExchanges.ts index 1ec0094..2333984 100644 --- a/apps/web-app/src/features/exchanges/hooks/useExchanges.ts +++ b/apps/stock/web-app/src/features/exchanges/hooks/useExchanges.ts @@ -1,16 +1,16 @@ import { useCallback, useEffect, useState } from 'react'; -import { +import { exchangeApi } from '../services/exchangeApi'; +import type { + CreateExchangeRequest, + CreateProviderMappingRequest, Exchange, ExchangeDetails, ExchangeStats, - ProviderMapping, ProviderExchange, - CreateExchangeRequest, + 
ProviderMapping, UpdateExchangeRequest, - CreateProviderMappingRequest, UpdateProviderMappingRequest, -} from '../types'; -import { exchangeApi } from '../services/exchangeApi'; +} from '../types/index'; export function useExchanges() { const [exchanges, setExchanges] = useState([]); @@ -62,18 +62,15 @@ export function useExchanges() { [fetchExchanges] ); - const fetchExchangeDetails = useCallback( - async (id: string): Promise => { - try { - return await exchangeApi.getExchangeById(id); - } catch (err) { - // Error fetching exchange details - error state will show in UI - setError(err instanceof Error ? err.message : 'Failed to fetch exchange details'); - return null; - } - }, - [] - ); + const fetchExchangeDetails = useCallback(async (id: string): Promise => { + try { + return await exchangeApi.getExchangeById(id); + } catch (err) { + // Error fetching exchange details - error state will show in UI + setError(err instanceof Error ? err.message : 'Failed to fetch exchange details'); + return null; + } + }, []); const fetchStats = useCallback(async (): Promise => { try { diff --git a/apps/stock/web-app/src/features/exchanges/hooks/useFormValidation.ts b/apps/stock/web-app/src/features/exchanges/hooks/useFormValidation.ts new file mode 100644 index 0000000..8c1a44b --- /dev/null +++ b/apps/stock/web-app/src/features/exchanges/hooks/useFormValidation.ts @@ -0,0 +1,67 @@ +import { useCallback, useState } from 'react'; +import type { FormErrors } from '../types'; + +export function useFormValidation(initialData: T, validateFn: (data: T) => FormErrors) { + const [formData, setFormData] = useState(initialData); + const [errors, setErrors] = useState({}); + const [isSubmitting, setIsSubmitting] = useState(false); + + const updateField = useCallback( + (field: keyof T, value: T[keyof T]) => { + setFormData(prev => ({ ...prev, [field]: value })); + + // Clear error when user starts typing + if (errors[field as string]) { + setErrors(prev => ({ ...prev, [field as string]: '' 
})); + } + }, + [errors] + ); + + const validate = useCallback((): boolean => { + const newErrors = validateFn(formData); + setErrors(newErrors); + return Object.keys(newErrors).length === 0; + }, [formData, validateFn]); + + const reset = useCallback(() => { + setFormData(initialData); + setErrors({}); + setIsSubmitting(false); + }, [initialData]); + + const handleSubmit = useCallback( + async ( + onSubmit: (data: T) => Promise, + onSuccess?: () => void, + onError?: (error: unknown) => void + ) => { + if (!validate()) { + return; + } + + setIsSubmitting(true); + try { + await onSubmit(formData); + reset(); + onSuccess?.(); + } catch (error) { + onError?.(error); + } finally { + setIsSubmitting(false); + } + }, + [formData, validate, reset] + ); + + return { + formData, + errors, + isSubmitting, + updateField, + validate, + reset, + handleSubmit, + setIsSubmitting, + }; +} diff --git a/apps/web-app/src/features/exchanges/index.ts b/apps/stock/web-app/src/features/exchanges/index.ts similarity index 100% rename from apps/web-app/src/features/exchanges/index.ts rename to apps/stock/web-app/src/features/exchanges/index.ts diff --git a/apps/web-app/src/features/exchanges/services/exchangeApi.ts b/apps/stock/web-app/src/features/exchanges/services/exchangeApi.ts similarity index 91% rename from apps/web-app/src/features/exchanges/services/exchangeApi.ts rename to apps/stock/web-app/src/features/exchanges/services/exchangeApi.ts index 3a416dc..3f1f448 100644 --- a/apps/web-app/src/features/exchanges/services/exchangeApi.ts +++ b/apps/stock/web-app/src/features/exchanges/services/exchangeApi.ts @@ -1,25 +1,22 @@ -import { +import type { ApiResponse, + CreateExchangeRequest, + CreateProviderMappingRequest, Exchange, ExchangeDetails, ExchangeStats, - ProviderMapping, ProviderExchange, - CreateExchangeRequest, + ProviderMapping, UpdateExchangeRequest, - CreateProviderMappingRequest, UpdateProviderMappingRequest, -} from '../types'; +} from '../types/index'; -const 
API_BASE_URL = import.meta.env.VITE_API_BASE_URL || 'http://localhost:4000/api'; +const API_BASE_URL = import.meta.env.VITE_API_BASE_URL || 'http://localhost:2003'; class ExchangeApiService { - private async request( - endpoint: string, - options?: RequestInit - ): Promise> { - const url = `${API_BASE_URL}${endpoint}`; - + private async request(endpoint: string, options?: RequestInit): Promise> { + const url = `${API_BASE_URL}/api${endpoint}`; + const response = await fetch(url, { headers: { 'Content-Type': 'application/json', @@ -33,7 +30,7 @@ class ExchangeApiService { } const data = await response.json(); - + if (!data.success) { throw new Error(data.error || 'API request failed'); } @@ -76,10 +73,10 @@ class ExchangeApiService { // Provider Mappings async getProviderMappings(provider?: string): Promise { - const endpoint = provider + const endpoint = provider ? `/exchanges/provider-mappings/${provider}` : '/exchanges/provider-mappings/all'; - + const response = await this.request(endpoint); return response.data || []; } @@ -96,7 +93,7 @@ class ExchangeApiService { } async updateProviderMapping( - id: string, + id: string, data: UpdateProviderMappingRequest ): Promise { const response = await this.request(`/exchanges/provider-mappings/${id}`, { @@ -132,4 +129,4 @@ class ExchangeApiService { } // Export singleton instance -export const exchangeApi = new ExchangeApiService(); \ No newline at end of file +export const exchangeApi = new ExchangeApiService(); diff --git a/apps/web-app/src/features/exchanges/types/api.types.ts b/apps/stock/web-app/src/features/exchanges/types/api.types.ts similarity index 99% rename from apps/web-app/src/features/exchanges/types/api.types.ts rename to apps/stock/web-app/src/features/exchanges/types/api.types.ts index e86503b..342536c 100644 --- a/apps/web-app/src/features/exchanges/types/api.types.ts +++ b/apps/stock/web-app/src/features/exchanges/types/api.types.ts @@ -66,4 +66,4 @@ export interface ExchangeStats { 
active_provider_mappings: string; verified_provider_mappings: string; providers: string; -} \ No newline at end of file +} diff --git a/apps/web-app/src/features/exchanges/types/component.types.ts b/apps/stock/web-app/src/features/exchanges/types/component.types.ts similarity index 89% rename from apps/web-app/src/features/exchanges/types/component.types.ts rename to apps/stock/web-app/src/features/exchanges/types/component.types.ts index ac22733..d6af0d6 100644 --- a/apps/web-app/src/features/exchanges/types/component.types.ts +++ b/apps/stock/web-app/src/features/exchanges/types/component.types.ts @@ -32,7 +32,9 @@ export interface AddExchangeDialogProps extends BaseDialogProps { export interface AddProviderMappingDialogProps extends BaseDialogProps { exchangeId: string; exchangeName: string; - onCreateMapping: (request: import('./request.types').CreateProviderMappingRequest) => Promise<void>; + onCreateMapping: ( + request: import('./request.types').CreateProviderMappingRequest + ) => Promise<void>; } export interface DeleteExchangeDialogProps extends BaseDialogProps { @@ -40,4 +42,4 @@ exchangeName: string; providerMappingCount: number; onConfirmDelete: (exchangeId: string) => Promise<void>; -} \ No newline at end of file +} diff --git a/apps/stock/web-app/src/features/exchanges/types/index.ts b/apps/stock/web-app/src/features/exchanges/types/index.ts new file mode 100644 index 0000000..ff687b3 --- /dev/null +++ b/apps/stock/web-app/src/features/exchanges/types/index.ts @@ -0,0 +1,154 @@ +// API Response types +export interface ApiResponse<T = unknown> { + success: boolean; + data?: T; + error?: string; + message?: string; + total?: number; +} + +// Base entity types +export interface BaseEntity { + id: string; + created_at: string; + updated_at: string; +} + +export interface ProviderMapping extends BaseEntity { + provider: string; + provider_exchange_code: string; + provider_exchange_name: string; + master_exchange_id:
string; + country_code: string | null; + currency: string | null; + confidence: number; + active: boolean; + verified: boolean; + auto_mapped: boolean; + master_exchange_code?: string; + master_exchange_name?: string; + master_exchange_active?: boolean; +} + +export interface Exchange extends BaseEntity { + code: string; + name: string; + country: string; + currency: string; + active: boolean; + visible: boolean; + provider_mapping_count: string; + active_mapping_count: string; + verified_mapping_count: string; + providers: string | null; + provider_mappings: ProviderMapping[]; +} + +export interface ExchangeDetails { + exchange: Exchange; + provider_mappings: ProviderMapping[]; +} + +export interface ProviderExchange { + provider_exchange_code: string; + provider_exchange_name: string; + country_code: string | null; + currency: string | null; + symbol_count: number | null; +} + +export interface ExchangeStats { + total_exchanges: string; + active_exchanges: string; + countries: string; + currencies: string; + total_provider_mappings: string; + active_provider_mappings: string; + verified_provider_mappings: string; + providers: string; +} + +// Request types for API calls +export interface CreateExchangeRequest { + code: string; + name: string; + country: string; + currency: string; + active?: boolean; +} + +export interface UpdateExchangeRequest { + name?: string; + active?: boolean; + visible?: boolean; + country?: string; + currency?: string; +} + +export interface CreateProviderMappingRequest { + provider: string; + provider_exchange_code: string; + provider_exchange_name?: string; + master_exchange_id: string; + country_code?: string; + currency?: string; + confidence?: number; + active?: boolean; + verified?: boolean; +} + +export interface UpdateProviderMappingRequest { + active?: boolean; + verified?: boolean; + confidence?: number; + master_exchange_id?: string; +} + +// Component-specific types +export interface EditingCell { + id: string; + field: 
string; +} + +export interface AddProviderMappingDialogState { + exchangeId: string; + exchangeName: string; +} + +export interface DeleteDialogState { + exchangeId: string; + exchangeName: string; + providerMappingCount: number; +} + +export interface FormErrors { + [key: string]: string; +} + +// Dialog props interfaces +export interface BaseDialogProps { + isOpen: boolean; + onClose: () => void; +} + +export interface AddExchangeDialogProps extends BaseDialogProps { + onCreateExchange: (request: CreateExchangeRequest) => Promise<void>; +} + +export interface AddProviderMappingDialogProps extends BaseDialogProps { + exchangeId: string; + exchangeName: string; + onCreateMapping: ( + request: CreateProviderMappingRequest + ) => Promise<void>; +} + +export interface DeleteExchangeDialogProps extends BaseDialogProps { + exchangeId: string; + exchangeName: string; + providerMappingCount: number; + onConfirmDelete: (exchangeId: string) => Promise<void>; +} + +// Legacy compatibility - can be removed later +export type ExchangesApiResponse = ApiResponse; \ No newline at end of file diff --git a/apps/web-app/src/features/exchanges/types/request.types.ts b/apps/stock/web-app/src/features/exchanges/types/request.types.ts similarity index 99% rename from apps/web-app/src/features/exchanges/types/request.types.ts rename to apps/stock/web-app/src/features/exchanges/types/request.types.ts index 6624bb0..efd1553 100644 --- a/apps/web-app/src/features/exchanges/types/request.types.ts +++ b/apps/stock/web-app/src/features/exchanges/types/request.types.ts @@ -32,4 +32,4 @@ export interface UpdateProviderMappingRequest { verified?: boolean; confidence?: number; master_exchange_id?: string; -} \ No newline at end of file +} diff --git a/apps/web-app/src/features/exchanges/utils/formatters.ts b/apps/stock/web-app/src/features/exchanges/utils/formatters.ts similarity index 94% rename from apps/web-app/src/features/exchanges/utils/formatters.ts rename to
apps/stock/web-app/src/features/exchanges/utils/formatters.ts index 7c7f36c..cf583f3 100644 --- a/apps/web-app/src/features/exchanges/utils/formatters.ts +++ b/apps/stock/web-app/src/features/exchanges/utils/formatters.ts @@ -1,4 +1,4 @@ -import { ProviderMapping } from '../types'; +import type { ProviderMapping } from '../types'; export function formatDate(dateString: string): string { return new Date(dateString).toLocaleDateString(); @@ -21,7 +21,7 @@ export function sortProviderMappings(mappings: ProviderMapping[]): ProviderMappi if (!a.active && b.active) { return 1; } - + // Then by provider name return a.provider.localeCompare(b.provider); }); @@ -32,4 +32,4 @@ export function truncateText(text: string, maxLength: number): string { return text; } return text.substring(0, maxLength) + '...'; -} \ No newline at end of file +} diff --git a/apps/web-app/src/features/exchanges/utils/validation.ts b/apps/stock/web-app/src/features/exchanges/utils/validation.ts similarity index 95% rename from apps/web-app/src/features/exchanges/utils/validation.ts rename to apps/stock/web-app/src/features/exchanges/utils/validation.ts index fd2dfdd..007ac48 100644 --- a/apps/web-app/src/features/exchanges/utils/validation.ts +++ b/apps/stock/web-app/src/features/exchanges/utils/validation.ts @@ -1,4 +1,4 @@ -import { FormErrors } from '../types'; +import type { FormErrors } from '../types'; export function validateExchangeForm(data: { code: string; @@ -35,4 +35,4 @@ export function validateExchangeForm(data: { export function hasValidationErrors(errors: FormErrors): boolean { return Object.keys(errors).length > 0; -} \ No newline at end of file +} diff --git a/apps/stock/web-app/src/features/monitoring/MonitoringPage.tsx b/apps/stock/web-app/src/features/monitoring/MonitoringPage.tsx new file mode 100644 index 0000000..043077e --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/MonitoringPage.tsx @@ -0,0 +1,189 @@ +/** + * System Monitoring Page + */ + +import React, { 
useState } from 'react'; +import { + CacheStatsCard, + DatabaseStatsGrid, + ProxyStatsCard, + QueueStatsTable, + ServiceStatusGrid, + SystemHealthCard +} from './components'; +import { + useCacheStats, + useDatabaseStats, + useProxyStats, + useQueueStats, + useServiceStatus, + useSystemHealth, + useSystemOverview +} from './hooks'; + +export function MonitoringPage() { + const [refreshInterval, setRefreshInterval] = useState(5000); // 5 seconds default + const [useOverview, setUseOverview] = useState(false); // Toggle between individual calls and overview + + // Individual API calls + const { data: health, loading: healthLoading, error: healthError } = useSystemHealth(useOverview ? 0 : refreshInterval); + const { data: cache, loading: cacheLoading, error: cacheError } = useCacheStats(useOverview ? 0 : refreshInterval); + const { data: queues, loading: queuesLoading, error: queuesError } = useQueueStats(useOverview ? 0 : refreshInterval); + const { data: databases, loading: dbLoading, error: dbError } = useDatabaseStats(useOverview ? 0 : refreshInterval); + const { data: services, loading: servicesLoading, error: servicesError } = useServiceStatus(useOverview ? 0 : refreshInterval); + const { data: proxies, loading: proxiesLoading, error: proxiesError } = useProxyStats(useOverview ? 0 : refreshInterval); + + // Combined overview call + const { data: overview, loading: overviewLoading, error: overviewError } = useSystemOverview(useOverview ? refreshInterval : 0); + + const handleRefreshIntervalChange = (e: React.ChangeEvent<HTMLSelectElement>) => { + setRefreshInterval(Number(e.target.value)); + }; + + const handleDataSourceToggle = () => { + setUseOverview(!useOverview); + }; + + // Use overview data if enabled + const displayHealth = useOverview && overview ? overview.health : health; + const displayServices = useOverview && overview ? overview.services : services; + const displayProxies = useOverview && overview ?
overview.proxies : proxies; + const displayCache = useOverview && overview ? overview.health.services.cache : cache; + const displayQueues = useOverview && overview ? overview.health.services.queues : queues; + const displayDatabases = useOverview && overview ? overview.health.services.databases : databases; + + const isLoading = useOverview + ? overviewLoading + : (healthLoading || cacheLoading || queuesLoading || dbLoading || servicesLoading || proxiesLoading); + + const errors = useOverview + ? (overviewError ? [overviewError] : []) + : [healthError, cacheError, queuesError, dbError, servicesError, proxiesError].filter(Boolean); + + if (isLoading) { + return ( +
+
+
+

Loading monitoring data...

+
+
+ ); + } + + return ( +
+
+
+

System Monitoring

+
+
+
+ + +
+
+ + +
+
+
+ + {errors.length > 0 && ( +
+

Errors occurred while fetching data:

+
    + {errors.map((error, index) => ( +
  • {error}
  • + ))} +
+
+ )} + + {useOverview && overview && ( +
+

System Environment

+
+
+ Node: {overview.environment.nodeVersion} +
+
+ Platform: {overview.environment.platform} +
+
+ Architecture: {overview.environment.architecture} +
+
+ Hostname: {overview.environment.hostname} +
+
+
+ )} + +
+ {/* Service Status */} + {displayServices && displayServices.length > 0 && ( +
+

Microservices Status

+ +
+ )} + + {/* System Health and Cache */} + {displayHealth && ( +
+
+ +
+
+ {displayCache && } +
+
+ {displayProxies && } +
+
+ )} + + {/* Database Stats */} + {displayDatabases && displayDatabases.length > 0 && ( +
+

Database Connections

+ +
+ )} + + {/* Queue Stats */} + {displayQueues && displayQueues.length > 0 && ( +
+

Queue Status

+ +
+ )} +
+
+ ); +} \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/README.md b/apps/stock/web-app/src/features/monitoring/README.md new file mode 100644 index 0000000..33bab17 --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/README.md @@ -0,0 +1,39 @@ +# Monitoring Components + +This directory contains monitoring components that have been refactored to use standardized UI components with a consistent dark theme. + +## Standardized Components + +### StatusBadge +Used for displaying status indicators with consistent styling: +- `ConnectionStatus` - Shows connected/disconnected state +- `HealthStatus` - Shows healthy/unhealthy state +- `ServiceStatusIndicator` - Shows service status as a colored dot + +### MetricCard +Displays metrics with optional progress bars in a consistent card layout. + +### Cards +- `ServiceCard` - Displays individual service status +- `DatabaseCard` - Displays database connection info +- `SystemHealthCard` - Shows system health overview +- `CacheStatsCard` - Shows cache statistics +- `ProxyStatsCard` - Shows proxy status +- `QueueStatsTable` - Displays queue statistics in a table + +## Theme Colors + +All components now use the standardized color palette from the Tailwind config: +- Background: `bg-surface-secondary` (dark surfaces) +- Borders: `border-border` +- Text: `text-text-primary`, `text-text-secondary`, `text-text-muted` +- Status colors: `text-success`, `text-danger`, `text-warning` +- Primary accent: `text-primary-400`, `bg-primary-500/10` + +## Utilities + +Common formatting functions are available in `utils/formatters.ts`: +- `formatUptime()` - Formats milliseconds to human-readable uptime +- `formatBytes()` - Formats bytes to KB/MB/GB +- `formatNumber()` - Adds thousand separators +- `formatPercentage()` - Formats numbers as percentages \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/SPACING.md b/apps/stock/web-app/src/features/monitoring/SPACING.md new 
file mode 100644 index 0000000..0359772 --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/SPACING.md @@ -0,0 +1,44 @@ +# Monitoring Components - Spacing Guidelines + +This document outlines the standardized spacing used across monitoring components to maximize screen real estate. + +## Spacing Standards + +### Page Layout +- Main container: `flex flex-col h-full space-y-4` (16px vertical gaps) +- No outer padding - handled by parent layout +- Section spacing: `space-y-4` between major sections + +### Cards +- Card padding: `p-4` (16px) for main cards, `p-3` (12px) for compact cards +- Card header: `pb-3` (12px bottom padding) +- Card content spacing: `space-y-3` (12px gaps) +- Grid gaps: `gap-3` (12px) or `gap-4` (16px) + +### Typography +- Page title: `text-lg font-bold mb-2` +- Section headings: `text-lg font-semibold mb-3` +- Card titles: `text-base font-semibold` +- Large values: `text-xl` or `text-lg` +- Regular text: `text-sm` +- Small text/labels: `text-xs` + +### Specific Components + +**ServiceCard**: `p-3` with `space-y-1.5` and `text-xs` +**DatabaseCard**: `p-4` with `space-y-2` +**SystemHealthCard**: `p-4` with `space-y-3` +**CacheStatsCard**: `p-4` with `space-y-3` +**ProxyStatsCard**: `p-4` with `space-y-3` +**QueueStatsTable**: `p-4` with `text-xs` table + +### Grids +- Service grid: `gap-3` +- Database grid: `gap-3` +- Main layout grid: `gap-4` + +## Benefits +- Maximizes usable screen space +- Consistent with dashboard/exchanges pages +- More data visible without scrolling +- Clean, compact appearance \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/components/CacheStatsCard.tsx b/apps/stock/web-app/src/features/monitoring/components/CacheStatsCard.tsx new file mode 100644 index 0000000..08a6489 --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/components/CacheStatsCard.tsx @@ -0,0 +1,92 @@ +/** + * Cache Statistics Card Component + */ + +import type { CacheStats } from '../types'; 
+import { Card, CardHeader, CardContent } from '@/components/ui/Card'; +import { ConnectionStatus } from './StatusBadge'; +import { formatBytes, formatNumber, formatPercentage } from '../utils/formatters'; + +interface CacheStatsCardProps { + stats: CacheStats; +} + +export function CacheStatsCard({ stats }: CacheStatsCardProps) { + const hitRate = stats.stats && (stats.stats.hits + stats.stats.misses) > 0 + ? (stats.stats.hits / (stats.stats.hits + stats.stats.misses) * 100) + : 0; + + return ( + + +
+

Cache (Dragonfly)

+ +
+
+ + + {stats.connected ? ( +
+ {stats.memoryUsage && ( +
+
+
Memory Used
+
{formatBytes(stats.memoryUsage.used)}
+
+
+
Peak Memory
+
{formatBytes(stats.memoryUsage.peak)}
+
+
+ )} + + {stats.stats && ( + <> +
+
+
Hit Rate
+
{formatPercentage(hitRate)}
+
+
+
Total Keys
+
{formatNumber(stats.stats.keys)}
+
+
+ +
+
+ Hits: {formatNumber(stats.stats.hits)} +
+
+ Misses: {formatNumber(stats.stats.misses)} +
+ {stats.stats.evictedKeys !== undefined && ( +
+ Evicted: {formatNumber(stats.stats.evictedKeys)} +
+ )} + {stats.stats.expiredKeys !== undefined && ( +
+ Expired: {formatNumber(stats.stats.expiredKeys)} +
+ )} +
+ + )} + + {stats.uptime && ( +
+ Uptime: {Math.floor(stats.uptime / 3600)}h {Math.floor((stats.uptime % 3600) / 60)}m +
+ )} +
+ ) : ( +
+ Cache service is not available +
+ )} +
+
+ ); +} \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/components/DatabaseCard.tsx b/apps/stock/web-app/src/features/monitoring/components/DatabaseCard.tsx new file mode 100644 index 0000000..aa0bd77 --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/components/DatabaseCard.tsx @@ -0,0 +1,106 @@ +import { Card, CardHeader, CardContent } from '@/components/ui/Card'; +import { ConnectionStatus } from './StatusBadge'; +import { formatPercentage } from '../utils/formatters'; +import type { DatabaseStats } from '../types'; + +interface DatabaseCardProps { + database: DatabaseStats; +} + +export function DatabaseCard({ database }: DatabaseCardProps) { + const getDbIcon = (type: string) => { + switch (type) { + case 'postgres': + return '🐘'; + case 'mongodb': + return '🍃'; + case 'questdb': + return '⚡'; + default: + return '💾'; + } + }; + + return ( + + +
+
+ {getDbIcon(database.type)} +

{database.name}

+
+ +
+
+ + + {database.connected ? ( +
+ {database.latency !== undefined && ( +
+
Latency
+
{database.latency}ms
+
+ )} + + {database.pool && ( +
+
Connection Pool
+
+
+ Active:{' '} + {database.pool.active} +
+
+ Idle:{' '} + {database.pool.idle} +
+
+ Size:{' '} + {database.pool.size} +
+
+ Max:{' '} + {database.pool.max} +
+
+ + {database.pool.max > 0 && ( +
+
+
+
+
+ {formatPercentage((database.pool.size / database.pool.max) * 100, 0)} utilized +
+
+ )} +
+ )} + + {database.type === 'mongodb' && database.stats && ( +
+
Version: {database.stats.version}
+ {database.stats.connections && ( +
+ Connections:{' '} + + {database.stats.connections.current}/{database.stats.connections.available} + +
+ )} +
+ )} +
+ ) : ( +
+ Database is not available +
+ )} + + + ); +} \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/components/DatabaseStatsGrid.tsx b/apps/stock/web-app/src/features/monitoring/components/DatabaseStatsGrid.tsx new file mode 100644 index 0000000..b235b01 --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/components/DatabaseStatsGrid.tsx @@ -0,0 +1,20 @@ +/** + * Database Statistics Grid Component + */ + +import type { DatabaseStats } from '../types'; +import { DatabaseCard } from './DatabaseCard'; + +interface DatabaseStatsGridProps { + databases: DatabaseStats[]; +} + +export function DatabaseStatsGrid({ databases }: DatabaseStatsGridProps) { + return ( +
+ {databases.map((db) => ( + + ))} +
+ ); +} \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/components/MetricCard.tsx b/apps/stock/web-app/src/features/monitoring/components/MetricCard.tsx new file mode 100644 index 0000000..d1724e9 --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/components/MetricCard.tsx @@ -0,0 +1,57 @@ +import { Card, CardHeader, CardContent } from '@/components/ui/Card'; +import { cn } from '@/lib/utils'; + +interface MetricCardProps { + title: string; + value: string | number; + subtitle?: string; + icon?: React.ReactNode; + valueClassName?: string; + progress?: { + value: number; + max: number; + label?: string; + }; +} + +export function MetricCard({ + title, + value, + subtitle, + icon, + valueClassName, + progress +}: MetricCardProps) { + return ( + + +
+

{title}

+ {icon &&
{icon}
} +
+
+ +
+ {value} +
+ {subtitle && ( +

{subtitle}

+ )} + {progress && ( +
+
+ {progress.label || 'Usage'} + {((progress.value / progress.max) * 100).toFixed(0)}% +
+
+
+
+
+ )} + + + ); +} \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/components/ProxyStatsCard.tsx b/apps/stock/web-app/src/features/monitoring/components/ProxyStatsCard.tsx new file mode 100644 index 0000000..942e7f9 --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/components/ProxyStatsCard.tsx @@ -0,0 +1,90 @@ +/** + * Proxy Stats Card Component + */ + +import type { ProxyStats } from '../types'; +import { Card, CardHeader, CardContent } from '@/components/ui/Card'; +import { StatusBadge } from './StatusBadge'; +import { formatPercentage } from '../utils/formatters'; + +interface ProxyStatsCardProps { + stats: ProxyStats; +} + +export function ProxyStatsCard({ stats }: ProxyStatsCardProps) { + const successRate = stats.totalProxies > 0 + ? (stats.workingProxies / stats.totalProxies) * 100 + : 0; + + const formatDate = (dateString?: string) => { + if (!dateString) {return 'Never';} + const date = new Date(dateString); + return date.toLocaleString(); + }; + + return ( + + +
+

Proxy Status

+ + {stats.enabled ? 'Enabled' : 'Disabled'} + +
+
+ + + {stats.enabled ? ( +
+
+
+

Total Proxies

+

{stats.totalProxies}

+
+
+

Success Rate

+

{formatPercentage(successRate)}

+
+
+ +
+
+

Working

+

{stats.workingProxies}

+
+
+

Failed

+

{stats.failedProxies}

+
+
+ +
+
+ Last Update: + {formatDate(stats.lastUpdate)} +
+
+ Last Fetch: + {formatDate(stats.lastFetchTime)} +
+
+ + {stats.totalProxies === 0 && ( +
+ No proxies available. Check WebShare API configuration. +
+ )} +
+ ) : ( +
+

Proxy service is disabled

+

Enable it in the configuration to use proxies

+
+ )} +
+
+ ); +} \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/components/QueueStatsTable.tsx b/apps/stock/web-app/src/features/monitoring/components/QueueStatsTable.tsx new file mode 100644 index 0000000..ca3b1a8 --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/components/QueueStatsTable.tsx @@ -0,0 +1,104 @@ +/** + * Queue Statistics Table Component + */ + +import type { QueueStats } from '../types'; +import { Card, CardHeader, CardContent } from '@/components/ui/Card'; +import { formatNumber } from '../utils/formatters'; +import { cn } from '@/lib/utils'; + +interface QueueStatsTableProps { + queues: QueueStats[]; +} + +export function QueueStatsTable({ queues }: QueueStatsTableProps) { + const totalJobs = (queue: QueueStats) => { + const { jobs } = queue; + return jobs.waiting + jobs.active + jobs.completed + jobs.failed + jobs.delayed + jobs.paused + + (jobs.prioritized || 0) + (jobs['waiting-children'] || 0); + }; + + return ( + + +

Queue Statistics

+
+ + + {queues.length > 0 ? ( +
+ + + + + + + + + + + + + + + + + + {queues.map((queue) => ( + + + + + + + + + + + + + + ))} + +
QueueStatusWaitingActiveCompletedFailedDelayedPrioritizedChildrenWorkersTotal
{queue.name} + + {formatNumber(queue.jobs.waiting)} + {queue.jobs.active > 0 ? ( + {formatNumber(queue.jobs.active)} + ) : ( + {queue.jobs.active} + )} + {formatNumber(queue.jobs.completed)} + {queue.jobs.failed > 0 ? ( + {formatNumber(queue.jobs.failed)} + ) : ( + {queue.jobs.failed} + )} + {formatNumber(queue.jobs.delayed)} + {queue.jobs.prioritized && queue.jobs.prioritized > 0 ? ( + {formatNumber(queue.jobs.prioritized)} + ) : ( + 0 + )} + + {queue.jobs['waiting-children'] && queue.jobs['waiting-children'] > 0 ? ( + {formatNumber(queue.jobs['waiting-children'])} + ) : ( + 0 + )} + + {queue.workers ? `${queue.workers.count}/${queue.workers.concurrency}` : '-'} + {formatNumber(totalJobs(queue))}
+
+ ) : ( +
+ No queue data available +
+ )} +
+
+ ); +} \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/components/ServiceCard.tsx b/apps/stock/web-app/src/features/monitoring/components/ServiceCard.tsx new file mode 100644 index 0000000..08e286b --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/components/ServiceCard.tsx @@ -0,0 +1,56 @@ +import { Card, CardHeader, CardContent } from '@/components/ui/Card'; +import { ServiceStatusIndicator, HealthStatus } from './StatusBadge'; +import { formatUptime } from '../utils/formatters'; +import type { ServiceStatus } from '../types'; + +interface ServiceCardProps { + service: ServiceStatus; +} + +export function ServiceCard({ service }: ServiceCardProps) { + return ( + + +
+

+ {service.name.replace(/-/g, ' ')} +

+ +
+
+ + +
+ Status: + {service.status} +
+ +
+ Port: + {service.port || 'N/A'} +
+ +
+ Version: + {service.version} +
+ +
+ Uptime: + {formatUptime(service.uptime)} +
+ +
+ Health: + +
+ + {service.error && ( +
+ Error: {service.error} +
+ )} +
+
+ ); +} \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/components/ServiceStatusGrid.tsx b/apps/stock/web-app/src/features/monitoring/components/ServiceStatusGrid.tsx new file mode 100644 index 0000000..fcfaeba --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/components/ServiceStatusGrid.tsx @@ -0,0 +1,20 @@ +/** + * Service Status Grid Component + */ + +import type { ServiceStatus } from '../types'; +import { ServiceCard } from './ServiceCard'; + +interface ServiceStatusGridProps { + services: ServiceStatus[]; +} + +export function ServiceStatusGrid({ services }: ServiceStatusGridProps) { + return ( +
+ {services.map((service) => ( + + ))} +
+ ); +} \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/components/StatusBadge.tsx b/apps/stock/web-app/src/features/monitoring/components/StatusBadge.tsx new file mode 100644 index 0000000..353940b --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/components/StatusBadge.tsx @@ -0,0 +1,80 @@ +import { cn } from '@/lib/utils'; + +type BadgeVariant = 'success' | 'danger' | 'warning' | 'default'; + +interface StatusBadgeProps { + children: React.ReactNode; + variant?: BadgeVariant; + className?: string; + size?: 'sm' | 'md'; +} + +const variantStyles: Record<BadgeVariant, string> = { + success: 'text-success bg-success/10', + danger: 'text-danger bg-danger/10', + warning: 'text-warning bg-warning/10', + default: 'text-text-secondary bg-text-secondary/10', +}; + +const sizeStyles = { + sm: 'px-2 py-0.5 text-xs', + md: 'px-3 py-1 text-sm', +}; + +export function StatusBadge({ + children, + variant = 'default', + className, + size = 'sm' +}: StatusBadgeProps) { + return ( + + {children} + + ); +} + +interface ConnectionStatusProps { + connected: boolean; + size?: 'sm' | 'md'; +} + +export function ConnectionStatus({ connected, size = 'sm' }: ConnectionStatusProps) { + return ( + + {connected ? 'Connected' : 'Disconnected'} + + ); +} + +interface HealthStatusProps { + healthy: boolean; + size?: 'sm' | 'md'; +} + +export function HealthStatus({ healthy, size = 'sm' }: HealthStatusProps) { + return ( + + {healthy ? 'Healthy' : 'Unhealthy'} + + ); +} + +interface ServiceStatusIndicatorProps { + status: 'running' | 'stopped' | 'error'; +} + +export function ServiceStatusIndicator({ status }: ServiceStatusIndicatorProps) { + const statusColors = { + running: 'bg-success', + stopped: 'bg-text-muted', + error: 'bg-danger', + }; + + return
; +} \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/components/SystemHealthCard.tsx b/apps/stock/web-app/src/features/monitoring/components/SystemHealthCard.tsx new file mode 100644 index 0000000..3376f01 --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/components/SystemHealthCard.tsx @@ -0,0 +1,100 @@ +/** + * System Health Card Component + */ + +import type { SystemHealth } from '../types'; +import { Card, CardHeader, CardContent } from '@/components/ui/Card'; +import { StatusBadge } from './StatusBadge'; +import { formatUptime, formatBytes, formatPercentage } from '../utils/formatters'; + +interface SystemHealthCardProps { + health: SystemHealth; +} + +export function SystemHealthCard({ health }: SystemHealthCardProps) { + const statusVariant = { + healthy: 'success' as const, + degraded: 'warning' as const, + unhealthy: 'danger' as const, + }[health.status]; + + return ( + + +
+

System Health

+ + {health.status.toUpperCase()} + +
+
+ + +
+
Uptime
+
{formatUptime(health.uptime)}
+
+ +
+
System Memory
+
+
+ {formatBytes(health.memory.used)} / {formatBytes(health.memory.total)} + {formatPercentage(health.memory.percentage)} +
+
+
+
+
+ {health.memory.available !== undefined && ( +
+ Available: {formatBytes(health.memory.available)} (excludes cache/buffers) +
+ )} + {health.memory.heap && ( +
+ Node.js Heap: {formatBytes(health.memory.heap.used)} / {formatBytes(health.memory.heap.total)} +
+ )} +
+ + {health.cpu && ( +
+
CPU Usage
+
+
{health.cpu.usage}%
+ {health.cpu.cores && ( + {health.cpu.cores} cores + )} +
+ {health.cpu.loadAverage && ( +
+ Load: {health.cpu.loadAverage.map(l => l.toFixed(2)).join(', ')} +
+ )} +
+ )} + + {health.errors && health.errors.length > 0 && ( +
+
Issues
+
    + {health.errors.map((error, index) => ( +
  • + • {error} +
  • + ))} +
+
+ )} + +
+ Last updated: {new Date(health.timestamp).toLocaleTimeString()} +
+ + + ); +} \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/components/index.ts b/apps/stock/web-app/src/features/monitoring/components/index.ts new file mode 100644 index 0000000..c94ddfe --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/components/index.ts @@ -0,0 +1,14 @@ +/** + * Monitoring components exports + */ + +export { SystemHealthCard } from './SystemHealthCard'; +export { CacheStatsCard } from './CacheStatsCard'; +export { QueueStatsTable } from './QueueStatsTable'; +export { DatabaseStatsGrid } from './DatabaseStatsGrid'; +export { ServiceStatusGrid } from './ServiceStatusGrid'; +export { ProxyStatsCard } from './ProxyStatsCard'; +export { StatusBadge, ConnectionStatus, HealthStatus, ServiceStatusIndicator } from './StatusBadge'; +export { MetricCard } from './MetricCard'; +export { ServiceCard } from './ServiceCard'; +export { DatabaseCard } from './DatabaseCard'; \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/hooks/index.ts b/apps/stock/web-app/src/features/monitoring/hooks/index.ts new file mode 100644 index 0000000..22327de --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/hooks/index.ts @@ -0,0 +1,5 @@ +/** + * Monitoring hooks exports + */ + +export * from './useMonitoring'; \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/hooks/useMonitoring.ts b/apps/stock/web-app/src/features/monitoring/hooks/useMonitoring.ts new file mode 100644 index 0000000..22ae4f0 --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/hooks/useMonitoring.ts @@ -0,0 +1,218 @@ +/** + * Custom hook for monitoring data + */ + +import { useState, useEffect, useCallback } from 'react'; +import { monitoringApi } from '../services/monitoringApi'; +import type { + SystemHealth, + CacheStats, + QueueStats, + DatabaseStats, + ServiceStatus, + ProxyStats, + SystemOverview +} from '../types'; + +export function useSystemHealth(refreshInterval: 
number = 5000) { + const [data, setData] = useState<SystemHealth | null>(null); + const [loading, setLoading] = useState(true); + const [error, setError] = useState<string | null>(null); + + const fetchData = useCallback(async () => { + try { + const health = await monitoringApi.getSystemHealth(); + setData(health); + setError(null); + } catch (err) { + setError(err instanceof Error ? err.message : 'Failed to fetch system health'); + } finally { + setLoading(false); + } + }, []); + + useEffect(() => { + fetchData(); + + if (refreshInterval > 0) { + const interval = setInterval(fetchData, refreshInterval); + return () => clearInterval(interval); + } + }, [fetchData, refreshInterval]); + + return { data, loading, error, refetch: fetchData }; +} + +export function useCacheStats(refreshInterval: number = 5000) { + const [data, setData] = useState<CacheStats | null>(null); + const [loading, setLoading] = useState(true); + const [error, setError] = useState<string | null>(null); + + const fetchData = useCallback(async () => { + try { + const stats = await monitoringApi.getCacheStats(); + setData(stats); + setError(null); + } catch (err) { + setError(err instanceof Error ? err.message : 'Failed to fetch cache stats'); + } finally { + setLoading(false); + } + }, []); + + useEffect(() => { + fetchData(); + + if (refreshInterval > 0) { + const interval = setInterval(fetchData, refreshInterval); + return () => clearInterval(interval); + } + }, [fetchData, refreshInterval]); + + return { data, loading, error, refetch: fetchData }; +} + +export function useQueueStats(refreshInterval: number = 5000) { + const [data, setData] = useState<QueueStats[]>([]); + const [loading, setLoading] = useState(true); + const [error, setError] = useState<string | null>(null); + + const fetchData = useCallback(async () => { + try { + const result = await monitoringApi.getQueueStats(); + setData(result.queues); + setError(null); + } catch (err) { + setError(err instanceof Error ?
err.message : 'Failed to fetch queue stats'); + } finally { + setLoading(false); + } + }, []); + + useEffect(() => { + fetchData(); + + if (refreshInterval > 0) { + const interval = setInterval(fetchData, refreshInterval); + return () => clearInterval(interval); + } + }, [fetchData, refreshInterval]); + + return { data, loading, error, refetch: fetchData }; +} + +export function useDatabaseStats(refreshInterval: number = 5000) { + const [data, setData] = useState<DatabaseStats[]>([]); + const [loading, setLoading] = useState(true); + const [error, setError] = useState<string | null>(null); + + const fetchData = useCallback(async () => { + try { + const result = await monitoringApi.getDatabaseStats(); + setData(result.databases); + setError(null); + } catch (err) { + setError(err instanceof Error ? err.message : 'Failed to fetch database stats'); + } finally { + setLoading(false); + } + }, []); + + useEffect(() => { + fetchData(); + + if (refreshInterval > 0) { + const interval = setInterval(fetchData, refreshInterval); + return () => clearInterval(interval); + } + }, [fetchData, refreshInterval]); + + return { data, loading, error, refetch: fetchData }; +} + +export function useServiceStatus(refreshInterval: number = 5000) { + const [data, setData] = useState<ServiceStatus[]>([]); + const [loading, setLoading] = useState(true); + const [error, setError] = useState<string | null>(null); + + const fetchData = useCallback(async () => { + try { + const result = await monitoringApi.getServiceStatus(); + setData(result.services); + setError(null); + } catch (err) { + setError(err instanceof Error ? 
err.message : 'Failed to fetch service status'); + } finally { + setLoading(false); + } + }, []); + + useEffect(() => { + fetchData(); + + if (refreshInterval > 0) { + const interval = setInterval(fetchData, refreshInterval); + return () => clearInterval(interval); + } + }, [fetchData, refreshInterval]); + + return { data, loading, error, refetch: fetchData }; +} + +export function useProxyStats(refreshInterval: number = 5000) { + const [data, setData] = useState<ProxyStats | null>(null); + const [loading, setLoading] = useState(true); + const [error, setError] = useState<string | null>(null); + + const fetchData = useCallback(async () => { + try { + const stats = await monitoringApi.getProxyStats(); + setData(stats); + setError(null); + } catch (err) { + setError(err instanceof Error ? err.message : 'Failed to fetch proxy stats'); + } finally { + setLoading(false); + } + }, []); + + useEffect(() => { + fetchData(); + + if (refreshInterval > 0) { + const interval = setInterval(fetchData, refreshInterval); + return () => clearInterval(interval); + } + }, [fetchData, refreshInterval]); + + return { data, loading, error, refetch: fetchData }; +} + +export function useSystemOverview(refreshInterval: number = 5000) { + const [data, setData] = useState<SystemOverview | null>(null); + const [loading, setLoading] = useState(true); + const [error, setError] = useState<string | null>(null); + + const fetchData = useCallback(async () => { + try { + const overview = await monitoringApi.getSystemOverview(); + setData(overview); + setError(null); + } catch (err) { + setError(err instanceof Error ? 
err.message : 'Failed to fetch system overview'); + } finally { + setLoading(false); + } + }, []); + + useEffect(() => { + fetchData(); + + if (refreshInterval > 0) { + const interval = setInterval(fetchData, refreshInterval); + return () => clearInterval(interval); + } + }, [fetchData, refreshInterval]); + + return { data, loading, error, refetch: fetchData }; +} \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/index.ts b/apps/stock/web-app/src/features/monitoring/index.ts new file mode 100644 index 0000000..f56a62a --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/index.ts @@ -0,0 +1,8 @@ +/** + * Monitoring feature exports + */ + +export { MonitoringPage } from './MonitoringPage'; +export * from './types'; +export * from './hooks/useMonitoring'; +export * from './services/monitoringApi'; \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/services/monitoringApi.ts b/apps/stock/web-app/src/features/monitoring/services/monitoringApi.ts new file mode 100644 index 0000000..2e3e210 --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/services/monitoringApi.ts @@ -0,0 +1,128 @@ +/** + * Monitoring API Service + */ + +import type { + SystemHealth, + CacheStats, + QueueStats, + DatabaseStats, + ServiceStatus, + ProxyStats, + SystemOverview +} from '../types'; + +const API_BASE_URL = import.meta.env.VITE_API_BASE_URL || 'http://localhost:2003'; +const MONITORING_BASE = `${API_BASE_URL}/api/system/monitoring`; + +export const monitoringApi = { + /** + * Get overall system health + */ + async getSystemHealth(): Promise<SystemHealth> { + const response = await fetch(MONITORING_BASE); + if (!response.ok) { + throw new Error(`Failed to fetch system health: ${response.statusText}`); + } + return response.json(); + }, + + /** + * Get cache statistics + */ + async getCacheStats(): Promise<CacheStats> { + const response = await fetch(`${MONITORING_BASE}/cache`); + if (!response.ok) { + throw new Error(`Failed to 
fetch cache stats: ${response.statusText}`); + } + return response.json(); + }, + + /** + * Get queue statistics + */ + async getQueueStats(): Promise<{ queues: QueueStats[] }> { + const response = await fetch(`${MONITORING_BASE}/queues`); + if (!response.ok) { + throw new Error(`Failed to fetch queue stats: ${response.statusText}`); + } + return response.json(); + }, + + /** + * Get specific queue statistics + */ + async getQueueStatsByName(name: string): Promise<QueueStats> { + const response = await fetch(`${MONITORING_BASE}/queues/${name}`); + if (!response.ok) { + throw new Error(`Failed to fetch queue ${name} stats: ${response.statusText}`); + } + return response.json(); + }, + + /** + * Get database statistics + */ + async getDatabaseStats(): Promise<{ databases: DatabaseStats[] }> { + const response = await fetch(`${MONITORING_BASE}/databases`); + if (!response.ok) { + throw new Error(`Failed to fetch database stats: ${response.statusText}`); + } + return response.json(); + }, + + /** + * Get specific database statistics + */ + async getDatabaseStatsByType(type: 'postgres' | 'mongodb' | 'questdb'): Promise<DatabaseStats> { + const response = await fetch(`${MONITORING_BASE}/databases/${type}`); + if (!response.ok) { + throw new Error(`Failed to fetch ${type} stats: ${response.statusText}`); + } + return response.json(); + }, + + /** + * Get detailed cache info + */ + async getCacheInfo(): Promise<{ parsed: CacheStats; raw: string }> { + const response = await fetch(`${MONITORING_BASE}/cache/info`); + if (!response.ok) { + throw new Error(`Failed to fetch cache info: ${response.statusText}`); + } + return response.json(); + }, + + /** + * Get service status + */ + async getServiceStatus(): Promise<{ services: ServiceStatus[] }> { + const response = await fetch(`${MONITORING_BASE}/services`); + if (!response.ok) { + throw new Error(`Failed to fetch service status: ${response.statusText}`); + } + return response.json(); + }, + + /** + * Get proxy statistics + */ + async getProxyStats(): 
Promise<ProxyStats> { + const response = await fetch(`${MONITORING_BASE}/proxies`); + if (!response.ok) { + throw new Error(`Failed to fetch proxy stats: ${response.statusText}`); + } + return response.json(); + }, + + /** + * Get system overview + */ + async getSystemOverview(): Promise<SystemOverview> { + const response = await fetch(`${MONITORING_BASE}/overview`); + if (!response.ok) { + throw new Error(`Failed to fetch system overview: ${response.statusText}`); + } + return response.json(); + }, +}; \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/types/index.ts b/apps/stock/web-app/src/features/monitoring/types/index.ts new file mode 100644 index 0000000..e22a419 --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/types/index.ts @@ -0,0 +1,120 @@ +/** + * Monitoring types for system health and metrics + */ + +export interface CacheStats { + provider: string; + connected: boolean; + uptime?: number; + memoryUsage?: { + used: number; + peak: number; + total?: number; + }; + stats?: { + hits: number; + misses: number; + keys: number; + evictedKeys?: number; + expiredKeys?: number; + }; + info?: Record<string, unknown>; +} + +export interface QueueStats { + name: string; + connected: boolean; + jobs: { + waiting: number; + active: number; + completed: number; + failed: number; + delayed: number; + paused: number; + prioritized?: number; + 'waiting-children'?: number; + }; + workers?: { + count: number; + concurrency: number; + }; + throughput?: { + processed: number; + failed: number; + avgProcessingTime?: number; + }; +} + +export interface DatabaseStats { + type: 'postgres' | 'mongodb' | 'questdb'; + name: string; + connected: boolean; + latency?: number; + pool?: { + size: number; + active: number; + idle: number; + waiting?: number; + max: number; + }; + stats?: Record<string, unknown>; +} + +export interface SystemHealth { + status: 'healthy' | 'degraded' | 'unhealthy'; + timestamp: string; + uptime: number; + memory: { + used: number; + total: number; + percentage: number; 
+ available?: number; + heap?: { + used: number; + total: number; + }; + }; + cpu?: { + usage: number; + loadAverage?: number[]; + cores?: number; + }; + services: { + cache: CacheStats; + queues: QueueStats[]; + databases: DatabaseStats[]; + }; + errors?: string[]; +} + +export interface ServiceStatus { + name: string; + version: string; + status: 'running' | 'stopped' | 'error'; + port?: number; + uptime: number; + lastCheck: string; + healthy: boolean; + error?: string; +} + +export interface ProxyStats { + enabled: boolean; + totalProxies: number; + workingProxies: number; + failedProxies: number; + lastUpdate?: string; + lastFetchTime?: string; +} + +export interface SystemOverview { + services: ServiceStatus[]; + health: SystemHealth; + proxies?: ProxyStats; + environment: { + nodeVersion: string; + platform: string; + architecture: string; + hostname: string; + }; +} \ No newline at end of file diff --git a/apps/stock/web-app/src/features/monitoring/utils/formatters.ts b/apps/stock/web-app/src/features/monitoring/utils/formatters.ts new file mode 100644 index 0000000..a07878a --- /dev/null +++ b/apps/stock/web-app/src/features/monitoring/utils/formatters.ts @@ -0,0 +1,42 @@ +/** + * Common formatting utilities for monitoring components + */ + +export function formatUptime(ms: number): string { + const seconds = Math.floor(ms / 1000); + const minutes = Math.floor(seconds / 60); + const hours = Math.floor(minutes / 60); + const days = Math.floor(hours / 24); + + if (days > 0) {return `${days}d ${hours % 24}h`;} + if (hours > 0) {return `${hours}h ${minutes % 60}m`;} + if (minutes > 0) {return `${minutes}m ${seconds % 60}s`;} + return `${seconds}s`; +} + +export function formatBytes(bytes: number): string { + const gb = bytes / 1024 / 1024 / 1024; + if (gb >= 1) { + return gb.toFixed(2) + ' GB'; + } + + const mb = bytes / 1024 / 1024; + if (mb >= 1) { + return mb.toFixed(2) + ' MB'; + } + + const kb = bytes / 1024; + if (kb >= 1) { + return kb.toFixed(2) + ' 
KB'; + } + + return bytes + ' B'; +} + +export function formatNumber(num: number): string { + return num.toLocaleString(); +} + +export function formatPercentage(value: number, decimals: number = 1): string { + return `${value.toFixed(decimals)}%`; +} \ No newline at end of file diff --git a/apps/stock/web-app/src/features/pipeline/PipelinePage.tsx b/apps/stock/web-app/src/features/pipeline/PipelinePage.tsx new file mode 100644 index 0000000..bb980c0 --- /dev/null +++ b/apps/stock/web-app/src/features/pipeline/PipelinePage.tsx @@ -0,0 +1,405 @@ +import { useState, useEffect, useCallback } from 'react'; +import { + ArrowPathIcon, + CircleStackIcon, + CloudArrowDownIcon, + ExclamationTriangleIcon, + CheckCircleIcon, +} from '@heroicons/react/24/outline'; +import { usePipeline } from './hooks/usePipeline'; +import type { PipelineOperation, ExchangeStats, ProviderMappingStats, DataClearType } from './types'; + +const operations: PipelineOperation[] = [ + // Symbol operations + { + id: 'sync-qm-symbols', + name: 'Sync QM Symbols', + description: 'Sync symbols from QuestionsAndMethods API', + endpoint: '/symbols', + method: 'POST', + category: 'sync', + }, + { + id: 'sync-provider-symbols', + name: 'Sync Provider Symbols', + description: 'Sync symbols from a specific provider', + endpoint: '/symbols/:provider', + method: 'POST', + category: 'sync', + params: { provider: 'yahoo' }, // Default provider + }, + // Exchange operations + { + id: 'sync-qm-exchanges', + name: 'Sync QM Exchanges', + description: 'Sync exchanges from QuestionsAndMethods API', + endpoint: '/exchanges', + method: 'POST', + category: 'sync', + }, + { + id: 'sync-all-exchanges', + name: 'Sync All Exchanges', + description: 'Sync all exchanges with optional clear', + endpoint: '/exchanges/all', + method: 'POST', + category: 'sync', + }, + // Provider mapping operations + { + id: 'sync-qm-provider-mappings', + name: 'Sync QM Provider Mappings', + description: 'Sync provider mappings from 
QuestionsAndMethods', + endpoint: '/provider-mappings/qm', + method: 'POST', + category: 'sync', + }, + { + id: 'sync-ib-exchanges', + name: 'Sync IB Exchanges', + description: 'Sync exchanges from Interactive Brokers', + endpoint: '/provider-mappings/ib', + method: 'POST', + category: 'sync', + }, + // Maintenance operations + { + id: 'clear-postgresql', + name: 'Clear PostgreSQL Data', + description: 'Clear exchange and provider mapping data', + endpoint: '/clear/postgresql', + method: 'POST', + category: 'maintenance', + dangerous: true, + }, +]; + +export function PipelinePage() { + const { + loading, + error, + lastJobResult, + syncQMSymbols, + syncProviderSymbols, + syncQMExchanges, + syncAllExchanges, + syncQMProviderMappings, + syncIBExchanges, + clearPostgreSQLData, + getExchangeStats, + getProviderMappingStats, + } = usePipeline(); + + const [selectedProvider, setSelectedProvider] = useState('yahoo'); + const [clearFirst, setClearFirst] = useState(false); + const [clearDataType, setClearDataType] = useState<'all' | 'exchanges' | 'provider_mappings'>('all'); + const [stats, setStats] = useState<{ exchanges?: ExchangeStats; providerMappings?: ProviderMappingStats }>({}); + + // Load stats on mount + useEffect(() => { + loadStats(); + }, [loadStats]); + + const loadStats = useCallback(async () => { + const [exchangeStats, mappingStats] = await Promise.all([ + getExchangeStats(), + getProviderMappingStats(), + ]); + setStats({ + exchanges: exchangeStats, + providerMappings: mappingStats, + }); + }, [getExchangeStats, getProviderMappingStats]); + + const handleOperation = async (op: PipelineOperation) => { + switch (op.id) { + case 'sync-qm-symbols': + await syncQMSymbols(); + break; + case 'sync-provider-symbols': + await syncProviderSymbols(selectedProvider); + break; + case 'sync-qm-exchanges': + await syncQMExchanges(); + break; + case 'sync-all-exchanges': + await syncAllExchanges(clearFirst); + break; + case 'sync-qm-provider-mappings': + await 
syncQMProviderMappings(); + break; + case 'sync-ib-exchanges': + await syncIBExchanges(); + break; + case 'clear-postgresql': + if (confirm(`Are you sure you want to clear ${clearDataType} data? This action cannot be undone.`)) { + await clearPostgreSQLData(clearDataType); + } + break; + } + // Reload stats after operation + await loadStats(); + }; + + const getCategoryIcon = (category: string) => { + switch (category) { + case 'sync': + return ; + case 'maintenance': + return ; + default: + return ; + } + }; + + const getCategoryColor = (category: string) => { + switch (category) { + case 'sync': + return 'text-primary-400'; + case 'maintenance': + return 'text-warning'; + default: + return 'text-text-secondary'; + } + }; + + return ( +
+
+

Data Pipeline Management

+

+ Manage data synchronization and maintenance operations +

+
+ + {/* Stats Overview */} + {(stats.exchanges || stats.providerMappings) && ( +
+ {stats.exchanges && ( +
+

Exchange Statistics

+
+
+ Total Exchanges: + {stats.exchanges.totalExchanges} +
+
+ Active Exchanges: + {stats.exchanges.activeExchanges} +
+
+ Total Provider Mappings: + {stats.exchanges.totalProviderMappings} +
+
+ Active Mappings: + {stats.exchanges.activeProviderMappings} +
+
+
+ )} + + {stats.providerMappings && ( +
+

Provider Mapping Statistics

+
+
+ Coverage: + + {stats.providerMappings.coveragePercentage?.toFixed(1)}% + +
+
+ Verified Mappings: + {stats.providerMappings.verifiedMappings} +
+
+ Auto-mapped: + {stats.providerMappings.autoMappedCount} +
+ {stats.providerMappings.mappingsByProvider && ( +
+ By Provider: +
+ {Object.entries(stats.providerMappings.mappingsByProvider).map(([provider, count]) => ( + + {provider}: {String(count)} + + ))} +
+
+ )} +
+
+ )} +
+ )} + + {/* Status Messages */} + {error && ( +
+
+ + {error} +
+
+ )} + + {lastJobResult && ( +
+
+ {lastJobResult.success ? ( + + ) : ( + + )} + + {lastJobResult.message || lastJobResult.error} + + {lastJobResult.jobId && ( + + Job ID: {lastJobResult.jobId} + + )} +
+
+ )} + + {/* Operations Grid */} +
+ {/* Sync Operations */} +
+

+ + Sync Operations +

+
+ {operations.filter(op => op.category === 'sync').map(op => ( +
+
+

{op.name}

+
+ {getCategoryIcon(op.category)} +
+
+

{op.description}

+ + {/* Special inputs for specific operations */} + {op.id === 'sync-provider-symbols' && ( +
+ + +
+ )} + + {op.id === 'sync-all-exchanges' && ( +
+ +
+ )} + + +
+ ))} +
+
+ + {/* Maintenance Operations */} +
+

+ + Maintenance Operations +

+
+ {operations.filter(op => op.category === 'maintenance').map(op => ( +
+
+

{op.name}

+
+ {getCategoryIcon(op.category)} +
+
+

{op.description}

+ + {op.id === 'clear-postgresql' && ( +
+ + +
+ )} + + +
+ ))} +
+
+
+
+ ); +} \ No newline at end of file diff --git a/apps/stock/web-app/src/features/pipeline/SPACING.md b/apps/stock/web-app/src/features/pipeline/SPACING.md new file mode 100644 index 0000000..8901a67 --- /dev/null +++ b/apps/stock/web-app/src/features/pipeline/SPACING.md @@ -0,0 +1,58 @@ +# Pipeline Page - Spacing Updates + +This document outlines the spacing standardization applied to the Pipeline page to match the monitoring and dashboard pages. + +## Changes Applied + +### Layout Structure +- Main container: `flex flex-col h-full space-y-4` +- No outer padding (handled by parent layout) +- Flex-shrink-0 for fixed elements +- Overflow-y-auto for scrollable content area + +### Typography +- Page title: `text-lg` (from `text-2xl`) +- Page description: `text-sm` +- Section headings: `text-base` (from `text-lg`) +- Card titles: `text-sm` +- Card descriptions: `text-xs` (from `text-sm`) +- All text in cards: `text-xs` + +### Spacing +- Grid gaps: `gap-3` (from `gap-4`) +- Card padding: `p-3` (from `p-4`) +- Margins: `mb-2` or `mb-3` (from `mb-3` or `mb-4`) +- Button padding: `px-2.5 py-1.5` (from `px-3 py-2`) + +### Icons +- Section icons: `h-4 w-4` (from `h-5 w-5`) +- Button icons: `h-3 w-3` (from `h-4 w-4`) + +### Specific Components + +**Stats Cards**: +- Padding: `p-3` +- Title: `text-base mb-2` +- Content: `text-xs` with `space-y-1.5` + +**Operation Cards**: +- Padding: `p-3` +- Title: `text-sm` +- Description: `text-xs mb-2` +- Form elements: `text-xs` with `mb-2` + +**Buttons**: +- Padding: `px-2.5 py-1.5` +- Text: `text-xs` +- Icons: `h-3 w-3` + +**Alert Messages**: +- Padding: `p-3` +- Text: `text-sm` +- Icons: `h-4 w-4` + +## Benefits +- Consistent with monitoring and dashboard pages +- More operations visible without scrolling +- Cleaner, more compact appearance +- Better space utilization for data-heavy interface \ No newline at end of file diff --git a/apps/stock/web-app/src/features/pipeline/hooks/usePipeline.ts 
b/apps/stock/web-app/src/features/pipeline/hooks/usePipeline.ts new file mode 100644 index 0000000..9a14810 --- /dev/null +++ b/apps/stock/web-app/src/features/pipeline/hooks/usePipeline.ts @@ -0,0 +1,159 @@ +import { useCallback, useState } from 'react'; +import { pipelineApi } from '../services/pipelineApi'; +import type { + DataClearType, + ExchangeStats, + PipelineJobResult, + ProviderMappingStats, +} from '../types'; + +export function usePipeline() { + const [loading, setLoading] = useState(false); + const [error, setError] = useState<string | null>(null); + const [lastJobResult, setLastJobResult] = useState<PipelineJobResult | null>(null); + + const executeOperation = useCallback(async ( + operation: () => Promise<PipelineJobResult> + ): Promise<boolean> => { + try { + setLoading(true); + setError(null); + const result = await operation(); + setLastJobResult(result); + if (!result.success) { + setError(result.error || 'Operation failed'); + return false; + } + return true; + } catch (err) { + const errorMessage = err instanceof Error ? err.message : 'Unknown error occurred'; + setError(errorMessage); + setLastJobResult({ success: false, error: errorMessage }); + return false; + } finally { + setLoading(false); + } + }, []); + + // Symbol sync operations + const syncQMSymbols = useCallback( + () => executeOperation(() => pipelineApi.syncQMSymbols()), + [executeOperation] + ); + + const syncProviderSymbols = useCallback( + (provider: string) => executeOperation(() => pipelineApi.syncProviderSymbols(provider)), + [executeOperation] + ); + + // Exchange sync operations + const syncQMExchanges = useCallback( + () => executeOperation(() => pipelineApi.syncQMExchanges()), + [executeOperation] + ); + + const syncAllExchanges = useCallback( + (clearFirst: boolean = false) => + executeOperation(() => pipelineApi.syncAllExchanges(clearFirst)), + [executeOperation] + ); + + // Provider mapping sync operations + const syncQMProviderMappings = useCallback( + () => executeOperation(() => pipelineApi.syncQMProviderMappings()), + 
[executeOperation] + ); + + const syncIBExchanges = useCallback( + () => executeOperation(() => pipelineApi.syncIBExchanges()), + [executeOperation] + ); + + // Maintenance operations + const clearPostgreSQLData = useCallback( + (dataType: DataClearType = 'all') => + executeOperation(() => pipelineApi.clearPostgreSQLData(dataType)), + [executeOperation] + ); + + // Status and stats operations + const getSyncStatus = useCallback(async () => { + try { + setLoading(true); + setError(null); + const result = await pipelineApi.getSyncStatus(); + return result; + } catch (err) { + const errorMessage = err instanceof Error ? err.message : 'Failed to get sync status'; + setError(errorMessage); + return null; + } finally { + setLoading(false); + } + }, []); + + const getExchangeStats = useCallback(async (): Promise<ExchangeStats | null> => { + try { + setLoading(true); + setError(null); + const result = await pipelineApi.getExchangeStats(); + if (result.success && result.data) { + return result.data as ExchangeStats; + } + setError(result.error || 'Failed to get exchange stats'); + return null; + } catch (err) { + const errorMessage = err instanceof Error ? err.message : 'Failed to get exchange stats'; + setError(errorMessage); + return null; + } finally { + setLoading(false); + } + }, []); + + const getProviderMappingStats = useCallback(async (): Promise<ProviderMappingStats | null> => { + try { + setLoading(true); + setError(null); + const result = await pipelineApi.getProviderMappingStats(); + if (result.success && result.data) { + return result.data as ProviderMappingStats; + } + setError(result.error || 'Failed to get provider mapping stats'); + return null; + } catch (err) { + const errorMessage = err instanceof Error ? 
err.message : 'Failed to get provider mapping stats'; + setError(errorMessage); + return null; + } finally { + setLoading(false); + } + }, []); + + return { + // State + loading, + error, + lastJobResult, + + // Symbol operations + syncQMSymbols, + syncProviderSymbols, + + // Exchange operations + syncQMExchanges, + syncAllExchanges, + + // Provider mapping operations + syncQMProviderMappings, + syncIBExchanges, + + // Maintenance operations + clearPostgreSQLData, + + // Status and stats operations + getSyncStatus, + getExchangeStats, + getProviderMappingStats, + }; +} \ No newline at end of file diff --git a/apps/stock/web-app/src/features/pipeline/index.ts b/apps/stock/web-app/src/features/pipeline/index.ts new file mode 100644 index 0000000..c4040e8 --- /dev/null +++ b/apps/stock/web-app/src/features/pipeline/index.ts @@ -0,0 +1,3 @@ +export { PipelinePage } from './PipelinePage'; +export * from './hooks/usePipeline'; +export * from './types'; \ No newline at end of file diff --git a/apps/stock/web-app/src/features/pipeline/services/pipelineApi.ts b/apps/stock/web-app/src/features/pipeline/services/pipelineApi.ts new file mode 100644 index 0000000..cfb97fe --- /dev/null +++ b/apps/stock/web-app/src/features/pipeline/services/pipelineApi.ts @@ -0,0 +1,82 @@ +import type { + DataClearType, + PipelineJobResult, + PipelineStatsResult, +} from '../types'; + +const API_BASE_URL = import.meta.env.VITE_API_BASE_URL || 'http://localhost:2003'; + +class PipelineApiService { + private async request<T>( + endpoint: string, + options?: RequestInit + ): Promise<T> { + const url = `${API_BASE_URL}/pipeline${endpoint}`; + + const response = await fetch(url, { + headers: { + 'Content-Type': 'application/json', + ...options?.headers, + }, + ...options, + }); + + const data = await response.json(); + + if (!response.ok) { + throw new Error(data.error || `HTTP ${response.status}: ${response.statusText}`); + } + + return data; + } + + // Symbol sync operations + async syncQMSymbols(): 
Promise<PipelineJobResult> { + return this.request<PipelineJobResult>('/symbols', { method: 'POST' }); + } + + async syncProviderSymbols(provider: string): Promise<PipelineJobResult> { + return this.request<PipelineJobResult>(`/symbols/${provider}`, { method: 'POST' }); + } + + // Exchange sync operations + async syncQMExchanges(): Promise<PipelineJobResult> { + return this.request<PipelineJobResult>('/exchanges', { method: 'POST' }); + } + + async syncAllExchanges(clearFirst: boolean = false): Promise<PipelineJobResult> { + const params = clearFirst ? '?clear=true' : ''; + return this.request<PipelineJobResult>(`/exchanges/all${params}`, { method: 'POST' }); + } + + // Provider mapping sync operations + async syncQMProviderMappings(): Promise<PipelineJobResult> { + return this.request<PipelineJobResult>('/provider-mappings/qm', { method: 'POST' }); + } + + async syncIBExchanges(): Promise<PipelineJobResult> { + return this.request<PipelineJobResult>('/provider-mappings/ib', { method: 'POST' }); + } + + // Status and maintenance operations + async getSyncStatus(): Promise<PipelineStatsResult> { + return this.request<PipelineStatsResult>('/status'); + } + + async clearPostgreSQLData(dataType: DataClearType = 'all'): Promise<PipelineJobResult> { + const params = `?type=${dataType}`; + return this.request<PipelineJobResult>(`/clear/postgresql${params}`, { method: 'POST' }); + } + + // Statistics operations + async getExchangeStats(): Promise<PipelineStatsResult> { + return this.request<PipelineStatsResult>('/stats/exchanges'); + } + + async getProviderMappingStats(): Promise<PipelineStatsResult> { + return this.request<PipelineStatsResult>('/stats/provider-mappings'); + } +} + +// Export singleton instance +export const pipelineApi = new PipelineApiService(); \ No newline at end of file diff --git a/apps/stock/web-app/src/features/pipeline/types/index.ts b/apps/stock/web-app/src/features/pipeline/types/index.ts new file mode 100644 index 0000000..d22be10 --- /dev/null +++ b/apps/stock/web-app/src/features/pipeline/types/index.ts @@ -0,0 +1,58 @@ +// Pipeline API types + +export interface PipelineJobResult { + success: boolean; + jobId?: string; + message?: string; + error?: string; + data?: unknown; +} + +export interface PipelineStatsResult { + success: boolean; + data?: unknown; + error?: string; +} + +export interface ExchangeStats { + 
totalExchanges: number; + activeExchanges: number; + totalProviderMappings: number; + activeProviderMappings: number; + verifiedProviderMappings: number; + providers: string[]; +} + +export interface ProviderMappingStats { + totalMappings: number; + activeMappings: number; + verifiedMappings: number; + autoMappedCount: number; + mappingsByProvider: Record<string, number>; + coveragePercentage: number; +} + +export interface SyncStatus { + lastSync?: { + symbols?: string; + exchanges?: string; + providerMappings?: string; + }; + pendingJobs?: number; + activeJobs?: number; + completedJobs?: number; + failedJobs?: number; +} + +export type DataClearType = 'exchanges' | 'provider_mappings' | 'all'; + +export interface PipelineOperation { + id: string; + name: string; + description: string; + endpoint: string; + method: 'GET' | 'POST'; + category: 'sync' | 'stats' | 'maintenance'; + dangerous?: boolean; + params?: Record<string, string>; +} \ No newline at end of file diff --git a/apps/web-app/src/index.css b/apps/stock/web-app/src/index.css similarity index 100% rename from apps/web-app/src/index.css rename to apps/stock/web-app/src/index.css diff --git a/apps/web-app/src/lib/constants.ts b/apps/stock/web-app/src/lib/constants.ts similarity index 53% rename from apps/web-app/src/lib/constants.ts rename to apps/stock/web-app/src/lib/constants.ts index f1478d5..7501445 100644 --- a/apps/web-app/src/lib/constants.ts +++ b/apps/stock/web-app/src/lib/constants.ts @@ -5,13 +5,31 @@ import { DocumentTextIcon, HomeIcon, PresentationChartLineIcon, + ServerStackIcon, + CircleStackIcon, + ChartPieIcon, } from '@heroicons/react/24/outline'; -export const navigation = [ +export interface NavigationItem { + name: string; + href?: string; + icon: React.ComponentType<React.SVGProps<SVGSVGElement>>; + children?: NavigationItem[]; +} + +export const navigation: NavigationItem[] = [ { name: 'Dashboard', href: '/dashboard', icon: HomeIcon }, { name: 'Exchanges', href: '/exchanges', icon: BuildingLibraryIcon }, { name: 'Portfolio', href: 
'/portfolio', icon: ChartBarIcon }, { name: 'Strategies', href: '/strategies', icon: DocumentTextIcon }, { name: 'Analytics', href: '/analytics', icon: PresentationChartLineIcon }, + { + name: 'System', + icon: ServerStackIcon, + children: [ + { name: 'Monitoring', href: '/system/monitoring', icon: ChartPieIcon }, + { name: 'Pipeline', href: '/system/pipeline', icon: CircleStackIcon }, + ] + }, { name: 'Settings', href: '/settings', icon: CogIcon }, ]; diff --git a/apps/web-app/src/lib/constants/index.ts b/apps/stock/web-app/src/lib/constants/index.ts similarity index 100% rename from apps/web-app/src/lib/constants/index.ts rename to apps/stock/web-app/src/lib/constants/index.ts diff --git a/apps/web-app/src/lib/constants/navigation.ts b/apps/stock/web-app/src/lib/constants/navigation.ts similarity index 86% rename from apps/web-app/src/lib/constants/navigation.ts rename to apps/stock/web-app/src/lib/constants/navigation.ts index 3bd3353..1351bee 100644 --- a/apps/web-app/src/lib/constants/navigation.ts +++ b/apps/stock/web-app/src/lib/constants/navigation.ts @@ -5,6 +5,7 @@ import { CurrencyDollarIcon, DocumentTextIcon, HomeIcon, + ServerIcon, } from '@heroicons/react/24/outline'; export const navigation = [ @@ -38,6 +39,12 @@ export const navigation = [ icon: DocumentTextIcon, current: false, }, + { + name: 'System', + href: '/system/monitoring', + icon: ServerIcon, + current: false, + }, { name: 'Settings', href: '/settings', diff --git a/apps/web-app/src/lib/utils.ts b/apps/stock/web-app/src/lib/utils.ts similarity index 86% rename from apps/web-app/src/lib/utils.ts rename to apps/stock/web-app/src/lib/utils.ts index be7fef6..db2551b 100644 --- a/apps/web-app/src/lib/utils.ts +++ b/apps/stock/web-app/src/lib/utils.ts @@ -19,7 +19,11 @@ export function formatPercentage(value: number): string { } export function getValueColor(value: number): string { - if (value > 0) {return 'text-success';} - if (value < 0) {return 'text-danger';} + if (value > 0) { + return 
'text-success'; + } + if (value < 0) { + return 'text-danger'; + } return 'text-text-secondary'; } diff --git a/apps/web-app/src/lib/utils/cn.ts b/apps/stock/web-app/src/lib/utils/cn.ts similarity index 100% rename from apps/web-app/src/lib/utils/cn.ts rename to apps/stock/web-app/src/lib/utils/cn.ts diff --git a/apps/web-app/src/lib/utils/index.ts b/apps/stock/web-app/src/lib/utils/index.ts similarity index 72% rename from apps/web-app/src/lib/utils/index.ts rename to apps/stock/web-app/src/lib/utils/index.ts index e365e72..483f89a 100644 --- a/apps/web-app/src/lib/utils/index.ts +++ b/apps/stock/web-app/src/lib/utils/index.ts @@ -23,9 +23,15 @@ export function formatPercentage(value: number, decimals = 2): string { * Format large numbers with K, M, B suffixes */ export function formatNumber(num: number): string { - if (num >= 1e9) {return (num / 1e9).toFixed(1) + 'B';} - if (num >= 1e6) {return (num / 1e6).toFixed(1) + 'M';} - if (num >= 1e3) {return (num / 1e3).toFixed(1) + 'K';} + if (num >= 1e9) { + return (num / 1e9).toFixed(1) + 'B'; + } + if (num >= 1e6) { + return (num / 1e6).toFixed(1) + 'M'; + } + if (num >= 1e3) { + return (num / 1e3).toFixed(1) + 'K'; + } return num.toString(); } @@ -33,8 +39,12 @@ export function formatNumber(num: number): string { * Get color class based on numeric value (profit/loss) */ export function getValueColor(value: number): string { - if (value > 0) {return 'text-success';} - if (value < 0) {return 'text-danger';} + if (value > 0) { + return 'text-success'; + } + if (value < 0) { + return 'text-danger'; + } return 'text-text-secondary'; } @@ -42,6 +52,8 @@ export function getValueColor(value: number): string { * Truncate text to specified length */ export function truncateText(text: string, length: number): string { - if (text.length <= length) {return text;} + if (text.length <= length) { + return text; + } return text.slice(0, length) + '...'; } diff --git a/apps/web-app/src/main.tsx b/apps/stock/web-app/src/main.tsx 
similarity index 100% rename from apps/web-app/src/main.tsx rename to apps/stock/web-app/src/main.tsx diff --git a/apps/web-app/tailwind.config.js b/apps/stock/web-app/tailwind.config.js similarity index 100% rename from apps/web-app/tailwind.config.js rename to apps/stock/web-app/tailwind.config.js diff --git a/apps/web-app/tsconfig.json b/apps/stock/web-app/tsconfig.json similarity index 94% rename from apps/web-app/tsconfig.json rename to apps/stock/web-app/tsconfig.json index 0c1ef41..145593e 100644 --- a/apps/web-app/tsconfig.json +++ b/apps/stock/web-app/tsconfig.json @@ -1,5 +1,5 @@ { - "extends": "../../tsconfig.json", + "extends": "../../../tsconfig.json", "compilerOptions": { "target": "ES2020", "useDefineForClassFields": true, diff --git a/apps/web-app/tsconfig.node.json b/apps/stock/web-app/tsconfig.node.json similarity index 100% rename from apps/web-app/tsconfig.node.json rename to apps/stock/web-app/tsconfig.node.json diff --git a/apps/web-app/vite.config.ts b/apps/stock/web-app/vite.config.ts similarity index 100% rename from apps/web-app/vite.config.ts rename to apps/stock/web-app/vite.config.ts diff --git a/apps/web-api/config/default.json b/apps/web-api/config/default.json deleted file mode 100644 index dacdb89..0000000 --- a/apps/web-api/config/default.json +++ /dev/null @@ -1,15 +0,0 @@ -{ - "service": { - "name": "web-api", - "port": 4000, - "host": "0.0.0.0", - "healthCheckPath": "/health", - "metricsPath": "/metrics", - "shutdownTimeout": 30000, - "cors": { - "enabled": true, - "origin": ["http://localhost:4200", "http://localhost:3000", "http://localhost:3002"], - "credentials": true - } - } -} \ No newline at end of file diff --git a/apps/web-api/src/clients.ts b/apps/web-api/src/clients.ts deleted file mode 100644 index 5dfe266..0000000 --- a/apps/web-api/src/clients.ts +++ /dev/null @@ -1,27 +0,0 @@ -import { PostgreSQLClient } from '@stock-bot/postgres-client'; -import { MongoDBClient } from '@stock-bot/mongodb-client'; - -let 
postgresClient: PostgreSQLClient | null = null; -let mongodbClient: MongoDBClient | null = null; - -export function setPostgreSQLClient(client: PostgreSQLClient): void { - postgresClient = client; -} - -export function getPostgreSQLClient(): PostgreSQLClient { - if (!postgresClient) { - throw new Error('PostgreSQL client not initialized. Call setPostgreSQLClient first.'); - } - return postgresClient; -} - -export function setMongoDBClient(client: MongoDBClient): void { - mongodbClient = client; -} - -export function getMongoDBClient(): MongoDBClient { - if (!mongodbClient) { - throw new Error('MongoDB client not initialized. Call setMongoDBClient first.'); - } - return mongodbClient; -} \ No newline at end of file diff --git a/apps/web-api/src/index.ts b/apps/web-api/src/index.ts deleted file mode 100644 index e4b5f3d..0000000 --- a/apps/web-api/src/index.ts +++ /dev/null @@ -1,175 +0,0 @@ -/** - * Stock Bot Web API - REST API service for web application - */ -import { Hono } from 'hono'; -import { cors } from 'hono/cors'; -import { initializeServiceConfig } from '@stock-bot/config'; -import { getLogger, setLoggerConfig, shutdownLoggers } from '@stock-bot/logger'; -import { createAndConnectMongoDBClient, MongoDBClient } from '@stock-bot/mongodb-client'; -import { createAndConnectPostgreSQLClient, PostgreSQLClient } from '@stock-bot/postgres-client'; -import { Shutdown } from '@stock-bot/shutdown'; -import { exchangeRoutes } from './routes/exchange.routes'; -import { healthRoutes } from './routes/health.routes'; -// Import routes -import { setMongoDBClient, setPostgreSQLClient } from './clients'; - -// Initialize configuration with automatic monorepo config inheritance -const config = await initializeServiceConfig(); -const serviceConfig = config.service; -const databaseConfig = config.database; - -// Initialize logger with config -const loggingConfig = config.logging; -if (loggingConfig) { - setLoggerConfig({ - logLevel: loggingConfig.level, - logConsole: true, - 
logFile: false, - environment: config.environment, - }); -} - -const app = new Hono(); - -// Add CORS middleware -app.use( - '*', - cors({ - origin: ['http://localhost:4200', 'http://localhost:3000', 'http://localhost:3002'], // React dev server ports - allowMethods: ['GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'OPTIONS'], - allowHeaders: ['Content-Type', 'Authorization'], - credentials: true, - }) -); - -const logger = getLogger('web-api'); -const PORT = serviceConfig.port; -let server: ReturnType<typeof Bun.serve> | null = null; -let postgresClient: PostgreSQLClient | null = null; -let mongoClient: MongoDBClient | null = null; - -// Initialize shutdown manager -const shutdown = Shutdown.getInstance({ timeout: 15000 }); - -// Add routes -app.route('/health', healthRoutes); -app.route('/api/exchanges', exchangeRoutes); - -// Basic API info endpoint -app.get('/', c => { - return c.json({ - name: 'Stock Bot Web API', - version: '1.0.0', - status: 'running', - timestamp: new Date().toISOString(), - endpoints: { - health: '/health', - exchanges: '/api/exchanges', - }, - }); -}); - -// Initialize services -async function initializeServices() { - logger.info('Initializing web API service...'); - - try { - // Initialize MongoDB client - logger.debug('Connecting to MongoDB...'); - const mongoConfig = databaseConfig.mongodb; - mongoClient = await createAndConnectMongoDBClient({ - uri: mongoConfig.uri, - database: mongoConfig.database, - host: mongoConfig.host, - port: mongoConfig.port, - timeouts: { - connectTimeout: 30000, - socketTimeout: 30000, - serverSelectionTimeout: 5000, - }, - }); - setMongoDBClient(mongoClient); - logger.info('MongoDB connected'); - - // Initialize PostgreSQL client - logger.debug('Connecting to PostgreSQL...'); - const pgConfig = databaseConfig.postgres; - postgresClient = await createAndConnectPostgreSQLClient({ - host: pgConfig.host, - port: pgConfig.port, - database: pgConfig.database, - username: pgConfig.user, - password: pgConfig.password, - poolSettings: { -
min: 2, - max: pgConfig.poolSize || 10, - idleTimeoutMillis: pgConfig.idleTimeout || 30000, - }, - }); - setPostgreSQLClient(postgresClient); - logger.info('PostgreSQL connected'); - - logger.info('All services initialized successfully'); - } catch (error) { - logger.error('Failed to initialize services', { error }); - throw error; - } -} - -// Start server -async function startServer() { - await initializeServices(); - - server = Bun.serve({ - port: PORT, - fetch: app.fetch, - development: config.environment === 'development', - }); - - logger.info(`Stock Bot Web API started on port ${PORT}`); -} - -// Register shutdown handlers -shutdown.onShutdown(async () => { - if (server) { - logger.info('Stopping HTTP server...'); - try { - server.stop(); - logger.info('HTTP server stopped'); - } catch (error) { - logger.error('Error stopping HTTP server', { error }); - } - } -}); - -shutdown.onShutdown(async () => { - logger.info('Disconnecting from databases...'); - try { - if (mongoClient) { - await mongoClient.disconnect(); - } - if (postgresClient) { - await postgresClient.disconnect(); - } - logger.info('Database connections closed'); - } catch (error) { - logger.error('Error closing database connections', { error }); - } -}); - -shutdown.onShutdown(async () => { - try { - await shutdownLoggers(); - // process.stdout.write('Web API loggers shut down\n'); - } catch (error) { - process.stderr.write(`Error shutting down loggers: ${error}\n`); - } -}); - -// Start the service -startServer().catch(error => { - logger.error('Failed to start web API service', { error }); - process.exit(1); -}); - -logger.info('Web API service startup initiated'); diff --git a/apps/web-api/src/routes/exchange.routes.ts b/apps/web-api/src/routes/exchange.routes.ts deleted file mode 100644 index 3e189fe..0000000 --- a/apps/web-api/src/routes/exchange.routes.ts +++ /dev/null @@ -1,273 +0,0 @@ -/** - * Exchange management routes - Refactored - */ -import { Hono } from 'hono'; -import { getLogger } 
from '@stock-bot/logger'; -import { exchangeService } from '../services/exchange.service'; -import { - validateCreateExchange, - validateUpdateExchange, - validateCreateProviderMapping, - validateUpdateProviderMapping, -} from '../utils/validation'; -import { handleError, createSuccessResponse } from '../utils/error-handler'; - -const logger = getLogger('exchange-routes'); -export const exchangeRoutes = new Hono(); - -// Get all exchanges with provider mapping counts and mappings -exchangeRoutes.get('/', async c => { - logger.debug('Getting all exchanges'); - try { - const exchanges = await exchangeService.getAllExchanges(); - logger.info('Successfully retrieved exchanges', { count: exchanges.length }); - return c.json(createSuccessResponse(exchanges, undefined, exchanges.length)); - } catch (error) { - logger.error('Failed to get exchanges', { error }); - return handleError(c, error, 'to get exchanges'); - } -}); - -// Get exchange by ID with detailed provider mappings -exchangeRoutes.get('/:id', async c => { - const exchangeId = c.req.param('id'); - logger.debug('Getting exchange by ID', { exchangeId }); - - try { - const result = await exchangeService.getExchangeById(exchangeId); - - if (!result) { - logger.warn('Exchange not found', { exchangeId }); - return c.json(createSuccessResponse(null, 'Exchange not found'), 404); - } - - logger.info('Successfully retrieved exchange details', { - exchangeId, - exchangeCode: result.exchange.code, - mappingCount: result.provider_mappings.length - }); - return c.json(createSuccessResponse(result)); - } catch (error) { - logger.error('Failed to get exchange details', { error, exchangeId }); - return handleError(c, error, 'to get exchange details'); - } -}); - -// Create new exchange -exchangeRoutes.post('/', async c => { - logger.debug('Creating new exchange'); - - try { - const body = await c.req.json(); - logger.debug('Received exchange creation request', { requestBody: body }); - - const validatedData = 
validateCreateExchange(body); - logger.debug('Exchange data validated successfully', { validatedData }); - - const exchange = await exchangeService.createExchange(validatedData); - logger.info('Exchange created successfully', { - exchangeId: exchange.id, - code: exchange.code, - name: exchange.name - }); - - return c.json( - createSuccessResponse(exchange, 'Exchange created successfully'), - 201 - ); - } catch (error) { - logger.error('Failed to create exchange', { error }); - return handleError(c, error, 'to create exchange'); - } -}); - -// Update exchange (activate/deactivate, rename, etc.) -exchangeRoutes.patch('/:id', async c => { - const exchangeId = c.req.param('id'); - logger.debug('Updating exchange', { exchangeId }); - - try { - const body = await c.req.json(); - logger.debug('Received exchange update request', { exchangeId, updates: body }); - - const validatedUpdates = validateUpdateExchange(body); - logger.debug('Exchange update data validated', { exchangeId, validatedUpdates }); - - const exchange = await exchangeService.updateExchange(exchangeId, validatedUpdates); - - if (!exchange) { - logger.warn('Exchange not found for update', { exchangeId }); - return c.json(createSuccessResponse(null, 'Exchange not found'), 404); - } - - logger.info('Exchange updated successfully', { - exchangeId, - code: exchange.code, - updates: validatedUpdates - }); - - // Log special actions - if (validatedUpdates.visible === false) { - logger.warn('Exchange marked as hidden - provider mappings will be deleted', { - exchangeId, - code: exchange.code - }); - } - - return c.json(createSuccessResponse(exchange, 'Exchange updated successfully')); - } catch (error) { - logger.error('Failed to update exchange', { error, exchangeId }); - return handleError(c, error, 'to update exchange'); - } -}); - -// Get all provider mappings -exchangeRoutes.get('/provider-mappings/all', async c => { - logger.debug('Getting all provider mappings'); - - try { - const mappings = await 
exchangeService.getAllProviderMappings(); - logger.info('Successfully retrieved all provider mappings', { count: mappings.length }); - return c.json(createSuccessResponse(mappings, undefined, mappings.length)); - } catch (error) { - logger.error('Failed to get provider mappings', { error }); - return handleError(c, error, 'to get provider mappings'); - } -}); - -// Get provider mappings by provider -exchangeRoutes.get('/provider-mappings/:provider', async c => { - const provider = c.req.param('provider'); - logger.debug('Getting provider mappings by provider', { provider }); - - try { - const mappings = await exchangeService.getProviderMappingsByProvider(provider); - logger.info('Successfully retrieved provider mappings', { provider, count: mappings.length }); - - return c.json( - createSuccessResponse( - mappings, - undefined, - mappings.length - ) - ); - } catch (error) { - logger.error('Failed to get provider mappings', { error, provider }); - return handleError(c, error, 'to get provider mappings'); - } -}); - -// Update provider mapping (activate/deactivate, verify, change confidence) -exchangeRoutes.patch('/provider-mappings/:id', async c => { - const mappingId = c.req.param('id'); - logger.debug('Updating provider mapping', { mappingId }); - - try { - const body = await c.req.json(); - logger.debug('Received provider mapping update request', { mappingId, updates: body }); - - const validatedUpdates = validateUpdateProviderMapping(body); - logger.debug('Provider mapping update data validated', { mappingId, validatedUpdates }); - - const mapping = await exchangeService.updateProviderMapping(mappingId, validatedUpdates); - - if (!mapping) { - logger.warn('Provider mapping not found for update', { mappingId }); - return c.json(createSuccessResponse(null, 'Provider mapping not found'), 404); - } - - logger.info('Provider mapping updated successfully', { - mappingId, - provider: mapping.provider, - providerExchangeCode: mapping.provider_exchange_code, - updates: 
validatedUpdates - }); - - return c.json(createSuccessResponse(mapping, 'Provider mapping updated successfully')); - } catch (error) { - logger.error('Failed to update provider mapping', { error, mappingId }); - return handleError(c, error, 'to update provider mapping'); - } -}); - -// Create new provider mapping -exchangeRoutes.post('/provider-mappings', async c => { - logger.debug('Creating new provider mapping'); - - try { - const body = await c.req.json(); - logger.debug('Received provider mapping creation request', { requestBody: body }); - - const validatedData = validateCreateProviderMapping(body); - logger.debug('Provider mapping data validated successfully', { validatedData }); - - const mapping = await exchangeService.createProviderMapping(validatedData); - logger.info('Provider mapping created successfully', { - mappingId: mapping.id, - provider: mapping.provider, - providerExchangeCode: mapping.provider_exchange_code, - masterExchangeId: mapping.master_exchange_id - }); - - return c.json( - createSuccessResponse(mapping, 'Provider mapping created successfully'), - 201 - ); - } catch (error) { - logger.error('Failed to create provider mapping', { error }); - return handleError(c, error, 'to create provider mapping'); - } -}); - -// Get all available providers -exchangeRoutes.get('/providers/list', async c => { - logger.debug('Getting providers list'); - - try { - const providers = await exchangeService.getProviders(); - logger.info('Successfully retrieved providers list', { count: providers.length, providers }); - return c.json(createSuccessResponse(providers)); - } catch (error) { - logger.error('Failed to get providers list', { error }); - return handleError(c, error, 'to get providers list'); - } -}); - -// Get unmapped provider exchanges by provider -exchangeRoutes.get('/provider-exchanges/unmapped/:provider', async c => { - const provider = c.req.param('provider'); - logger.debug('Getting unmapped provider exchanges', { provider }); - - try { - 
const exchanges = await exchangeService.getUnmappedProviderExchanges(provider); - logger.info('Successfully retrieved unmapped provider exchanges', { - provider, - count: exchanges.length - }); - - return c.json( - createSuccessResponse( - exchanges, - undefined, - exchanges.length - ) - ); - } catch (error) { - logger.error('Failed to get unmapped provider exchanges', { error, provider }); - return handleError(c, error, 'to get unmapped provider exchanges'); - } -}); - -// Get exchange statistics -exchangeRoutes.get('/stats/summary', async c => { - logger.debug('Getting exchange statistics'); - - try { - const stats = await exchangeService.getExchangeStats(); - logger.info('Successfully retrieved exchange statistics', { stats }); - return c.json(createSuccessResponse(stats)); - } catch (error) { - logger.error('Failed to get exchange statistics', { error }); - return handleError(c, error, 'to get exchange statistics'); - } -}); \ No newline at end of file diff --git a/apps/web-api/src/routes/health.routes.ts b/apps/web-api/src/routes/health.routes.ts deleted file mode 100644 index 9ace743..0000000 --- a/apps/web-api/src/routes/health.routes.ts +++ /dev/null @@ -1,98 +0,0 @@ -/** - * Health check routes - */ -import { Hono } from 'hono'; -import { getLogger } from '@stock-bot/logger'; -import { getPostgreSQLClient, getMongoDBClient } from '../clients'; - -const logger = getLogger('health-routes'); -export const healthRoutes = new Hono(); - -// Basic health check -healthRoutes.get('/', c => { - logger.debug('Basic health check requested'); - - const response = { - status: 'healthy', - service: 'web-api', - timestamp: new Date().toISOString(), - }; - - logger.info('Basic health check successful', { status: response.status }); - return c.json(response); -}); - -// Detailed health check with database connectivity -healthRoutes.get('/detailed', async c => { - logger.debug('Detailed health check requested'); - - const health = { - status: 'healthy', - service: 'web-api', 
- timestamp: new Date().toISOString(), - checks: { - mongodb: { status: 'unknown', message: '' }, - postgresql: { status: 'unknown', message: '' }, - }, - }; - - // Check MongoDB - logger.debug('Checking MongoDB connectivity'); - try { - const mongoClient = getMongoDBClient(); - if (mongoClient.connected) { - // Try a simple operation - const db = mongoClient.getDatabase(); - await db.admin().ping(); - health.checks.mongodb = { status: 'healthy', message: 'Connected and responsive' }; - logger.debug('MongoDB health check passed'); - } else { - health.checks.mongodb = { status: 'unhealthy', message: 'Not connected' }; - logger.warn('MongoDB health check failed - not connected'); - } - } catch (error) { - const errorMessage = error instanceof Error ? error.message : 'Unknown error'; - health.checks.mongodb = { - status: 'unhealthy', - message: errorMessage, - }; - logger.error('MongoDB health check failed', { error: errorMessage }); - } - - // Check PostgreSQL - logger.debug('Checking PostgreSQL connectivity'); - try { - const postgresClient = getPostgreSQLClient(); - await postgresClient.query('SELECT 1'); - health.checks.postgresql = { status: 'healthy', message: 'Connected and responsive' }; - logger.debug('PostgreSQL health check passed'); - } catch (error) { - const errorMessage = error instanceof Error ? error.message : 'Unknown error'; - health.checks.postgresql = { - status: 'unhealthy', - message: errorMessage, - }; - logger.error('PostgreSQL health check failed', { error: errorMessage }); - } - - // Overall status - const allHealthy = Object.values(health.checks).every(check => check.status === 'healthy'); - health.status = allHealthy ? 'healthy' : 'unhealthy'; - - const statusCode = allHealthy ? 
200 : 503; - - if (allHealthy) { - logger.info('Detailed health check successful - all systems healthy', { - mongodb: health.checks.mongodb.status, - postgresql: health.checks.postgresql.status - }); - } else { - logger.warn('Detailed health check failed - some systems unhealthy', { - mongodb: health.checks.mongodb.status, - postgresql: health.checks.postgresql.status, - overallStatus: health.status - }); - } - - return c.json(health, statusCode); -}); diff --git a/apps/web-api/tsconfig.json b/apps/web-api/tsconfig.json deleted file mode 100644 index d9f09df..0000000 --- a/apps/web-api/tsconfig.json +++ /dev/null @@ -1,14 +0,0 @@ -{ - "extends": "../../tsconfig.app.json", - "references": [ - { "path": "../../libs/types" }, - { "path": "../../libs/config" }, - { "path": "../../libs/logger" }, - { "path": "../../libs/cache" }, - { "path": "../../libs/queue" }, - { "path": "../../libs/mongodb-client" }, - { "path": "../../libs/postgres-client" }, - { "path": "../../libs/questdb-client" }, - { "path": "../../libs/shutdown" } - ] -} diff --git a/apps/web-app/src/components/layout/Sidebar.tsx b/apps/web-app/src/components/layout/Sidebar.tsx deleted file mode 100644 index f6822b0..0000000 --- a/apps/web-app/src/components/layout/Sidebar.tsx +++ /dev/null @@ -1,124 +0,0 @@ -import { navigation } from '@/lib/constants'; -import { cn } from '@/lib/utils'; -import { Dialog, Transition } from '@headlessui/react'; -import { XMarkIcon } from '@heroicons/react/24/outline'; -import { Fragment } from 'react'; -import { NavLink } from 'react-router-dom'; - -interface SidebarProps { - sidebarOpen: boolean; - setSidebarOpen: (open: boolean) => void; -} - -export function Sidebar({ sidebarOpen, setSidebarOpen }: SidebarProps) { - return ( - <> - {/* Mobile sidebar */} - - - -
- - -
- - - -
- -
-
- - -
-
-
-
-
- - {/* Static sidebar for desktop */} -
- -
- - ); -} - -function SidebarContent() { - return ( -
-
-

Stock Bot

-
- -
- ); -} diff --git a/apps/web-app/src/features/exchanges/components/AddSourceDialog.tsx b/apps/web-app/src/features/exchanges/components/AddSourceDialog.tsx deleted file mode 100644 index bc44c1b..0000000 --- a/apps/web-app/src/features/exchanges/components/AddSourceDialog.tsx +++ /dev/null @@ -1,216 +0,0 @@ -import { Dialog, Transition } from '@headlessui/react'; -import { XMarkIcon } from '@heroicons/react/24/outline'; -import React, { useState } from 'react'; -import { AddSourceRequest } from '../types'; - -interface AddSourceDialogProps { - isOpen: boolean; - onClose: () => void; - onAddSource: (request: AddSourceRequest) => Promise<void>; - exchangeId: string; - exchangeName: string; -} - -export function AddSourceDialog({ - isOpen, - onClose, - onAddSource, - exchangeName, -}: AddSourceDialogProps) { - const [source, setSource] = useState(''); - const [sourceCode, setSourceCode] = useState(''); - const [id, setId] = useState(''); - const [name, setName] = useState(''); - const [code, setCode] = useState(''); - const [aliases, setAliases] = useState(''); - const [loading, setLoading] = useState(false); - - const handleSubmit = async (e: React.FormEvent) => { - e.preventDefault(); - if (!source || !sourceCode || !id || !name || !code) {return;} - - setLoading(true); - try { - await onAddSource({ - source, - source_code: sourceCode, - mapping: { - id, - name, - code, - aliases: aliases - .split(',') - .map(a => a.trim()) - .filter(Boolean), - }, - }); - - // Reset form - setSource(''); - setSourceCode(''); - setId(''); - setName(''); - setCode(''); - setAliases(''); - } catch (_error) { - // TODO: Implement proper error handling/toast notification - // eslint-disable-next-line no-console - console.error('Error adding source:', _error); - } finally { - setLoading(false); - } - }; - - return ( - - - -
- - -
-
- - -
- - Add Source to {exchangeName} - - -
- -
-
- - -
- -
- - setSourceCode(e.target.value)} - className="w-full bg-surface border border-border rounded px-3 py-2 text-text-primary focus:ring-1 focus:ring-primary-500 focus:border-primary-500" - placeholder="e.g., IB, ALP, POLY" - required - /> -
- -
- - setId(e.target.value)} - className="w-full bg-surface border border-border rounded px-3 py-2 text-text-primary focus:ring-1 focus:ring-primary-500 focus:border-primary-500" - placeholder="e.g., NYSE, NASDAQ" - required - /> -
- -
- - setName(e.target.value)} - className="w-full bg-surface border border-border rounded px-3 py-2 text-text-primary focus:ring-1 focus:ring-primary-500 focus:border-primary-500" - placeholder="e.g., New York Stock Exchange" - required - /> -
- -
- - setCode(e.target.value)} - className="w-full bg-surface border border-border rounded px-3 py-2 text-text-primary focus:ring-1 focus:ring-primary-500 focus:border-primary-500" - placeholder="e.g., NYSE" - required - /> -
- -
- - setAliases(e.target.value)} - className="w-full bg-surface border border-border rounded px-3 py-2 text-text-primary focus:ring-1 focus:ring-primary-500 focus:border-primary-500" - placeholder="e.g., NYSE, New York, Big Board" - /> -
- -
- - -
-
-
-
-
-
-
-
- ); -} diff --git a/apps/web-app/src/features/exchanges/hooks/useFormValidation.ts b/apps/web-app/src/features/exchanges/hooks/useFormValidation.ts deleted file mode 100644 index 41f4bf1..0000000 --- a/apps/web-app/src/features/exchanges/hooks/useFormValidation.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { useState, useCallback } from 'react'; -import { FormErrors } from '../types'; - -export function useFormValidation<T>( - initialData: T, - validateFn: (data: T) => FormErrors -) { - const [formData, setFormData] = useState(initialData); - const [errors, setErrors] = useState<FormErrors>({}); - const [isSubmitting, setIsSubmitting] = useState(false); - - const updateField = useCallback((field: keyof T, value: T[keyof T]) => { - setFormData(prev => ({ ...prev, [field]: value })); - - // Clear error when user starts typing - if (errors[field as string]) { - setErrors(prev => ({ ...prev, [field as string]: '' })); - } - }, [errors]); - - const validate = useCallback((): boolean => { - const newErrors = validateFn(formData); - setErrors(newErrors); - return Object.keys(newErrors).length === 0; - }, [formData, validateFn]); - - const reset = useCallback(() => { - setFormData(initialData); - setErrors({}); - setIsSubmitting(false); - }, [initialData]); - - const handleSubmit = useCallback(async ( - onSubmit: (data: T) => Promise<void>, - onSuccess?: () => void, - onError?: (error: unknown) => void - ) => { - if (!validate()) {return;} - - setIsSubmitting(true); - try { - await onSubmit(formData); - reset(); - onSuccess?.(); - } catch (error) { - onError?.(error); - } finally { - setIsSubmitting(false); - } - }, [formData, validate, reset]); - - return { - formData, - errors, - isSubmitting, - updateField, - validate, - reset, - handleSubmit, - setIsSubmitting, - }; -} \ No newline at end of file diff --git a/apps/web-app/src/features/exchanges/types/index.ts b/apps/web-app/src/features/exchanges/types/index.ts deleted file mode 100644 index f5d7cbc..0000000 --- 
a/apps/web-app/src/features/exchanges/types/index.ts +++ /dev/null @@ -1,7 +0,0 @@ -// Re-export all types from organized files -export * from './api.types'; -export * from './request.types'; -export * from './component.types'; - -// Legacy compatibility - can be removed later -export type ExchangesApiResponse = import('./api.types').ApiResponse; diff --git a/bun.lock b/bun.lock index 8c938a1..1730ad7 100644 --- a/bun.lock +++ b/bun.lock @@ -7,6 +7,7 @@ "@primeng/themes": "^19.1.3", "@tanstack/table-core": "^8.21.3", "@types/pg": "^8.15.4", + "awilix": "^12.0.5", "bullmq": "^5.53.2", "ioredis": "^5.6.1", "pg": "^8.16.0", @@ -20,8 +21,7 @@ "@modelcontextprotocol/server-postgres": "^0.6.2", "@testcontainers/mongodb": "^10.7.2", "@testcontainers/postgresql": "^10.7.2", - "@types/bun": "latest", - "@types/node": "^22.15.30", + "@types/bun": "^1.2.17", "@types/supertest": "^6.0.2", "@types/yup": "^0.32.0", "@typescript-eslint/eslint-plugin": "^8.34.0", @@ -36,60 +36,92 @@ "pg-mem": "^2.8.1", "prettier": "^3.5.3", "supertest": "^6.3.4", + "ts-unused-exports": "^11.0.1", "turbo": "^2.5.4", "typescript": "^5.8.3", "yup": "^1.6.1", }, }, - "apps/data-service": { - "name": "@stock-bot/data-service", + "apps/stock": { + "name": "@stock-bot/stock-app", + "version": "1.0.0", + "devDependencies": { + "@types/node": "^20.11.0", + "pm2": "^5.3.0", + "turbo": "^2.5.4", + "typescript": "^5.3.3", + }, + }, + "apps/stock/config": { + "name": "@stock-bot/stock-config", + "version": "1.0.0", + "dependencies": { + "@stock-bot/config": "*", + "zod": "^3.22.4", + }, + "devDependencies": { + "@types/node": "^20.11.0", + "typescript": "^5.3.3", + }, + }, + "apps/stock/data-ingestion": { + "name": "@stock-bot/data-ingestion", + "version": "1.0.0", + "dependencies": { + "@stock-bot/cache": "*", + "@stock-bot/config": "*", + "@stock-bot/di": "*", + "@stock-bot/handlers": "*", + "@stock-bot/logger": "*", + "@stock-bot/mongodb": "*", + "@stock-bot/postgres": "*", + "@stock-bot/questdb": "*", + 
"@stock-bot/queue": "*", + "@stock-bot/shutdown": "*", + "@stock-bot/stock-config": "*", + "@stock-bot/utils": "*", + "hono": "^4.0.0", + }, + "devDependencies": { + "typescript": "^5.0.0", + }, + }, + "apps/stock/data-pipeline": { + "name": "@stock-bot/data-pipeline", "version": "1.0.0", "dependencies": { "@stock-bot/cache": "*", "@stock-bot/config": "*", "@stock-bot/logger": "*", - "@stock-bot/mongodb-client": "*", - "@stock-bot/postgres-client": "*", - "@stock-bot/questdb-client": "*", + "@stock-bot/mongodb": "*", + "@stock-bot/postgres": "*", + "@stock-bot/questdb": "*", "@stock-bot/queue": "*", "@stock-bot/shutdown": "*", + "@stock-bot/stock-config": "*", "hono": "^4.0.0", }, "devDependencies": { "typescript": "^5.0.0", }, }, - "apps/data-sync-service": { - "name": "@stock-bot/data-sync-service", - "version": "1.0.0", - "dependencies": { - "@stock-bot/config": "*", - "@stock-bot/logger": "*", - "@stock-bot/mongodb-client": "*", - "@stock-bot/postgres-client": "*", - "@stock-bot/shutdown": "*", - "hono": "^4.0.0", - }, - "devDependencies": { - "typescript": "^5.0.0", - }, - }, - "apps/web-api": { + "apps/stock/web-api": { "name": "@stock-bot/web-api", "version": "1.0.0", "dependencies": { "@stock-bot/config": "*", "@stock-bot/logger": "*", - "@stock-bot/mongodb-client": "*", - "@stock-bot/postgres-client": "*", + "@stock-bot/mongodb": "*", + "@stock-bot/postgres": "*", "@stock-bot/shutdown": "*", + "@stock-bot/stock-config": "*", "hono": "^4.0.0", }, "devDependencies": { "typescript": "^5.0.0", }, }, - "apps/web-app": { + "apps/stock/web-app": { "name": "@stock-bot/web-app", "version": "0.1.0", "dependencies": { @@ -126,22 +158,7 @@ "vite": "^4.4.5", }, }, - "libs/browser": { - "name": "@stock-bot/browser", - "version": "1.0.0", - "dependencies": { - "playwright": "^1.53.0", - }, - "devDependencies": { - "@types/node": "^20.0.0", - "typescript": "^5.0.0", - }, - "peerDependencies": { - "@stock-bot/http": "workspace:*", - "@stock-bot/logger": "workspace:*", - }, 
- }, - "libs/cache": { + "libs/core/cache": { "name": "@stock-bot/cache", "version": "1.0.0", "dependencies": { @@ -154,7 +171,7 @@ "typescript": "^5.3.0", }, }, - "libs/config": { + "libs/core/config": { "name": "@stock-bot/config", "version": "1.0.0", "bin": { @@ -169,7 +186,19 @@ "typescript": "^5.3.3", }, }, - "libs/event-bus": { + "libs/core/di": { + "name": "@stock-bot/di", + "version": "1.0.0", + "dependencies": { + "@stock-bot/config": "workspace:*", + "@stock-bot/logger": "workspace:*", + "@stock-bot/types": "workspace:*", + }, + "devDependencies": { + "@types/pg": "^8.10.7", + }, + }, + "libs/core/event-bus": { "name": "@stock-bot/event-bus", "version": "1.0.0", "dependencies": { @@ -183,29 +212,23 @@ "typescript": "^5.3.0", }, }, - "libs/http": { - "name": "@stock-bot/http", + "libs/core/handlers": { + "name": "@stock-bot/handlers", "version": "1.0.0", "dependencies": { - "@stock-bot/logger": "*", - "@stock-bot/types": "*", - "axios": "^1.9.0", - "http-proxy-agent": "^7.0.2", - "https-proxy-agent": "^7.0.6", - "socks-proxy-agent": "^8.0.5", - "user-agents": "^1.1.567", + "@stock-bot/cache": "workspace:*", + "@stock-bot/config": "workspace:*", + "@stock-bot/logger": "workspace:*", + "@stock-bot/types": "workspace:*", + "@stock-bot/utils": "workspace:*", }, "devDependencies": { "@types/node": "^20.11.0", - "@types/user-agents": "^1.0.4", - "@typescript-eslint/eslint-plugin": "^6.19.0", - "@typescript-eslint/parser": "^6.19.0", "bun-types": "^1.2.15", - "eslint": "^8.56.0", "typescript": "^5.3.0", }, }, - "libs/logger": { + "libs/core/logger": { "name": "@stock-bot/logger", "version": "1.0.0", "dependencies": { @@ -220,8 +243,43 @@ "typescript": "^5.3.0", }, }, - "libs/mongodb-client": { - "name": "@stock-bot/mongodb-client", + "libs/core/queue": { + "name": "@stock-bot/queue", + "version": "1.0.0", + "dependencies": { + "@stock-bot/cache": "*", + "@stock-bot/handlers": "*", + "@stock-bot/logger": "*", + "@stock-bot/types": "*", + "bullmq": "^5.0.0", + 
"ioredis": "^5.3.0", + "rate-limiter-flexible": "^3.0.0", + }, + "devDependencies": { + "@types/node": "^20.0.0", + "testcontainers": "^10.0.0", + "typescript": "^5.3.0", + }, + }, + "libs/core/shutdown": { + "name": "@stock-bot/shutdown", + "version": "1.0.0", + "devDependencies": { + "@types/node": "^20.0.0", + "typescript": "^5.0.0", + }, + }, + "libs/core/types": { + "name": "@stock-bot/types", + "version": "1.0.0", + "devDependencies": { + "@types/node": "^20.11.0", + "bun-types": "^1.2.15", + "typescript": "^5.3.0", + }, + }, + "libs/data/mongodb": { + "name": "@stock-bot/mongodb", "version": "1.0.0", "dependencies": { "@stock-bot/logger": "*", @@ -238,8 +296,8 @@ "typescript": "^5.3.0", }, }, - "libs/postgres-client": { - "name": "@stock-bot/postgres-client", + "libs/data/postgres": { + "name": "@stock-bot/postgres", "version": "1.0.0", "dependencies": { "@stock-bot/logger": "*", @@ -256,8 +314,8 @@ "typescript": "^5.3.0", }, }, - "libs/questdb-client": { - "name": "@stock-bot/questdb-client", + "libs/data/questdb": { + "name": "@stock-bot/questdb", "version": "1.0.0", "dependencies": { "@stock-bot/logger": "*", @@ -273,46 +331,39 @@ "typescript": "^5.3.0", }, }, - "libs/queue": { - "name": "@stock-bot/queue", + "libs/services/browser": { + "name": "@stock-bot/browser", "version": "1.0.0", "dependencies": { - "@stock-bot/cache": "*", - "@stock-bot/logger": "*", - "@stock-bot/types": "*", - "bullmq": "^5.0.0", - "ioredis": "^5.3.0", - "rate-limiter-flexible": "^3.0.0", + "playwright": "^1.53.0", }, - "devDependencies": { - "@types/node": "^20.0.0", - "testcontainers": "^10.0.0", - "typescript": "^5.3.0", - }, - }, - "libs/shutdown": { - "name": "@stock-bot/shutdown", - "version": "1.0.0", "devDependencies": { "@types/node": "^20.0.0", "typescript": "^5.0.0", }, + "peerDependencies": { + "@stock-bot/logger": "workspace:*", + }, }, - "libs/types": { - "name": "@stock-bot/types", - "version": "1.0.0", + "libs/services/proxy": { + "name": "@stock-bot/proxy", + 
"version": "0.1.0", + "dependencies": { + "@stock-bot/cache": "workspace:*", + "@stock-bot/logger": "workspace:*", + }, "devDependencies": { - "@types/node": "^20.11.0", - "bun-types": "^1.2.15", - "typescript": "^5.3.0", + "typescript": "^5.0.0", }, }, "libs/utils": { "name": "@stock-bot/utils", "version": "1.0.0", "dependencies": { - "@stock-bot/types": "*", - "date-fns": "^2.30.0", + "@stock-bot/cache": "workspace:*", + "@stock-bot/config": "workspace:*", + "@stock-bot/logger": "workspace:*", + "@stock-bot/types": "workspace:*", }, "devDependencies": { "@types/node": "^20.11.0", @@ -375,7 +426,7 @@ "@aws-sdk/credential-provider-web-identity": ["@aws-sdk/credential-provider-web-identity@3.830.0", "", { "dependencies": { "@aws-sdk/core": "3.826.0", "@aws-sdk/nested-clients": "3.830.0", "@aws-sdk/types": "3.821.0", "@smithy/property-provider": "^4.0.4", "@smithy/types": "^4.3.1", "tslib": "^2.6.2" } }, "sha512-hPYrKsZeeOdLROJ59T6Y8yZ0iwC/60L3qhZXjapBFjbqBtMaQiMTI645K6xVXBioA6vxXq7B4aLOhYqk6Fy/Ww=="], - "@aws-sdk/credential-providers": ["@aws-sdk/credential-providers@3.830.0", "", { "dependencies": { "@aws-sdk/client-cognito-identity": "3.830.0", "@aws-sdk/core": "3.826.0", "@aws-sdk/credential-provider-cognito-identity": "3.830.0", "@aws-sdk/credential-provider-env": "3.826.0", "@aws-sdk/credential-provider-http": "3.826.0", "@aws-sdk/credential-provider-ini": "3.830.0", "@aws-sdk/credential-provider-node": "3.830.0", "@aws-sdk/credential-provider-process": "3.826.0", "@aws-sdk/credential-provider-sso": "3.830.0", "@aws-sdk/credential-provider-web-identity": "3.830.0", "@aws-sdk/nested-clients": "3.830.0", "@aws-sdk/types": "3.821.0", "@smithy/config-resolver": "^4.1.4", "@smithy/core": "^3.5.3", "@smithy/credential-provider-imds": "^4.0.6", "@smithy/node-config-provider": "^4.1.3", "@smithy/property-provider": "^4.0.4", "@smithy/types": "^4.3.1", "tslib": "^2.6.2" } }, 
"sha512-Q16Yf52L9QWsRhaaG/Q6eUkUWGUrbKTM2ba8at8ZZ8tsGaKO5pYgXUTErxB1bin11S6JszinbLqUf9G9oUExxA=="], + "@aws-sdk/credential-providers": ["@aws-sdk/credential-providers@3.834.0", "", { "dependencies": { "@aws-sdk/client-cognito-identity": "3.830.0", "@aws-sdk/core": "3.826.0", "@aws-sdk/credential-provider-cognito-identity": "3.830.0", "@aws-sdk/credential-provider-env": "3.826.0", "@aws-sdk/credential-provider-http": "3.826.0", "@aws-sdk/credential-provider-ini": "3.830.0", "@aws-sdk/credential-provider-node": "3.830.0", "@aws-sdk/credential-provider-process": "3.826.0", "@aws-sdk/credential-provider-sso": "3.830.0", "@aws-sdk/credential-provider-web-identity": "3.830.0", "@aws-sdk/nested-clients": "3.830.0", "@aws-sdk/types": "3.821.0", "@smithy/config-resolver": "^4.1.4", "@smithy/core": "^3.5.3", "@smithy/credential-provider-imds": "^4.0.6", "@smithy/node-config-provider": "^4.1.3", "@smithy/property-provider": "^4.0.4", "@smithy/types": "^4.3.1", "tslib": "^2.6.2" } }, "sha512-ORIWCrLuqJnJg0fuI0rPhwaeuzqnIIJsbSkg1WV2XuiOpWXwLC/CfzhAbelQAv07/sRywZMnKqws0OOWg/ieYg=="], "@aws-sdk/middleware-host-header": ["@aws-sdk/middleware-host-header@3.821.0", "", { "dependencies": { "@aws-sdk/types": "3.821.0", "@smithy/protocol-http": "^5.1.2", "@smithy/types": "^4.3.1", "tslib": "^2.6.2" } }, "sha512-xSMR+sopSeWGx5/4pAGhhfMvGBHioVBbqGvDs6pG64xfNwM5vq5s5v6D04e2i+uSTj4qGa71dLUs5I0UzAK3sw=="], @@ -561,11 +612,11 @@ "@mongodb-js/device-id": ["@mongodb-js/device-id@0.2.1", "", {}, "sha512-kC/F1/ryJMNeIt+n7CATAf9AL/X5Nz1Tju8VseyViL2DF640dmF/JQwWmjakpsSTy5X9TVNOkG9ye4Mber8GHQ=="], - "@mongodb-js/devtools-connect": ["@mongodb-js/devtools-connect@3.8.1", "", { "dependencies": { "@mongodb-js/devtools-proxy-support": "^0.5.0", "@mongodb-js/oidc-http-server-pages": "1.1.5", "lodash.merge": "^4.6.2", "mongodb-connection-string-url": "^3.0.0", "socks": "^2.7.3" }, "optionalDependencies": { "kerberos": "^2.1.0", "mongodb-client-encryption": "^6.1.0", "os-dns-native": "^1.2.0", 
"resolve-mongodb-srv": "^1.1.1" }, "peerDependencies": { "@mongodb-js/oidc-plugin": "^1.1.0", "mongodb": "^6.9.0", "mongodb-log-writer": "^2.4.1" } }, "sha512-NJn30GU/WpqYAHKvl9O9fwR2UTljEXuvb+ig9Xvwclh7LIdcKy5Eftvp72qSLdGWElQZxxcBiUCbQ0CrbL3qSg=="], + "@mongodb-js/devtools-connect": ["@mongodb-js/devtools-connect@3.8.2", "", { "dependencies": { "@mongodb-js/devtools-proxy-support": "^0.5.1", "@mongodb-js/oidc-http-server-pages": "1.1.6", "lodash.merge": "^4.6.2", "mongodb-connection-string-url": "^3.0.0", "socks": "^2.7.3" }, "optionalDependencies": { "kerberos": "^2.1.0", "mongodb-client-encryption": "^6.1.0", "os-dns-native": "^1.2.0", "resolve-mongodb-srv": "^1.1.1" }, "peerDependencies": { "@mongodb-js/oidc-plugin": "^1.1.0", "mongodb": "^6.9.0", "mongodb-log-writer": "^2.4.1" } }, "sha512-wiu3mg69q3hIrzB7c/QWiiJqqgVu76GfyYxw9h5MjUCuP6sEErYXoAHvZo1K75obnUX986i/GV9c/6oPLhxfkA=="], - "@mongodb-js/devtools-proxy-support": ["@mongodb-js/devtools-proxy-support@0.5.0", "", { "dependencies": { "@mongodb-js/socksv5": "^0.0.10", "agent-base": "^7.1.1", "debug": "^4.4.0", "http-proxy-agent": "^7.0.2", "https-proxy-agent": "^7.0.5", "lru-cache": "^11.0.0", "node-fetch": "^3.3.2", "pac-proxy-agent": "^7.0.2", "socks-proxy-agent": "^8.0.4", "ssh2": "^1.15.0", "system-ca": "^2.0.1" } }, "sha512-45vloh7iNanpHZbVGooZq26pjM41iV077Q62m+HmGNMIAkH3U6oswHR1gSPJrEr74F+Wl8KZhpWR7+GaeEFq+Q=="], + "@mongodb-js/devtools-proxy-support": ["@mongodb-js/devtools-proxy-support@0.5.1", "", { "dependencies": { "@mongodb-js/socksv5": "^0.0.10", "agent-base": "^7.1.1", "debug": "^4.4.0", "http-proxy-agent": "^7.0.2", "https-proxy-agent": "^7.0.5", "lru-cache": "^11.0.0", "node-fetch": "^3.3.2", "pac-proxy-agent": "^7.0.2", "socks-proxy-agent": "^8.0.4", "ssh2": "^1.15.0", "system-ca": "^2.0.1" } }, "sha512-snIekrl3yj6fPnk6UfTIrBj8Wt43hvjqf7XhGaw1Qcn55BOClE7FgXcJjLXOIDsEMuzdGtLnJji+GbW2uD2ulg=="], - "@mongodb-js/oidc-http-server-pages": ["@mongodb-js/oidc-http-server-pages@1.1.5", "", {}, 
"sha512-Csr/qLGxhlOV/o+6DVwsO5yk7cS9ggI12rNNF5nt17iSVBPpYs5hIjdPwvqzOZCNwvnqDbzTqWYhmFKTqM+3bw=="], + "@mongodb-js/oidc-http-server-pages": ["@mongodb-js/oidc-http-server-pages@1.1.6", "", {}, "sha512-ZR/IZi/jI81TRas5X9kzN9t2GZI6u9JdawKctdCoXCrtyvQmRU6ktviCcvXGLwjcZnIWEWbZM7bkpnEdITYSCw=="], "@mongodb-js/oidc-plugin": ["@mongodb-js/oidc-plugin@1.1.8", "", { "dependencies": { "express": "^4.18.2", "open": "^9.1.0", "openid-client": "^5.6.4" } }, "sha512-83H6SuUm4opxYqEc81AJBXEXlTMO9qnMGXidQFpB2Qwo4MmQtJN4UVm4notqwTBb/ysf410tspUGXy+QLu7xJQ=="], @@ -605,6 +656,14 @@ "@pkgjs/parseargs": ["@pkgjs/parseargs@0.11.0", "", {}, "sha512-+1VkjdD0QBLPodGrJUeqarH8VAIvQODIbwh9XpP5Syisf7YoQgsJKPNFoqqLQlu+VQ/tVSshMR6loPMn8U+dPg=="], + "@pm2/agent": ["@pm2/agent@2.0.4", "", { "dependencies": { "async": "~3.2.0", "chalk": "~3.0.0", "dayjs": "~1.8.24", "debug": "~4.3.1", "eventemitter2": "~5.0.1", "fast-json-patch": "^3.0.0-1", "fclone": "~1.0.11", "nssocket": "0.6.0", "pm2-axon": "~4.0.1", "pm2-axon-rpc": "~0.7.0", "proxy-agent": "~6.3.0", "semver": "~7.5.0", "ws": "~7.5.10" } }, "sha512-n7WYvvTJhHLS2oBb1PjOtgLpMhgImOq8sXkPBw6smeg9LJBWZjiEgPKOpR8mn9UJZsB5P3W4V/MyvNnp31LKeA=="], + + "@pm2/io": ["@pm2/io@6.0.1", "", { "dependencies": { "async": "~2.6.1", "debug": "~4.3.1", "eventemitter2": "^6.3.1", "require-in-the-middle": "^5.0.0", "semver": "~7.5.4", "shimmer": "^1.2.0", "signal-exit": "^3.0.3", "tslib": "1.9.3" } }, "sha512-KiA+shC6sULQAr9mGZ1pg+6KVW9MF8NpG99x26Lf/082/Qy8qsTCtnJy+HQReW1A9Rdf0C/404cz0RZGZro+IA=="], + + "@pm2/js-api": ["@pm2/js-api@0.8.0", "", { "dependencies": { "async": "^2.6.3", "debug": "~4.3.1", "eventemitter2": "^6.3.1", "extrareqp2": "^1.0.0", "ws": "^7.0.0" } }, "sha512-nmWzrA/BQZik3VBz+npRcNIu01kdBhWL0mxKmP1ciF/gTcujPTQqt027N9fc1pK9ERM8RipFhymw7RcmCyOEYA=="], + + "@pm2/pm2-version-check": ["@pm2/pm2-version-check@1.0.4", "", { "dependencies": { "debug": "^4.3.1" } }, 
"sha512-SXsM27SGH3yTWKc2fKR4SYNxsmnvuBQ9dd6QHtEWmiZ/VqaOYPAIlS8+vMcn27YLtAEBGvNRSh3TPNvtjZgfqA=="], + "@primeng/themes": ["@primeng/themes@19.1.3", "", { "dependencies": { "@primeuix/styled": "^0.3.2" } }, "sha512-y4VryHHUTPWlmfR56NBANC0QPIxEngTUE/J3pGs4SJquq1n5EE/U16dxa1qW/wXqLF3jn3l/AO/4KZqGj5UuAA=="], "@primeuix/styled": ["@primeuix/styled@0.3.2", "", { "dependencies": { "@primeuix/utils": "^0.3.2" } }, "sha512-ColZes0+/WKqH4ob2x8DyNYf1NENpe5ZguOvx5yCLxaP8EIMVhLjWLO/3umJiDnQU4XXMLkn2mMHHw+fhTX/mw=="], @@ -723,39 +782,47 @@ "@smithy/util-utf8": ["@smithy/util-utf8@4.0.0", "", { "dependencies": { "@smithy/util-buffer-from": "^4.0.0", "tslib": "^2.6.2" } }, "sha512-b+zebfKCfRdgNJDknHCob3O7FpeYQN6ZG6YLExMcasDHsCXlsXCEuiPZeLnJLpwa5dvPetGlnGCiMHuLwGvFow=="], - "@stock-bot/browser": ["@stock-bot/browser@workspace:libs/browser"], + "@stock-bot/browser": ["@stock-bot/browser@workspace:libs/services/browser"], - "@stock-bot/cache": ["@stock-bot/cache@workspace:libs/cache"], + "@stock-bot/cache": ["@stock-bot/cache@workspace:libs/core/cache"], - "@stock-bot/config": ["@stock-bot/config@workspace:libs/config"], + "@stock-bot/config": ["@stock-bot/config@workspace:libs/core/config"], - "@stock-bot/data-service": ["@stock-bot/data-service@workspace:apps/data-service"], + "@stock-bot/data-ingestion": ["@stock-bot/data-ingestion@workspace:apps/stock/data-ingestion"], - "@stock-bot/data-sync-service": ["@stock-bot/data-sync-service@workspace:apps/data-sync-service"], + "@stock-bot/data-pipeline": ["@stock-bot/data-pipeline@workspace:apps/stock/data-pipeline"], - "@stock-bot/event-bus": ["@stock-bot/event-bus@workspace:libs/event-bus"], + "@stock-bot/di": ["@stock-bot/di@workspace:libs/core/di"], - "@stock-bot/http": ["@stock-bot/http@workspace:libs/http"], + "@stock-bot/event-bus": ["@stock-bot/event-bus@workspace:libs/core/event-bus"], - "@stock-bot/logger": ["@stock-bot/logger@workspace:libs/logger"], + "@stock-bot/handlers": 
["@stock-bot/handlers@workspace:libs/core/handlers"], - "@stock-bot/mongodb-client": ["@stock-bot/mongodb-client@workspace:libs/mongodb-client"], + "@stock-bot/logger": ["@stock-bot/logger@workspace:libs/core/logger"], - "@stock-bot/postgres-client": ["@stock-bot/postgres-client@workspace:libs/postgres-client"], + "@stock-bot/mongodb": ["@stock-bot/mongodb@workspace:libs/data/mongodb"], - "@stock-bot/questdb-client": ["@stock-bot/questdb-client@workspace:libs/questdb-client"], + "@stock-bot/postgres": ["@stock-bot/postgres@workspace:libs/data/postgres"], - "@stock-bot/queue": ["@stock-bot/queue@workspace:libs/queue"], + "@stock-bot/proxy": ["@stock-bot/proxy@workspace:libs/services/proxy"], - "@stock-bot/shutdown": ["@stock-bot/shutdown@workspace:libs/shutdown"], + "@stock-bot/questdb": ["@stock-bot/questdb@workspace:libs/data/questdb"], - "@stock-bot/types": ["@stock-bot/types@workspace:libs/types"], + "@stock-bot/queue": ["@stock-bot/queue@workspace:libs/core/queue"], + + "@stock-bot/shutdown": ["@stock-bot/shutdown@workspace:libs/core/shutdown"], + + "@stock-bot/stock-app": ["@stock-bot/stock-app@workspace:apps/stock"], + + "@stock-bot/stock-config": ["@stock-bot/stock-config@workspace:apps/stock/config"], + + "@stock-bot/types": ["@stock-bot/types@workspace:libs/core/types"], "@stock-bot/utils": ["@stock-bot/utils@workspace:libs/utils"], - "@stock-bot/web-api": ["@stock-bot/web-api@workspace:apps/web-api"], + "@stock-bot/web-api": ["@stock-bot/web-api@workspace:apps/stock/web-api"], - "@stock-bot/web-app": ["@stock-bot/web-app@workspace:apps/web-app"], + "@stock-bot/web-app": ["@stock-bot/web-app@workspace:apps/stock/web-app"], "@szmarczak/http-timer": ["@szmarczak/http-timer@5.0.1", "", { "dependencies": { "defer-to-connect": "^2.0.1" } }, "sha512-+PmQX0PiAYPMeVYe237LJAYvOMYW1j2rH5YROyS3b4CTVJum34HfRvKvAzozHAQG0TnHNdUfY9nCeUyRAs//cw=="], @@ -781,7 +848,7 @@ "@types/babel__traverse": ["@types/babel__traverse@7.20.7", "", { "dependencies": { "@babel/types": 
"^7.20.7" } }, "sha512-dkO5fhS7+/oos4ciWxyEyjWe48zmG6wbCheo/G2ZnHx4fs3EU6YC6UM8rk56gAjNJ9P3MTH2jo5jb92/K6wbng=="], - "@types/bun": ["@types/bun@1.2.16", "", { "dependencies": { "bun-types": "1.2.16" } }, "sha512-1aCZJ/6nSiViw339RsaNhkNoEloLaPzZhxMOYEa7OzRzO41IGg5n/7I43/ZIAW/c+Q6cT12Vf7fOZOoVIzb5BQ=="], + "@types/bun": ["@types/bun@1.2.17", "", { "dependencies": { "bun-types": "1.2.17" } }, "sha512-l/BYs/JYt+cXA/0+wUhulYJB6a6p//GTPiJ7nV+QHa8iiId4HZmnu/3J/SowP5g0rTiERY2kfGKXEK5Ehltx4Q=="], "@types/cookiejar": ["@types/cookiejar@2.1.5", "", {}, "sha512-he+DHOWReW0nghN24E1WUqM0efK4kI9oTqDm6XmK8ZPe2djZ90BSNdGnIyCLzCPw7/pogPlGbzI2wHGGmi4O/Q=="], @@ -803,7 +870,7 @@ "@types/mongodb": ["@types/mongodb@4.0.7", "", { "dependencies": { "mongodb": "*" } }, "sha512-lPUYPpzA43baXqnd36cZ9xxorprybxXDzteVKCPAdp14ppHtFJHnXYvNpmBvtMUTb5fKXVv6sVbzo1LHkWhJlw=="], - "@types/node": ["@types/node@22.15.32", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-3jigKqgSjsH6gYZv2nEsqdXfZqIFGAV36XYYjf9KGZ3PSG+IhLecqPnI310RvjutyMwifE2hhhNEklOUrvx/wA=="], + "@types/node": ["@types/node@20.19.1", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-jJD50LtlD2dodAEO653i3YF04NWak6jN3ky+Ri3Em3mGR39/glWiboM/IePaRbgwSfqM1TpGXfAg8ohn/4dTgA=="], "@types/pg": ["@types/pg@8.15.4", "", { "dependencies": { "@types/node": "*", "pg-protocol": "*", "pg-types": "^2.2.0" } }, "sha512-I6UNVBAoYbvuWkkU3oosC8yxqH21f4/Jc4DK71JLG3dT2mdlGe1z+ep/LQGXaKaOgcvUrsQoPRqfgtMcvZiJhg=="], @@ -823,7 +890,7 @@ "@types/semver": ["@types/semver@7.7.0", "", {}, "sha512-k107IF4+Xr7UHjwDc7Cfd6PRQfbdkiRabXGRjo07b4WyPahFBZCZ1sE+BNxYIJPPg73UkfOsVOLwqVc/6ETrIA=="], - "@types/ssh2": ["@types/ssh2@1.15.5", "", { "dependencies": { "@types/node": "^18.11.18" } }, "sha512-N1ASjp/nXH3ovBHddRJpli4ozpk6UdDYIX4RJWFa9L1YKnzdhTlVmiGHm4DZnj/jLbqZpes4aeR30EFGQtvhQQ=="], + "@types/ssh2": ["@types/ssh2@0.5.52", "", { "dependencies": { "@types/node": "*", "@types/ssh2-streams": "*" } }, 
"sha512-lbLLlXxdCZOSJMCInKH2+9V/77ET2J6NPQHpFI0kda61Dd1KglJs+fPQBchizmzYSOJBgdTajhPqBO1xxLywvg=="], "@types/ssh2-streams": ["@types/ssh2-streams@0.1.12", "", { "dependencies": { "@types/node": "*" } }, "sha512-Sy8tpEmCce4Tq0oSOYdfqaBpA3hDM8SoxoFh5vzFsu2oL+znzGz8oVWW7xb4K920yYMUY+PIG31qZnFMfPWNCg=="], @@ -831,8 +898,6 @@ "@types/supertest": ["@types/supertest@6.0.3", "", { "dependencies": { "@types/methods": "^1.1.4", "@types/superagent": "^8.1.0" } }, "sha512-8WzXq62EXFhJ7QsH3Ocb/iKQ/Ty9ZVWnVzoTKc9tyyFRRF3a74Tk2+TLFgaFFw364Ere+npzHKEJ6ga2LzIL7w=="], - "@types/user-agents": ["@types/user-agents@1.0.4", "", {}, "sha512-AjeFc4oX5WPPflgKfRWWJfkEk7Wu82fnj1rROPsiqFt6yElpdGFg8Srtm/4PU4rA9UiDUZlruGPgcwTMQlwq4w=="], - "@types/webidl-conversions": ["@types/webidl-conversions@7.0.3", "", {}, "sha512-CiJJvcRtIgzadHCYXw7dqEnMNRjhGZlYK05Mj9OyktqV8uVT8fD2BFOB7S1uwBE3Kj2Z+4UyPmFw/Ixgw/LAlA=="], "@types/whatwg-url": ["@types/whatwg-url@11.0.5", "", { "dependencies": { "@types/webidl-conversions": "*" } }, "sha512-coYR071JRaHa+xoEvvYqvnIHaVqaYrLPbsufM9BF63HkwI5Lgmy2QR8Q5K/lYDYo5AK82wOvSOS0UsLTpTG7uQ=="], @@ -875,6 +940,12 @@ "ajv": ["ajv@6.12.6", "", { "dependencies": { "fast-deep-equal": "^3.1.1", "fast-json-stable-stringify": "^2.0.0", "json-schema-traverse": "^0.4.1", "uri-js": "^4.2.2" } }, "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g=="], + "amp": ["amp@0.3.1", "", {}, "sha512-OwIuC4yZaRogHKiuU5WlMR5Xk/jAcpPtawWL05Gj8Lvm2F6mwoJt4O/bHI+DHwG79vWd+8OFYM4/BzYqyRd3qw=="], + + "amp-message": ["amp-message@0.1.2", "", { "dependencies": { "amp": "0.3.1" } }, "sha512-JqutcFwoU1+jhv7ArgW38bqrE+LQdcRv4NxNw0mp0JHQyB6tXesWRjtYKlDgHRY2o3JE5UTaBGUK8kSWUdxWUg=="], + + "ansi-colors": ["ansi-colors@4.1.3", "", {}, "sha512-/6w/C21Pm1A7aZitlI5Ni/2J6FFQN8i1Cvz3kHABAAbw93v/NlvKdVOqz7CCWz/3iv/JplRSEEZ83XION15ovw=="], + "ansi-regex": ["ansi-regex@5.0.1", "", {}, 
"sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="], "ansi-styles": ["ansi-styles@4.3.0", "", { "dependencies": { "color-convert": "^2.0.1" } }, "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg=="], @@ -933,9 +1004,9 @@ "available-typed-arrays": ["available-typed-arrays@1.0.7", "", { "dependencies": { "possible-typed-array-names": "^1.0.0" } }, "sha512-wvUjBtSGN7+7SjNpq/9M2Tg350UZD3q62IFZLbRAR1bSMlCo1ZaeW+BJ+D090e4hIIZLBcTDWe4Mh4jvUDajzQ=="], - "aws4": ["aws4@1.13.2", "", {}, "sha512-lHe62zvbTB5eEABUVi/AwVh0ZKY9rMMDhmm+eeyuuUQbQ3+J+fONVQOZyj+DdrvD4BY33uYniyRJ4UJIaSKAfw=="], + "awilix": ["awilix@12.0.5", "", { "dependencies": { "camel-case": "^4.1.2", "fast-glob": "^3.3.3" } }, "sha512-Qf/V/hRo6DK0FoBKJ9QiObasRxHAhcNi0mV6kW2JMawxS3zq6Un+VsZmVAZDUfvB+MjTEiJ2tUJUl4cr0JiUAw=="], - "axios": ["axios@1.10.0", "", { "dependencies": { "follow-redirects": "^1.15.6", "form-data": "^4.0.0", "proxy-from-env": "^1.1.0" } }, "sha512-/1xYAC4MP/HEG+3duIhFr4ZQXR4sQXOIe+o6sdqzeykGLx6Upp/1p8MHqhINOvGeP7xyNHe7tsiJByc4SSVUxw=="], + "aws4": ["aws4@1.13.2", "", {}, "sha512-lHe62zvbTB5eEABUVi/AwVh0ZKY9rMMDhmm+eeyuuUQbQ3+J+fONVQOZyj+DdrvD4BY33uYniyRJ4UJIaSKAfw=="], "b4a": ["b4a@1.6.7", "", {}, "sha512-OnAYlL5b7LEkALw87fUVafQw5rVR9RjwGd4KUwNQ6DrrNmaVaUCgLipfVlzrPQ4tWOR9P0IXGNOx50jYCCdSJg=="], @@ -965,6 +1036,10 @@ "bl": ["bl@4.1.0", "", { "dependencies": { "buffer": "^5.5.0", "inherits": "^2.0.4", "readable-stream": "^3.4.0" } }, "sha512-1W07cM9gS6DcLperZfFSj+bWLtaPGSOHWhPiGzXmvVJbRLdG82sH/Kn8EtW1VqWVA54AKf2h5k5BbnIbwF3h6w=="], + "blessed": ["blessed@0.1.81", "", { "bin": { "blessed": "./bin/tput.js" } }, "sha512-LoF5gae+hlmfORcG1M5+5XZi4LBmvlXTzwJWzUlPryN/SJdSflZvROM2TwkT0GMpq7oqT48NRd4GS7BiVBc5OQ=="], + + "bodec": ["bodec@0.1.0", "", {}, "sha512-Ylo+MAo5BDUq1KA3f3R/MFhh+g8cnHmo8bz3YPGhI1znrMaf77ol1sfvYJzsw3nTE+Y2GryfDxBaR+AqpAkEHQ=="], + "body-parser": ["body-parser@2.2.0", "", { "dependencies": { 
"bytes": "^3.1.2", "content-type": "^1.0.5", "debug": "^4.4.0", "http-errors": "^2.0.0", "iconv-lite": "^0.6.3", "on-finished": "^2.4.1", "qs": "^6.14.0", "raw-body": "^3.0.0", "type-is": "^2.0.0" } }, "sha512-02qvAaxv8tp7fBa/mw1ga98OGm+eCbqzJOKoRt70sLmfEEi+jyBYVTDGfCL/k06/4EMk/z01gCe7HoCH/f2LTg=="], "bowser": ["bowser@2.11.0", "", {}, "sha512-AlcaJBi/pqqJBIQ8U9Mcpc9i8Aqxn88Skv5d+xBX006BY5u8N3mGLHa5Lgppa7L/HfwgwLgZ6NYs+Ag6uUmJRA=="], @@ -983,11 +1058,13 @@ "buffer-crc32": ["buffer-crc32@1.0.0", "", {}, "sha512-Db1SbgBS/fg/392AblrMJk97KggmvYhr4pB5ZIMTWtaivCPMWLkmb7m21cJvpvgK+J3nsU2CmmixNBZx4vFj/w=="], + "buffer-from": ["buffer-from@1.1.2", "", {}, "sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ=="], + "buildcheck": ["buildcheck@0.0.6", "", {}, "sha512-8f9ZJCUXyT1M35Jx7MkBgmBMo3oHTTBIPLiY9xyL0pl3T5RwcPEY8cUHr5LBNfu/fk6c2T4DJZuVM/8ZZT2D2A=="], - "bullmq": ["bullmq@5.54.3", "", { "dependencies": { "cron-parser": "^4.9.0", "ioredis": "^5.4.1", "msgpackr": "^1.11.2", "node-abort-controller": "^3.1.1", "semver": "^7.5.4", "tslib": "^2.0.0", "uuid": "^9.0.0" } }, "sha512-MVK2pOkB3hvrIcubwI8dS4qWHJLNKakKPpgRBTw91sIpPZArmvZ4t2hvryyEaJXJbAS/JHd6pKYOUd+RGRkWQQ=="], + "bullmq": ["bullmq@5.55.0", "", { "dependencies": { "cron-parser": "^4.9.0", "ioredis": "^5.4.1", "msgpackr": "^1.11.2", "node-abort-controller": "^3.1.1", "semver": "^7.5.4", "tslib": "^2.0.0", "uuid": "^9.0.0" } }, "sha512-LKaQZroyXBYSQd/SNP9EcmCZgiZjIImtQHBlnupUvhX1GmmJfIXjn0bf8lek3bvajMUbvVf8FrYdFD0ajAuy0g=="], - "bun-types": ["bun-types@1.2.16", "", { "dependencies": { "@types/node": "*" } }, "sha512-ciXLrHV4PXax9vHvUrkvun9VPVGOVwbbbBF/Ev1cXz12lyEZMoJpIJABOfPcN9gDJRaiKF9MVbSygLg4NXu3/A=="], + "bun-types": ["bun-types@1.2.17", "", { "dependencies": { "@types/node": "*" } }, "sha512-ElC7ItwT3SCQwYZDYoAH+q6KT4Fxjl8DtZ6qDulUFBmXA8YB4xo+l54J9ZJN+k2pphfn9vk7kfubeSd5QfTVJQ=="], "bundle-name": ["bundle-name@3.0.0", "", { "dependencies": { "run-applescript": "^5.0.0" } }, 
"sha512-PKA4BeSvBpQKQ8iPOGCSiell+N8P+Tf1DlwqmYhpe2gAhKPHn8EYOxVT+ShuGmhg8lN8XiSlS80yiExKXrURlw=="], @@ -1007,20 +1084,26 @@ "callsites": ["callsites@3.1.0", "", {}, "sha512-P8BjAsXvZS+VIDUI11hHCQEv74YT67YUi5JJFNWIqL235sBmjX4+qx9Muvls5ivyNENctx46xQLQ3aTuE7ssaQ=="], + "camel-case": ["camel-case@4.1.2", "", { "dependencies": { "pascal-case": "^3.1.2", "tslib": "^2.0.3" } }, "sha512-gxGWBrTT1JuMx6R+o5PTXMmUnhnVzLQ9SNutD4YqKtI6ap897t3tKECYla6gCWEkplXnlNybEkZg9GEGxKFCgw=="], + "camelcase": ["camelcase@6.3.0", "", {}, "sha512-Gmy6FhYlCY7uOElZUSbxo2UCDH8owEk996gkbrpsgGtrJLM3J7jGxl9Ic7Qwwj4ivOE5AWZWRMecDdF7hqGjFA=="], "camelcase-css": ["camelcase-css@2.0.1", "", {}, "sha512-QOSvevhslijgYwRx6Rv7zKdMF8lbRmx+uQGx2+vDc+KI/eBnsy9kit5aj23AgGu3pa4t9AgwbnXWqS+iOY+2aA=="], - "caniuse-lite": ["caniuse-lite@1.0.30001723", "", {}, "sha512-1R/elMjtehrFejxwmexeXAtae5UO9iSyFn6G/I806CYC/BLyyBk1EPhrKBkWhy6wM6Xnm47dSJQec+tLJ39WHw=="], + "caniuse-lite": ["caniuse-lite@1.0.30001724", "", {}, "sha512-WqJo7p0TbHDOythNTqYujmaJTvtYRZrjpP8TCvH6Vb9CYJerJNKamKzIWOM4BkQatWj9H2lYulpdAQNBe7QhNA=="], "chalk": ["chalk@4.1.2", "", { "dependencies": { "ansi-styles": "^4.1.0", "supports-color": "^7.1.0" } }, "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA=="], + "charm": ["charm@0.1.2", "", {}, "sha512-syedaZ9cPe7r3hoQA9twWYKu5AIyCswN5+szkmPBe9ccdLrj4bYaCnLVPTLd2kgVRc7+zoX4tyPgRnFKCj5YjQ=="], + "chokidar": ["chokidar@3.6.0", "", { "dependencies": { "anymatch": "~3.1.2", "braces": "~3.0.2", "glob-parent": "~5.1.2", "is-binary-path": "~2.1.0", "is-glob": "~4.0.1", "normalize-path": "~3.0.0", "readdirp": "~3.6.0" }, "optionalDependencies": { "fsevents": "~2.3.2" } }, "sha512-7VT13fmjotKpGipCW9JEQAusEPE+Ei8nl6/g4FBAmIm0GOOLMua9NDDo/DWp0ZAxCr3cPq5ZpBqmPAQgDda2Pw=="], "chownr": ["chownr@1.1.4", "", {}, "sha512-jJ0bqzaylmJtVnNgzTeSOs8DPavpbYgEr/b0YL8/2GO3xJEhInFmhKMUnEJQjZumK7KXGFhUy89PrsJWlakBVg=="], "cli-table": ["cli-table@0.3.11", "", { "dependencies": { "colors": 
"1.0.3" } }, "sha512-IqLQi4lO0nIB4tcdTpN4LCB9FI3uqrJZK7RC515EnhZ6qBaglkIgICb1wjeAqpdoOabm1+SuQtkXIPdYC93jhQ=="], + "cli-tableau": ["cli-tableau@2.0.1", "", { "dependencies": { "chalk": "3.0.0" } }, "sha512-he+WTicka9cl0Fg/y+YyxcN6/bfQ/1O3QmgxRXDhABKqLzvoOSM4fMzp39uMyLBulAFuywD2N7UaoQE7WaADxQ=="], + "client-only": ["client-only@0.0.1", "", {}, "sha512-IV3Ou0jSMzZrd3pZ48nLkT9DA7Ag1pnPzaiQhpW7c3RbcqqzvzzVu+L8gfqMp/8IM2MQtSiqaCxrrcfu8I8rMA=="], "cliui": ["cliui@8.0.1", "", { "dependencies": { "string-width": "^4.2.0", "strip-ansi": "^6.0.1", "wrap-ansi": "^7.0.0" } }, "sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ=="], @@ -1039,7 +1122,7 @@ "combined-stream": ["combined-stream@1.0.8", "", { "dependencies": { "delayed-stream": "~1.0.0" } }, "sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg=="], - "commander": ["commander@4.1.1", "", {}, "sha512-NOKm8xhkzAjzFx8B2v5OAHT+u5pRQc2UCa2Vq9jYL/31o2wi9mxBA7LIFs3sV5VSC49z6pEhfbMULvShKj26WA=="], + "commander": ["commander@2.15.1", "", {}, "sha512-VlfT9F3V0v+jr4yxPc5gg9s62/fIVWsd2Bk2iD435um1NlGMYdVCq+MjcXnhYq2icNOizHr1kK+5TI6H0Hy0ag=="], "commondir": ["commondir@1.0.1", "", {}, "sha512-W9pAhw0ja1Edb5GVdIF1mjZw/ASI0AlShXM83UUGe2DVr5TdAPEA1OA8m/g8zWp9x6On7gqufY+FatDbC3MDQg=="], @@ -1073,12 +1156,16 @@ "cron-parser": ["cron-parser@4.9.0", "", { "dependencies": { "luxon": "^3.2.1" } }, "sha512-p0SaNjrHOnQeR8/VnfGbmg9te2kfyYSQ7Sc/j/6DtPL3JQvKxmjO9TSjNFpujqV3vEYYBvNNvXSxzyksBWAx1Q=="], + "croner": ["croner@4.1.97", "", {}, "sha512-/f6gpQuxDaqXu+1kwQYSckUglPaOrHdbIlBAu0YuW8/Cdb45XwXYNUBXg3r/9Mo6n540Kn/smKcZWko5x99KrQ=="], + "cross-spawn": ["cross-spawn@7.0.6", "", { "dependencies": { "path-key": "^3.1.0", "shebang-command": "^2.0.0", "which": "^2.0.1" } }, "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA=="], "cssesc": ["cssesc@3.0.0", "", { "bin": { "cssesc": "bin/cssesc" } }, 
"sha512-/Tb/JcjK111nNScGob5MNtsntNM1aCNUDipB/TkwZFhyDrrE47SOx/18wF2bbjgc3ZzCSKW1T5nt5EbFoAz/Vg=="], "csstype": ["csstype@3.1.3", "", {}, "sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw=="], + "culvert": ["culvert@0.1.2", "", {}, "sha512-yi1x3EAWKjQTreYWeSd98431AV+IEE0qoDyOoaHJ7KJ21gv6HtBXHVLX74opVSGqcR8/AbjJBHAHpcOy2bj5Gg=="], + "data-uri-to-buffer": ["data-uri-to-buffer@4.0.1", "", {}, "sha512-0R9ikRb668HB7QDxT1vkpuUBtqc53YyAwMwGeUFKRojY/NWKvdZ+9UYtRfGmhqNbRkTSVpMbmyhXipFFv2cb/A=="], "data-view-buffer": ["data-view-buffer@1.0.2", "", { "dependencies": { "call-bound": "^1.0.3", "es-errors": "^1.3.0", "is-data-view": "^1.0.2" } }, "sha512-EmKO5V3OLXh1rtK2wgXRansaK1/mtVdTUEiEI0W8RkvgT05kfxaH29PliLnpLP73yYO6142Q72QNa8Wx/A5CqQ=="], @@ -1087,10 +1174,10 @@ "data-view-byte-offset": ["data-view-byte-offset@1.0.1", "", { "dependencies": { "call-bound": "^1.0.2", "es-errors": "^1.3.0", "is-data-view": "^1.0.1" } }, "sha512-BS8PfmtDGnrgYdOonGZQdLZslWIeCGFP9tpan0hi1Co2Zr2NKADsvGYA8XxuG/4UWgJ6Cjtv+YJnB6MM69QGlQ=="], - "date-fns": ["date-fns@2.30.0", "", { "dependencies": { "@babel/runtime": "^7.21.0" } }, "sha512-fnULvOpxnC5/Vg3NCiWelDsLiUc9bRwAPs/+LfTLNvetFCtCTN+yQz15C/fs4AwX1R9K5GLtLfn8QW+dWisaAw=="], - "dateformat": ["dateformat@4.6.3", "", {}, "sha512-2P0p0pFGzHS5EMnhdxQi7aJN+iMheud0UhG4dlE1DLAlvL8JHjJJTX/CSm4JXwV0Ka5nGk3zC5mcb5bUQUxxMA=="], + "dayjs": ["dayjs@1.11.13", "", {}, "sha512-oaMBel6gjolK862uaPQOVTA7q3TZhuSvuMQAAglQDOWYO9A91IrAOUJEyKVlqJlHE0vq5p5UXxzdPfMH/x6xNg=="], + "debug": ["debug@4.4.1", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ=="], "decompress-response": ["decompress-response@6.0.0", "", { "dependencies": { "mimic-response": "^3.1.0" } }, "sha512-aW35yZM6Bb/4oJlZncMH2LCoZtJXTRxES17vE3hoRiowU2kWHaJKFkSBDnDR+cm9J+9QhXmREyIfv0pji9ejCQ=="], @@ -1147,7 +1234,7 @@ "ee-first": ["ee-first@1.1.1", "", {}, 
"sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow=="], - "electron-to-chromium": ["electron-to-chromium@1.5.170", "", {}, "sha512-GP+M7aeluQo9uAyiTCxgIj/j+PrWhMlY7LFVj8prlsPljd0Fdg9AprlfUi+OCSFWy9Y5/2D/Jrj9HS8Z4rpKWA=="], + "electron-to-chromium": ["electron-to-chromium@1.5.171", "", {}, "sha512-scWpzXEJEMrGJa4Y6m/tVotb0WuvNmasv3wWVzUAeCgKU0ToFOhUW6Z+xWnRQANMYGxN4ngJXIThgBJOqzVPCQ=="], "emoji-regex": ["emoji-regex@8.0.0", "", {}, "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="], @@ -1155,6 +1242,8 @@ "end-of-stream": ["end-of-stream@1.4.5", "", { "dependencies": { "once": "^1.4.0" } }, "sha512-ooEGc6HP26xXq/N+GCGOT0JKCLDGrq2bQUZrQ7gyrJiZANJ/8YDTxTpQBXGMn+WbIQXNVpyWymm7KYVICQnyOg=="], + "enquirer": ["enquirer@2.3.6", "", { "dependencies": { "ansi-colors": "^4.1.1" } }, "sha512-yjNnPr315/FjS4zIsUxYguYUPP2e1NK4d7E7ZOLiyYCcbFBiTMyID+2wvm2w6+pZ/odMA7cRkjhsPbltwBOrLg=="], + "entities": ["entities@6.0.1", "", {}, "sha512-aN97NXWF6AWBTahfVOIrB/NShkzi5H7F9r1s9mD3cDj4Ko5f2qhhVoYMibXF7GlLveb/D2ioWay8lxI97Ven3g=="], "es-abstract": ["es-abstract@1.24.0", "", { "dependencies": { "array-buffer-byte-length": "^1.0.2", "arraybuffer.prototype.slice": "^1.0.4", "available-typed-arrays": "^1.0.7", "call-bind": "^1.0.8", "call-bound": "^1.0.4", "data-view-buffer": "^1.0.2", "data-view-byte-length": "^1.0.2", "data-view-byte-offset": "^1.0.1", "es-define-property": "^1.0.1", "es-errors": "^1.3.0", "es-object-atoms": "^1.1.1", "es-set-tostringtag": "^2.1.0", "es-to-primitive": "^1.3.0", "function.prototype.name": "^1.1.8", "get-intrinsic": "^1.3.0", "get-proto": "^1.0.1", "get-symbol-description": "^1.1.0", "globalthis": "^1.0.4", "gopd": "^1.2.0", "has-property-descriptors": "^1.0.2", "has-proto": "^1.2.0", "has-symbols": "^1.1.0", "hasown": "^2.0.2", "internal-slot": "^1.1.0", "is-array-buffer": "^3.0.5", "is-callable": "^1.2.7", "is-data-view": "^1.0.2", "is-negative-zero": "^2.0.3", 
"is-regex": "^1.2.1", "is-set": "^2.0.3", "is-shared-array-buffer": "^1.0.4", "is-string": "^1.1.1", "is-typed-array": "^1.1.15", "is-weakref": "^1.1.1", "math-intrinsics": "^1.1.0", "object-inspect": "^1.13.4", "object-keys": "^1.1.1", "object.assign": "^4.1.7", "own-keys": "^1.0.1", "regexp.prototype.flags": "^1.5.4", "safe-array-concat": "^1.1.3", "safe-push-apply": "^1.0.0", "safe-regex-test": "^1.1.0", "set-proto": "^1.0.0", "stop-iteration-iterator": "^1.1.0", "string.prototype.trim": "^1.2.10", "string.prototype.trimend": "^1.0.9", "string.prototype.trimstart": "^1.0.8", "typed-array-buffer": "^1.0.3", "typed-array-byte-length": "^1.0.3", "typed-array-byte-offset": "^1.0.4", "typed-array-length": "^1.0.7", "unbox-primitive": "^1.1.0", "which-typed-array": "^1.1.19" } }, "sha512-WSzPgsdLtTcQwm4CROfS5ju2Wa1QQcVeT37jFjYzdFz1r9ahadC8B8/a4qxJxM+09F18iumCdRmlr96ZYkQvEg=="], @@ -1187,11 +1276,11 @@ "eslint-import-resolver-node": ["eslint-import-resolver-node@0.3.9", "", { "dependencies": { "debug": "^3.2.7", "is-core-module": "^2.13.0", "resolve": "^1.22.4" } }, "sha512-WFj2isz22JahUv+B788TlO3N6zL3nNJGU8CcZbPZvVEkBPaJdCV4vy5wyghty5ROFbCRnm132v8BScu5/1BQ8g=="], - "eslint-module-utils": ["eslint-module-utils@2.12.0", "", { "dependencies": { "debug": "^3.2.7" } }, "sha512-wALZ0HFoytlyh/1+4wuZ9FJCD/leWHQzzrxJ8+rebyReSLk7LApMyd3WJaLVoN+D5+WIdJyDK1c6JnE65V4Zyg=="], + "eslint-module-utils": ["eslint-module-utils@2.12.1", "", { "dependencies": { "debug": "^3.2.7" } }, "sha512-L8jSWTze7K2mTg0vos/RuLRS5soomksDPoJLXIslC7c8Wmut3bx7CPpJijDcBZtxQ5lrbUdM+s0OlNbz0DCDNw=="], "eslint-plugin-es": ["eslint-plugin-es@3.0.1", "", { "dependencies": { "eslint-utils": "^2.0.0", "regexpp": "^3.0.0" }, "peerDependencies": { "eslint": ">=4.19.1" } }, "sha512-GUmAsJaN4Fc7Gbtl8uOBlayo2DqhwWvEzykMHSCZHU3XdJ+NSzzZcVhXh3VxX5icqQ+oQdIEawXX8xkR3mIFmQ=="], - "eslint-plugin-import": ["eslint-plugin-import@2.31.0", "", { "dependencies": { "@rtsao/scc": "^1.1.0", "array-includes": "^3.1.8", 
"array.prototype.findlastindex": "^1.2.5", "array.prototype.flat": "^1.3.2", "array.prototype.flatmap": "^1.3.2", "debug": "^3.2.7", "doctrine": "^2.1.0", "eslint-import-resolver-node": "^0.3.9", "eslint-module-utils": "^2.12.0", "hasown": "^2.0.2", "is-core-module": "^2.15.1", "is-glob": "^4.0.3", "minimatch": "^3.1.2", "object.fromentries": "^2.0.8", "object.groupby": "^1.0.3", "object.values": "^1.2.0", "semver": "^6.3.1", "string.prototype.trimend": "^1.0.8", "tsconfig-paths": "^3.15.0" }, "peerDependencies": { "eslint": "^2 || ^3 || ^4 || ^5 || ^6 || ^7.2.0 || ^8 || ^9" } }, "sha512-ixmkI62Rbc2/w8Vfxyh1jQRTdRTF52VxwRVHl/ykPAmqG+Nb7/kNn+byLP0LxPgI7zWA16Jt82SybJInmMia3A=="], + "eslint-plugin-import": ["eslint-plugin-import@2.32.0", "", { "dependencies": { "@rtsao/scc": "^1.1.0", "array-includes": "^3.1.9", "array.prototype.findlastindex": "^1.2.6", "array.prototype.flat": "^1.3.3", "array.prototype.flatmap": "^1.3.3", "debug": "^3.2.7", "doctrine": "^2.1.0", "eslint-import-resolver-node": "^0.3.9", "eslint-module-utils": "^2.12.1", "hasown": "^2.0.2", "is-core-module": "^2.16.1", "is-glob": "^4.0.3", "minimatch": "^3.1.2", "object.fromentries": "^2.0.8", "object.groupby": "^1.0.3", "object.values": "^1.2.1", "semver": "^6.3.1", "string.prototype.trimend": "^1.0.9", "tsconfig-paths": "^3.15.0" }, "peerDependencies": { "eslint": "^2 || ^3 || ^4 || ^5 || ^6 || ^7.2.0 || ^8 || ^9" } }, "sha512-whOE1HFo/qJDyX4SnXzP4N6zOWn79WhnCUY/iDR0mPfQZO8wcYE4JClzI2oZrhBnnMUCBCHZhO6VQyoBU95mZA=="], "eslint-plugin-node": ["eslint-plugin-node@11.1.0", "", { "dependencies": { "eslint-plugin-es": "^3.0.0", "eslint-utils": "^2.0.0", "ignore": "^5.1.1", "minimatch": "^3.0.4", "resolve": "^1.10.1", "semver": "^6.1.0" }, "peerDependencies": { "eslint": ">=5.16.0" } }, "sha512-oUwtPJ1W0SKD0Tr+wqu92c5xuCeQqB3hSCHasn/ZgjFdA9iDGNkNf2Zi9ztY7X+hNuMib23LNGRm6+uN+KLE3g=="], @@ -1225,6 +1314,8 @@ "event-target-shim": ["event-target-shim@5.0.1", "", {}, 
"sha512-i/2XbnSz/uxRCU6+NdVJgKWDTM427+MqYbkQzD321DuCQJUqOuJKIA0IM2+W2xtYHdKOmZ4dR6fExsd4SXL+WQ=="], + "eventemitter2": ["eventemitter2@5.0.1", "", {}, "sha512-5EM1GHXycJBS6mauYAbVKT1cVs7POKWb2NXD4Vyt8dDqeZa7LaDK1/sjtL+Zb0lzTpSNil4596Dyu97hz37QLg=="], + "eventemitter3": ["eventemitter3@5.0.1", "", {}, "sha512-GWkBvjiSZK87ELrYOSESUYeVIc9mvLLf/nXalMOS5dYrgZq9o5OVkbZAVM06CVxYsCwH9BDZFPlQTlPA1j4ahA=="], "events": ["events@3.3.0", "", {}, "sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q=="], @@ -1239,7 +1330,9 @@ "express": ["express@5.1.0", "", { "dependencies": { "accepts": "^2.0.0", "body-parser": "^2.2.0", "content-disposition": "^1.0.0", "content-type": "^1.0.5", "cookie": "^0.7.1", "cookie-signature": "^1.2.1", "debug": "^4.4.0", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "etag": "^1.8.1", "finalhandler": "^2.1.0", "fresh": "^2.0.0", "http-errors": "^2.0.0", "merge-descriptors": "^2.0.0", "mime-types": "^3.0.0", "on-finished": "^2.4.1", "once": "^1.4.0", "parseurl": "^1.3.3", "proxy-addr": "^2.0.7", "qs": "^6.14.0", "range-parser": "^1.2.1", "router": "^2.2.0", "send": "^1.1.0", "serve-static": "^2.2.0", "statuses": "^2.0.1", "type-is": "^2.0.1", "vary": "^1.1.2" } }, "sha512-DT9ck5YIRU+8GYzzU5kT3eHGA5iL+1Zd0EutOmTE9Dtk+Tvuzd23VBU+ec7HPNSTxXYO55gPV/hq4pSBJDjFpA=="], - "express-rate-limit": ["express-rate-limit@7.5.0", "", { "peerDependencies": { "express": "^4.11 || 5 || ^5.0.0-beta.1" } }, "sha512-eB5zbQh5h+VenMPM3fh+nw1YExi5nMr6HUCR62ELSP11huvxm/Uir1H1QEyTkk5QX6A58pX6NmaTMceKZ0Eodg=="], + "express-rate-limit": ["express-rate-limit@7.5.1", "", { "peerDependencies": { "express": ">= 4.11" } }, "sha512-7iN8iPMDzOMHPUYllBEsQdWVB6fPDMPqwjBaFrgr4Jgr/+okjvzAy+UHlYYL/Vs0OsOrMkwS6PJDkFlJwoxUnw=="], + + "extrareqp2": ["extrareqp2@1.0.0", "", { "dependencies": { "follow-redirects": "^1.14.0" } }, "sha512-Gum0g1QYb6wpPJCVypWP3bbIuaibcFiJcpuPM10YSXp/tzqi84x9PJageob+eN4xVRIOto4wjSGNLyMD54D2xA=="], "fast-copy": 
["fast-copy@3.0.2", "", {}, "sha512-dl0O9Vhju8IrcLndv2eU4ldt1ftXMqqfgN4H1cpmGV7P6jeB9FwpN9a2c8DPGE1Ys88rNUJVYDHq73CGAGOPfQ=="], @@ -1249,6 +1342,8 @@ "fast-glob": ["fast-glob@3.3.3", "", { "dependencies": { "@nodelib/fs.stat": "^2.0.2", "@nodelib/fs.walk": "^1.2.3", "glob-parent": "^5.1.2", "merge2": "^1.3.0", "micromatch": "^4.0.8" } }, "sha512-7MptL8U0cqcFdzIzwOTHoilX9x5BrNqye7Z/LuC7kCMRio1EMSyqRK3BEAUD7sXRq4iT4AzTVuZdhgQ2TCvYLg=="], + "fast-json-patch": ["fast-json-patch@3.1.1", "", {}, "sha512-vf6IHUX2SBcA+5/+4883dsIjpBTqmfBjmYiWK1savxQmFk4JfBMLa7ynTYOs1Rolp/T1betJxHiGD3g1Mn8lUQ=="], + "fast-json-stable-stringify": ["fast-json-stable-stringify@2.1.0", "", {}, "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw=="], "fast-levenshtein": ["fast-levenshtein@2.0.6", "", {}, "sha512-DCXu6Ifhqcks7TZKY3Hxp3y6qphY5SJZmrWMDrKcERSOXWQdMhU9Ig/PYrzyw/ul9jOIyh0N4M0tbC5hodg8dw=="], @@ -1261,6 +1356,8 @@ "fastq": ["fastq@1.19.1", "", { "dependencies": { "reusify": "^1.0.4" } }, "sha512-GwLTyxkCXjXbxqIhTsMI2Nui8huMPtnxg7krajPJAjnEG/iiOS7i+zCtWGZR9G0NBKbXKh6X9m9UIsYX/N6vvQ=="], + "fclone": ["fclone@1.0.11", "", {}, "sha512-GDqVQezKzRABdeqflsgMr7ktzgF9CyS+p2oe0jJqUY6izSSbhPIQJDpoU4PtGcD7VPM9xh/dVrTu6z1nwgmEGw=="], + "fetch-blob": ["fetch-blob@3.2.0", "", { "dependencies": { "node-domexception": "^1.0.0", "web-streams-polyfill": "^3.0.3" } }, "sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ=="], "file-entry-cache": ["file-entry-cache@8.0.0", "", { "dependencies": { "flat-cache": "^4.0.0" } }, "sha512-XXTUwCvisa5oacNGRP9SfNtYBNAMi+RPwBFmblZEF7N7swHYQS6/Zfk7SRwx4D5j3CH211YNRco1DEMNVfZCnQ=="], @@ -1329,6 +1426,10 @@ "get-uri": ["get-uri@6.0.4", "", { "dependencies": { "basic-ftp": "^5.0.2", "data-uri-to-buffer": "^6.0.2", "debug": "^4.3.4" } }, "sha512-E1b1lFFLvLgak2whF2xDBcOy6NLVGZBqqjJjsIhvopKfWWEi64pLVTWWehV8KlLerZkfNTA95sTe2OdJKm1OzQ=="], + "git-node-fs": ["git-node-fs@1.0.0", "", {}, 
"sha512-bLQypt14llVXBg0S0u8q8HmU7g9p3ysH+NvVlae5vILuUvs759665HvmR5+wb04KjHyjFcDRxdYb4kyNnluMUQ=="], + + "git-sha1": ["git-sha1@0.1.2", "", {}, "sha512-2e/nZezdVlyCopOCYHeW0onkbZg7xP1Ad6pndPy1rCygeRykefUS6r7oA5cJRGEFvseiaz5a/qUHFVX1dd6Isg=="], + "github-from-package": ["github-from-package@0.0.0", "", {}, "sha512-SyHy3T1v2NUXn29OsWdxmK6RwHD+vkj3v8en8AOBZ1wBQ/hCAQ5bAQTD02kW4W9tUp/3Qh6J8r9EvntiyCmOOw=="], "glob": ["glob@10.4.5", "", { "dependencies": { "foreground-child": "^3.1.0", "jackspeak": "^3.1.2", "minimatch": "^9.0.4", "minipass": "^7.1.2", "package-json-from-dist": "^1.0.0", "path-scurry": "^1.11.1" }, "bin": { "glob": "dist/esm/bin.mjs" } }, "sha512-7Bv8RF0k6xjo7d4A/PxYLbUCfb6c+Vpd2/mB2yRDlew7Jb5hEXiCD9ibfO7wpk8i4sevK6DFny9h7EYbM3/sHg=="], @@ -1367,7 +1468,7 @@ "help-me": ["help-me@5.0.0", "", {}, "sha512-7xgomUX6ADmcYzFik0HzAxh/73YlKR9bmFzf51CZwR+b6YtzU2m0u49hQCqV6SvlqIqsaxovfwdvbnsw3b/zpg=="], - "hono": ["hono@4.8.0", "", {}, "sha512-NoiHrqJxoe1MYXqW+/0/Q4NCizKj2Ivm4KmX8mOSBtw9UJ7KYaOGKkO7csIwO5UlZpfvVRdcgiMb0GGyjEjtcw=="], + "hono": ["hono@4.8.2", "", {}, "sha512-hM+1RIn9PK1I6SiTNS6/y7O1mvg88awYLFEuEtoiMtRyT3SD2iu9pSFgbBXT3b1Ua4IwzvSTLvwO0SEhDxCi4w=="], "http-cache-semantics": ["http-cache-semantics@4.2.0", "", {}, "sha512-dTxcvPXqPvXBQpq5dUr6mEMJX4oIEFv6bwom3FDwKRDsuIjjJGANqhBuoAn9c1RQJIdAKav33ED65E2ys+87QQ=="], @@ -1381,7 +1482,7 @@ "human-signals": ["human-signals@4.3.1", "", {}, "sha512-nZXjEF2nbo7lIw3mgYjItAfgQXog3OjJogSbKa2CQIIvSGWcKgeJnQlNXip6NglNzYH45nSRiEVimMvYL8DDqQ=="], - "iconv-lite": ["iconv-lite@0.6.3", "", { "dependencies": { "safer-buffer": ">= 2.1.2 < 3.0.0" } }, "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw=="], + "iconv-lite": ["iconv-lite@0.4.24", "", { "dependencies": { "safer-buffer": ">= 2.1.2 < 3" } }, "sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA=="], "ieee754": ["ieee754@1.2.1", "", {}, 
"sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA=="], @@ -1493,6 +1594,8 @@ "joycon": ["joycon@3.1.1", "", {}, "sha512-34wB/Y7MW7bzjKRjUKTa46I2Z7eV62Rkhva+KkopW7Qvv/OSWBqvkSY7vusOPrNuZcUG3tApvdVgNB8POj3SPw=="], + "js-git": ["js-git@0.7.8", "", { "dependencies": { "bodec": "^0.1.0", "culvert": "^0.1.2", "git-sha1": "^0.1.2", "pako": "^0.2.5" } }, "sha512-+E5ZH/HeRnoc/LW0AmAyhU+mNcWBzAKE+30+IDMLSLbbK+Tdt02AdkOKq9u15rlJsDEGFqtgckc8ZM59LhhiUA=="], + "js-tokens": ["js-tokens@4.0.0", "", {}, "sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ=="], "js-yaml": ["js-yaml@4.1.0", "", { "dependencies": { "argparse": "^2.0.1" }, "bin": { "js-yaml": "bin/js-yaml.js" } }, "sha512-wpxZs9NoxZaJESJGIZTyDEaYpl0FKSA+FB9aJiyemKhMwkxQg63h4T1KJgUGHpTqPDNRcmmYLugrRjJlBtWvRA=="], @@ -1509,16 +1612,20 @@ "json-stable-stringify-without-jsonify": ["json-stable-stringify-without-jsonify@1.0.1", "", {}, "sha512-Bdboy+l7tA3OGW6FjyFHWkP5LuByj1Tk33Ljyq0axyzdk9//JSi2u3fP1QSmd1KNwq6VOKYGlAu87CisVir6Pw=="], + "json-stringify-safe": ["json-stringify-safe@5.0.1", "", {}, "sha512-ZClg6AaYvamvYEE82d3Iyd3vSSIjQ+odgjaTzRuO3s7toCdFKczob2i0zCh7JE8kWn17yvAWhUVxvqGwUalsRA=="], + "json5": ["json5@1.0.2", "", { "dependencies": { "minimist": "^1.2.0" }, "bin": { "json5": "lib/cli.js" } }, "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA=="], "jsonify": ["jsonify@0.0.1", "", {}, "sha512-2/Ki0GcmuqSrgFyelQq9M05y7PS0mEwuIzrf3f1fPqkVDVRvZrPZtVSMHxdgo8Aq0sxAOb/cr2aqqA3LeWHVPg=="], "jsx-ast-utils": ["jsx-ast-utils@3.3.5", "", { "dependencies": { "array-includes": "^3.1.6", "array.prototype.flat": "^1.3.1", "object.assign": "^4.1.4", "object.values": "^1.1.6" } }, "sha512-ZZow9HBI5O6EPgSJLUb8n2NKgmVWTwCvHGwFuJlMjvLFqlGG6pjirPhtdsseaLZjSibD8eegzmYpUZwoIlj2cQ=="], - "kerberos": ["kerberos@2.2.2", "", { "dependencies": { "node-addon-api": "^6.1.0", "prebuild-install": "^7.1.2" } }, 
"sha512-42O7+/1Zatsc3MkxaMPpXcIl/ukIrbQaGoArZEAr6GcEi2qhfprOBYOPhj+YvSMJkEkdpTjApUx+2DuWaKwRhg=="], + "kerberos": ["kerberos@2.1.0", "", { "dependencies": { "bindings": "^1.5.0", "node-addon-api": "^6.1.0", "prebuild-install": "7.1.1" } }, "sha512-HvOl6O6cyEN/8Z4CAocHe/sekJtvt5UrxUdCuu7bXDZ2Hnsy6OpsQbISW+lpm03vrbO2ir+1QQ5Sx/vMEhHnog=="], "keyv": ["keyv@4.5.4", "", { "dependencies": { "json-buffer": "3.0.1" } }, "sha512-oxVHkHR/EJf2CNXnWxRLW6mg7JyCCUcG0DtEGmL2ctUo1PNTin1PUil+r/+4r5MpVgC/fn1kjsx7mjSujKqIpw=="], + "lazy": ["lazy@1.0.11", "", {}, "sha512-Y+CjUfLmIpoUCCRl0ub4smrYtGGr5AOa2AKOaWelGHOGz33X/Y/KizefGqbkwfz44+cnq/+9habclf8vOmu2LA=="], + "lazystream": ["lazystream@1.0.1", "", { "dependencies": { "readable-stream": "^2.0.5" } }, "sha512-b94GiNHQNy6JNTrt5w6zNyffMrNkXZb3KTkCZJb2V1xaEGCk093vkZ2jk3tpaeP33/OiXC+WvK9AxUebnf5nbw=="], "levn": ["levn@0.4.1", "", { "dependencies": { "prelude-ls": "^1.2.1", "type-check": "~0.4.0" } }, "sha512-+bT2uH4E5LGE7h/n3evcS/sQlJXCpIp6ym8OWJ5eV6+67Dsql/LaaT7qJBAt2rzfoa/5QBGBhxDix1dMt2kQKQ=="], @@ -1533,8 +1640,6 @@ "lodash.camelcase": ["lodash.camelcase@4.3.0", "", {}, "sha512-TwuEnCnxbc3rAvhf/LbG7tJUDzhqXyFnv3dtzLOPgCG/hODL7WFnsbwktkD7yUV0RrreP/l1PALq/YSg6VvjlA=="], - "lodash.clonedeep": ["lodash.clonedeep@4.5.0", "", {}, "sha512-H5ZhCF25riFd9uB5UCkVKo61m3S/xZk1x4wA6yp/L3RFP6Z/eHH1ymQcGLo7J3GMPfm0V/7m1tryHuGVxpqEBQ=="], - "lodash.defaults": ["lodash.defaults@4.2.0", "", {}, "sha512-qjxPLHd3r5DnsdGacqOMU6pb/avJzdh9tFX2ymgoZE27BmjXrNy/y4LoaiTeAb+O3gL8AfpJGtqfX/ae2leYYQ=="], "lodash.isarguments": ["lodash.isarguments@3.1.0", "", {}, "sha512-chi4NHZlZqZD18a0imDHnZPrDeBbTtVN7GXMwuGdRH9qotxAjYs3aVLKc7zNOG9eddR5Ksd8rvFEBc9SsggPpg=="], @@ -1545,6 +1650,8 @@ "loose-envify": ["loose-envify@1.4.0", "", { "dependencies": { "js-tokens": "^3.0.0 || ^4.0.0" }, "bin": { "loose-envify": "cli.js" } }, "sha512-lyuxPGr/Wfhrlem2CL/UcnUc1zcqKAImBDzukY7Y5F/yQiNdko6+fRLevlw1HgMySw7f611UIY408EtxRSoK3Q=="], + "lower-case": ["lower-case@2.0.2", "", { 
"dependencies": { "tslib": "^2.0.3" } }, "sha512-7fm3l3NAF9WfN6W3JOmf5drwpVqX78JtoGJ3A6W0a6ZnldM41w2fV5D490psKFTpMds8TJse/eHLFFsNHHjHgg=="], + "lowercase-keys": ["lowercase-keys@3.0.0", "", {}, "sha512-ozCC6gdQ+glXOQsveKD0YsDy8DSQFjDTz4zyzEHNV5+JP5D62LmfDZ6o1cycFx9ouG940M5dE8C8CTewdj2YWQ=="], "lru-cache": ["lru-cache@11.1.0", "", {}, "sha512-QIXZUBJUx+2zHUdQujWejBkcD9+cs94tLn0+YL8UrCh+D5sCXZ4c7LaEH48pNwRY3MLDgqUFyhlCyjJPf1WP0A=="], @@ -1593,6 +1700,8 @@ "mkdirp-classic": ["mkdirp-classic@0.5.3", "", {}, "sha512-gKLcREMhtuZRwRAfqP3RFW+TK4JqApVBtOIftVgjuABpAtpxhPGaDcfvbhNvD0B8iD1oUr/txX35NjcaY6Ns/A=="], + "module-details-from-path": ["module-details-from-path@1.0.4", "", {}, "sha512-EGWKgxALGMgzvxYF1UyGTy0HXX/2vHLkw6+NvDKW2jypWbHpjQuj4UMcqQWXHERJhVGKikolT06G3bcKe4fi7w=="], + "moment": ["moment@2.30.1", "", {}, "sha512-uEmtNhbDOrWPFS+hdjFCBfy9f2YoyzRpwcl+DqpC6taX21FzsTLQVbMV/W7PzNSX6x/bhC1zA3c2UQ5NzH6how=="], "mongodb": ["mongodb@6.17.0", "", { "dependencies": { "@mongodb-js/saslprep": "^1.1.9", "bson": "^6.10.4", "mongodb-connection-string-url": "^3.0.0" }, "peerDependencies": { "@aws-sdk/credential-providers": "^3.188.0", "@mongodb-js/zstd": "^1.1.0 || ^2.0.0", "gcp-metadata": "^5.2.0", "kerberos": "^2.0.1", "mongodb-client-encryption": ">=6.0.0 <7", "snappy": "^7.2.2", "socks": "^2.7.1" }, "optionalPeers": ["@aws-sdk/credential-providers", "@mongodb-js/zstd", "gcp-metadata", "kerberos", "mongodb-client-encryption", "snappy", "socks"] }, "sha512-neerUzg/8U26cgruLysKEjJvoNSXhyID3RvzvdcpsIi2COYM3FS3o9nlH7fxFtefTb942dX3W9i37oPfCVj4wA=="], @@ -1613,7 +1722,7 @@ "mongodb-ns": ["mongodb-ns@2.4.2", "", {}, "sha512-gYJjEYG4v4a1WSXgUf81OBoBRlj+Z1SlnQVO392fC/4a1CN7CLWDITajZWPFTPh/yRozYk6sHHtZwZmQhodBEA=="], - "mongodb-redact": ["mongodb-redact@1.1.7", "", {}, "sha512-Mqnr2OMpHYrxiK+0f+Bm2CG/E+7uLJGPs4n3N++nQKBXj54Ie2T8kon3+t3LlwwG+jcH2htCZ6EON9xBczmMnQ=="], + "mongodb-redact": ["mongodb-redact@1.1.8", "", {}, 
"sha512-EbZ+q7LsVz7q8n49mGIcXgP2UiBp6R6vHEVbmGnF21ThCnP6AIho7wqpHqyjqqGjg54DoXQJTCwHPSknsCHv6g=="], "mongodb-schema": ["mongodb-schema@12.6.2", "", { "dependencies": { "reservoir": "^0.1.2" }, "optionalDependencies": { "bson": "^6.7.0", "cli-table": "^0.3.4", "js-yaml": "^4.0.0", "mongodb": "^6.6.1", "mongodb-ns": "^2.4.0", "numeral": "^2.0.6", "progress": "^2.0.3", "stats-lite": "^2.0.0", "yargs": "^17.6.2" }, "bin": { "mongodb-schema": "bin/mongodb-schema" } }, "sha512-uKjkTAx6MqJi0Xj0aeYRjvYr3O7LrUQgXH1c0WQCOByPoYbNG9RAhWoc4IwriIqTHyBw1RJn0C/p7DISOPYpMg=="], @@ -1625,24 +1734,30 @@ "msgpackr-extract": ["msgpackr-extract@3.0.3", "", { "dependencies": { "node-gyp-build-optional-packages": "5.2.2" }, "optionalDependencies": { "@msgpackr-extract/msgpackr-extract-darwin-arm64": "3.0.3", "@msgpackr-extract/msgpackr-extract-darwin-x64": "3.0.3", "@msgpackr-extract/msgpackr-extract-linux-arm": "3.0.3", "@msgpackr-extract/msgpackr-extract-linux-arm64": "3.0.3", "@msgpackr-extract/msgpackr-extract-linux-x64": "3.0.3", "@msgpackr-extract/msgpackr-extract-win32-x64": "3.0.3" }, "bin": { "download-msgpackr-prebuilds": "bin/download-prebuilds.js" } }, "sha512-P0efT1C9jIdVRefqjzOQ9Xml57zpOXnIuS+csaB4MdZbTdmGDLo8XhzBG1N7aO11gKDDkJvBLULeFTo46wwreA=="], + "mute-stream": ["mute-stream@0.0.8", "", {}, "sha512-nnbWWOkoWyUsTjKrhgD0dcz22mdkSnpYqbEjIm2nhwhuxlSkpywJmBo8h0ZqJdkp73mb90SssHkN4rsRaBAfAA=="], + "mz": ["mz@2.7.0", "", { "dependencies": { "any-promise": "^1.0.0", "object-assign": "^4.0.1", "thenify-all": "^1.0.0" } }, "sha512-z81GNO7nnYMEhrGh9LeymoE4+Yr0Wn5McHIZMK5cfQCl+NDX08sCZgUc9/6MHni9IWuFLm1Z3HTCXu2z9fN62Q=="], "nan": ["nan@2.22.2", "", {}, "sha512-DANghxFkS1plDdRsX0X9pm0Z6SJNN6gBdtXfanwoZ8hooC5gosGFSBGRYHUVPz1asKA/kMRqDRdHrluZ61SpBQ=="], "nanoid": ["nanoid@3.3.11", "", { "bin": { "nanoid": "bin/nanoid.cjs" } }, "sha512-N8SpfPUnUp1bK+PMYW8qSWdl9U+wwNWI4QKxOYDy9JAro3WMX7p2OeVRF9v+347pnakNevPmiHhNmZ2HbFA76w=="], - "napi-build-utils": ["napi-build-utils@2.0.0", "", {}, 
"sha512-GEbrYkbfF7MoNaoh2iGG84Mnf/WZfB0GdGEsM8wz7Expx/LlWf5U8t9nvJKXSp3qr5IsEbK04cBGhol/KwOsWA=="], + "napi-build-utils": ["napi-build-utils@1.0.2", "", {}, "sha512-ONmRUqK7zj7DWX0D9ADe03wbwOBZxNAfF20PlGfCWQcD3+/MakShIHrMqx9YwPTfxDdF1zLeL+RGZiR9kGMLdg=="], "natural-compare": ["natural-compare@1.4.0", "", {}, "sha512-OWND8ei3VtNC9h7V60qff3SVobHr996CTwgxubgyQYEpg290h9J0buyECNNJexkFm5sOajh5G116RYA1c8ZMSw=="], "nearley": ["nearley@2.20.1", "", { "dependencies": { "commander": "^2.19.0", "moo": "^0.5.0", "railroad-diagrams": "^1.0.0", "randexp": "0.4.6" }, "bin": { "nearleyc": "bin/nearleyc.js", "nearley-test": "bin/nearley-test.js", "nearley-unparse": "bin/nearley-unparse.js", "nearley-railroad": "bin/nearley-railroad.js" } }, "sha512-+Mc8UaAebFzgV+KpI5n7DasuuQCHA89dmwm7JXw3TV43ukfNQ9DnBH3Mdb2g/I4Fdxc26pwimBWvjIw0UAILSQ=="], + "needle": ["needle@2.4.0", "", { "dependencies": { "debug": "^3.2.6", "iconv-lite": "^0.4.4", "sax": "^1.2.4" }, "bin": { "needle": "./bin/needle" } }, "sha512-4Hnwzr3mi5L97hMYeNl8wRW/Onhy4nUKR/lVemJ8gJedxxUyBLm9kkrDColJvoSfwi0jCNhD+xCdOtiGDQiRZg=="], + "negotiator": ["negotiator@1.0.0", "", {}, "sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg=="], "netmask": ["netmask@2.0.2", "", {}, "sha512-dBpDMdxv9Irdq66304OLfEmQ9tbNRFnFTuZiLo+bD+r332bBmMJ8GBLXklIXXgxd3+v9+KUnZaUR5PJMa75Gsg=="], "new-find-package-json": ["new-find-package-json@2.0.0", "", { "dependencies": { "debug": "^4.3.4" } }, "sha512-lDcBsjBSMlj3LXH2v/FW3txlh2pYTjmbOXPYJD93HI5EwuLzI11tdHSIpUMmfq/IOsldj4Ps8M8flhm+pCK4Ew=="], + "no-case": ["no-case@3.0.4", "", { "dependencies": { "lower-case": "^2.0.2", "tslib": "^2.0.3" } }, "sha512-fgAN3jGAh+RoxUGZHTSOLJIqUc2wmoBwGR4tbpNAKmmovFoWq0OdRkb0VkldReO2a2iBT/OEulG9XSUc10r3zg=="], + "node-abi": ["node-abi@3.75.0", "", { "dependencies": { "semver": "^7.3.5" } }, "sha512-OhYaY5sDsIka7H7AtijtI9jwGYLyl29eQn/W623DiN/MIv5sUqc4g7BIDThX+gb7di9f6xK02nkp8sdfFWZLTg=="], "node-abort-controller": 
["node-abort-controller@3.1.1", "", {}, "sha512-AGK2yQKIjRuqnc6VkX2Xj5d+QW8xZ87pa1UK6yA6ouUyuxfHuMP6umE5QK7UmTeOAymo+Zx1Fxiuw9rVx8taHQ=="], @@ -1667,6 +1782,8 @@ "npm-run-path": ["npm-run-path@5.3.0", "", { "dependencies": { "path-key": "^4.0.0" } }, "sha512-ppwTtiJZq0O/ai0z7yfudtBpWIoxM8yE6nHi1X47eFR2EWORqfbu6CnPlNsjeN683eT0qG6H/Pyf9fCcvjnnnQ=="], + "nssocket": ["nssocket@0.6.0", "", { "dependencies": { "eventemitter2": "~0.4.14", "lazy": "~1.0.11" } }, "sha512-a9GSOIql5IqgWJR3F/JXG4KpJTA3Z53Cj0MeMvGpglytB1nxE4PdFNC0jINe27CS7cGivoynwc054EzCcT3M3w=="], + "numeral": ["numeral@2.0.6", "", {}, "sha512-qaKRmtYPZ5qdw4jWJD6bxEf1FJEqllJrwxCLIm0sQU/A7v2/czigzOb+C2uSiFsa9lBUzeH7M1oK+Q+OLxL3kA=="], "object-assign": ["object-assign@4.1.1", "", {}, "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg=="], @@ -1725,12 +1842,16 @@ "package-json-from-dist": ["package-json-from-dist@1.0.1", "", {}, "sha512-UEZIS3/by4OC8vL3P2dTXRETpebLI2NiI5vIrjaD/5UtrkFX/tNbwjTSRAGC/+7CAo2pIcBaRgWmcBBHcsaCIw=="], + "pako": ["pako@0.2.9", "", {}, "sha512-NUcwaKxUxWrZLpDG+z/xZaCgQITkA/Dv4V/T6bw7VON6l1Xz/VnrBqrYjZQ12TamKHzITTfOEIYUj48y2KXImA=="], + "parent-module": ["parent-module@1.0.1", "", { "dependencies": { "callsites": "^3.0.0" } }, "sha512-GQ2EWRpQV8/o+Aw8YqtfZZPfNRWZYkbidE9k5rpl/hC3vtHHBfGm2Ifi6qWV+coDGkrUKZAxE3Lot5kcsRlh+g=="], "parse5": ["parse5@7.3.0", "", { "dependencies": { "entities": "^6.0.0" } }, "sha512-IInvU7fabl34qmi9gY8XOVxhYyMyuH2xUNpb2q8/Y+7552KlejkRvqvD19nMoUW/uQGGbqNpA6Tufu5FL5BZgw=="], "parseurl": ["parseurl@1.3.3", "", {}, "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ=="], + "pascal-case": ["pascal-case@3.1.2", "", { "dependencies": { "no-case": "^3.0.4", "tslib": "^2.0.3" } }, "sha512-uWlGT3YSnK9x3BQJaOdcZwrnV6hPpd8jFH1/ucpiLRPh/2zCVJKS19E4GvYHvaCcACn3foXZ0cLB9Wrx1KGe5g=="], + "path-exists": ["path-exists@4.0.0", "", {}, 
"sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w=="], "path-is-absolute": ["path-is-absolute@1.0.1", "", {}, "sha512-AVbw3UJ2e9bq64vSaS9Am0fje1Pa8pbGqTTsmXfaIiMpnr5DlDhfJOuLj9Sf95ZPVDAUerDfEk88MPmPe7UCQg=="], @@ -1747,7 +1868,7 @@ "pend": ["pend@1.2.0", "", {}, "sha512-F3asv42UuXchdzt+xXqfW1OGlVBe+mxa2mqI0pg5yAHZPvFmY3Y6drSf/GQ1A86WgWEN9Kzh/WrgKa6iGcHXLg=="], - "pg": ["pg@8.16.1", "", { "dependencies": { "pg-connection-string": "^2.9.1", "pg-pool": "^3.10.1", "pg-protocol": "^1.10.1", "pg-types": "2.2.0", "pgpass": "1.0.5" }, "optionalDependencies": { "pg-cloudflare": "^1.2.6" }, "peerDependencies": { "pg-native": ">=3.0.1" }, "optionalPeers": ["pg-native"] }, "sha512-5n6e7MgF5ABRsssOsX9xC95p+NUuhgDQDBSsrKSZJjYVqZHGyrmJuknym2IbVhGtzV9siBdzH9SEIQAuWF+sdg=="], + "pg": ["pg@8.16.2", "", { "dependencies": { "pg-connection-string": "^2.9.1", "pg-pool": "^3.10.1", "pg-protocol": "^1.10.2", "pg-types": "2.2.0", "pgpass": "1.0.5" }, "optionalDependencies": { "pg-cloudflare": "^1.2.6" }, "peerDependencies": { "pg-native": ">=3.0.1" }, "optionalPeers": ["pg-native"] }, "sha512-OtLWF0mKLmpxelOt9BqVq83QV6bTfsS0XLegIeAKqKjurRnRKie1Dc1iL89MugmSLhftxw6NNCyZhm1yQFLMEQ=="], "pg-cloudflare": ["pg-cloudflare@1.2.6", "", {}, "sha512-uxmJAnmIgmYgnSFzgOf2cqGQBzwnRYcrEgXuFjJNEkpedEIPBSEzxY7ph4uA9k1mI+l/GR0HjPNS6FKNZe8SBQ=="], @@ -1759,7 +1880,7 @@ "pg-pool": ["pg-pool@3.10.1", "", { "peerDependencies": { "pg": ">=8.0" } }, "sha512-Tu8jMlcX+9d8+QVzKIvM/uJtp07PKr82IUOYEphaWcoBhIYkoHpLXN3qO59nAI11ripznDsEzEv8nUxBVWajGg=="], - "pg-protocol": ["pg-protocol@1.10.1", "", {}, "sha512-9YS3ZonDj0Lxny//aF0ITPdfrEPgKWCJvONsSXAaIUhgpzlzl5JgaZNlbTFxvYNfm2terGEnHeOSUlF6qRGBzw=="], + "pg-protocol": ["pg-protocol@1.10.2", "", {}, "sha512-Ci7jy8PbaWxfsck2dwZdERcDG2A0MG8JoQILs+uZNjABFuBuItAZCWUNz8sXRDMoui24rJw7WlXqgpMdBSN/vQ=="], "pg-types": ["pg-types@2.2.0", "", { "dependencies": { "pg-int8": "1.0.1", "postgres-array": "~2.0.0", "postgres-bytea": "~1.0.0", 
"postgres-date": "~1.0.4", "postgres-interval": "^1.1.0" } }, "sha512-qTAAlrEsl8s4OiEQY69wDvcMIdQN6wdz5ojQiOy6YRMuynxenON0O5oCpJI6lshc6scgAY8qvJ2On/p+CXY0GA=="], @@ -1771,6 +1892,8 @@ "picomatch": ["picomatch@2.3.1", "", {}, "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="], + "pidusage": ["pidusage@3.0.2", "", { "dependencies": { "safe-buffer": "^5.2.1" } }, "sha512-g0VU+y08pKw5M8EZ2rIGiEBaB8wrQMjYGFfW2QVIfyT8V+fq8YFLkvlz4bz5ljvFDJYNFCWT3PWqcRr2FKO81w=="], + "pify": ["pify@2.3.0", "", {}, "sha512-udgsAY+fTnvv7kI7aaxbqwWNb0AHiB0qBO89PZKPkoTmGOgdbrHDKD+0B2X4uTfJ/FT1R09r9gTsjUjNJotuog=="], "pino": ["pino@9.7.0", "", { "dependencies": { "atomic-sleep": "^1.0.0", "fast-redact": "^3.1.1", "on-exit-leak-free": "^2.1.0", "pino-abstract-transport": "^2.0.0", "pino-std-serializers": "^7.0.0", "process-warning": "^5.0.0", "quick-format-unescaped": "^4.0.3", "real-require": "^0.2.0", "safe-stable-stringify": "^2.3.1", "sonic-boom": "^4.0.1", "thread-stream": "^3.0.0" }, "bin": { "pino": "bin.js" } }, "sha512-vnMCM6xZTb1WDmLvtG2lE/2p+t9hDEIvTWJsu6FejkE62vB7gDhvzrpFR4Cw2to+9JNQxVnkAKVPA1KPB98vWg=="], @@ -1793,6 +1916,18 @@ "playwright-core": ["playwright-core@1.53.1", "", { "bin": { "playwright-core": "cli.js" } }, "sha512-Z46Oq7tLAyT0lGoFx4DOuB1IA9D1TPj0QkYxpPVUnGDqHHvDpCftu1J2hM2PiWsNMoZh8+LQaarAWcDfPBc6zg=="], + "pm2": ["pm2@5.4.3", "", { "dependencies": { "@pm2/agent": "~2.0.0", "@pm2/io": "~6.0.1", "@pm2/js-api": "~0.8.0", "@pm2/pm2-version-check": "latest", "async": "~3.2.0", "blessed": "0.1.81", "chalk": "3.0.0", "chokidar": "^3.5.3", "cli-tableau": "^2.0.0", "commander": "2.15.1", "croner": "~4.1.92", "dayjs": "~1.11.5", "debug": "^4.3.1", "enquirer": "2.3.6", "eventemitter2": "5.0.1", "fclone": "1.0.11", "js-yaml": "~4.1.0", "mkdirp": "1.0.4", "needle": "2.4.0", "pidusage": "~3.0", "pm2-axon": "~4.0.1", "pm2-axon-rpc": "~0.7.1", "pm2-deploy": "~1.0.2", "pm2-multimeter": "^0.1.2", "promptly": "^2", "semver": "^7.2", 
"source-map-support": "0.5.21", "sprintf-js": "1.1.2", "vizion": "~2.2.1" }, "optionalDependencies": { "pm2-sysmonit": "^1.2.8" }, "bin": { "pm2": "bin/pm2", "pm2-dev": "bin/pm2-dev", "pm2-docker": "bin/pm2-docker", "pm2-runtime": "bin/pm2-runtime" } }, "sha512-4/I1htIHzZk1Y67UgOCo4F1cJtas1kSds31N8zN0PybO230id1nigyjGuGFzUnGmUFPmrJ0On22fO1ChFlp7VQ=="], + + "pm2-axon": ["pm2-axon@4.0.1", "", { "dependencies": { "amp": "~0.3.1", "amp-message": "~0.1.1", "debug": "^4.3.1", "escape-string-regexp": "^4.0.0" } }, "sha512-kES/PeSLS8orT8dR5jMlNl+Yu4Ty3nbvZRmaAtROuVm9nYYGiaoXqqKQqQYzWQzMYWUKHMQTvBlirjE5GIIxqg=="], + + "pm2-axon-rpc": ["pm2-axon-rpc@0.7.1", "", { "dependencies": { "debug": "^4.3.1" } }, "sha512-FbLvW60w+vEyvMjP/xom2UPhUN/2bVpdtLfKJeYM3gwzYhoTEEChCOICfFzxkxuoEleOlnpjie+n1nue91bDQw=="], + + "pm2-deploy": ["pm2-deploy@1.0.2", "", { "dependencies": { "run-series": "^1.1.8", "tv4": "^1.3.0" } }, "sha512-YJx6RXKrVrWaphEYf++EdOOx9EH18vM8RSZN/P1Y+NokTKqYAca/ejXwVLyiEpNju4HPZEk3Y2uZouwMqUlcgg=="], + + "pm2-multimeter": ["pm2-multimeter@0.1.2", "", { "dependencies": { "charm": "~0.1.1" } }, "sha512-S+wT6XfyKfd7SJIBqRgOctGxaBzUOmVQzTAS+cg04TsEUObJVreha7lvCfX8zzGVr871XwCSnHUU7DQQ5xEsfA=="], + + "pm2-sysmonit": ["pm2-sysmonit@1.2.8", "", { "dependencies": { "async": "^3.2.0", "debug": "^4.3.1", "pidusage": "^2.0.21", "systeminformation": "^5.7", "tx2": "~1.0.4" } }, "sha512-ACOhlONEXdCTVwKieBIQLSi2tQZ8eKinhcr9JpZSUAL8Qy0ajIgRtsLxG/lwPOW3JEKqPyw/UaHmTWhUzpP4kA=="], + "possible-typed-array-names": ["possible-typed-array-names@1.1.0", "", {}, "sha512-/+5VFTchJDoVj3bhoqi6UeymcD00DAwb1nJwamzPvHEszJ4FpF6SNNbUbOS8yI56qHzdV8eK0qEfOSiodkTdxg=="], "postcss": ["postcss@8.5.6", "", { "dependencies": { "nanoid": "^3.3.11", "picocolors": "^1.1.1", "source-map-js": "^1.2.1" } }, "sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg=="], @@ -1817,7 +1952,7 @@ "postgres-interval": ["postgres-interval@1.2.0", "", { "dependencies": { "xtend": 
"^4.0.0" } }, "sha512-9ZhXKM/rw350N1ovuWHbGxnGh/SNJ4cnxHiM0rxE4VN41wsg8P8zWn9hv/buK00RP4WvlOyr/RBDiptyxVbkZQ=="], - "prebuild-install": ["prebuild-install@7.1.3", "", { "dependencies": { "detect-libc": "^2.0.0", "expand-template": "^2.0.3", "github-from-package": "0.0.0", "minimist": "^1.2.3", "mkdirp-classic": "^0.5.3", "napi-build-utils": "^2.0.0", "node-abi": "^3.3.0", "pump": "^3.0.0", "rc": "^1.2.7", "simple-get": "^4.0.0", "tar-fs": "^2.0.0", "tunnel-agent": "^0.6.0" }, "bin": { "prebuild-install": "bin.js" } }, "sha512-8Mf2cbV7x1cXPUILADGI3wuhfqWvtiLA1iclTDbFRZkgRQS0NqsPZphna9V+HyTEadheuPmjaJMsbzKQFOzLug=="], + "prebuild-install": ["prebuild-install@7.1.1", "", { "dependencies": { "detect-libc": "^2.0.0", "expand-template": "^2.0.3", "github-from-package": "0.0.0", "minimist": "^1.2.3", "mkdirp-classic": "^0.5.3", "napi-build-utils": "^1.0.1", "node-abi": "^3.3.0", "pump": "^3.0.0", "rc": "^1.2.7", "simple-get": "^4.0.0", "tar-fs": "^2.0.0", "tunnel-agent": "^0.6.0" }, "bin": { "prebuild-install": "bin.js" } }, "sha512-jAXscXWMcCK8GgCoHOfIr0ODh5ai8mj63L2nWrjuAgXE6tDyYGnx4/8o/rCgU+B4JSyZBKbeZqzhtwtC3ovxjw=="], "prelude-ls": ["prelude-ls@1.2.1", "", {}, "sha512-vkcDPrRZo1QZLbn5RLGPpg/WmIQ65qoWWhcGKf/b5eplkkarX0m9z8ppCat4mlOqUsWpyNuYgO3VRyrYHSzX5g=="], @@ -1835,6 +1970,8 @@ "progress": ["progress@2.0.3", "", {}, "sha512-7PiHtLll5LdnKIMw100I+8xJXR5gW2QwWYkT6iJva0bXitZKa/XMrSbdmg3r2Xnaidz9Qumd0VPaMrZlF9V9sA=="], + "promptly": ["promptly@2.2.0", "", { "dependencies": { "read": "^1.0.4" } }, "sha512-aC9j+BZsRSSzEsXBNBwDnAxujdx19HycZoKgRgzWnS8eOHg1asuf9heuLprfbe739zY3IdUQx+Egv6Jn135WHA=="], + "prop-types": ["prop-types@15.8.1", "", { "dependencies": { "loose-envify": "^1.4.0", "object-assign": "^4.1.1", "react-is": "^16.13.1" } }, "sha512-oj87CgZICdulUohogVAR7AjlC0327U4el4L6eAvOqCeudMDVU0NThNaV+b9Df4dXgSP1gXMTnPdhfe/2qDH5cg=="], "proper-lockfile": ["proper-lockfile@4.1.2", "", { "dependencies": { "graceful-fs": "^4.2.4", "retry": "^0.12.0", "signal-exit": "^3.0.2" } 
}, "sha512-TjNPblN4BwAWMXU8s9AEz4JmQxnD1NNL7bNOY/AKUzyamc379FWASUhc/K1pL2noVb+XmZKLL68cjzLsiOAMaA=="], @@ -1847,6 +1984,8 @@ "proxy-addr": ["proxy-addr@2.0.7", "", { "dependencies": { "forwarded": "0.2.0", "ipaddr.js": "1.9.1" } }, "sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg=="], + "proxy-agent": ["proxy-agent@6.3.1", "", { "dependencies": { "agent-base": "^7.0.2", "debug": "^4.3.4", "http-proxy-agent": "^7.0.0", "https-proxy-agent": "^7.0.2", "lru-cache": "^7.14.1", "pac-proxy-agent": "^7.0.1", "proxy-from-env": "^1.1.0", "socks-proxy-agent": "^8.0.2" } }, "sha512-Rb5RVBy1iyqOtNl15Cw/llpeLH8bsb37gM1FUfKQ+Wck6xHlbAhWGUFiTRHtkjqGTA5pSHz6+0hrPW/oECihPQ=="], + "proxy-from-env": ["proxy-from-env@1.1.0", "", {}, "sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg=="], "pump": ["pump@3.0.3", "", { "dependencies": { "end-of-stream": "^1.1.0", "once": "^1.3.1" } }, "sha512-todwxLMY7/heScKmntwQG8CXVkWUOdYxIvY2s0VWAAMh/nd8SoYiRaKjlr7+iCs984f2P8zvrfWcDDYVb73NfA=="], @@ -1893,6 +2032,8 @@ "react-window-infinite-loader": ["react-window-infinite-loader@1.0.10", "", { "peerDependencies": { "react": "^15.3.0 || ^16.0.0-alpha || ^17.0.0 || ^18.0.0 || ^19.0.0", "react-dom": "^15.3.0 || ^16.0.0-alpha || ^17.0.0 || ^18.0.0 || ^19.0.0" } }, "sha512-NO/csdHlxjWqA2RJZfzQgagAjGHspbO2ik9GtWZb0BY1Nnapq0auG8ErI+OhGCzpjYJsCYerqUlK6hkq9dfAAA=="], + "read": ["read@1.0.7", "", { "dependencies": { "mute-stream": "~0.0.4" } }, "sha512-rSOKNYUmaxy0om1BNjMN4ezNT6VKK+2xF4GBhc81mkH7L60i6dp8qPYrkndNLT3QPphoII3maL9PVC9XmhHwVQ=="], + "read-cache": ["read-cache@1.0.0", "", { "dependencies": { "pify": "^2.3.0" } }, "sha512-Owdv/Ft7IjOgm/i0xvNDZ1LrRANRfew4b2prF3OWMQLxLfu3bS8FVhCsrSCMK4lR56Y9ya+AThoTpDCTxCmpRA=="], "readable-stream": ["readable-stream@4.7.0", "", { "dependencies": { "abort-controller": "^3.0.0", "buffer": "^6.0.3", "events": "^3.3.0", "process": "^0.11.10", "string_decoder": "^1.3.0" } }, 
"sha512-oIGGmcpTLwPga8Bn6/Z75SVaH1z5dUut2ibSyAMVhmUggWpmDn2dapB0n7f8nwaSiRtepAsfJyfXIO5DCVAODg=="], @@ -1915,6 +2056,8 @@ "require-directory": ["require-directory@2.1.1", "", {}, "sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q=="], + "require-in-the-middle": ["require-in-the-middle@5.2.0", "", { "dependencies": { "debug": "^4.1.1", "module-details-from-path": "^1.0.3", "resolve": "^1.22.1" } }, "sha512-efCx3b+0Z69/LGJmm9Yvi4cqEdxnoGnxYxGxBghkkTTFeXRtTCmmhO0AnAfHz59k957uTSuy8WaHqOs8wbYUWg=="], + "reservoir": ["reservoir@0.1.2", "", {}, "sha512-ysyw95gLBhMAzqIVrOHJ2yMrRQHAS+h97bS9r89Z7Ou10Jhl2k5KOsyjPqrxL+WfEanov0o5bAMVzQ7AKyENHA=="], "resolve": ["resolve@1.22.10", "", { "dependencies": { "is-core-module": "^2.16.0", "path-parse": "^1.0.7", "supports-preserve-symlinks-flag": "^1.0.0" }, "bin": { "resolve": "bin/resolve" } }, "sha512-NPRy+/ncIMeDlTAsuqwKIiferiawhefFJtkNSW0qZJEqMEb+qBt/77B/jGeeek+F0uOeN05CDa6HXbbIgtVX4w=="], @@ -1943,6 +2086,8 @@ "run-parallel": ["run-parallel@1.2.0", "", { "dependencies": { "queue-microtask": "^1.2.2" } }, "sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA=="], + "run-series": ["run-series@1.1.9", "", {}, "sha512-Arc4hUN896vjkqCYrUXquBFtRZdv1PfLbTYP71efP6butxyQ0kWpiNJyAgsxscmQg1cqvHY32/UCBzXedTpU2g=="], + "rxjs": ["rxjs@7.8.2", "", { "dependencies": { "tslib": "^2.1.0" } }, "sha512-dhKf903U/PQZY6boNNtAGdWbG85WAbjT/1xYoZIC7FAY0yWapOBQVsVrDl58W86//e1VpMNBtRV4MaXfdMySFA=="], "safe-array-concat": ["safe-array-concat@1.1.3", "", { "dependencies": { "call-bind": "^1.0.8", "call-bound": "^1.0.2", "get-intrinsic": "^1.2.6", "has-symbols": "^1.1.0", "isarray": "^2.0.5" } }, "sha512-AURm5f0jYEOydBj7VQlVvDrjeFgthDdEF5H1dP+6mNpoXOMo1quQqJ4wvJDyRZ9+pO3kGWoOdmV08cSv2aJV6Q=="], @@ -1957,6 +2102,8 @@ "safer-buffer": ["safer-buffer@2.1.2", "", {}, "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg=="], + "sax": 
["sax@1.4.1", "", {}, "sha512-+aWOz7yVScEGoKNd4PA10LZ8sk0A/z5+nXQG5giUO5rprX9jgYsTdov9qCchZiPIZezbZH+jRut8nPodFAX4Jg=="], + "scheduler": ["scheduler@0.23.2", "", { "dependencies": { "loose-envify": "^1.1.0" } }, "sha512-UOShsPwz7NrMUqhR6t0hWjFduvOzbtv7toDH1/hIrfRNIDBnnBWd0CwJTGvTpngVlmwGCdP9/Zl/tVrDqcuYzQ=="], "secure-json-parse": ["secure-json-parse@2.7.0", "", {}, "sha512-6aU+Rwsezw7VR8/nyvKTx8QpWH9FrcYiXXlqC4z5d5XQBDRqtbfsRjnwGyqbi3gddNtWHuEk9OANUotL26qKUw=="], @@ -1981,6 +2128,8 @@ "shebang-regex": ["shebang-regex@3.0.0", "", {}, "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A=="], + "shimmer": ["shimmer@1.2.1", "", {}, "sha512-sQTKC1Re/rM6XyFM6fIAGHRPVGvyXfgzIDvzoq608vM+jeyVD0Tu1E6Np0Kc2zAIFWIj963V2800iF/9LPieQw=="], + "side-channel": ["side-channel@1.1.0", "", { "dependencies": { "es-errors": "^1.3.0", "object-inspect": "^1.13.3", "side-channel-list": "^1.0.0", "side-channel-map": "^1.0.1", "side-channel-weakmap": "^1.0.2" } }, "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw=="], "side-channel-list": ["side-channel-list@1.0.0", "", { "dependencies": { "es-errors": "^1.3.0", "object-inspect": "^1.13.3" } }, "sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA=="], @@ -2011,13 +2160,15 @@ "source-map-js": ["source-map-js@1.2.1", "", {}, "sha512-UXWMKhLOwVKb728IUtQPXxfYU+usdybtUrK/8uGE8CQMvrhOpwvzDBwj0QhSL7MQc7vIsISBG8VQ8+IDQxpfQA=="], + "source-map-support": ["source-map-support@0.5.21", "", { "dependencies": { "buffer-from": "^1.0.0", "source-map": "^0.6.0" } }, "sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w=="], + "sparse-bitfield": ["sparse-bitfield@3.0.3", "", { "dependencies": { "memory-pager": "^1.0.2" } }, "sha512-kvzhi7vqKTfkh0PZU+2D2PIllw2ymqJKujUcyPMd9Y75Nv4nPbGJZXNhxsgdQab2BmlDct1YnfQCguEvHr7VsQ=="], "split-ca": ["split-ca@1.0.1", "", {}, 
"sha512-Q5thBSxp5t8WPTTJQS59LrGqOZqOsrhDGDVm8azCqIBjSBd7nd9o2PM+mDulQQkh8h//4U6hFZnc/mul8t5pWQ=="], "split2": ["split2@4.2.0", "", {}, "sha512-UcjcJOWknrNkF6PLX83qcHM6KHgVKNkV62Y8a5uYDVv9ydGQVwAHMKqHdJje1VTWpljG0WYpCDhrCdAOYH4TWg=="], - "sprintf-js": ["sprintf-js@1.1.3", "", {}, "sha512-Oo+0REFV59/rz3gfJNKQiBlwfHaSESl1pcGyABQsnnIfWOFt6JNj5gCog2U6MLZ//IGYD+nA8nI+mTShREReaA=="], + "sprintf-js": ["sprintf-js@1.1.2", "", {}, "sha512-VE0SOVEHCk7Qc8ulkWw3ntAzXuqf7S2lvwQaDLRnUeIEaKNQJzV6BwmLKhOqT61aGhfUMrXeaBk+oDGCzvhcug=="], "ssh-remote-port-forward": ["ssh-remote-port-forward@1.0.4", "", { "dependencies": { "@types/ssh2": "^0.5.48", "ssh2": "^1.4.0" } }, "sha512-x0LV1eVDwjf1gmG7TTnfqIzf+3VPRz7vrNIjX6oYLbeCrf/PeVY6hkT68Mg+q02qXxQhrLjB0jfgvhevoCRmLQ=="], @@ -2073,6 +2224,8 @@ "system-ca": ["system-ca@2.0.1", "", { "optionalDependencies": { "macos-export-certificate-and-key": "^1.2.0", "win-export-certificate-and-key": "^2.1.0" } }, "sha512-9ZDV9yl8ph6Op67wDGPr4LykX86usE9x3le+XZSHfVMiiVJ5IRgmCWjLgxyz35ju9H3GDIJJZm4ogAeIfN5cQQ=="], + "systeminformation": ["systeminformation@5.27.6", "", { "os": "!aix", "bin": { "systeminformation": "lib/cli.js" } }, "sha512-9gmEXEtFp8vkewF8MLo69OmYBf0UpvGnqfAQs0kO+dgJRyFuCDxBwX53NQj4p/aV4fFmJQry+K1LLxPadAgmFQ=="], + "tailwind-merge": ["tailwind-merge@3.3.1", "", {}, "sha512-gBXpgUm/3rp1lMZZrM/w7D8GKqshif0zAymAhbCyIt8KMe+0v9DQ7cdYLR4FHH/cKpdTXb+A/tKKU3eolfsI+g=="], "tailwindcss": ["tailwindcss@3.4.17", "", { "dependencies": { "@alloc/quick-lru": "^5.2.0", "arg": "^5.0.2", "chokidar": "^3.6.0", "didyoumean": "^1.2.2", "dlv": "^1.1.3", "fast-glob": "^3.3.2", "glob-parent": "^6.0.2", "is-glob": "^4.0.3", "jiti": "^1.21.6", "lilconfig": "^3.1.3", "micromatch": "^4.0.8", "normalize-path": "^3.0.0", "object-hash": "^3.0.0", "picocolors": "^1.1.1", "postcss": "^8.4.47", "postcss-import": "^15.1.0", "postcss-js": "^4.0.1", "postcss-load-config": "^4.0.2", "postcss-nested": "^6.2.0", "postcss-selector-parser": "^6.1.2", "resolve": "^1.22.8", 
"sucrase": "^3.35.0" }, "bin": { "tailwind": "lib/cli.js", "tailwindcss": "lib/cli.js" } }, "sha512-w33E2aCvSDP0tW9RZuNXadXlkHXqFzSkQew/aIa2i/Sj8fThxwovwlXHSPXTbAHwEIhBFXAedUhP2tueAKP8Og=="], @@ -2111,6 +2264,8 @@ "ts-interface-checker": ["ts-interface-checker@0.1.13", "", {}, "sha512-Y/arvbn+rrz3JCKl9C4kVNfTfSm2/mEp5FSz5EsZSANGPSlQrpRI5M4PKF+mJnE52jOO90PnPSc3Ur3bTQw0gA=="], + "ts-unused-exports": ["ts-unused-exports@11.0.1", "", { "dependencies": { "chalk": "^4.0.0", "tsconfig-paths": "^3.9.0" }, "peerDependencies": { "typescript": ">=3.8.3" }, "bin": { "ts-unused-exports": "bin/ts-unused-exports" } }, "sha512-b1uIe0B8YfNZjeb+bx62LrB6qaO4CHT8SqMVBkwbwLj7Nh0xQ4J8uV0dS9E6AABId0U4LQ+3yB/HXZBMslGn2A=="], + "tsconfig-paths": ["tsconfig-paths@3.15.0", "", { "dependencies": { "@types/json5": "^0.0.29", "json5": "^1.0.2", "minimist": "^1.2.6", "strip-bom": "^3.0.0" } }, "sha512-2Ac2RgzDe/cn48GvOe3M+o82pEFewD3UPbyoUHHdKasHwJKjds4fLXWf/Ux5kATBKN20oaFGu+jbElp1pos0mg=="], "tslib": ["tslib@2.8.1", "", {}, "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w=="], @@ -2131,8 +2286,12 @@ "turbo-windows-arm64": ["turbo-windows-arm64@2.5.4", "", { "os": "win32", "cpu": "arm64" }, "sha512-oQ8RrK1VS8lrxkLriotFq+PiF7iiGgkZtfLKF4DDKsmdbPo0O9R2mQxm7jHLuXraRCuIQDWMIw6dpcr7Iykf4A=="], + "tv4": ["tv4@1.3.0", "", {}, "sha512-afizzfpJgvPr+eDkREK4MxJ/+r8nEEHcmitwgnPUqpaP+FpwQyadnxNoSACbgc/b1LsZYtODGoPiFxQrgJgjvw=="], + "tweetnacl": ["tweetnacl@0.14.5", "", {}, "sha512-KXXFFdAbFXY4geFIwoyNK+f5Z1b7swfXABfL7HXCmoIWMKU3dmS26672A4EeQtDzLKy7SXmfBu51JolvEKwtGA=="], + "tx2": ["tx2@1.0.5", "", { "dependencies": { "json-stringify-safe": "^5.0.1" } }, "sha512-sJ24w0y03Md/bxzK4FU8J8JveYYUbSs2FViLJ2D/8bytSiyPRbuE3DyL/9UKYXTZlV3yXq0L8GLlhobTnekCVg=="], + "type-check": ["type-check@0.4.0", "", { "dependencies": { "prelude-ls": "^1.2.1" } }, "sha512-XleUoc9uwGXqjWwXaUTZAmzMcFZ5858QA2vvx1Ur5xIcixXIP+8LnFDgRplU30us6teqdlskFfu+ae4K79Ooew=="], "type-fest": 
["type-fest@2.19.0", "", {}, "sha512-RAH822pAdBgcNMAfWnCBU3CFZcfZ/i1eZjwFU/dsLKumyuuP3niueg2UAukXYF0E2AAoc82ZSSf9J0WQBinzHA=="], @@ -2163,8 +2322,6 @@ "uri-js": ["uri-js@4.4.1", "", { "dependencies": { "punycode": "^2.1.0" } }, "sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg=="], - "user-agents": ["user-agents@1.1.574", "", { "dependencies": { "lodash.clonedeep": "^4.5.0" } }, "sha512-g20Fp+U2U/Qs9qWDjeg14CyAXs+I8/eo9UBQVG/Tkerlp4yVKoJxjJmRGnB/gTre6IGQtBCMMqfeEb1IvyZFNg=="], - "util-deprecate": ["util-deprecate@1.0.2", "", {}, "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw=="], "utils-merge": ["utils-merge@1.0.1", "", {}, "sha512-pMZTvIkT1d+TFGvDOqodOclx0QWkkgi6Tdoa8gC8ffGAAqz9pzPTZWAybbsHHoED/ztMtkv/VoYTYyShUn81hA=="], @@ -2175,6 +2332,8 @@ "vite": ["vite@4.5.14", "", { "dependencies": { "esbuild": "^0.18.10", "postcss": "^8.4.27", "rollup": "^3.27.1" }, "optionalDependencies": { "fsevents": "~2.3.2" }, "peerDependencies": { "@types/node": ">= 14", "less": "*", "lightningcss": "^1.21.0", "sass": "*", "stylus": "*", "sugarss": "*", "terser": "^5.4.0" }, "optionalPeers": ["@types/node", "less", "lightningcss", "sass", "stylus", "sugarss", "terser"], "bin": { "vite": "bin/vite.js" } }, "sha512-+v57oAaoYNnO3hIu5Z/tJRZjq5aHM2zDve9YZ8HngVHbhk66RStobhb1sqPMIPEleV6cNKYK4eGrAbE9Ulbl2g=="], + "vizion": ["vizion@2.2.1", "", { "dependencies": { "async": "^2.6.3", "git-node-fs": "^1.0.0", "ini": "^1.3.5", "js-git": "^0.7.8" } }, "sha512-sfAcO2yeSU0CSPFI/DmZp3FsFE9T+8913nv1xWBOyzODv13fwkn6Vl7HqxGpkr9F608M+8SuFId3s+BlZqfXww=="], + "web-streams-polyfill": ["web-streams-polyfill@3.3.3", "", {}, "sha512-d2JWLCivmZYTSIoge9MsgFCZrt571BikcWGYkjC1khllbTeDlGqZ2D8vD8E/lJa8WGWbb7Plm8/XJYV7IJHZZw=="], "webidl-conversions": ["webidl-conversions@7.0.0", "", {}, "sha512-VwddBukDzu71offAQR975unBIGqfKZpM+8ZX6ySk8nYhVoo5CYaZyzt3YBvYtRtO+aoGlqxPg/B87NGVZ/fu6g=="], @@ -2201,6 +2360,8 @@ "wrappy": 
["wrappy@1.0.2", "", {}, "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="], + "ws": ["ws@7.5.10", "", { "peerDependencies": { "bufferutil": "^4.0.1", "utf-8-validate": "^5.0.2" }, "optionalPeers": ["bufferutil", "utf-8-validate"] }, "sha512-+dbF1tHwZpXcbOJdVOkzLDxZP1ailvSxM6ZweXTegylPny803bFhA+vqBYw4s31NSAk4S2Qz+AKXK9a4wkdjcQ=="], + "xtend": ["xtend@4.0.2", "", {}, "sha512-LKYU1iAXJXUgAXn9URjiu+MWhyUXHsvfp7mcuYm9dSUKK0/CjtrUwFAxD82/mCWbtLsGjFIad0wIsod4zrTAEQ=="], "y18n": ["y18n@5.0.8", "", {}, "sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA=="], @@ -2261,59 +2422,49 @@ "@mongodb-js/oidc-plugin/express": ["express@4.21.2", "", { "dependencies": { "accepts": "~1.3.8", "array-flatten": "1.1.1", "body-parser": "1.20.3", "content-disposition": "0.5.4", "content-type": "~1.0.4", "cookie": "0.7.1", "cookie-signature": "1.0.6", "debug": "2.6.9", "depd": "2.0.0", "encodeurl": "~2.0.0", "escape-html": "~1.0.3", "etag": "~1.8.1", "finalhandler": "1.3.1", "fresh": "0.5.2", "http-errors": "2.0.0", "merge-descriptors": "1.0.3", "methods": "~1.1.2", "on-finished": "2.4.1", "parseurl": "~1.3.3", "path-to-regexp": "0.1.12", "proxy-addr": "~2.0.7", "qs": "6.13.0", "range-parser": "~1.2.1", "safe-buffer": "5.2.1", "send": "0.19.0", "serve-static": "1.16.2", "setprototypeof": "1.2.0", "statuses": "2.0.1", "type-is": "~1.6.18", "utils-merge": "1.0.1", "vary": "~1.1.2" } }, "sha512-28HqgMZAmih1Czt9ny7qr6ek2qddF4FclbMzwhCREB6OFfH+rXAnuNCwo1/wFvrtbgsQDb4kSbX9de9lFbrXnA=="], - "@mongosh/service-provider-node-driver/kerberos": ["kerberos@2.1.0", "", { "dependencies": { "bindings": "^1.5.0", "node-addon-api": "^6.1.0", "prebuild-install": "7.1.1" } }, "sha512-HvOl6O6cyEN/8Z4CAocHe/sekJtvt5UrxUdCuu7bXDZ2Hnsy6OpsQbISW+lpm03vrbO2ir+1QQ5Sx/vMEhHnog=="], + "@pm2/agent/chalk": ["chalk@3.0.0", "", { "dependencies": { "ansi-styles": "^4.1.0", "supports-color": "^7.1.0" } }, 
"sha512-4D3B6Wf41KOYRFdszmDqMCGq5VV/uMAB273JILmO+3jAlh8X4qDtdtgCR3fxtbLEMzSx22QdhnDcJvu2u1fVwg=="], + + "@pm2/agent/dayjs": ["dayjs@1.8.36", "", {}, "sha512-3VmRXEtw7RZKAf+4Tv1Ym9AGeo8r8+CjDi26x+7SYQil1UqtqdaokhzoEJohqlzt0m5kacJSDhJQkG/LWhpRBw=="], + + "@pm2/agent/debug": ["debug@4.3.7", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-Er2nc/H7RrMXZBFCEim6TCmMk02Z8vLC2Rbi1KEBggpo0fS6l0S1nnapwmIi3yW/+GOJap1Krg4w0Hg80oCqgQ=="], + + "@pm2/agent/semver": ["semver@7.5.4", "", { "dependencies": { "lru-cache": "^6.0.0" }, "bin": { "semver": "bin/semver.js" } }, "sha512-1bCSESV6Pv+i21Hvpxp3Dx+pSD8lIPt8uVjRrxAUt/nbswYc+tK6Y2btiULjd4+fnq15PX+nqQDC7Oft7WkwcA=="], + + "@pm2/io/async": ["async@2.6.4", "", { "dependencies": { "lodash": "^4.17.14" } }, "sha512-mzo5dfJYwAn29PeiJ0zvwTo04zj8HDJj0Mn8TD7sno7q12prdbnasKJHhkm2c1LgrhlJ0teaea8860oxi51mGA=="], + + "@pm2/io/debug": ["debug@4.3.7", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-Er2nc/H7RrMXZBFCEim6TCmMk02Z8vLC2Rbi1KEBggpo0fS6l0S1nnapwmIi3yW/+GOJap1Krg4w0Hg80oCqgQ=="], + + "@pm2/io/eventemitter2": ["eventemitter2@6.4.9", "", {}, "sha512-JEPTiaOt9f04oa6NOkc4aH+nVp5I3wEjpHbIPqfgCdD5v5bUzy7xQqwcVO2aDQgOWhI28da57HksMrzK9HlRxg=="], + + "@pm2/io/semver": ["semver@7.5.4", "", { "dependencies": { "lru-cache": "^6.0.0" }, "bin": { "semver": "bin/semver.js" } }, "sha512-1bCSESV6Pv+i21Hvpxp3Dx+pSD8lIPt8uVjRrxAUt/nbswYc+tK6Y2btiULjd4+fnq15PX+nqQDC7Oft7WkwcA=="], + + "@pm2/io/tslib": ["tslib@1.9.3", "", {}, "sha512-4krF8scpejhaOgqzBEcGM7yDIEfi0/8+8zDRZhNZZ2kjmHJ4hv3zCbQWxoJGz1iw5U0Jl0nma13xzHXcncMavQ=="], + + "@pm2/js-api/async": ["async@2.6.4", "", { "dependencies": { "lodash": "^4.17.14" } }, "sha512-mzo5dfJYwAn29PeiJ0zvwTo04zj8HDJj0Mn8TD7sno7q12prdbnasKJHhkm2c1LgrhlJ0teaea8860oxi51mGA=="], + + "@pm2/js-api/debug": ["debug@4.3.7", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-Er2nc/H7RrMXZBFCEim6TCmMk02Z8vLC2Rbi1KEBggpo0fS6l0S1nnapwmIi3yW/+GOJap1Krg4w0Hg80oCqgQ=="], + + "@pm2/js-api/eventemitter2": ["eventemitter2@6.4.9", 
"", {}, "sha512-JEPTiaOt9f04oa6NOkc4aH+nVp5I3wEjpHbIPqfgCdD5v5bUzy7xQqwcVO2aDQgOWhI28da57HksMrzK9HlRxg=="], "@sideway/address/@hapi/hoek": ["@hapi/hoek@9.3.0", "", {}, "sha512-/c6rf4UJlmHlC9b5BaNvzAcFv7HZ2QHaV0D4/HNlBdvFnvQq8RI4kYdhyPCl7Xj+oWvTWQ8ujhqS53LIgAe6KQ=="], - "@stock-bot/browser/@types/node": ["@types/node@20.19.1", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-jJD50LtlD2dodAEO653i3YF04NWak6jN3ky+Ri3Em3mGR39/glWiboM/IePaRbgwSfqM1TpGXfAg8ohn/4dTgA=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin": ["@typescript-eslint/eslint-plugin@6.21.0", "", { "dependencies": { "@eslint-community/regexpp": "^4.5.1", "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/type-utils": "6.21.0", "@typescript-eslint/utils": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "graphemer": "^1.4.0", "ignore": "^5.2.4", "natural-compare": "^1.4.0", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" }, "peerDependencies": { "@typescript-eslint/parser": "^6.0.0 || ^6.0.0-alpha", "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-oy9+hTPCUFpngkEZUSzbf9MxI65wbKFoQYsgPdILTfbUldp5ovUuphZVe4i30emU9M/kP+T64Di0mxl7dSw3MA=="], - "@stock-bot/cache/@types/node": ["@types/node@20.19.1", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-jJD50LtlD2dodAEO653i3YF04NWak6jN3ky+Ri3Em3mGR39/glWiboM/IePaRbgwSfqM1TpGXfAg8ohn/4dTgA=="], + "@stock-bot/mongodb/@typescript-eslint/parser": ["@typescript-eslint/parser@6.21.0", "", { "dependencies": { "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/types": "6.21.0", "@typescript-eslint/typescript-estree": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-tbsV1jPne5CkFQCgPBcDOt30ItF7aJoZL997JSF7MhGQqOeT3svWRYxiqlfA5RUdlHN6Fi+EI9bxqbdyAUZjYQ=="], - "@stock-bot/config/@types/node": ["@types/node@20.19.1", "", { "dependencies": { "undici-types": "~6.21.0" } }, 
"sha512-jJD50LtlD2dodAEO653i3YF04NWak6jN3ky+Ri3Em3mGR39/glWiboM/IePaRbgwSfqM1TpGXfAg8ohn/4dTgA=="], + "@stock-bot/mongodb/eslint": ["eslint@8.57.1", "", { "dependencies": { "@eslint-community/eslint-utils": "^4.2.0", "@eslint-community/regexpp": "^4.6.1", "@eslint/eslintrc": "^2.1.4", "@eslint/js": "8.57.1", "@humanwhocodes/config-array": "^0.13.0", "@humanwhocodes/module-importer": "^1.0.1", "@nodelib/fs.walk": "^1.2.8", "@ungap/structured-clone": "^1.2.0", "ajv": "^6.12.4", "chalk": "^4.0.0", "cross-spawn": "^7.0.2", "debug": "^4.3.2", "doctrine": "^3.0.0", "escape-string-regexp": "^4.0.0", "eslint-scope": "^7.2.2", "eslint-visitor-keys": "^3.4.3", "espree": "^9.6.1", "esquery": "^1.4.2", "esutils": "^2.0.2", "fast-deep-equal": "^3.1.3", "file-entry-cache": "^6.0.1", "find-up": "^5.0.0", "glob-parent": "^6.0.2", "globals": "^13.19.0", "graphemer": "^1.4.0", "ignore": "^5.2.0", "imurmurhash": "^0.1.4", "is-glob": "^4.0.0", "is-path-inside": "^3.0.3", "js-yaml": "^4.1.0", "json-stable-stringify-without-jsonify": "^1.0.1", "levn": "^0.4.1", "lodash.merge": "^4.6.2", "minimatch": "^3.1.2", "natural-compare": "^1.4.0", "optionator": "^0.9.3", "strip-ansi": "^6.0.1", "text-table": "^0.2.0" }, "bin": { "eslint": "bin/eslint.js" } }, "sha512-ypowyDxpVSYpkXr9WPv2PAZCtNip1Mv5KTW0SCurXv/9iOpcrH9PaqUElksqEB6pChqHGDRCFTyrZlGhnLNGiA=="], - "@stock-bot/event-bus/@types/node": ["@types/node@20.19.1", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-jJD50LtlD2dodAEO653i3YF04NWak6jN3ky+Ri3Em3mGR39/glWiboM/IePaRbgwSfqM1TpGXfAg8ohn/4dTgA=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin": ["@typescript-eslint/eslint-plugin@6.21.0", "", { "dependencies": { "@eslint-community/regexpp": "^4.5.1", "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/type-utils": "6.21.0", "@typescript-eslint/utils": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "graphemer": "^1.4.0", "ignore": "^5.2.4", "natural-compare": "^1.4.0", 
"semver": "^7.5.4", "ts-api-utils": "^1.0.1" }, "peerDependencies": { "@typescript-eslint/parser": "^6.0.0 || ^6.0.0-alpha", "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-oy9+hTPCUFpngkEZUSzbf9MxI65wbKFoQYsgPdILTfbUldp5ovUuphZVe4i30emU9M/kP+T64Di0mxl7dSw3MA=="], - "@stock-bot/http/@types/node": ["@types/node@20.19.1", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-jJD50LtlD2dodAEO653i3YF04NWak6jN3ky+Ri3Em3mGR39/glWiboM/IePaRbgwSfqM1TpGXfAg8ohn/4dTgA=="], + "@stock-bot/postgres/@typescript-eslint/parser": ["@typescript-eslint/parser@6.21.0", "", { "dependencies": { "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/types": "6.21.0", "@typescript-eslint/typescript-estree": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-tbsV1jPne5CkFQCgPBcDOt30ItF7aJoZL997JSF7MhGQqOeT3svWRYxiqlfA5RUdlHN6Fi+EI9bxqbdyAUZjYQ=="], - "@stock-bot/http/@typescript-eslint/eslint-plugin": ["@typescript-eslint/eslint-plugin@6.21.0", "", { "dependencies": { "@eslint-community/regexpp": "^4.5.1", "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/type-utils": "6.21.0", "@typescript-eslint/utils": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "graphemer": "^1.4.0", "ignore": "^5.2.4", "natural-compare": "^1.4.0", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" }, "peerDependencies": { "@typescript-eslint/parser": "^6.0.0 || ^6.0.0-alpha", "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-oy9+hTPCUFpngkEZUSzbf9MxI65wbKFoQYsgPdILTfbUldp5ovUuphZVe4i30emU9M/kP+T64Di0mxl7dSw3MA=="], + "@stock-bot/postgres/eslint": ["eslint@8.57.1", "", { "dependencies": { "@eslint-community/eslint-utils": "^4.2.0", "@eslint-community/regexpp": "^4.6.1", "@eslint/eslintrc": "^2.1.4", "@eslint/js": "8.57.1", "@humanwhocodes/config-array": "^0.13.0", "@humanwhocodes/module-importer": "^1.0.1", "@nodelib/fs.walk": "^1.2.8", "@ungap/structured-clone": "^1.2.0", "ajv": 
"^6.12.4", "chalk": "^4.0.0", "cross-spawn": "^7.0.2", "debug": "^4.3.2", "doctrine": "^3.0.0", "escape-string-regexp": "^4.0.0", "eslint-scope": "^7.2.2", "eslint-visitor-keys": "^3.4.3", "espree": "^9.6.1", "esquery": "^1.4.2", "esutils": "^2.0.2", "fast-deep-equal": "^3.1.3", "file-entry-cache": "^6.0.1", "find-up": "^5.0.0", "glob-parent": "^6.0.2", "globals": "^13.19.0", "graphemer": "^1.4.0", "ignore": "^5.2.0", "imurmurhash": "^0.1.4", "is-glob": "^4.0.0", "is-path-inside": "^3.0.3", "js-yaml": "^4.1.0", "json-stable-stringify-without-jsonify": "^1.0.1", "levn": "^0.4.1", "lodash.merge": "^4.6.2", "minimatch": "^3.1.2", "natural-compare": "^1.4.0", "optionator": "^0.9.3", "strip-ansi": "^6.0.1", "text-table": "^0.2.0" }, "bin": { "eslint": "bin/eslint.js" } }, "sha512-ypowyDxpVSYpkXr9WPv2PAZCtNip1Mv5KTW0SCurXv/9iOpcrH9PaqUElksqEB6pChqHGDRCFTyrZlGhnLNGiA=="], - "@stock-bot/http/@typescript-eslint/parser": ["@typescript-eslint/parser@6.21.0", "", { "dependencies": { "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/types": "6.21.0", "@typescript-eslint/typescript-estree": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-tbsV1jPne5CkFQCgPBcDOt30ItF7aJoZL997JSF7MhGQqOeT3svWRYxiqlfA5RUdlHN6Fi+EI9bxqbdyAUZjYQ=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin": ["@typescript-eslint/eslint-plugin@6.21.0", "", { "dependencies": { "@eslint-community/regexpp": "^4.5.1", "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/type-utils": "6.21.0", "@typescript-eslint/utils": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "graphemer": "^1.4.0", "ignore": "^5.2.4", "natural-compare": "^1.4.0", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" }, "peerDependencies": { "@typescript-eslint/parser": "^6.0.0 || ^6.0.0-alpha", "eslint": "^7.0.0 || ^8.0.0" } }, 
"sha512-oy9+hTPCUFpngkEZUSzbf9MxI65wbKFoQYsgPdILTfbUldp5ovUuphZVe4i30emU9M/kP+T64Di0mxl7dSw3MA=="], - "@stock-bot/http/eslint": ["eslint@8.57.1", "", { "dependencies": { "@eslint-community/eslint-utils": "^4.2.0", "@eslint-community/regexpp": "^4.6.1", "@eslint/eslintrc": "^2.1.4", "@eslint/js": "8.57.1", "@humanwhocodes/config-array": "^0.13.0", "@humanwhocodes/module-importer": "^1.0.1", "@nodelib/fs.walk": "^1.2.8", "@ungap/structured-clone": "^1.2.0", "ajv": "^6.12.4", "chalk": "^4.0.0", "cross-spawn": "^7.0.2", "debug": "^4.3.2", "doctrine": "^3.0.0", "escape-string-regexp": "^4.0.0", "eslint-scope": "^7.2.2", "eslint-visitor-keys": "^3.4.3", "espree": "^9.6.1", "esquery": "^1.4.2", "esutils": "^2.0.2", "fast-deep-equal": "^3.1.3", "file-entry-cache": "^6.0.1", "find-up": "^5.0.0", "glob-parent": "^6.0.2", "globals": "^13.19.0", "graphemer": "^1.4.0", "ignore": "^5.2.0", "imurmurhash": "^0.1.4", "is-glob": "^4.0.0", "is-path-inside": "^3.0.3", "js-yaml": "^4.1.0", "json-stable-stringify-without-jsonify": "^1.0.1", "levn": "^0.4.1", "lodash.merge": "^4.6.2", "minimatch": "^3.1.2", "natural-compare": "^1.4.0", "optionator": "^0.9.3", "strip-ansi": "^6.0.1", "text-table": "^0.2.0" }, "bin": { "eslint": "bin/eslint.js" } }, "sha512-ypowyDxpVSYpkXr9WPv2PAZCtNip1Mv5KTW0SCurXv/9iOpcrH9PaqUElksqEB6pChqHGDRCFTyrZlGhnLNGiA=="], + "@stock-bot/questdb/@typescript-eslint/parser": ["@typescript-eslint/parser@6.21.0", "", { "dependencies": { "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/types": "6.21.0", "@typescript-eslint/typescript-estree": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-tbsV1jPne5CkFQCgPBcDOt30ItF7aJoZL997JSF7MhGQqOeT3svWRYxiqlfA5RUdlHN6Fi+EI9bxqbdyAUZjYQ=="], - "@stock-bot/logger/@types/node": ["@types/node@20.19.1", "", { "dependencies": { "undici-types": "~6.21.0" } }, 
"sha512-jJD50LtlD2dodAEO653i3YF04NWak6jN3ky+Ri3Em3mGR39/glWiboM/IePaRbgwSfqM1TpGXfAg8ohn/4dTgA=="], - - "@stock-bot/mongodb-client/@types/node": ["@types/node@20.19.1", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-jJD50LtlD2dodAEO653i3YF04NWak6jN3ky+Ri3Em3mGR39/glWiboM/IePaRbgwSfqM1TpGXfAg8ohn/4dTgA=="], - - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin": ["@typescript-eslint/eslint-plugin@6.21.0", "", { "dependencies": { "@eslint-community/regexpp": "^4.5.1", "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/type-utils": "6.21.0", "@typescript-eslint/utils": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "graphemer": "^1.4.0", "ignore": "^5.2.4", "natural-compare": "^1.4.0", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" }, "peerDependencies": { "@typescript-eslint/parser": "^6.0.0 || ^6.0.0-alpha", "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-oy9+hTPCUFpngkEZUSzbf9MxI65wbKFoQYsgPdILTfbUldp5ovUuphZVe4i30emU9M/kP+T64Di0mxl7dSw3MA=="], - - "@stock-bot/mongodb-client/@typescript-eslint/parser": ["@typescript-eslint/parser@6.21.0", "", { "dependencies": { "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/types": "6.21.0", "@typescript-eslint/typescript-estree": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-tbsV1jPne5CkFQCgPBcDOt30ItF7aJoZL997JSF7MhGQqOeT3svWRYxiqlfA5RUdlHN6Fi+EI9bxqbdyAUZjYQ=="], - - "@stock-bot/mongodb-client/eslint": ["eslint@8.57.1", "", { "dependencies": { "@eslint-community/eslint-utils": "^4.2.0", "@eslint-community/regexpp": "^4.6.1", "@eslint/eslintrc": "^2.1.4", "@eslint/js": "8.57.1", "@humanwhocodes/config-array": "^0.13.0", "@humanwhocodes/module-importer": "^1.0.1", "@nodelib/fs.walk": "^1.2.8", "@ungap/structured-clone": "^1.2.0", "ajv": "^6.12.4", "chalk": "^4.0.0", "cross-spawn": "^7.0.2", "debug": "^4.3.2", "doctrine": "^3.0.0", "escape-string-regexp": 
"^4.0.0", "eslint-scope": "^7.2.2", "eslint-visitor-keys": "^3.4.3", "espree": "^9.6.1", "esquery": "^1.4.2", "esutils": "^2.0.2", "fast-deep-equal": "^3.1.3", "file-entry-cache": "^6.0.1", "find-up": "^5.0.0", "glob-parent": "^6.0.2", "globals": "^13.19.0", "graphemer": "^1.4.0", "ignore": "^5.2.0", "imurmurhash": "^0.1.4", "is-glob": "^4.0.0", "is-path-inside": "^3.0.3", "js-yaml": "^4.1.0", "json-stable-stringify-without-jsonify": "^1.0.1", "levn": "^0.4.1", "lodash.merge": "^4.6.2", "minimatch": "^3.1.2", "natural-compare": "^1.4.0", "optionator": "^0.9.3", "strip-ansi": "^6.0.1", "text-table": "^0.2.0" }, "bin": { "eslint": "bin/eslint.js" } }, "sha512-ypowyDxpVSYpkXr9WPv2PAZCtNip1Mv5KTW0SCurXv/9iOpcrH9PaqUElksqEB6pChqHGDRCFTyrZlGhnLNGiA=="], - - "@stock-bot/postgres-client/@types/node": ["@types/node@20.19.1", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-jJD50LtlD2dodAEO653i3YF04NWak6jN3ky+Ri3Em3mGR39/glWiboM/IePaRbgwSfqM1TpGXfAg8ohn/4dTgA=="], - - "@stock-bot/postgres-client/@typescript-eslint/eslint-plugin": ["@typescript-eslint/eslint-plugin@6.21.0", "", { "dependencies": { "@eslint-community/regexpp": "^4.5.1", "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/type-utils": "6.21.0", "@typescript-eslint/utils": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "graphemer": "^1.4.0", "ignore": "^5.2.4", "natural-compare": "^1.4.0", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" }, "peerDependencies": { "@typescript-eslint/parser": "^6.0.0 || ^6.0.0-alpha", "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-oy9+hTPCUFpngkEZUSzbf9MxI65wbKFoQYsgPdILTfbUldp5ovUuphZVe4i30emU9M/kP+T64Di0mxl7dSw3MA=="], - - "@stock-bot/postgres-client/@typescript-eslint/parser": ["@typescript-eslint/parser@6.21.0", "", { "dependencies": { "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/types": "6.21.0", "@typescript-eslint/typescript-estree": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4" }, 
"peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-tbsV1jPne5CkFQCgPBcDOt30ItF7aJoZL997JSF7MhGQqOeT3svWRYxiqlfA5RUdlHN6Fi+EI9bxqbdyAUZjYQ=="], - - "@stock-bot/postgres-client/eslint": ["eslint@8.57.1", "", { "dependencies": { "@eslint-community/eslint-utils": "^4.2.0", "@eslint-community/regexpp": "^4.6.1", "@eslint/eslintrc": "^2.1.4", "@eslint/js": "8.57.1", "@humanwhocodes/config-array": "^0.13.0", "@humanwhocodes/module-importer": "^1.0.1", "@nodelib/fs.walk": "^1.2.8", "@ungap/structured-clone": "^1.2.0", "ajv": "^6.12.4", "chalk": "^4.0.0", "cross-spawn": "^7.0.2", "debug": "^4.3.2", "doctrine": "^3.0.0", "escape-string-regexp": "^4.0.0", "eslint-scope": "^7.2.2", "eslint-visitor-keys": "^3.4.3", "espree": "^9.6.1", "esquery": "^1.4.2", "esutils": "^2.0.2", "fast-deep-equal": "^3.1.3", "file-entry-cache": "^6.0.1", "find-up": "^5.0.0", "glob-parent": "^6.0.2", "globals": "^13.19.0", "graphemer": "^1.4.0", "ignore": "^5.2.0", "imurmurhash": "^0.1.4", "is-glob": "^4.0.0", "is-path-inside": "^3.0.3", "js-yaml": "^4.1.0", "json-stable-stringify-without-jsonify": "^1.0.1", "levn": "^0.4.1", "lodash.merge": "^4.6.2", "minimatch": "^3.1.2", "natural-compare": "^1.4.0", "optionator": "^0.9.3", "strip-ansi": "^6.0.1", "text-table": "^0.2.0" }, "bin": { "eslint": "bin/eslint.js" } }, "sha512-ypowyDxpVSYpkXr9WPv2PAZCtNip1Mv5KTW0SCurXv/9iOpcrH9PaqUElksqEB6pChqHGDRCFTyrZlGhnLNGiA=="], - - "@stock-bot/questdb-client/@types/node": ["@types/node@20.19.1", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-jJD50LtlD2dodAEO653i3YF04NWak6jN3ky+Ri3Em3mGR39/glWiboM/IePaRbgwSfqM1TpGXfAg8ohn/4dTgA=="], - - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin": ["@typescript-eslint/eslint-plugin@6.21.0", "", { "dependencies": { "@eslint-community/regexpp": "^4.5.1", "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/type-utils": "6.21.0", "@typescript-eslint/utils": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": 
"^4.3.4", "graphemer": "^1.4.0", "ignore": "^5.2.4", "natural-compare": "^1.4.0", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" }, "peerDependencies": { "@typescript-eslint/parser": "^6.0.0 || ^6.0.0-alpha", "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-oy9+hTPCUFpngkEZUSzbf9MxI65wbKFoQYsgPdILTfbUldp5ovUuphZVe4i30emU9M/kP+T64Di0mxl7dSw3MA=="], - - "@stock-bot/questdb-client/@typescript-eslint/parser": ["@typescript-eslint/parser@6.21.0", "", { "dependencies": { "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/types": "6.21.0", "@typescript-eslint/typescript-estree": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-tbsV1jPne5CkFQCgPBcDOt30ItF7aJoZL997JSF7MhGQqOeT3svWRYxiqlfA5RUdlHN6Fi+EI9bxqbdyAUZjYQ=="], - - "@stock-bot/questdb-client/eslint": ["eslint@8.57.1", "", { "dependencies": { "@eslint-community/eslint-utils": "^4.2.0", "@eslint-community/regexpp": "^4.6.1", "@eslint/eslintrc": "^2.1.4", "@eslint/js": "8.57.1", "@humanwhocodes/config-array": "^0.13.0", "@humanwhocodes/module-importer": "^1.0.1", "@nodelib/fs.walk": "^1.2.8", "@ungap/structured-clone": "^1.2.0", "ajv": "^6.12.4", "chalk": "^4.0.0", "cross-spawn": "^7.0.2", "debug": "^4.3.2", "doctrine": "^3.0.0", "escape-string-regexp": "^4.0.0", "eslint-scope": "^7.2.2", "eslint-visitor-keys": "^3.4.3", "espree": "^9.6.1", "esquery": "^1.4.2", "esutils": "^2.0.2", "fast-deep-equal": "^3.1.3", "file-entry-cache": "^6.0.1", "find-up": "^5.0.0", "glob-parent": "^6.0.2", "globals": "^13.19.0", "graphemer": "^1.4.0", "ignore": "^5.2.0", "imurmurhash": "^0.1.4", "is-glob": "^4.0.0", "is-path-inside": "^3.0.3", "js-yaml": "^4.1.0", "json-stable-stringify-without-jsonify": "^1.0.1", "levn": "^0.4.1", "lodash.merge": "^4.6.2", "minimatch": "^3.1.2", "natural-compare": "^1.4.0", "optionator": "^0.9.3", "strip-ansi": "^6.0.1", "text-table": "^0.2.0" }, "bin": { "eslint": "bin/eslint.js" } }, 
"sha512-ypowyDxpVSYpkXr9WPv2PAZCtNip1Mv5KTW0SCurXv/9iOpcrH9PaqUElksqEB6pChqHGDRCFTyrZlGhnLNGiA=="], - - "@stock-bot/queue/@types/node": ["@types/node@20.19.1", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-jJD50LtlD2dodAEO653i3YF04NWak6jN3ky+Ri3Em3mGR39/glWiboM/IePaRbgwSfqM1TpGXfAg8ohn/4dTgA=="], - - "@stock-bot/shutdown/@types/node": ["@types/node@20.19.1", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-jJD50LtlD2dodAEO653i3YF04NWak6jN3ky+Ri3Em3mGR39/glWiboM/IePaRbgwSfqM1TpGXfAg8ohn/4dTgA=="], - - "@stock-bot/types/@types/node": ["@types/node@20.19.1", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-jJD50LtlD2dodAEO653i3YF04NWak6jN3ky+Ri3Em3mGR39/glWiboM/IePaRbgwSfqM1TpGXfAg8ohn/4dTgA=="], - - "@stock-bot/utils/@types/node": ["@types/node@20.19.1", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-jJD50LtlD2dodAEO653i3YF04NWak6jN3ky+Ri3Em3mGR39/glWiboM/IePaRbgwSfqM1TpGXfAg8ohn/4dTgA=="], + "@stock-bot/questdb/eslint": ["eslint@8.57.1", "", { "dependencies": { "@eslint-community/eslint-utils": "^4.2.0", "@eslint-community/regexpp": "^4.6.1", "@eslint/eslintrc": "^2.1.4", "@eslint/js": "8.57.1", "@humanwhocodes/config-array": "^0.13.0", "@humanwhocodes/module-importer": "^1.0.1", "@nodelib/fs.walk": "^1.2.8", "@ungap/structured-clone": "^1.2.0", "ajv": "^6.12.4", "chalk": "^4.0.0", "cross-spawn": "^7.0.2", "debug": "^4.3.2", "doctrine": "^3.0.0", "escape-string-regexp": "^4.0.0", "eslint-scope": "^7.2.2", "eslint-visitor-keys": "^3.4.3", "espree": "^9.6.1", "esquery": "^1.4.2", "esutils": "^2.0.2", "fast-deep-equal": "^3.1.3", "file-entry-cache": "^6.0.1", "find-up": "^5.0.0", "glob-parent": "^6.0.2", "globals": "^13.19.0", "graphemer": "^1.4.0", "ignore": "^5.2.0", "imurmurhash": "^0.1.4", "is-glob": "^4.0.0", "is-path-inside": "^3.0.3", "js-yaml": "^4.1.0", "json-stable-stringify-without-jsonify": "^1.0.1", "levn": "^0.4.1", "lodash.merge": "^4.6.2", "minimatch": "^3.1.2", "natural-compare": "^1.4.0", 
"optionator": "^0.9.3", "strip-ansi": "^6.0.1", "text-table": "^0.2.0" }, "bin": { "eslint": "bin/eslint.js" } }, "sha512-ypowyDxpVSYpkXr9WPv2PAZCtNip1Mv5KTW0SCurXv/9iOpcrH9PaqUElksqEB6pChqHGDRCFTyrZlGhnLNGiA=="], "@stock-bot/web-app/@typescript-eslint/eslint-plugin": ["@typescript-eslint/eslint-plugin@6.21.0", "", { "dependencies": { "@eslint-community/regexpp": "^4.5.1", "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/type-utils": "6.21.0", "@typescript-eslint/utils": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "graphemer": "^1.4.0", "ignore": "^5.2.4", "natural-compare": "^1.4.0", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" }, "peerDependencies": { "@typescript-eslint/parser": "^6.0.0 || ^6.0.0-alpha", "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-oy9+hTPCUFpngkEZUSzbf9MxI65wbKFoQYsgPdILTfbUldp5ovUuphZVe4i30emU9M/kP+T64Di0mxl7dSw3MA=="], @@ -2321,7 +2472,17 @@ "@stock-bot/web-app/eslint": ["eslint@8.57.1", "", { "dependencies": { "@eslint-community/eslint-utils": "^4.2.0", "@eslint-community/regexpp": "^4.6.1", "@eslint/eslintrc": "^2.1.4", "@eslint/js": "8.57.1", "@humanwhocodes/config-array": "^0.13.0", "@humanwhocodes/module-importer": "^1.0.1", "@nodelib/fs.walk": "^1.2.8", "@ungap/structured-clone": "^1.2.0", "ajv": "^6.12.4", "chalk": "^4.0.0", "cross-spawn": "^7.0.2", "debug": "^4.3.2", "doctrine": "^3.0.0", "escape-string-regexp": "^4.0.0", "eslint-scope": "^7.2.2", "eslint-visitor-keys": "^3.4.3", "espree": "^9.6.1", "esquery": "^1.4.2", "esutils": "^2.0.2", "fast-deep-equal": "^3.1.3", "file-entry-cache": "^6.0.1", "find-up": "^5.0.0", "glob-parent": "^6.0.2", "globals": "^13.19.0", "graphemer": "^1.4.0", "ignore": "^5.2.0", "imurmurhash": "^0.1.4", "is-glob": "^4.0.0", "is-path-inside": "^3.0.3", "js-yaml": "^4.1.0", "json-stable-stringify-without-jsonify": "^1.0.1", "levn": "^0.4.1", "lodash.merge": "^4.6.2", "minimatch": "^3.1.2", "natural-compare": "^1.4.0", "optionator": "^0.9.3", "strip-ansi": 
"^6.0.1", "text-table": "^0.2.0" }, "bin": { "eslint": "bin/eslint.js" } }, "sha512-ypowyDxpVSYpkXr9WPv2PAZCtNip1Mv5KTW0SCurXv/9iOpcrH9PaqUElksqEB6pChqHGDRCFTyrZlGhnLNGiA=="], - "@types/ssh2/@types/node": ["@types/node@18.19.112", "", { "dependencies": { "undici-types": "~5.26.4" } }, "sha512-i+Vukt9POdS/MBI7YrrkkI5fMfwFtOjphSmt4WXYLfwqsfr6z/HdCx7LqT9M7JktGob8WNgj8nFB4TbGNE4Cog=="], + "@types/docker-modem/@types/node": ["@types/node@22.15.32", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-3jigKqgSjsH6gYZv2nEsqdXfZqIFGAV36XYYjf9KGZ3PSG+IhLecqPnI310RvjutyMwifE2hhhNEklOUrvx/wA=="], + + "@types/dockerode/@types/node": ["@types/node@22.15.32", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-3jigKqgSjsH6gYZv2nEsqdXfZqIFGAV36XYYjf9KGZ3PSG+IhLecqPnI310RvjutyMwifE2hhhNEklOUrvx/wA=="], + + "@types/pg/@types/node": ["@types/node@22.15.32", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-3jigKqgSjsH6gYZv2nEsqdXfZqIFGAV36XYYjf9KGZ3PSG+IhLecqPnI310RvjutyMwifE2hhhNEklOUrvx/wA=="], + + "@types/ssh2/@types/node": ["@types/node@22.15.32", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-3jigKqgSjsH6gYZv2nEsqdXfZqIFGAV36XYYjf9KGZ3PSG+IhLecqPnI310RvjutyMwifE2hhhNEklOUrvx/wA=="], + + "@types/ssh2-streams/@types/node": ["@types/node@22.15.32", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-3jigKqgSjsH6gYZv2nEsqdXfZqIFGAV36XYYjf9KGZ3PSG+IhLecqPnI310RvjutyMwifE2hhhNEklOUrvx/wA=="], + + "@types/superagent/@types/node": ["@types/node@22.15.32", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-3jigKqgSjsH6gYZv2nEsqdXfZqIFGAV36XYYjf9KGZ3PSG+IhLecqPnI310RvjutyMwifE2hhhNEklOUrvx/wA=="], "@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.5", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-G6T0ZX48xgozx7587koeX9Ys2NYy6Gmv//P89sEte9V9whIapMNF4idKxnW2QtCcLiTWlb/wfCabAtAFWhhBow=="], @@ -2333,8 +2494,14 @@ "bl/readable-stream": ["readable-stream@3.6.2", "", { "dependencies": { 
"inherits": "^2.0.3", "string_decoder": "^1.1.1", "util-deprecate": "^1.0.1" } }, "sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA=="], + "body-parser/iconv-lite": ["iconv-lite@0.6.3", "", { "dependencies": { "safer-buffer": ">= 2.1.2 < 3.0.0" } }, "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw=="], + + "bun-types/@types/node": ["@types/node@22.15.32", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-3jigKqgSjsH6gYZv2nEsqdXfZqIFGAV36XYYjf9KGZ3PSG+IhLecqPnI310RvjutyMwifE2hhhNEklOUrvx/wA=="], + "chokidar/glob-parent": ["glob-parent@5.1.2", "", { "dependencies": { "is-glob": "^4.0.1" } }, "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow=="], + "cli-tableau/chalk": ["chalk@3.0.0", "", { "dependencies": { "ansi-styles": "^4.1.0", "supports-color": "^7.1.0" } }, "sha512-4D3B6Wf41KOYRFdszmDqMCGq5VV/uMAB273JILmO+3jAlh8X4qDtdtgCR3fxtbLEMzSx22QdhnDcJvu2u1fVwg=="], + "compress-commons/is-stream": ["is-stream@2.0.1", "", {}, "sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1VLMO4n7OI6p7RbngDg=="], "decompress-response/mimic-response": ["mimic-response@3.1.0", "", {}, "sha512-z0yWI+4FDrrweS8Zmt4Ej5HdJmky15+L2e6Wgn3+iK5fWzb6T3fhNFq2+MeTRb064c6Wr4N/wv0DzQTjNzHNGQ=="], @@ -2389,6 +2556,8 @@ "http-errors/statuses": ["statuses@2.0.1", "", {}, "sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ=="], + "ip-address/sprintf-js": ["sprintf-js@1.1.3", "", {}, "sha512-Oo+0REFV59/rz3gfJNKQiBlwfHaSESl1pcGyABQsnnIfWOFt6JNj5gCog2U6MLZ//IGYD+nA8nI+mTShREReaA=="], + "is-wsl/is-docker": ["is-docker@2.2.1", "", { "bin": { "is-docker": "cli.js" } }, "sha512-F+i2BKsFrH66iaUFc0woD8sLy8getkwTwtOBjvs56Cx4CgJDeKQeqfz8wAYiSb8JOprWhHH5p77PbmYCvvUuXQ=="], "joi/@hapi/hoek": ["@hapi/hoek@9.3.0", "", {}, 
"sha512-/c6rf4UJlmHlC9b5BaNvzAcFv7HZ2QHaV0D4/HNlBdvFnvQq8RI4kYdhyPCl7Xj+oWvTWQ8ujhqS53LIgAe6KQ=="], @@ -2401,14 +2570,20 @@ "mongodb-client-encryption/node-addon-api": ["node-addon-api@4.3.0", "", {}, "sha512-73sE9+3UaLYYFmDsFZnqCInzPyh3MqIwZO9cw58yIqAZhONrrabrYyYe3TuIqtIiOuTXVhsGau8hcrhhwSsDIQ=="], + "mongodb-client-encryption/prebuild-install": ["prebuild-install@7.1.3", "", { "dependencies": { "detect-libc": "^2.0.0", "expand-template": "^2.0.3", "github-from-package": "0.0.0", "minimist": "^1.2.3", "mkdirp-classic": "^0.5.3", "napi-build-utils": "^2.0.0", "node-abi": "^3.3.0", "pump": "^3.0.0", "rc": "^1.2.7", "simple-get": "^4.0.0", "tar-fs": "^2.0.0", "tunnel-agent": "^0.6.0" }, "bin": { "prebuild-install": "bin.js" } }, "sha512-8Mf2cbV7x1cXPUILADGI3wuhfqWvtiLA1iclTDbFRZkgRQS0NqsPZphna9V+HyTEadheuPmjaJMsbzKQFOzLug=="], + "mongodb-mcp-server/@modelcontextprotocol/sdk": ["@modelcontextprotocol/sdk@1.13.0", "", { "dependencies": { "ajv": "^6.12.6", "content-type": "^1.0.5", "cors": "^2.8.5", "cross-spawn": "^7.0.5", "eventsource": "^3.0.2", "express": "^5.0.1", "express-rate-limit": "^7.5.0", "pkce-challenge": "^5.0.0", "raw-body": "^3.0.0", "zod": "^3.23.8", "zod-to-json-schema": "^3.24.1" } }, "sha512-P5FZsXU0kY881F6Hbk9GhsYx02/KgWK1DYf7/tyE/1lcFKhDYPQR9iYjhQXJn+Sg6hQleMo3DB7h7+p4wgp2Lw=="], "mongodb-memory-server-core/mongodb": ["mongodb@5.9.2", "", { "dependencies": { "bson": "^5.5.0", "mongodb-connection-string-url": "^2.6.0", "socks": "^2.7.1" }, "optionalDependencies": { "@mongodb-js/saslprep": "^1.1.0" }, "peerDependencies": { "@aws-sdk/credential-providers": "^3.188.0", "@mongodb-js/zstd": "^1.0.0", "kerberos": "^1.0.0 || ^2.0.0", "mongodb-client-encryption": ">=2.3.0 <3", "snappy": "^7.2.2" }, "optionalPeers": ["@aws-sdk/credential-providers", "@mongodb-js/zstd", "kerberos", "mongodb-client-encryption", "snappy"] }, "sha512-H60HecKO4Bc+7dhOv4sJlgvenK4fQNqqUIlXxZYQNbfEWSALGAwGoyJd/0Qwk4TttFXUOHJ2ZJQe/52ScaUwtQ=="], "nearley/commander": 
["commander@2.20.3", "", {}, "sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ=="], + "needle/debug": ["debug@3.2.7", "", { "dependencies": { "ms": "^2.1.1" } }, "sha512-CFjzYYAi4ThfiQvizrFQevTTXHtnCqWfe7x1AhgEscTz6ZbLbfoLRLPugTQyBth6f8ZERVUSyWHFD/7Wu4t1XQ=="], + "npm-run-path/path-key": ["path-key@4.0.0", "", {}, "sha512-haREypq7xkM7ErfgIyA0z+Bj4AGKlMSdlQE2jvJo6huWD1EdkKYV+G/T4nq0YEF2vgTT8kqMFKo1uHn950r4SQ=="], + "nssocket/eventemitter2": ["eventemitter2@0.4.14", "", {}, "sha512-K7J4xq5xAD5jHsGM5ReWXRTFa3JRGofHiMcVgQ8PRwgWxzjHpMWCIzsmyf60+mh8KLsqYPcjUMa0AC4hd6lPyQ=="], + "openid-client/lru-cache": ["lru-cache@6.0.0", "", { "dependencies": { "yallist": "^4.0.0" } }, "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA=="], "os-dns-native/node-addon-api": ["node-addon-api@4.3.0", "", {}, "sha512-73sE9+3UaLYYFmDsFZnqCInzPyh3MqIwZO9cw58yIqAZhONrrabrYyYe3TuIqtIiOuTXVhsGau8hcrhhwSsDIQ=="], @@ -2419,8 +2594,18 @@ "pkg-dir/find-up": ["find-up@4.1.0", "", { "dependencies": { "locate-path": "^5.0.0", "path-exists": "^4.0.0" } }, "sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw=="], + "pm2/chalk": ["chalk@3.0.0", "", { "dependencies": { "ansi-styles": "^4.1.0", "supports-color": "^7.1.0" } }, "sha512-4D3B6Wf41KOYRFdszmDqMCGq5VV/uMAB273JILmO+3jAlh8X4qDtdtgCR3fxtbLEMzSx22QdhnDcJvu2u1fVwg=="], + + "pm2-sysmonit/pidusage": ["pidusage@2.0.21", "", { "dependencies": { "safe-buffer": "^5.2.1" } }, "sha512-cv3xAQos+pugVX+BfXpHsbyz/dLzX+lr44zNMsYiGxUw+kV5sgQCIcLd1z+0vq+KyC7dJ+/ts2PsfgWfSC3WXA=="], + "prebuild-install/tar-fs": ["tar-fs@2.1.3", "", { "dependencies": { "chownr": "^1.1.1", "mkdirp-classic": "^0.5.2", "pump": "^3.0.0", "tar-stream": "^2.1.4" } }, "sha512-090nwYJDmlhwFwEW3QQl+vaNnxsO2yVsd45eTKRBzSzu+hlb1w2K9inVq5b0ngXuLVqQ4ApvsUHHnu/zQNkWAg=="], + "protobufjs/@types/node": ["@types/node@22.15.32", "", { "dependencies": { "undici-types": 
"~6.21.0" } }, "sha512-3jigKqgSjsH6gYZv2nEsqdXfZqIFGAV36XYYjf9KGZ3PSG+IhLecqPnI310RvjutyMwifE2hhhNEklOUrvx/wA=="], + + "proxy-agent/lru-cache": ["lru-cache@7.18.3", "", {}, "sha512-jumlc0BIUrS3qJGgIkWZsyfAM7NCWiBcCDhnd+3NNM5KbBmLTgHVfWBcg6W+rLUsIpzpERPsvwUP7CckAQSOoA=="], + + "raw-body/iconv-lite": ["iconv-lite@0.6.3", "", { "dependencies": { "safer-buffer": ">= 2.1.2 < 3.0.0" } }, "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw=="], + "rc/strip-json-comments": ["strip-json-comments@2.0.1", "", {}, "sha512-4gB8na07fecVVkOI6Rs4e7T6NOTki5EmL7TUduTs6bu3EdnSycntVJ4re8kgZA+wx9IueI2Y11bfbgwtzuE0KQ=="], "readdir-glob/minimatch": ["minimatch@5.1.6", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-lKwV/1brpG6mBUFHtb7NUmtABCb2WZZmm2wNiOA5hAb8VdCS4B3dtMWyvcoViccwAW/COERjXLt0zP1zXUN26g=="], @@ -2431,12 +2616,14 @@ "send/mime-types": ["mime-types@3.0.1", "", { "dependencies": { "mime-db": "^1.54.0" } }, "sha512-xRc4oEhT6eaBpU1XF7AjpOFD+xQmXNB5OVKwp4tqCuBpHLS/ZbBDrc07mYTDqVMg6PfxUjjNp85O6Cd2Z/5HWA=="], - "ssh-remote-port-forward/@types/ssh2": ["@types/ssh2@0.5.52", "", { "dependencies": { "@types/node": "*", "@types/ssh2-streams": "*" } }, "sha512-lbLLlXxdCZOSJMCInKH2+9V/77ET2J6NPQHpFI0kda61Dd1KglJs+fPQBchizmzYSOJBgdTajhPqBO1xxLywvg=="], + "sucrase/commander": ["commander@4.1.1", "", {}, "sha512-NOKm8xhkzAjzFx8B2v5OAHT+u5pRQc2UCa2Vq9jYL/31o2wi9mxBA7LIFs3sV5VSC49z6pEhfbMULvShKj26WA=="], "tailwindcss/object-hash": ["object-hash@3.0.0", "", {}, "sha512-RSn9F68PjH9HqtltsSnqYC1XXoWe9Bju5+213R98cNGttag9q9yAOTzdbsqvIa7aNm5WffBZFpWYr2aWrklWAw=="], "type-is/mime-types": ["mime-types@3.0.1", "", { "dependencies": { "mime-db": "^1.54.0" } }, "sha512-xRc4oEhT6eaBpU1XF7AjpOFD+xQmXNB5OVKwp4tqCuBpHLS/ZbBDrc07mYTDqVMg6PfxUjjNp85O6Cd2Z/5HWA=="], + "vizion/async": ["async@2.6.4", "", { "dependencies": { "lodash": "^4.17.14" } }, "sha512-mzo5dfJYwAn29PeiJ0zvwTo04zj8HDJj0Mn8TD7sno7q12prdbnasKJHhkm2c1LgrhlJ0teaea8860oxi51mGA=="], + 
"win-export-certificate-and-key/node-addon-api": ["node-addon-api@3.2.1", "", {}, "sha512-mmcei9JghVNDYydghQmeDX8KoAm0FAiYyIcUt/N4nhyAipB17pllZQDOJD2fotxABnt4Mdz+dKTO7eftLg4d0A=="], "yauzl/buffer-crc32": ["buffer-crc32@0.2.13", "", {}, "sha512-VO9Ht/+p3SN7SKWqcrgEzjGbRSJYTx+Q1pTQC0wrWqHx0vpJraQ6GtHx8tvcg1rlK1byhU5gccxgOgj7B0TDkQ=="], @@ -2483,151 +2670,117 @@ "@mongodb-js/oidc-plugin/express/type-is": ["type-is@1.6.18", "", { "dependencies": { "media-typer": "0.3.0", "mime-types": "~2.1.24" } }, "sha512-TkRKr9sUTxEH8MdfuCSP7VizJyzRNMjj2J2do2Jr3Kym598JVdEksuzPQCnlFPW4ky9Q+iA+ma9BGm06XQBy8g=="], - "@mongosh/service-provider-node-driver/kerberos/prebuild-install": ["prebuild-install@7.1.1", "", { "dependencies": { "detect-libc": "^2.0.0", "expand-template": "^2.0.3", "github-from-package": "0.0.0", "minimist": "^1.2.3", "mkdirp-classic": "^0.5.3", "napi-build-utils": "^1.0.1", "node-abi": "^3.3.0", "pump": "^3.0.0", "rc": "^1.2.7", "simple-get": "^4.0.0", "tar-fs": "^2.0.0", "tunnel-agent": "^0.6.0" }, "bin": { "prebuild-install": "bin.js" } }, "sha512-jAXscXWMcCK8GgCoHOfIr0ODh5ai8mj63L2nWrjuAgXE6tDyYGnx4/8o/rCgU+B4JSyZBKbeZqzhtwtC3ovxjw=="], + "@pm2/agent/semver/lru-cache": ["lru-cache@6.0.0", "", { "dependencies": { "yallist": "^4.0.0" } }, "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA=="], - "@stock-bot/http/@typescript-eslint/eslint-plugin/@typescript-eslint/scope-manager": ["@typescript-eslint/scope-manager@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0" } }, "sha512-OwLUIWZJry80O99zvqXVEioyniJMa+d2GrqpUTqi5/v5D5rOrppJVBPa0yKCblcigC0/aYAzxxqQ1B+DS2RYsg=="], + "@pm2/io/semver/lru-cache": ["lru-cache@6.0.0", "", { "dependencies": { "yallist": "^4.0.0" } }, "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA=="], - "@stock-bot/http/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils": 
["@typescript-eslint/type-utils@6.21.0", "", { "dependencies": { "@typescript-eslint/typescript-estree": "6.21.0", "@typescript-eslint/utils": "6.21.0", "debug": "^4.3.4", "ts-api-utils": "^1.0.1" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-rZQI7wHfao8qMX3Rd3xqeYSMCL3SoiSQLBATSiVKARdFGCYSRvmViieZjqc58jKgs8Y8i9YvVVhRbHSTA4VBag=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin/@typescript-eslint/scope-manager": ["@typescript-eslint/scope-manager@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0" } }, "sha512-OwLUIWZJry80O99zvqXVEioyniJMa+d2GrqpUTqi5/v5D5rOrppJVBPa0yKCblcigC0/aYAzxxqQ1B+DS2RYsg=="], - "@stock-bot/http/@typescript-eslint/eslint-plugin/@typescript-eslint/utils": ["@typescript-eslint/utils@6.21.0", "", { "dependencies": { "@eslint-community/eslint-utils": "^4.4.0", "@types/json-schema": "^7.0.12", "@types/semver": "^7.5.0", "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/types": "6.21.0", "@typescript-eslint/typescript-estree": "6.21.0", "semver": "^7.5.4" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-NfWVaC8HP9T8cbKQxHcsJBY5YE1O33+jpMwN45qzWWaPDZgLIbo12toGMWnmhvCpd3sIxkpDw3Wv1B3dYrbDQQ=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils": ["@typescript-eslint/type-utils@6.21.0", "", { "dependencies": { "@typescript-eslint/typescript-estree": "6.21.0", "@typescript-eslint/utils": "6.21.0", "debug": "^4.3.4", "ts-api-utils": "^1.0.1" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-rZQI7wHfao8qMX3Rd3xqeYSMCL3SoiSQLBATSiVKARdFGCYSRvmViieZjqc58jKgs8Y8i9YvVVhRbHSTA4VBag=="], - "@stock-bot/http/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys": ["@typescript-eslint/visitor-keys@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "eslint-visitor-keys": "^3.4.1" } }, 
"sha512-JJtkDduxLi9bivAB+cYOVMtbkqdPOhZ+ZI5LC47MIRrDV4Yn2o+ZnW10Nkmr28xRpSpdJ6Sm42Hjf2+REYXm0A=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin/@typescript-eslint/utils": ["@typescript-eslint/utils@6.21.0", "", { "dependencies": { "@eslint-community/eslint-utils": "^4.4.0", "@types/json-schema": "^7.0.12", "@types/semver": "^7.5.0", "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/types": "6.21.0", "@typescript-eslint/typescript-estree": "6.21.0", "semver": "^7.5.4" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-NfWVaC8HP9T8cbKQxHcsJBY5YE1O33+jpMwN45qzWWaPDZgLIbo12toGMWnmhvCpd3sIxkpDw3Wv1B3dYrbDQQ=="], - "@stock-bot/http/@typescript-eslint/eslint-plugin/ignore": ["ignore@5.3.2", "", {}, "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys": ["@typescript-eslint/visitor-keys@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "eslint-visitor-keys": "^3.4.1" } }, "sha512-JJtkDduxLi9bivAB+cYOVMtbkqdPOhZ+ZI5LC47MIRrDV4Yn2o+ZnW10Nkmr28xRpSpdJ6Sm42Hjf2+REYXm0A=="], - "@stock-bot/http/@typescript-eslint/eslint-plugin/ts-api-utils": ["ts-api-utils@1.4.3", "", { "peerDependencies": { "typescript": ">=4.2.0" } }, "sha512-i3eMG77UTMD0hZhgRS562pv83RC6ukSAC2GMNWc+9dieh/+jDM5u5YG+NHX6VNDRHQcHwmsTHctP9LhbC3WxVw=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin/ignore": ["ignore@5.3.2", "", {}, "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="], - "@stock-bot/http/@typescript-eslint/parser/@typescript-eslint/scope-manager": ["@typescript-eslint/scope-manager@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0" } }, "sha512-OwLUIWZJry80O99zvqXVEioyniJMa+d2GrqpUTqi5/v5D5rOrppJVBPa0yKCblcigC0/aYAzxxqQ1B+DS2RYsg=="], + 
"@stock-bot/mongodb/@typescript-eslint/eslint-plugin/ts-api-utils": ["ts-api-utils@1.4.3", "", { "peerDependencies": { "typescript": ">=4.2.0" } }, "sha512-i3eMG77UTMD0hZhgRS562pv83RC6ukSAC2GMNWc+9dieh/+jDM5u5YG+NHX6VNDRHQcHwmsTHctP9LhbC3WxVw=="], - "@stock-bot/http/@typescript-eslint/parser/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], + "@stock-bot/mongodb/@typescript-eslint/parser/@typescript-eslint/scope-manager": ["@typescript-eslint/scope-manager@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0" } }, "sha512-OwLUIWZJry80O99zvqXVEioyniJMa+d2GrqpUTqi5/v5D5rOrppJVBPa0yKCblcigC0/aYAzxxqQ1B+DS2RYsg=="], - "@stock-bot/http/@typescript-eslint/parser/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], + "@stock-bot/mongodb/@typescript-eslint/parser/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - "@stock-bot/http/@typescript-eslint/parser/@typescript-eslint/visitor-keys": ["@typescript-eslint/visitor-keys@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "eslint-visitor-keys": "^3.4.1" } }, "sha512-JJtkDduxLi9bivAB+cYOVMtbkqdPOhZ+ZI5LC47MIRrDV4Yn2o+ZnW10Nkmr28xRpSpdJ6Sm42Hjf2+REYXm0A=="], + "@stock-bot/mongodb/@typescript-eslint/parser/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { 
"@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], - "@stock-bot/http/eslint/@eslint/eslintrc": ["@eslint/eslintrc@2.1.4", "", { "dependencies": { "ajv": "^6.12.4", "debug": "^4.3.2", "espree": "^9.6.0", "globals": "^13.19.0", "ignore": "^5.2.0", "import-fresh": "^3.2.1", "js-yaml": "^4.1.0", "minimatch": "^3.1.2", "strip-json-comments": "^3.1.1" } }, "sha512-269Z39MS6wVJtsoUl10L60WdkhJVdPG24Q4eZTH3nnF6lpvSShEK3wQjDX9JRWAUPvPh7COouPpU9IrqaZFvtQ=="], + "@stock-bot/mongodb/@typescript-eslint/parser/@typescript-eslint/visitor-keys": ["@typescript-eslint/visitor-keys@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "eslint-visitor-keys": "^3.4.1" } }, "sha512-JJtkDduxLi9bivAB+cYOVMtbkqdPOhZ+ZI5LC47MIRrDV4Yn2o+ZnW10Nkmr28xRpSpdJ6Sm42Hjf2+REYXm0A=="], - "@stock-bot/http/eslint/@eslint/js": ["@eslint/js@8.57.1", "", {}, "sha512-d9zaMRSTIKDLhctzH12MtXvJKSSUhaHcjV+2Z+GK+EEY7XKpP5yR4x+N3TAcHTcu963nIr+TMcCb4DBCYX1z6Q=="], + "@stock-bot/mongodb/eslint/@eslint/eslintrc": ["@eslint/eslintrc@2.1.4", "", { "dependencies": { "ajv": "^6.12.4", "debug": "^4.3.2", "espree": "^9.6.0", "globals": "^13.19.0", "ignore": "^5.2.0", "import-fresh": "^3.2.1", "js-yaml": "^4.1.0", "minimatch": "^3.1.2", "strip-json-comments": "^3.1.1" } }, "sha512-269Z39MS6wVJtsoUl10L60WdkhJVdPG24Q4eZTH3nnF6lpvSShEK3wQjDX9JRWAUPvPh7COouPpU9IrqaZFvtQ=="], - "@stock-bot/http/eslint/doctrine": ["doctrine@3.0.0", "", { "dependencies": { "esutils": "^2.0.2" } }, "sha512-yS+Q5i3hBf7GBkd4KG8a7eBNNWNGLTaEwwYWUijIYM7zrlYDM0BFXHjjPWlWZ1Rg7UaddZeIDmi9jF3HmqiQ2w=="], + "@stock-bot/mongodb/eslint/@eslint/js": ["@eslint/js@8.57.1", "", {}, "sha512-d9zaMRSTIKDLhctzH12MtXvJKSSUhaHcjV+2Z+GK+EEY7XKpP5yR4x+N3TAcHTcu963nIr+TMcCb4DBCYX1z6Q=="], - 
"@stock-bot/http/eslint/eslint-scope": ["eslint-scope@7.2.2", "", { "dependencies": { "esrecurse": "^4.3.0", "estraverse": "^5.2.0" } }, "sha512-dOt21O7lTMhDM+X9mB4GX+DZrZtCUJPL/wlcTqxyrx5IvO0IYtILdtrQGQp+8n5S0gwSVmOf9NQrjMOgfQZlIg=="], + "@stock-bot/mongodb/eslint/doctrine": ["doctrine@3.0.0", "", { "dependencies": { "esutils": "^2.0.2" } }, "sha512-yS+Q5i3hBf7GBkd4KG8a7eBNNWNGLTaEwwYWUijIYM7zrlYDM0BFXHjjPWlWZ1Rg7UaddZeIDmi9jF3HmqiQ2w=="], - "@stock-bot/http/eslint/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], + "@stock-bot/mongodb/eslint/eslint-scope": ["eslint-scope@7.2.2", "", { "dependencies": { "esrecurse": "^4.3.0", "estraverse": "^5.2.0" } }, "sha512-dOt21O7lTMhDM+X9mB4GX+DZrZtCUJPL/wlcTqxyrx5IvO0IYtILdtrQGQp+8n5S0gwSVmOf9NQrjMOgfQZlIg=="], - "@stock-bot/http/eslint/espree": ["espree@9.6.1", "", { "dependencies": { "acorn": "^8.9.0", "acorn-jsx": "^5.3.2", "eslint-visitor-keys": "^3.4.1" } }, "sha512-oruZaFkjorTpF32kDSI5/75ViwGeZginGGy2NoOSg3Q9bnwlnmDm4HLnkl0RE3n+njDXR037aY1+x58Z/zFdwQ=="], + "@stock-bot/mongodb/eslint/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], - "@stock-bot/http/eslint/file-entry-cache": ["file-entry-cache@6.0.1", "", { "dependencies": { "flat-cache": "^3.0.4" } }, "sha512-7Gps/XWymbLk2QLYK4NzpMOrYjMhdIxXuIvy2QBsLE6ljuodKvdkWs/cpyJJ3CVIVpH0Oi1Hvg1ovbMzLdFBBg=="], + "@stock-bot/mongodb/eslint/espree": ["espree@9.6.1", "", { "dependencies": { "acorn": "^8.9.0", "acorn-jsx": "^5.3.2", "eslint-visitor-keys": "^3.4.1" } }, "sha512-oruZaFkjorTpF32kDSI5/75ViwGeZginGGy2NoOSg3Q9bnwlnmDm4HLnkl0RE3n+njDXR037aY1+x58Z/zFdwQ=="], - "@stock-bot/http/eslint/ignore": ["ignore@5.3.2", "", {}, "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="], + 
"@stock-bot/mongodb/eslint/file-entry-cache": ["file-entry-cache@6.0.1", "", { "dependencies": { "flat-cache": "^3.0.4" } }, "sha512-7Gps/XWymbLk2QLYK4NzpMOrYjMhdIxXuIvy2QBsLE6ljuodKvdkWs/cpyJJ3CVIVpH0Oi1Hvg1ovbMzLdFBBg=="], - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/scope-manager": ["@typescript-eslint/scope-manager@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0" } }, "sha512-OwLUIWZJry80O99zvqXVEioyniJMa+d2GrqpUTqi5/v5D5rOrppJVBPa0yKCblcigC0/aYAzxxqQ1B+DS2RYsg=="], + "@stock-bot/mongodb/eslint/ignore": ["ignore@5.3.2", "", {}, "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="], - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils": ["@typescript-eslint/type-utils@6.21.0", "", { "dependencies": { "@typescript-eslint/typescript-estree": "6.21.0", "@typescript-eslint/utils": "6.21.0", "debug": "^4.3.4", "ts-api-utils": "^1.0.1" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-rZQI7wHfao8qMX3Rd3xqeYSMCL3SoiSQLBATSiVKARdFGCYSRvmViieZjqc58jKgs8Y8i9YvVVhRbHSTA4VBag=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin/@typescript-eslint/scope-manager": ["@typescript-eslint/scope-manager@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0" } }, "sha512-OwLUIWZJry80O99zvqXVEioyniJMa+d2GrqpUTqi5/v5D5rOrppJVBPa0yKCblcigC0/aYAzxxqQ1B+DS2RYsg=="], - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/utils": ["@typescript-eslint/utils@6.21.0", "", { "dependencies": { "@eslint-community/eslint-utils": "^4.4.0", "@types/json-schema": "^7.0.12", "@types/semver": "^7.5.0", "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/types": "6.21.0", "@typescript-eslint/typescript-estree": "6.21.0", "semver": "^7.5.4" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } 
}, "sha512-NfWVaC8HP9T8cbKQxHcsJBY5YE1O33+jpMwN45qzWWaPDZgLIbo12toGMWnmhvCpd3sIxkpDw3Wv1B3dYrbDQQ=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils": ["@typescript-eslint/type-utils@6.21.0", "", { "dependencies": { "@typescript-eslint/typescript-estree": "6.21.0", "@typescript-eslint/utils": "6.21.0", "debug": "^4.3.4", "ts-api-utils": "^1.0.1" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-rZQI7wHfao8qMX3Rd3xqeYSMCL3SoiSQLBATSiVKARdFGCYSRvmViieZjqc58jKgs8Y8i9YvVVhRbHSTA4VBag=="], - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys": ["@typescript-eslint/visitor-keys@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "eslint-visitor-keys": "^3.4.1" } }, "sha512-JJtkDduxLi9bivAB+cYOVMtbkqdPOhZ+ZI5LC47MIRrDV4Yn2o+ZnW10Nkmr28xRpSpdJ6Sm42Hjf2+REYXm0A=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin/@typescript-eslint/utils": ["@typescript-eslint/utils@6.21.0", "", { "dependencies": { "@eslint-community/eslint-utils": "^4.4.0", "@types/json-schema": "^7.0.12", "@types/semver": "^7.5.0", "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/types": "6.21.0", "@typescript-eslint/typescript-estree": "6.21.0", "semver": "^7.5.4" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-NfWVaC8HP9T8cbKQxHcsJBY5YE1O33+jpMwN45qzWWaPDZgLIbo12toGMWnmhvCpd3sIxkpDw3Wv1B3dYrbDQQ=="], - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/ignore": ["ignore@5.3.2", "", {}, "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys": ["@typescript-eslint/visitor-keys@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "eslint-visitor-keys": "^3.4.1" } }, "sha512-JJtkDduxLi9bivAB+cYOVMtbkqdPOhZ+ZI5LC47MIRrDV4Yn2o+ZnW10Nkmr28xRpSpdJ6Sm42Hjf2+REYXm0A=="], - 
"@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/ts-api-utils": ["ts-api-utils@1.4.3", "", { "peerDependencies": { "typescript": ">=4.2.0" } }, "sha512-i3eMG77UTMD0hZhgRS562pv83RC6ukSAC2GMNWc+9dieh/+jDM5u5YG+NHX6VNDRHQcHwmsTHctP9LhbC3WxVw=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin/ignore": ["ignore@5.3.2", "", {}, "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="], - "@stock-bot/mongodb-client/@typescript-eslint/parser/@typescript-eslint/scope-manager": ["@typescript-eslint/scope-manager@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0" } }, "sha512-OwLUIWZJry80O99zvqXVEioyniJMa+d2GrqpUTqi5/v5D5rOrppJVBPa0yKCblcigC0/aYAzxxqQ1B+DS2RYsg=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin/ts-api-utils": ["ts-api-utils@1.4.3", "", { "peerDependencies": { "typescript": ">=4.2.0" } }, "sha512-i3eMG77UTMD0hZhgRS562pv83RC6ukSAC2GMNWc+9dieh/+jDM5u5YG+NHX6VNDRHQcHwmsTHctP9LhbC3WxVw=="], - "@stock-bot/mongodb-client/@typescript-eslint/parser/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], + "@stock-bot/postgres/@typescript-eslint/parser/@typescript-eslint/scope-manager": ["@typescript-eslint/scope-manager@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0" } }, "sha512-OwLUIWZJry80O99zvqXVEioyniJMa+d2GrqpUTqi5/v5D5rOrppJVBPa0yKCblcigC0/aYAzxxqQ1B+DS2RYsg=="], - "@stock-bot/mongodb-client/@typescript-eslint/parser/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, 
"sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], + "@stock-bot/postgres/@typescript-eslint/parser/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - "@stock-bot/mongodb-client/@typescript-eslint/parser/@typescript-eslint/visitor-keys": ["@typescript-eslint/visitor-keys@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "eslint-visitor-keys": "^3.4.1" } }, "sha512-JJtkDduxLi9bivAB+cYOVMtbkqdPOhZ+ZI5LC47MIRrDV4Yn2o+ZnW10Nkmr28xRpSpdJ6Sm42Hjf2+REYXm0A=="], + "@stock-bot/postgres/@typescript-eslint/parser/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], - "@stock-bot/mongodb-client/eslint/@eslint/eslintrc": ["@eslint/eslintrc@2.1.4", "", { "dependencies": { "ajv": "^6.12.4", "debug": "^4.3.2", "espree": "^9.6.0", "globals": "^13.19.0", "ignore": "^5.2.0", "import-fresh": "^3.2.1", "js-yaml": "^4.1.0", "minimatch": "^3.1.2", "strip-json-comments": "^3.1.1" } }, "sha512-269Z39MS6wVJtsoUl10L60WdkhJVdPG24Q4eZTH3nnF6lpvSShEK3wQjDX9JRWAUPvPh7COouPpU9IrqaZFvtQ=="], + "@stock-bot/postgres/@typescript-eslint/parser/@typescript-eslint/visitor-keys": ["@typescript-eslint/visitor-keys@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "eslint-visitor-keys": "^3.4.1" } }, "sha512-JJtkDduxLi9bivAB+cYOVMtbkqdPOhZ+ZI5LC47MIRrDV4Yn2o+ZnW10Nkmr28xRpSpdJ6Sm42Hjf2+REYXm0A=="], - "@stock-bot/mongodb-client/eslint/@eslint/js": ["@eslint/js@8.57.1", "", {}, 
"sha512-d9zaMRSTIKDLhctzH12MtXvJKSSUhaHcjV+2Z+GK+EEY7XKpP5yR4x+N3TAcHTcu963nIr+TMcCb4DBCYX1z6Q=="], + "@stock-bot/postgres/eslint/@eslint/eslintrc": ["@eslint/eslintrc@2.1.4", "", { "dependencies": { "ajv": "^6.12.4", "debug": "^4.3.2", "espree": "^9.6.0", "globals": "^13.19.0", "ignore": "^5.2.0", "import-fresh": "^3.2.1", "js-yaml": "^4.1.0", "minimatch": "^3.1.2", "strip-json-comments": "^3.1.1" } }, "sha512-269Z39MS6wVJtsoUl10L60WdkhJVdPG24Q4eZTH3nnF6lpvSShEK3wQjDX9JRWAUPvPh7COouPpU9IrqaZFvtQ=="], - "@stock-bot/mongodb-client/eslint/doctrine": ["doctrine@3.0.0", "", { "dependencies": { "esutils": "^2.0.2" } }, "sha512-yS+Q5i3hBf7GBkd4KG8a7eBNNWNGLTaEwwYWUijIYM7zrlYDM0BFXHjjPWlWZ1Rg7UaddZeIDmi9jF3HmqiQ2w=="], + "@stock-bot/postgres/eslint/@eslint/js": ["@eslint/js@8.57.1", "", {}, "sha512-d9zaMRSTIKDLhctzH12MtXvJKSSUhaHcjV+2Z+GK+EEY7XKpP5yR4x+N3TAcHTcu963nIr+TMcCb4DBCYX1z6Q=="], - "@stock-bot/mongodb-client/eslint/eslint-scope": ["eslint-scope@7.2.2", "", { "dependencies": { "esrecurse": "^4.3.0", "estraverse": "^5.2.0" } }, "sha512-dOt21O7lTMhDM+X9mB4GX+DZrZtCUJPL/wlcTqxyrx5IvO0IYtILdtrQGQp+8n5S0gwSVmOf9NQrjMOgfQZlIg=="], + "@stock-bot/postgres/eslint/doctrine": ["doctrine@3.0.0", "", { "dependencies": { "esutils": "^2.0.2" } }, "sha512-yS+Q5i3hBf7GBkd4KG8a7eBNNWNGLTaEwwYWUijIYM7zrlYDM0BFXHjjPWlWZ1Rg7UaddZeIDmi9jF3HmqiQ2w=="], - "@stock-bot/mongodb-client/eslint/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], + "@stock-bot/postgres/eslint/eslint-scope": ["eslint-scope@7.2.2", "", { "dependencies": { "esrecurse": "^4.3.0", "estraverse": "^5.2.0" } }, "sha512-dOt21O7lTMhDM+X9mB4GX+DZrZtCUJPL/wlcTqxyrx5IvO0IYtILdtrQGQp+8n5S0gwSVmOf9NQrjMOgfQZlIg=="], - "@stock-bot/mongodb-client/eslint/espree": ["espree@9.6.1", "", { "dependencies": { "acorn": "^8.9.0", "acorn-jsx": "^5.3.2", "eslint-visitor-keys": "^3.4.1" } }, 
"sha512-oruZaFkjorTpF32kDSI5/75ViwGeZginGGy2NoOSg3Q9bnwlnmDm4HLnkl0RE3n+njDXR037aY1+x58Z/zFdwQ=="], + "@stock-bot/postgres/eslint/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], - "@stock-bot/mongodb-client/eslint/file-entry-cache": ["file-entry-cache@6.0.1", "", { "dependencies": { "flat-cache": "^3.0.4" } }, "sha512-7Gps/XWymbLk2QLYK4NzpMOrYjMhdIxXuIvy2QBsLE6ljuodKvdkWs/cpyJJ3CVIVpH0Oi1Hvg1ovbMzLdFBBg=="], + "@stock-bot/postgres/eslint/espree": ["espree@9.6.1", "", { "dependencies": { "acorn": "^8.9.0", "acorn-jsx": "^5.3.2", "eslint-visitor-keys": "^3.4.1" } }, "sha512-oruZaFkjorTpF32kDSI5/75ViwGeZginGGy2NoOSg3Q9bnwlnmDm4HLnkl0RE3n+njDXR037aY1+x58Z/zFdwQ=="], - "@stock-bot/mongodb-client/eslint/ignore": ["ignore@5.3.2", "", {}, "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="], + "@stock-bot/postgres/eslint/file-entry-cache": ["file-entry-cache@6.0.1", "", { "dependencies": { "flat-cache": "^3.0.4" } }, "sha512-7Gps/XWymbLk2QLYK4NzpMOrYjMhdIxXuIvy2QBsLE6ljuodKvdkWs/cpyJJ3CVIVpH0Oi1Hvg1ovbMzLdFBBg=="], - "@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/@typescript-eslint/scope-manager": ["@typescript-eslint/scope-manager@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0" } }, "sha512-OwLUIWZJry80O99zvqXVEioyniJMa+d2GrqpUTqi5/v5D5rOrppJVBPa0yKCblcigC0/aYAzxxqQ1B+DS2RYsg=="], + "@stock-bot/postgres/eslint/ignore": ["ignore@5.3.2", "", {}, "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="], - "@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils": ["@typescript-eslint/type-utils@6.21.0", "", { "dependencies": { "@typescript-eslint/typescript-estree": "6.21.0", "@typescript-eslint/utils": "6.21.0", "debug": "^4.3.4", "ts-api-utils": "^1.0.1" }, 
"peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-rZQI7wHfao8qMX3Rd3xqeYSMCL3SoiSQLBATSiVKARdFGCYSRvmViieZjqc58jKgs8Y8i9YvVVhRbHSTA4VBag=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/@typescript-eslint/scope-manager": ["@typescript-eslint/scope-manager@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0" } }, "sha512-OwLUIWZJry80O99zvqXVEioyniJMa+d2GrqpUTqi5/v5D5rOrppJVBPa0yKCblcigC0/aYAzxxqQ1B+DS2RYsg=="], - "@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/@typescript-eslint/utils": ["@typescript-eslint/utils@6.21.0", "", { "dependencies": { "@eslint-community/eslint-utils": "^4.4.0", "@types/json-schema": "^7.0.12", "@types/semver": "^7.5.0", "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/types": "6.21.0", "@typescript-eslint/typescript-estree": "6.21.0", "semver": "^7.5.4" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-NfWVaC8HP9T8cbKQxHcsJBY5YE1O33+jpMwN45qzWWaPDZgLIbo12toGMWnmhvCpd3sIxkpDw3Wv1B3dYrbDQQ=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils": ["@typescript-eslint/type-utils@6.21.0", "", { "dependencies": { "@typescript-eslint/typescript-estree": "6.21.0", "@typescript-eslint/utils": "6.21.0", "debug": "^4.3.4", "ts-api-utils": "^1.0.1" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-rZQI7wHfao8qMX3Rd3xqeYSMCL3SoiSQLBATSiVKARdFGCYSRvmViieZjqc58jKgs8Y8i9YvVVhRbHSTA4VBag=="], - "@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys": ["@typescript-eslint/visitor-keys@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "eslint-visitor-keys": "^3.4.1" } }, "sha512-JJtkDduxLi9bivAB+cYOVMtbkqdPOhZ+ZI5LC47MIRrDV4Yn2o+ZnW10Nkmr28xRpSpdJ6Sm42Hjf2+REYXm0A=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/@typescript-eslint/utils": ["@typescript-eslint/utils@6.21.0", "", { "dependencies": { 
"@eslint-community/eslint-utils": "^4.4.0", "@types/json-schema": "^7.0.12", "@types/semver": "^7.5.0", "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/types": "6.21.0", "@typescript-eslint/typescript-estree": "6.21.0", "semver": "^7.5.4" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-NfWVaC8HP9T8cbKQxHcsJBY5YE1O33+jpMwN45qzWWaPDZgLIbo12toGMWnmhvCpd3sIxkpDw3Wv1B3dYrbDQQ=="], - "@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/ignore": ["ignore@5.3.2", "", {}, "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys": ["@typescript-eslint/visitor-keys@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "eslint-visitor-keys": "^3.4.1" } }, "sha512-JJtkDduxLi9bivAB+cYOVMtbkqdPOhZ+ZI5LC47MIRrDV4Yn2o+ZnW10Nkmr28xRpSpdJ6Sm42Hjf2+REYXm0A=="], - "@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/ts-api-utils": ["ts-api-utils@1.4.3", "", { "peerDependencies": { "typescript": ">=4.2.0" } }, "sha512-i3eMG77UTMD0hZhgRS562pv83RC6ukSAC2GMNWc+9dieh/+jDM5u5YG+NHX6VNDRHQcHwmsTHctP9LhbC3WxVw=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/ignore": ["ignore@5.3.2", "", {}, "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="], - "@stock-bot/postgres-client/@typescript-eslint/parser/@typescript-eslint/scope-manager": ["@typescript-eslint/scope-manager@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0" } }, "sha512-OwLUIWZJry80O99zvqXVEioyniJMa+d2GrqpUTqi5/v5D5rOrppJVBPa0yKCblcigC0/aYAzxxqQ1B+DS2RYsg=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/ts-api-utils": ["ts-api-utils@1.4.3", "", { "peerDependencies": { "typescript": ">=4.2.0" } }, "sha512-i3eMG77UTMD0hZhgRS562pv83RC6ukSAC2GMNWc+9dieh/+jDM5u5YG+NHX6VNDRHQcHwmsTHctP9LhbC3WxVw=="], - 
"@stock-bot/postgres-client/@typescript-eslint/parser/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], + "@stock-bot/questdb/@typescript-eslint/parser/@typescript-eslint/scope-manager": ["@typescript-eslint/scope-manager@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0" } }, "sha512-OwLUIWZJry80O99zvqXVEioyniJMa+d2GrqpUTqi5/v5D5rOrppJVBPa0yKCblcigC0/aYAzxxqQ1B+DS2RYsg=="], - "@stock-bot/postgres-client/@typescript-eslint/parser/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], + "@stock-bot/questdb/@typescript-eslint/parser/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - "@stock-bot/postgres-client/@typescript-eslint/parser/@typescript-eslint/visitor-keys": ["@typescript-eslint/visitor-keys@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "eslint-visitor-keys": "^3.4.1" } }, "sha512-JJtkDduxLi9bivAB+cYOVMtbkqdPOhZ+ZI5LC47MIRrDV4Yn2o+ZnW10Nkmr28xRpSpdJ6Sm42Hjf2+REYXm0A=="], + "@stock-bot/questdb/@typescript-eslint/parser/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, 
"sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], - "@stock-bot/postgres-client/eslint/@eslint/eslintrc": ["@eslint/eslintrc@2.1.4", "", { "dependencies": { "ajv": "^6.12.4", "debug": "^4.3.2", "espree": "^9.6.0", "globals": "^13.19.0", "ignore": "^5.2.0", "import-fresh": "^3.2.1", "js-yaml": "^4.1.0", "minimatch": "^3.1.2", "strip-json-comments": "^3.1.1" } }, "sha512-269Z39MS6wVJtsoUl10L60WdkhJVdPG24Q4eZTH3nnF6lpvSShEK3wQjDX9JRWAUPvPh7COouPpU9IrqaZFvtQ=="], + "@stock-bot/questdb/@typescript-eslint/parser/@typescript-eslint/visitor-keys": ["@typescript-eslint/visitor-keys@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "eslint-visitor-keys": "^3.4.1" } }, "sha512-JJtkDduxLi9bivAB+cYOVMtbkqdPOhZ+ZI5LC47MIRrDV4Yn2o+ZnW10Nkmr28xRpSpdJ6Sm42Hjf2+REYXm0A=="], - "@stock-bot/postgres-client/eslint/@eslint/js": ["@eslint/js@8.57.1", "", {}, "sha512-d9zaMRSTIKDLhctzH12MtXvJKSSUhaHcjV+2Z+GK+EEY7XKpP5yR4x+N3TAcHTcu963nIr+TMcCb4DBCYX1z6Q=="], + "@stock-bot/questdb/eslint/@eslint/eslintrc": ["@eslint/eslintrc@2.1.4", "", { "dependencies": { "ajv": "^6.12.4", "debug": "^4.3.2", "espree": "^9.6.0", "globals": "^13.19.0", "ignore": "^5.2.0", "import-fresh": "^3.2.1", "js-yaml": "^4.1.0", "minimatch": "^3.1.2", "strip-json-comments": "^3.1.1" } }, "sha512-269Z39MS6wVJtsoUl10L60WdkhJVdPG24Q4eZTH3nnF6lpvSShEK3wQjDX9JRWAUPvPh7COouPpU9IrqaZFvtQ=="], - "@stock-bot/postgres-client/eslint/doctrine": ["doctrine@3.0.0", "", { "dependencies": { "esutils": "^2.0.2" } }, "sha512-yS+Q5i3hBf7GBkd4KG8a7eBNNWNGLTaEwwYWUijIYM7zrlYDM0BFXHjjPWlWZ1Rg7UaddZeIDmi9jF3HmqiQ2w=="], + "@stock-bot/questdb/eslint/@eslint/js": ["@eslint/js@8.57.1", "", {}, "sha512-d9zaMRSTIKDLhctzH12MtXvJKSSUhaHcjV+2Z+GK+EEY7XKpP5yR4x+N3TAcHTcu963nIr+TMcCb4DBCYX1z6Q=="], - "@stock-bot/postgres-client/eslint/eslint-scope": ["eslint-scope@7.2.2", "", { "dependencies": { "esrecurse": "^4.3.0", "estraverse": "^5.2.0" } }, 
"sha512-dOt21O7lTMhDM+X9mB4GX+DZrZtCUJPL/wlcTqxyrx5IvO0IYtILdtrQGQp+8n5S0gwSVmOf9NQrjMOgfQZlIg=="], + "@stock-bot/questdb/eslint/doctrine": ["doctrine@3.0.0", "", { "dependencies": { "esutils": "^2.0.2" } }, "sha512-yS+Q5i3hBf7GBkd4KG8a7eBNNWNGLTaEwwYWUijIYM7zrlYDM0BFXHjjPWlWZ1Rg7UaddZeIDmi9jF3HmqiQ2w=="], - "@stock-bot/postgres-client/eslint/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], + "@stock-bot/questdb/eslint/eslint-scope": ["eslint-scope@7.2.2", "", { "dependencies": { "esrecurse": "^4.3.0", "estraverse": "^5.2.0" } }, "sha512-dOt21O7lTMhDM+X9mB4GX+DZrZtCUJPL/wlcTqxyrx5IvO0IYtILdtrQGQp+8n5S0gwSVmOf9NQrjMOgfQZlIg=="], - "@stock-bot/postgres-client/eslint/espree": ["espree@9.6.1", "", { "dependencies": { "acorn": "^8.9.0", "acorn-jsx": "^5.3.2", "eslint-visitor-keys": "^3.4.1" } }, "sha512-oruZaFkjorTpF32kDSI5/75ViwGeZginGGy2NoOSg3Q9bnwlnmDm4HLnkl0RE3n+njDXR037aY1+x58Z/zFdwQ=="], + "@stock-bot/questdb/eslint/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], - "@stock-bot/postgres-client/eslint/file-entry-cache": ["file-entry-cache@6.0.1", "", { "dependencies": { "flat-cache": "^3.0.4" } }, "sha512-7Gps/XWymbLk2QLYK4NzpMOrYjMhdIxXuIvy2QBsLE6ljuodKvdkWs/cpyJJ3CVIVpH0Oi1Hvg1ovbMzLdFBBg=="], + "@stock-bot/questdb/eslint/espree": ["espree@9.6.1", "", { "dependencies": { "acorn": "^8.9.0", "acorn-jsx": "^5.3.2", "eslint-visitor-keys": "^3.4.1" } }, "sha512-oruZaFkjorTpF32kDSI5/75ViwGeZginGGy2NoOSg3Q9bnwlnmDm4HLnkl0RE3n+njDXR037aY1+x58Z/zFdwQ=="], - "@stock-bot/postgres-client/eslint/ignore": ["ignore@5.3.2", "", {}, "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="], + "@stock-bot/questdb/eslint/file-entry-cache": ["file-entry-cache@6.0.1", "", { "dependencies": { "flat-cache": "^3.0.4" } }, 
"sha512-7Gps/XWymbLk2QLYK4NzpMOrYjMhdIxXuIvy2QBsLE6ljuodKvdkWs/cpyJJ3CVIVpH0Oi1Hvg1ovbMzLdFBBg=="], - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/scope-manager": ["@typescript-eslint/scope-manager@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0" } }, "sha512-OwLUIWZJry80O99zvqXVEioyniJMa+d2GrqpUTqi5/v5D5rOrppJVBPa0yKCblcigC0/aYAzxxqQ1B+DS2RYsg=="], - - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils": ["@typescript-eslint/type-utils@6.21.0", "", { "dependencies": { "@typescript-eslint/typescript-estree": "6.21.0", "@typescript-eslint/utils": "6.21.0", "debug": "^4.3.4", "ts-api-utils": "^1.0.1" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-rZQI7wHfao8qMX3Rd3xqeYSMCL3SoiSQLBATSiVKARdFGCYSRvmViieZjqc58jKgs8Y8i9YvVVhRbHSTA4VBag=="], - - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/utils": ["@typescript-eslint/utils@6.21.0", "", { "dependencies": { "@eslint-community/eslint-utils": "^4.4.0", "@types/json-schema": "^7.0.12", "@types/semver": "^7.5.0", "@typescript-eslint/scope-manager": "6.21.0", "@typescript-eslint/types": "6.21.0", "@typescript-eslint/typescript-estree": "6.21.0", "semver": "^7.5.4" }, "peerDependencies": { "eslint": "^7.0.0 || ^8.0.0" } }, "sha512-NfWVaC8HP9T8cbKQxHcsJBY5YE1O33+jpMwN45qzWWaPDZgLIbo12toGMWnmhvCpd3sIxkpDw3Wv1B3dYrbDQQ=="], - - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys": ["@typescript-eslint/visitor-keys@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "eslint-visitor-keys": "^3.4.1" } }, "sha512-JJtkDduxLi9bivAB+cYOVMtbkqdPOhZ+ZI5LC47MIRrDV4Yn2o+ZnW10Nkmr28xRpSpdJ6Sm42Hjf2+REYXm0A=="], - - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/ignore": ["ignore@5.3.2", "", {}, 
"sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="], - - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/ts-api-utils": ["ts-api-utils@1.4.3", "", { "peerDependencies": { "typescript": ">=4.2.0" } }, "sha512-i3eMG77UTMD0hZhgRS562pv83RC6ukSAC2GMNWc+9dieh/+jDM5u5YG+NHX6VNDRHQcHwmsTHctP9LhbC3WxVw=="], - - "@stock-bot/questdb-client/@typescript-eslint/parser/@typescript-eslint/scope-manager": ["@typescript-eslint/scope-manager@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0" } }, "sha512-OwLUIWZJry80O99zvqXVEioyniJMa+d2GrqpUTqi5/v5D5rOrppJVBPa0yKCblcigC0/aYAzxxqQ1B+DS2RYsg=="], - - "@stock-bot/questdb-client/@typescript-eslint/parser/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - - "@stock-bot/questdb-client/@typescript-eslint/parser/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], - - "@stock-bot/questdb-client/@typescript-eslint/parser/@typescript-eslint/visitor-keys": ["@typescript-eslint/visitor-keys@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "eslint-visitor-keys": "^3.4.1" } }, "sha512-JJtkDduxLi9bivAB+cYOVMtbkqdPOhZ+ZI5LC47MIRrDV4Yn2o+ZnW10Nkmr28xRpSpdJ6Sm42Hjf2+REYXm0A=="], - - "@stock-bot/questdb-client/eslint/@eslint/eslintrc": ["@eslint/eslintrc@2.1.4", "", { "dependencies": { "ajv": "^6.12.4", "debug": "^4.3.2", "espree": "^9.6.0", "globals": "^13.19.0", "ignore": "^5.2.0", "import-fresh": "^3.2.1", "js-yaml": 
"^4.1.0", "minimatch": "^3.1.2", "strip-json-comments": "^3.1.1" } }, "sha512-269Z39MS6wVJtsoUl10L60WdkhJVdPG24Q4eZTH3nnF6lpvSShEK3wQjDX9JRWAUPvPh7COouPpU9IrqaZFvtQ=="], - - "@stock-bot/questdb-client/eslint/@eslint/js": ["@eslint/js@8.57.1", "", {}, "sha512-d9zaMRSTIKDLhctzH12MtXvJKSSUhaHcjV+2Z+GK+EEY7XKpP5yR4x+N3TAcHTcu963nIr+TMcCb4DBCYX1z6Q=="], - - "@stock-bot/questdb-client/eslint/doctrine": ["doctrine@3.0.0", "", { "dependencies": { "esutils": "^2.0.2" } }, "sha512-yS+Q5i3hBf7GBkd4KG8a7eBNNWNGLTaEwwYWUijIYM7zrlYDM0BFXHjjPWlWZ1Rg7UaddZeIDmi9jF3HmqiQ2w=="], - - "@stock-bot/questdb-client/eslint/eslint-scope": ["eslint-scope@7.2.2", "", { "dependencies": { "esrecurse": "^4.3.0", "estraverse": "^5.2.0" } }, "sha512-dOt21O7lTMhDM+X9mB4GX+DZrZtCUJPL/wlcTqxyrx5IvO0IYtILdtrQGQp+8n5S0gwSVmOf9NQrjMOgfQZlIg=="], - - "@stock-bot/questdb-client/eslint/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], - - "@stock-bot/questdb-client/eslint/espree": ["espree@9.6.1", "", { "dependencies": { "acorn": "^8.9.0", "acorn-jsx": "^5.3.2", "eslint-visitor-keys": "^3.4.1" } }, "sha512-oruZaFkjorTpF32kDSI5/75ViwGeZginGGy2NoOSg3Q9bnwlnmDm4HLnkl0RE3n+njDXR037aY1+x58Z/zFdwQ=="], - - "@stock-bot/questdb-client/eslint/file-entry-cache": ["file-entry-cache@6.0.1", "", { "dependencies": { "flat-cache": "^3.0.4" } }, "sha512-7Gps/XWymbLk2QLYK4NzpMOrYjMhdIxXuIvy2QBsLE6ljuodKvdkWs/cpyJJ3CVIVpH0Oi1Hvg1ovbMzLdFBBg=="], - - "@stock-bot/questdb-client/eslint/ignore": ["ignore@5.3.2", "", {}, "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="], + "@stock-bot/questdb/eslint/ignore": ["ignore@5.3.2", "", {}, "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="], "@stock-bot/web-app/@typescript-eslint/eslint-plugin/@typescript-eslint/scope-manager": ["@typescript-eslint/scope-manager@6.21.0", "", { 
"dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0" } }, "sha512-OwLUIWZJry80O99zvqXVEioyniJMa+d2GrqpUTqi5/v5D5rOrppJVBPa0yKCblcigC0/aYAzxxqQ1B+DS2RYsg=="], @@ -2665,8 +2818,6 @@ "@stock-bot/web-app/eslint/ignore": ["ignore@5.3.2", "", {}, "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g=="], - "@types/ssh2/@types/node/undici-types": ["undici-types@5.26.5", "", {}, "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA=="], - "@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], "accepts/mime-types/mime-db": ["mime-db@1.54.0", "", {}, "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ=="], @@ -2683,6 +2834,10 @@ "lazystream/readable-stream/string_decoder": ["string_decoder@1.1.1", "", { "dependencies": { "safe-buffer": "~5.1.0" } }, "sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg=="], + "mongodb-client-encryption/prebuild-install/napi-build-utils": ["napi-build-utils@2.0.0", "", {}, "sha512-GEbrYkbfF7MoNaoh2iGG84Mnf/WZfB0GdGEsM8wz7Expx/LlWf5U8t9nvJKXSp3qr5IsEbK04cBGhol/KwOsWA=="], + + "mongodb-client-encryption/prebuild-install/tar-fs": ["tar-fs@2.1.3", "", { "dependencies": { "chownr": "^1.1.1", "mkdirp-classic": "^0.5.2", "pump": "^3.0.0", "tar-stream": "^2.1.4" } }, "sha512-090nwYJDmlhwFwEW3QQl+vaNnxsO2yVsd45eTKRBzSzu+hlb1w2K9inVq5b0ngXuLVqQ4ApvsUHHnu/zQNkWAg=="], + "mongodb-memory-server-core/mongodb/bson": ["bson@5.5.1", "", {}, "sha512-ix0EwukN2EpC0SRWIj/7B5+A6uQMQy6KMREI9qQqvgpkV2frH63T0UDVd1SYedL6dNCmDBYB3QtXi4ISk9YT+g=="], "mongodb-memory-server-core/mongodb/mongodb-connection-string-url": ["mongodb-connection-string-url@2.6.0", "", { 
"dependencies": { "@types/whatwg-url": "^8.2.1", "whatwg-url": "^11.0.0" } }, "sha512-WvTZlI9ab0QYtTYnuMLgobULWhokRjtC7db9LtcVfJ+Hsnyr5eo6ZtNAt3Ly24XZScGMelOcGtm7lSn0332tPQ=="], @@ -2715,8 +2870,6 @@ "@mongodb-js/oidc-plugin/express/accepts/negotiator": ["negotiator@0.6.3", "", {}, "sha512-+EUsqGPLsM+j/zdChZjsnX51g4XrHFOIXwfnCVPGlQk/k5giakcKsuxCObBRu6DSm9opw/O6slWbJdghQM4bBg=="], - "@mongodb-js/oidc-plugin/express/body-parser/iconv-lite": ["iconv-lite@0.4.24", "", { "dependencies": { "safer-buffer": ">= 2.1.2 < 3" } }, "sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA=="], - "@mongodb-js/oidc-plugin/express/body-parser/raw-body": ["raw-body@2.5.2", "", { "dependencies": { "bytes": "3.1.2", "http-errors": "2.0.0", "iconv-lite": "0.4.24", "unpipe": "1.0.0" } }, "sha512-8zGqypfENjCIqGhgXToC8aB2r7YrBX+AQAfIPs/Mlk+BtPTztOvTS01NRW/3Eh60J+a48lt8qsCzirQ6loCVfA=="], "@mongodb-js/oidc-plugin/express/debug/ms": ["ms@2.0.0", "", {}, "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A=="], @@ -2727,89 +2880,65 @@ "@mongodb-js/oidc-plugin/express/type-is/media-typer": ["media-typer@0.3.0", "", {}, "sha512-dq+qelQ9akHpcOl/gUVRTxVIOkAJ1wR3QAvb4RsVjS8oVoFjDGTc679wJYmUmknUF5HwMLOgb5O+a3KxfWapPQ=="], - "@mongosh/service-provider-node-driver/kerberos/prebuild-install/napi-build-utils": ["napi-build-utils@1.0.2", "", {}, "sha512-ONmRUqK7zj7DWX0D9ADe03wbwOBZxNAfF20PlGfCWQcD3+/MakShIHrMqx9YwPTfxDdF1zLeL+RGZiR9kGMLdg=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin/@typescript-eslint/scope-manager/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - "@mongosh/service-provider-node-driver/kerberos/prebuild-install/tar-fs": ["tar-fs@2.1.3", "", { "dependencies": { "chownr": "^1.1.1", "mkdirp-classic": "^0.5.2", "pump": "^3.0.0", "tar-stream": "^2.1.4" } }, 
"sha512-090nwYJDmlhwFwEW3QQl+vaNnxsO2yVsd45eTKRBzSzu+hlb1w2K9inVq5b0ngXuLVqQ4ApvsUHHnu/zQNkWAg=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], - "@stock-bot/http/@typescript-eslint/eslint-plugin/@typescript-eslint/scope-manager/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - "@stock-bot/http/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": 
"^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], - "@stock-bot/http/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - "@stock-bot/http/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], - "@stock-bot/http/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], + "@stock-bot/mongodb/@typescript-eslint/parser/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], - 
"@stock-bot/http/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], + "@stock-bot/mongodb/@typescript-eslint/parser/@typescript-eslint/typescript-estree/ts-api-utils": ["ts-api-utils@1.4.3", "", { "peerDependencies": { "typescript": ">=4.2.0" } }, "sha512-i3eMG77UTMD0hZhgRS562pv83RC6ukSAC2GMNWc+9dieh/+jDM5u5YG+NHX6VNDRHQcHwmsTHctP9LhbC3WxVw=="], - "@stock-bot/http/@typescript-eslint/parser/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], + "@stock-bot/mongodb/@typescript-eslint/parser/@typescript-eslint/visitor-keys/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], - "@stock-bot/http/@typescript-eslint/parser/@typescript-eslint/typescript-estree/ts-api-utils": ["ts-api-utils@1.4.3", "", { "peerDependencies": { "typescript": ">=4.2.0" } }, "sha512-i3eMG77UTMD0hZhgRS562pv83RC6ukSAC2GMNWc+9dieh/+jDM5u5YG+NHX6VNDRHQcHwmsTHctP9LhbC3WxVw=="], + "@stock-bot/mongodb/eslint/file-entry-cache/flat-cache": ["flat-cache@3.2.0", "", { "dependencies": { "flatted": "^3.2.9", "keyv": "^4.5.3", "rimraf": "^3.0.2" } }, "sha512-CYcENa+FtcUKLmhhqyctpclsq7QF38pKjZHsGNiSQF5r4FtoKDWabFDl3hzaEQMvT1LHEysw5twgLvpYYb4vbw=="], - "@stock-bot/http/@typescript-eslint/parser/@typescript-eslint/visitor-keys/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin/@typescript-eslint/scope-manager/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, 
"sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - "@stock-bot/http/eslint/file-entry-cache/flat-cache": ["flat-cache@3.2.0", "", { "dependencies": { "flatted": "^3.2.9", "keyv": "^4.5.3", "rimraf": "^3.0.2" } }, "sha512-CYcENa+FtcUKLmhhqyctpclsq7QF38pKjZHsGNiSQF5r4FtoKDWabFDl3hzaEQMvT1LHEysw5twgLvpYYb4vbw=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/scope-manager/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], + 
"@stock-bot/postgres/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, 
"sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], + "@stock-bot/postgres/@typescript-eslint/parser/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], + "@stock-bot/postgres/@typescript-eslint/parser/@typescript-eslint/typescript-estree/ts-api-utils": ["ts-api-utils@1.4.3", "", { "peerDependencies": { "typescript": ">=4.2.0" } }, "sha512-i3eMG77UTMD0hZhgRS562pv83RC6ukSAC2GMNWc+9dieh/+jDM5u5YG+NHX6VNDRHQcHwmsTHctP9LhbC3WxVw=="], - "@stock-bot/mongodb-client/@typescript-eslint/parser/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], + "@stock-bot/postgres/@typescript-eslint/parser/@typescript-eslint/visitor-keys/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], - "@stock-bot/mongodb-client/@typescript-eslint/parser/@typescript-eslint/typescript-estree/ts-api-utils": ["ts-api-utils@1.4.3", "", { "peerDependencies": { "typescript": ">=4.2.0" } }, "sha512-i3eMG77UTMD0hZhgRS562pv83RC6ukSAC2GMNWc+9dieh/+jDM5u5YG+NHX6VNDRHQcHwmsTHctP9LhbC3WxVw=="], + "@stock-bot/postgres/eslint/file-entry-cache/flat-cache": ["flat-cache@3.2.0", "", { "dependencies": { "flatted": "^3.2.9", "keyv": "^4.5.3", "rimraf": "^3.0.2" } }, "sha512-CYcENa+FtcUKLmhhqyctpclsq7QF38pKjZHsGNiSQF5r4FtoKDWabFDl3hzaEQMvT1LHEysw5twgLvpYYb4vbw=="], - 
"@stock-bot/mongodb-client/@typescript-eslint/parser/@typescript-eslint/visitor-keys/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/@typescript-eslint/scope-manager/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - "@stock-bot/mongodb-client/eslint/file-entry-cache/flat-cache": ["flat-cache@3.2.0", "", { "dependencies": { "flatted": "^3.2.9", "keyv": "^4.5.3", "rimraf": "^3.0.2" } }, "sha512-CYcENa+FtcUKLmhhqyctpclsq7QF38pKjZHsGNiSQF5r4FtoKDWabFDl3hzaEQMvT1LHEysw5twgLvpYYb4vbw=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], - "@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/@typescript-eslint/scope-manager/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - "@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { 
"dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], - "@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - "@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys/eslint-visitor-keys": 
["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], - "@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], + "@stock-bot/questdb/@typescript-eslint/parser/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], - "@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], + "@stock-bot/questdb/@typescript-eslint/parser/@typescript-eslint/typescript-estree/ts-api-utils": ["ts-api-utils@1.4.3", "", { "peerDependencies": { "typescript": ">=4.2.0" } }, "sha512-i3eMG77UTMD0hZhgRS562pv83RC6ukSAC2GMNWc+9dieh/+jDM5u5YG+NHX6VNDRHQcHwmsTHctP9LhbC3WxVw=="], - "@stock-bot/postgres-client/@typescript-eslint/parser/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], + "@stock-bot/questdb/@typescript-eslint/parser/@typescript-eslint/visitor-keys/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], - "@stock-bot/postgres-client/@typescript-eslint/parser/@typescript-eslint/typescript-estree/ts-api-utils": ["ts-api-utils@1.4.3", "", { "peerDependencies": { "typescript": ">=4.2.0" } }, 
"sha512-i3eMG77UTMD0hZhgRS562pv83RC6ukSAC2GMNWc+9dieh/+jDM5u5YG+NHX6VNDRHQcHwmsTHctP9LhbC3WxVw=="], - - "@stock-bot/postgres-client/@typescript-eslint/parser/@typescript-eslint/visitor-keys/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], - - "@stock-bot/postgres-client/eslint/file-entry-cache/flat-cache": ["flat-cache@3.2.0", "", { "dependencies": { "flatted": "^3.2.9", "keyv": "^4.5.3", "rimraf": "^3.0.2" } }, "sha512-CYcENa+FtcUKLmhhqyctpclsq7QF38pKjZHsGNiSQF5r4FtoKDWabFDl3hzaEQMvT1LHEysw5twgLvpYYb4vbw=="], - - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/scope-manager/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", "is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], - - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree": ["@typescript-eslint/typescript-estree@6.21.0", "", { "dependencies": { "@typescript-eslint/types": "6.21.0", "@typescript-eslint/visitor-keys": "6.21.0", "debug": "^4.3.4", "globby": "^11.1.0", 
"is-glob": "^4.0.3", "minimatch": "9.0.3", "semver": "^7.5.4", "ts-api-utils": "^1.0.1" } }, "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ=="], - - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/visitor-keys/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], - - "@stock-bot/questdb-client/@typescript-eslint/parser/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], - - "@stock-bot/questdb-client/@typescript-eslint/parser/@typescript-eslint/typescript-estree/ts-api-utils": ["ts-api-utils@1.4.3", "", { "peerDependencies": { "typescript": ">=4.2.0" } }, "sha512-i3eMG77UTMD0hZhgRS562pv83RC6ukSAC2GMNWc+9dieh/+jDM5u5YG+NHX6VNDRHQcHwmsTHctP9LhbC3WxVw=="], - - "@stock-bot/questdb-client/@typescript-eslint/parser/@typescript-eslint/visitor-keys/eslint-visitor-keys": ["eslint-visitor-keys@3.4.3", "", {}, "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag=="], - - "@stock-bot/questdb-client/eslint/file-entry-cache/flat-cache": ["flat-cache@3.2.0", "", { "dependencies": { "flatted": "^3.2.9", "keyv": "^4.5.3", "rimraf": "^3.0.2" } }, "sha512-CYcENa+FtcUKLmhhqyctpclsq7QF38pKjZHsGNiSQF5r4FtoKDWabFDl3hzaEQMvT1LHEysw5twgLvpYYb4vbw=="], + "@stock-bot/questdb/eslint/file-entry-cache/flat-cache": ["flat-cache@3.2.0", "", { "dependencies": { "flatted": "^3.2.9", "keyv": "^4.5.3", "rimraf": "^3.0.2" } }, 
"sha512-CYcENa+FtcUKLmhhqyctpclsq7QF38pKjZHsGNiSQF5r4FtoKDWabFDl3hzaEQMvT1LHEysw5twgLvpYYb4vbw=="], "@stock-bot/web-app/@typescript-eslint/eslint-plugin/@typescript-eslint/scope-manager/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], @@ -2833,6 +2962,8 @@ "dockerode/tar-fs/tar-stream/readable-stream": ["readable-stream@3.6.2", "", { "dependencies": { "inherits": "^2.0.3", "string_decoder": "^1.1.1", "util-deprecate": "^1.0.1" } }, "sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA=="], + "mongodb-client-encryption/prebuild-install/tar-fs/tar-stream": ["tar-stream@2.2.0", "", { "dependencies": { "bl": "^4.0.3", "end-of-stream": "^1.4.1", "fs-constants": "^1.0.0", "inherits": "^2.0.3", "readable-stream": "^3.1.1" } }, "sha512-ujeqbceABgwMZxEJnk2HDY2DlnUZ+9oEcb1KzTVfYHio0UE6dG71n60d8D2I4qNvleWrrXpmjpt7vZeF1LnMZQ=="], + "mongodb-memory-server-core/mongodb/mongodb-connection-string-url/@types/whatwg-url": ["@types/whatwg-url@8.2.2", "", { "dependencies": { "@types/node": "*", "@types/webidl-conversions": "*" } }, "sha512-FtQu10RWgn3D9U4aazdwIE2yzphmTJREDqNdODHrbrZmmMqI0vMheC/6NE/J1Yveaj8H+ela+YwWTjq5PGmuhA=="], "mongodb-memory-server-core/mongodb/mongodb-connection-string-url/whatwg-url": ["whatwg-url@11.0.0", "", { "dependencies": { "tr46": "^3.0.0", "webidl-conversions": "^7.0.0" } }, "sha512-RKT8HExMpoYx4igMiVMY83lN6UeITKJlBQ+vR/8ZJ8OCdSiN3RwCq+9gH0+Xzj0+5IrM6i4j/6LuvzbZIQgEcQ=="], @@ -2843,39 +2974,29 @@ "run-applescript/execa/onetime/mimic-fn": ["mimic-fn@2.1.0", "", {}, "sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg=="], - "@mongosh/service-provider-node-driver/kerberos/prebuild-install/tar-fs/tar-stream": ["tar-stream@2.2.0", "", { "dependencies": { "bl": "^4.0.3", "end-of-stream": "^1.4.1", "fs-constants": "^1.0.0", "inherits": "^2.0.3", 
"readable-stream": "^3.1.1" } }, "sha512-ujeqbceABgwMZxEJnk2HDY2DlnUZ+9oEcb1KzTVfYHio0UE6dG71n60d8D2I4qNvleWrrXpmjpt7vZeF1LnMZQ=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - "@stock-bot/http/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], - "@stock-bot/http/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], - "@stock-bot/http/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], + 
"@stock-bot/mongodb/@typescript-eslint/parser/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], - "@stock-bot/http/@typescript-eslint/parser/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { 
"brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], + "@stock-bot/postgres/@typescript-eslint/parser/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], - "@stock-bot/mongodb-client/@typescript-eslint/parser/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - "@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], - 
"@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], - "@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], - - "@stock-bot/postgres-client/@typescript-eslint/parser/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], - - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], - - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], - - 
"@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree/minimatch": ["minimatch@9.0.3", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-RHiac9mvaRw0x3AYRgDC1CxAP7HTcNrrECeA8YYJeWnpo+2Q5CegtZjaotWTWxDG3UeGA1coE05iH1mPjT/2mg=="], - - "@stock-bot/questdb-client/@typescript-eslint/parser/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], + "@stock-bot/questdb/@typescript-eslint/parser/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], "@stock-bot/web-app/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/@typescript-eslint/types": ["@typescript-eslint/types@6.21.0", "", {}, "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg=="], @@ -2885,27 +3006,25 @@ "@stock-bot/web-app/@typescript-eslint/parser/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], + "mongodb-client-encryption/prebuild-install/tar-fs/tar-stream/readable-stream": ["readable-stream@3.6.2", "", { "dependencies": { "inherits": "^2.0.3", "string_decoder": "^1.1.1", "util-deprecate": "^1.0.1" } }, "sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA=="], + + "mongodb-memory-server-core/mongodb/mongodb-connection-string-url/@types/whatwg-url/@types/node": ["@types/node@22.15.32", "", { "dependencies": { "undici-types": 
"~6.21.0" } }, "sha512-3jigKqgSjsH6gYZv2nEsqdXfZqIFGAV36XYYjf9KGZ3PSG+IhLecqPnI310RvjutyMwifE2hhhNEklOUrvx/wA=="], + "mongodb-memory-server-core/mongodb/mongodb-connection-string-url/whatwg-url/tr46": ["tr46@3.0.0", "", { "dependencies": { "punycode": "^2.1.1" } }, "sha512-l7FvfAHlcmulp8kr+flpQZmVwtu7nfRV7NZujtN0OqES8EL4O4e0qqzL0DC5gAvx/ZC/9lk6rhcUwYvkBnBnYA=="], "pkg-dir/find-up/locate-path/p-locate/p-limit": ["p-limit@2.3.0", "", { "dependencies": { "p-try": "^2.0.0" } }, "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w=="], - "@mongosh/service-provider-node-driver/kerberos/prebuild-install/tar-fs/tar-stream/readable-stream": ["readable-stream@3.6.2", "", { "dependencies": { "inherits": "^2.0.3", "string_decoder": "^1.1.1", "util-deprecate": "^1.0.1" } }, "sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], - "@stock-bot/http/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], + "@stock-bot/mongodb/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], - 
"@stock-bot/http/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], + "@stock-bot/postgres/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], - "@stock-bot/mongodb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], - 
"@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], - - "@stock-bot/postgres-client/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], - - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], - - "@stock-bot/questdb-client/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], + "@stock-bot/questdb/@typescript-eslint/eslint-plugin/@typescript-eslint/utils/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], "@stock-bot/web-app/@typescript-eslint/eslint-plugin/@typescript-eslint/type-utils/@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, 
"sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], diff --git a/config/default.json b/config/default.json deleted file mode 100644 index 3f715ad..0000000 --- a/config/default.json +++ /dev/null @@ -1,98 +0,0 @@ -{ - "name": "stock-bot", - "version": "1.0.0", - "environment": "development", - "service": { - "name": "default-service", - "port": 3000, - "host": "0.0.0.0", - "healthCheckPath": "/health", - "metricsPath": "/metrics", - "shutdownTimeout": 30000, - "cors": { - "enabled": true, - "origin": "*", - "credentials": true - } - }, - "database": { - "postgres": { - "host": "localhost", - "port": 5432, - "database": "trading_bot", - "user": "trading_user", - "password": "trading_pass_dev", - "ssl": false, - "poolSize": 10, - "connectionTimeout": 30000, - "idleTimeout": 10000 - }, - "questdb": { - "host": "localhost", - "ilpPort": 9009, - "httpPort": 9000, - "pgPort": 8812, - "database": "questdb", - "user": "admin", - "password": "quest", - "bufferSize": 65536, - "flushInterval": 1000 - }, - "mongodb": { - "uri": "mongodb://trading_admin:trading_mongo_dev@localhost:27017/stock?authSource=admin", - "host": "localhost", - "port": 27017, - "database": "stock", - "user": "trading_admin", - "password": "trading_mongo_dev", - "authSource": "admin", - "poolSize": 10 - }, - "dragonfly": { - "host": "localhost", - "port": 6379, - "db": 0, - "keyPrefix": "stock-bot:", - "maxRetries": 3, - "retryDelay": 100 - } - }, - "log": { - "level": "info", - "format": "json" - }, - "providers": { - "yahoo": { - "name": "yahoo", - "enabled": true, - "rateLimit": { - "maxRequests": 5, - "windowMs": 60000 - }, - "timeout": 30000, - "baseUrl": "https://query1.finance.yahoo.com" - } - }, - "queue": { - "redis": { - "host": "localhost", - "port": 6379, - "db": 1 - }, - "defaultJobOptions": { - "attempts": 3, - "backoff": { - "type": "exponential", - "delay": 1000 - }, - "removeOnComplete": true, - "removeOnFail": false - } - }, - "features": { 
- "realtime": true, - "backtesting": true, - "paperTrading": true, - "notifications": false - } -} \ No newline at end of file diff --git a/docker-compose.yml b/docker-compose.yml index 506cb42..a7f3eea 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -214,7 +214,7 @@ services: # Dragonfly - Redis replacement for caching and events - REDIS_PORT=6379 - REDIS_PASSWORD= - REDIS_DB=0 - - REDIS_URL=redis://dragonfly:6379 + - REDIS_URL=redis://dragonfly:6379/0 depends_on: - dragonfly restart: unless-stopped diff --git a/docs/batch-processing-migration.md b/docs/batch-processing-migration.md deleted file mode 100644 index 2c7fcaa..0000000 --- a/docs/batch-processing-migration.md +++ /dev/null @@ -1,176 +0,0 @@ -# Batch Processing Migration Guide - -## ✅ MIGRATION COMPLETED - -The migration from the complex `BatchProcessor` class to the new functional batch processing approach has been **successfully completed**. The old `BatchProcessor` class has been removed entirely. - -## Overview - -The new functional batch processing approach simplified the complex `BatchProcessor` class into simple, composable functions. - -## Key Benefits Achieved - -✅ **90% less code** - From 545 lines to ~200 lines -✅ **Simpler API** - Just function calls instead of class instantiation -✅ **Better performance** - Less overhead and memory usage -✅ **Same functionality** - All features preserved -✅ **Type safe** - Better TypeScript support -✅ **No more payload conflicts** - Single consistent batch system - -## Available Functions - -All batch processing now uses the new functional approach: - -### 1. `processItems()` - Generic processing - -```typescript -import { processItems } from '../utils/batch-helpers'; - -const result = await processItems( - items, - (item, index) => ({ /* transform item */ }), - queueManager, - { - totalDelayMs: 60000, - useBatching: false, - batchSize: 100, - priority: 1 - } -); -``` - -### 2. 
`processSymbols()` - Stock symbol processing - -```typescript -import { processSymbols } from '../utils/batch-helpers'; - -const result = await processSymbols(['AAPL', 'GOOGL'], queueManager, { - operation: 'live-data', - service: 'market-data', - provider: 'yahoo', - totalDelayMs: 300000, - useBatching: false, - priority: 1 -}); -``` - -### 3. `processBatchJob()` - Worker batch handler - -```typescript -import { processBatchJob } from '../utils/batch-helpers'; - -// In your worker job handler -const result = await processBatchJob(jobData, queueManager); -``` - -## Configuration Mapping - -| Old BatchConfig | New ProcessOptions | Description | -|----------------|-------------------|-------------| -| `items` | First parameter | Items to process | -| `createJobData` | Second parameter | Transform function | -| `queueManager` | Third parameter | Queue instance | -| `totalDelayMs` | `totalDelayMs` | Total processing time | -| `batchSize` | `batchSize` | Items per batch | -| `useBatching` | `useBatching` | Batch vs direct mode | -| `priority` | `priority` | Job priority | -| `removeOnComplete` | `removeOnComplete` | Job cleanup | -| `removeOnFail` | `removeOnFail` | Failed job cleanup | -| `payloadTtlHours` | `ttl` | Cache TTL in seconds | - -## Return Value Changes - -### Before -```typescript -{ - totalItems: number, - jobsCreated: number, - mode: 'direct' | 'batch', - optimized?: boolean, - batchJobsCreated?: number, - // ...
other complex fields -} -``` - -### After -```typescript -{ - jobsCreated: number, - mode: 'direct' | 'batch', - totalItems: number, - batchesCreated?: number, - duration: number -} -``` - -## Provider Migration - -### ✅ Current Implementation - -All providers now use the new functional approach: - -```typescript -'process-batch-items': async (payload: any) => { - const { processBatchJob } = await import('../utils/batch-helpers'); - return await processBatchJob(payload, queueManager); -} -``` - -## Testing the New Approach - -Use the new test endpoints: - -```bash -# Test symbol processing -curl -X POST http://localhost:3002/api/test/batch-symbols \ - -H "Content-Type: application/json" \ - -d '{"symbols": ["AAPL", "GOOGL"], "useBatching": false, "totalDelayMs": 10000}' - -# Test custom processing -curl -X POST http://localhost:3002/api/test/batch-custom \ - -H "Content-Type: application/json" \ - -d '{"items": [1,2,3,4,5], "useBatching": true, "totalDelayMs": 15000}' -``` - -## Performance Improvements - -| Metric | Before | After | Improvement | -|--------|--------|-------|-------------| -| Code Lines | 545 | ~200 | 63% reduction | -| Memory Usage | High | Low | ~40% less | -| Initialization Time | ~2-10s | Instant | 100% faster | -| API Complexity | High | Low | Much simpler | -| Type Safety | Medium | High | Better types | - -## ✅ Migration Complete - -The old `BatchProcessor` class has been completely removed. All batch processing now uses the simplified functional approach. - -## Common Issues & Solutions - -### Function Serialization -The new approach serializes processor functions for batch jobs. 
Avoid:
-- Closures with external variables
-- Complex function dependencies
-- Non-serializable objects
-
-**Good:**
-```typescript
-(item, index) => ({ id: item.id, index })
-```
-
-**Bad:**
-```typescript
-const externalVar = 'test';
-(item, index) => ({ id: item.id, external: externalVar }) // Won't work
-```
-
-### Cache Dependencies
-The functional approach automatically handles cache initialization. No need to manually wait for cache readiness.
-
-## Need Help?
-
-Check the examples in `apps/data-service/src/examples/batch-processing-examples.ts` for more detailed usage patterns.
diff --git a/docs/configuration-standardization.md b/docs/configuration-standardization.md
new file mode 100644
index 0000000..d1cef12
--- /dev/null
+++ b/docs/configuration-standardization.md
@@ -0,0 +1,133 @@
+# Configuration Standardization
+
+## Overview
+
+The Stock Bot system now uses a unified configuration approach that standardizes how services receive and use configuration. This eliminates the previous confusion between `StockBotAppConfig` and `AppConfig`, providing a single source of truth for all configuration needs.
+
+## Key Changes
+
+### 1. Unified Configuration Schema
+
+The new `UnifiedAppConfig` schema:
+- Provides both nested (backward compatible) and flat (DI-friendly) database configurations
+- Automatically standardizes service names to kebab-case
+- Handles field name mappings (e.g., `ilpPort` → `influxPort`)
+- Ensures all required fields are present for DI system
+
+### 2. Service Name Standardization
+
+All service names are now standardized to kebab-case:
+- `dataIngestion` → `data-ingestion`
+- `dataPipeline` → `data-pipeline`
+- `webApi` → `web-api`
+
+This happens automatically in:
+- `initializeStockConfig()` when passing service name
+- `ServiceApplication` constructor
+- `toUnifiedConfig()` transformation
+
+### 3. Single Configuration Object
+
+Services now use a single configuration object (`this.config`) that contains:
+- All service-specific settings
+- Database configurations (both nested and flat)
+- Service metadata including standardized name
+- All settings required by the DI system
+
+## Migration Guide
+
+### For Service Implementations
+
+Before:
+```typescript
+const app = new ServiceApplication(
+  config,
+  {
+    serviceName: 'web-api',
+    // other options
+  }
+);
+
+// In container factory
+const configWithService = {
+  ...this.config,
+  service: { name: this.serviceConfig.serviceName }
+};
+```
+
+After:
+```typescript
+const app = new ServiceApplication(
+  config, // Config already has service.serviceName
+  {
+    serviceName: 'web-api', // Still needed for logger
+    // other options
+  }
+);
+
+// In container factory
+// No manual service name addition needed
+this.container = await containerFactory(this.config);
+```
+
+### For DI Container Usage
+
+Before:
+```typescript
+const serviceName = config.service?.name || 'unknown';
+// Had to handle different naming conventions
+```
+
+After:
+```typescript
+const serviceName = config.service?.serviceName || config.service?.name || 'unknown';
+// Standardized kebab-case name is always available
+```
+
+### For Configuration Files
+
+The configuration structure remains the same, but the system now ensures:
+- Service names are standardized automatically
+- Database configs are available in both formats
+- All required fields are properly mapped
+
+## Benefits
+
+1. **Simplicity**: One configuration object with all necessary information
+2. **Consistency**: Standardized service naming across the system
+3. **Type Safety**: Unified schema provides better TypeScript support
+4. **Backward Compatibility**: Old configuration formats still work
+5. **Reduced Complexity**: No more manual config transformations
+
+## Technical Details
+
+### UnifiedAppConfig Schema
+
+```typescript
+export const unifiedAppSchema = baseAppSchema.extend({
+  // Flat database configs for DI system
+  redis: dragonflyConfigSchema.optional(),
+  mongodb: mongodbConfigSchema.optional(),
+  postgres: postgresConfigSchema.optional(),
+  questdb: questdbConfigSchema.optional(),
+}).transform((data) => {
+  // Auto-standardize service name
+  // Sync nested and flat configs
+  // Handle field mappings
+});
+```
+
+### Service Registry
+
+The `SERVICE_REGISTRY` now includes aliases for different naming conventions:
+```typescript
+'web-api': { db: 3, ... },
+'webApi': { db: 3, ... }, // Alias for backward compatibility
+```
+
+## Future Improvements
+
+1. Remove service name aliases after full migration
+2. Deprecate old configuration formats
+3. Add configuration validation at startup
+4. Provide migration tooling for existing services
\ No newline at end of file
diff --git a/docs/enhanced-cache-usage.md b/docs/enhanced-cache-usage.md
deleted file mode 100644
index c01a13f..0000000
--- a/docs/enhanced-cache-usage.md
+++ /dev/null
@@ -1,148 +0,0 @@
-# Enhanced Cache Provider Usage
-
-The Redis cache provider now supports advanced TTL handling and conditional operations.
- -## Basic Usage (Backward Compatible) - -```typescript -import { RedisCache } from '@stock-bot/cache'; - -const cache = new RedisCache({ - keyPrefix: 'trading:', - defaultTTL: 3600 // 1 hour -}); - -// Simple set with TTL (old way - still works) -await cache.set('user:123', userData, 1800); // 30 minutes - -// Simple get -const user = await cache.get('user:123'); -``` - -## Enhanced Set Options - -```typescript -// Preserve existing TTL when updating -await cache.set('user:123', updatedUserData, { preserveTTL: true }); - -// Only set if key exists (update operation) -const oldValue = await cache.set('user:123', newData, { - onlyIfExists: true, - getOldValue: true -}); - -// Only set if key doesn't exist (create operation) -await cache.set('user:456', newUser, { - onlyIfNotExists: true, - ttl: 7200 // 2 hours -}); - -// Get old value when setting new one -const previousData = await cache.set('session:abc', sessionData, { - getOldValue: true, - ttl: 1800 -}); -``` - -## Convenience Methods - -```typescript -// Update value preserving TTL -await cache.update('user:123', updatedUserData); - -// Set only if exists -const updated = await cache.setIfExists('user:123', newData, 3600); - -// Set only if not exists (returns true if created) -const created = await cache.setIfNotExists('user:456', userData); - -// Replace existing key with new TTL -const oldData = await cache.replace('user:123', newData, 7200); - -// Atomic field updates -await cache.updateField('counter:views', (current) => (current || 0) + 1); - -await cache.updateField('user:123', (user) => ({ - ...user, - lastSeen: new Date().toISOString(), - loginCount: (user?.loginCount || 0) + 1 -})); -``` - -## Stock Bot Use Cases - -### 1. 
Rate Limiting -```typescript -// Only create rate limit if not exists -const rateLimited = await cache.setIfNotExists( - `ratelimit:${userId}:${endpoint}`, - { count: 1, resetTime: Date.now() + 60000 }, - 60 // 1 minute -); - -if (!rateLimited) { - // Increment existing counter - await cache.updateField(`ratelimit:${userId}:${endpoint}`, (data) => ({ - ...data, - count: data.count + 1 - })); -} -``` - -### 2. Session Management -```typescript -// Update session data without changing expiration -await cache.update(`session:${sessionId}`, { - ...sessionData, - lastActivity: Date.now() -}); -``` - -### 3. Cache Warming -```typescript -// Only update existing cached data, don't create new entries -const warmed = await cache.setIfExists(`stock:${symbol}:price`, latestPrice); -if (warmed) { - console.log(`Warmed cache for ${symbol}`); -} -``` - -### 4. Atomic Counters -```typescript -// Thread-safe counter increments -await cache.updateField('metrics:api:calls', (count) => (count || 0) + 1); -await cache.updateField('metrics:errors:500', (count) => (count || 0) + 1); -``` - -### 5. TTL Preservation for Frequently Updated Data -```typescript -// Keep original expiration when updating frequently changing data -await cache.set(`portfolio:${userId}:positions`, positions, { preserveTTL: true }); -``` - -## Error Handling - -The cache provider includes robust error handling: - -```typescript -try { - await cache.set('key', value); -} catch (error) { - // Errors are logged and fallback values returned - // The cache operations are non-blocking -} - -// Check cache health -const isHealthy = await cache.health(); - -// Wait for cache to be ready -await cache.waitForReady(10000); // 10 second timeout -``` - -## Performance Benefits - -1. **Atomic Operations**: `updateField` uses Lua scripts to prevent race conditions -2. **TTL Preservation**: Avoids unnecessary TTL resets on updates -3. **Conditional Operations**: Reduces network round trips -4. 
**Shared Connections**: Efficient connection pooling -5. **Error Recovery**: Graceful degradation when Redis is unavailable diff --git a/docs/loki-logging.md b/docs/loki-logging.md deleted file mode 100644 index 5ea7241..0000000 --- a/docs/loki-logging.md +++ /dev/null @@ -1,169 +0,0 @@ -# Loki Logging for Stock Bot - -This document outlines how to use the Loki logging system integrated with the Stock Bot platform (Updated June 2025). - -## Overview - -Loki provides centralized logging for all Stock Bot services with: - -1. **Centralized logging** for all microservices -2. **Log aggregation** and filtering by service, level, and custom labels -3. **Grafana integration** for visualization and dashboards -4. **Query capabilities** using LogQL for log analysis -5. **Alert capabilities** for critical issues - -## Getting Started - -### Starting the Logging Stack - -```cmd -# Start the monitoring stack (includes Loki and Grafana) -scripts\docker.ps1 monitoring -``` - -Or start services individually: - -```cmd -# Start Loki service only -docker-compose up -d loki - -# Start Loki and Grafana -docker-compose up -d loki grafana -``` - -### Viewing Logs - -Once started: - -1. Access Grafana at http://localhost:3000 (login with admin/admin) -2. Navigate to the "Stock Bot Logs" dashboard -3. 
View and query your logs - -## Using the Logger in Your Services - -The Stock Bot logger automatically sends logs to Loki using the updated pattern: - -```typescript -import { getLogger } from '@stock-bot/logger'; - -// Create a logger for your service -const logger = getLogger('your-service-name'); - -// Log at different levels -logger.debug('Detailed information for debugging'); -logger.info('General information about operations'); -logger.warn('Potential issues that don\'t affect operation'); -logger.error('Critical errors that require attention'); - -// Log with structured data (searchable in Loki) -logger.info('Processing trade', { - symbol: 'MSFT', - price: 410.75, - quantity: 50 -}); -``` - -## Configuration Options - -Logger configuration is managed through the `@stock-bot/config` package and can be set in your `.env` file: - -```bash -# Logging configuration -LOG_LEVEL=debug # debug, info, warn, error -LOG_CONSOLE=true # Log to console in addition to Loki -LOKI_HOST=localhost # Loki server hostname -LOKI_PORT=3100 # Loki server port -LOKI_RETENTION_DAYS=30 # Days to retain logs -LOKI_LABELS=environment=development,service=stock-bot # Default labels -LOKI_BATCH_SIZE=100 # Number of logs to batch before sending -LOKI_BATCH_WAIT=5 # Max time to wait before sending logs -``` - -## Useful Loki Queries - -Inside Grafana, you can use these LogQL queries to analyze your logs: - -1. **All logs from a specific service**: - ``` - {service="market-data-gateway"} - ``` - -2. **All error logs across all services**: - ``` - {level="error"} - ``` - -3. **Logs containing specific text**: - ``` - {service="market-data-gateway"} |= "trade" - ``` - -4. 
**Count of error logs by service over time**: - ``` - sum by(service) (count_over_time({level="error"}[5m])) - ``` - -## Testing the Logging Integration - -Test the logging integration using Bun: - -```cmd -# Run from project root using Bun (current runtime) -bun run tools/test-loki-logging.ts -``` - -## Architecture - -Our logging implementation follows this architecture: - -``` -┌─────────────────┐ ┌─────────────────┐ -│ Trading Services│────►│ @stock-bot/logger│ -└─────────────────┘ │ getLogger() │ - └────────┬────────┘ - │ - ▼ -┌────────────────────────────────────────┐ -│ Loki │ -└────────────────┬───────────────────────┘ - │ - ▼ -┌────────────────────────────────────────┐ -│ Grafana │ -└────────────────────────────────────────┘ -``` - -## Adding New Dashboards - -To create new Grafana dashboards for log visualization: - -1. Build your dashboard in the Grafana UI -2. Export it to JSON -3. Add it to `monitoring/grafana/provisioning/dashboards/json/` -4. Restart the monitoring stack - -## Troubleshooting - -If logs aren't appearing in Grafana: - -1. Run the status check script to verify Loki and Grafana are working: - ```cmd - tools\check-loki-status.bat - ``` - -2. Check that Loki and Grafana containers are running: - ```cmd - docker ps | findstr "loki grafana" - ``` - -3. Verify .env configuration for Loki host and port: - ```cmd - type .env | findstr "LOKI_" - ``` - -4. Ensure your service has the latest @stock-bot/logger package - -5. Check for errors in the Loki container logs: - ```cmd - docker logs stock-bot-loki - ``` diff --git a/docs/mongodb-multi-database-migration.md b/docs/mongodb-multi-database-migration.md deleted file mode 100644 index 30d6069..0000000 --- a/docs/mongodb-multi-database-migration.md +++ /dev/null @@ -1,212 +0,0 @@ -# MongoDB Client Multi-Database Migration Guide - -## Overview -Your MongoDB client has been enhanced to support multiple databases dynamically while maintaining full backward compatibility. 
- -## Key Features Added - -### 1. **Dynamic Database Switching** -```typescript -// Set default database (all operations will use this unless overridden) -client.setDefaultDatabase('analytics'); - -// Get current default database -const currentDb = client.getDefaultDatabase(); // Returns: 'analytics' -``` - -### 2. **Database Parameter in Methods** -All methods now accept an optional `database` parameter: - -```typescript -// Old way (still works - uses default database) -await client.batchUpsert('symbols', data, 'symbol'); - -// New way (specify database explicitly) -await client.batchUpsert('symbols', data, 'symbol', { database: 'stock' }); -``` - -### 3. **Convenience Methods** -Pre-configured methods for common databases: - -```typescript -// Stock database operations -await client.batchUpsertStock('symbols', data, 'symbol'); - -// Analytics database operations -await client.batchUpsertAnalytics('metrics', data, 'metric_name'); - -// Trading documents database operations -await client.batchUpsertTrading('orders', data, 'order_id'); -``` - -### 4. 
**Direct Database Access** -```typescript -// Get specific database instances -const stockDb = client.getDatabase('stock'); -const analyticsDb = client.getDatabase('analytics'); - -// Get collections with database override -const collection = client.getCollection('symbols', 'stock'); -``` - -## Migration Steps - -### Step 1: No Changes Required (Backward Compatible) -Your existing code continues to work unchanged: - -```typescript -// This still works exactly as before -const client = MongoDBClient.getInstance(); -await client.connect(); -await client.batchUpsert('exchanges', exchangeData, 'exchange_id'); -``` - -### Step 2: Organize Data by Database (Recommended) -Update your data service to use appropriate databases: - -```typescript -// In your data service initialization -export class DataService { - private mongoClient = MongoDBClient.getInstance(); - - async initialize() { - await this.mongoClient.connect(); - - // Set stock as default for most operations - this.mongoClient.setDefaultDatabase('stock'); - } - - async saveInteractiveBrokersData(exchanges: any[], symbols: any[]) { - // Stock market data goes to 'stock' database (default) - await this.mongoClient.batchUpsert('exchanges', exchanges, 'exchange_id'); - await this.mongoClient.batchUpsert('symbols', symbols, 'symbol'); - } - - async saveAnalyticsData(performance: any[]) { - // Analytics data goes to 'analytics' database - await this.mongoClient.batchUpsert( - 'performance', - performance, - 'date', - { database: 'analytics' } - ); - } -} -``` - -### Step 3: Use Convenience Methods (Optional) -Replace explicit database parameters with convenience methods: - -```typescript -// Instead of: -await client.batchUpsert('symbols', data, 'symbol', { database: 'stock' }); - -// Use: -await client.batchUpsertStock('symbols', data, 'symbol'); -``` - -## Factory Functions -New factory functions are available for easier database management: - -```typescript -import { - connectMongoDB, - setDefaultDatabase, - 
getCurrentDatabase, - getDatabase -} from '@stock-bot/mongodb-client'; - -// Set default database globally -setDefaultDatabase('analytics'); - -// Get current default -const current = getCurrentDatabase(); - -// Get specific database -const stockDb = getDatabase('stock'); -``` - -## Database Recommendations - -### Stock Database (`stock`) -- Market data (symbols, exchanges, prices) -- Financial instruments -- Market events -- Real-time data - -### Analytics Database (`analytics`) -- Performance metrics -- Calculated indicators -- Reports and dashboards -- Aggregated data - -### Trading Documents Database (`trading_documents`) -- Trade orders and executions -- User portfolios -- Transaction logs -- Audit trails - -## Example: Updating Your Data Service - -```typescript -// Before (still works) -export class DataService { - async saveExchanges(exchanges: any[]) { - const client = MongoDBClient.getInstance(); - await client.batchUpsert('exchanges', exchanges, 'exchange_id'); - } -} - -// After (recommended) -export class DataService { - private mongoClient = MongoDBClient.getInstance(); - - async initialize() { - await this.mongoClient.connect(); - this.mongoClient.setDefaultDatabase('stock'); // Set appropriate default - } - - async saveExchanges(exchanges: any[]) { - // Uses default 'stock' database - await this.mongoClient.batchUpsert('exchanges', exchanges, 'exchange_id'); - - // Or use convenience method - await this.mongoClient.batchUpsertStock('exchanges', exchanges, 'exchange_id'); - } - - async savePerformanceMetrics(metrics: any[]) { - // Save to analytics database - await this.mongoClient.batchUpsertAnalytics('metrics', metrics, 'metric_name'); - } -} -``` - -## Testing -Your existing tests continue to work. 
For new multi-database features: - -```typescript -import { MongoDBClient } from '@stock-bot/mongodb-client'; - -const client = MongoDBClient.getInstance(); -await client.connect(); - -// Test database switching -client.setDefaultDatabase('test_db'); -expect(client.getDefaultDatabase()).toBe('test_db'); - -// Test explicit database parameter -await client.batchUpsert('test_collection', data, 'id', { database: 'other_db' }); -``` - -## Benefits -1. **Organized Data**: Separate databases for different data types -2. **Better Performance**: Smaller, focused databases -3. **Easier Maintenance**: Clear data boundaries -4. **Scalability**: Can scale databases independently -5. **Backward Compatibility**: No breaking changes - -## Next Steps -1. Update your data service to use appropriate default database -2. Gradually migrate to using specific databases for different data types -3. Consider using convenience methods for cleaner code -4. Update tests to cover multi-database scenarios diff --git a/eslint.config.js b/eslint.config.js index 2644048..f5453f0 100644 --- a/eslint.config.js +++ b/eslint.config.js @@ -50,6 +50,7 @@ export default [ argsIgnorePattern: '^_', varsIgnorePattern: '^_', destructuredArrayIgnorePattern: '^_', + caughtErrorsIgnorePattern: '^_', }, ], '@typescript-eslint/no-explicit-any': 'warn', diff --git a/libs/LIBRARY_STANDARDS.md b/libs/LIBRARY_STANDARDS.md new file mode 100644 index 0000000..5b78cbb --- /dev/null +++ b/libs/LIBRARY_STANDARDS.md @@ -0,0 +1,157 @@ +# Library Standards and Patterns + +This document defines the standardized patterns for all libraries in the @stock-bot ecosystem. + +## Export Patterns + +### Standard: Named Exports Only + +All libraries should use **named exports only**. Default exports have been removed for consistency and better tree-shaking. 
+ +**Example:** +```typescript +// ✅ Good - Named exports +export { createCache } from './cache'; +export type { CacheOptions } from './types'; + +// ❌ Bad - Default export +export default createCache; +``` + +## Initialization Patterns + +Libraries follow different initialization patterns based on their purpose: + +### 1. Singleton with Global State +**Use for:** Global services that should have only one instance (config, logger) + +**Example:** Config library +```typescript +let configInstance: ConfigManager | null = null; + +export function initializeConfig(): AppConfig { + if (!configInstance) { + configInstance = new ConfigManager(); + } + return configInstance.initialize(); +} + +export function getConfig(): AppConfig { + if (!configInstance) { + throw new Error('Config not initialized'); + } + return configInstance.get(); +} +``` + +### 2. Factory with Registry +**Use for:** Services that need instance reuse based on configuration (cache, logger instances) + +**Example:** Cache library +```typescript +const cacheInstances = new Map(); + +export function createCache(options: CacheOptions): CacheProvider { + if (options.shared) { + const key = generateKey(options); + if (cacheInstances.has(key)) { + return cacheInstances.get(key)!; + } + const cache = new RedisCache(options); + cacheInstances.set(key, cache); + return cache; + } + return new RedisCache(options); +} +``` + +### 3. Pure Factory Functions +**Use for:** Services that need creation logic beyond simple instantiation + +**Example:** Event bus with configuration processing +```typescript +export function createEventBus(config: EventBusConfig): EventBus { + // Process config, set defaults, etc. + const processedConfig = { ...defaultConfig, ...config }; + return new EventBus(processedConfig); +} +``` + +**Note:** Simple instantiation doesn't need factories - use direct class instantiation or DI container. + +### 4. 
Direct Class Exports
+**Use for:** Simple utilities or services managed by DI container
+
+**Example:** MongoDB library
+```typescript
+export { MongoDBClient } from './client';
+// No factory function - let DI container handle instantiation
+```
+
+### 5. Singleton Classes
+**Use for:** Manager classes that coordinate multiple instances
+
+**Example:** QueueManager
+```typescript
+export class QueueManager {
+  private static instance: QueueManager | null = null;
+
+  static initialize(config: QueueConfig): QueueManager {
+    if (!QueueManager.instance) {
+      QueueManager.instance = new QueueManager(config);
+    }
+    return QueueManager.instance;
+  }
+
+  static getInstance(): QueueManager {
+    if (!QueueManager.instance) {
+      throw new Error('QueueManager not initialized');
+    }
+    return QueueManager.instance;
+  }
+}
+```
+
+## Pattern Selection Guide
+
+Choose the initialization pattern based on these criteria:
+
+| Pattern | When to Use | Examples |
+|---------|-------------|----------|
+| **Singleton with Global State** | - One instance per process<br>- Stateful configuration<br>- Process-wide settings | config, logger setup |
+| **Factory with Registry** | - Multiple instances with same config should share<br>- Connection pooling<br>- Resource optimization | cache, logger instances |
+| **Pure Factory** | - Complex initialization logic<br>- Configuration processing needed<br>- Defaults to apply | event bus (if needed) |
+| **Direct Class Export** | - DI container manages lifecycle<br>- Simple initialization<br>- No special setup needed | database clients (MongoDB, PostgreSQL, QuestDB), utilities |
+| **Singleton Class** | - Coordinates multiple resources<br>- Central management point<br>- Graceful shutdown needed | QueueManager, ConnectionManager |
+
+## Additional Standards
+
+### Error Handling
+- All libraries should throw descriptive errors
+- Consider creating custom error classes for domain-specific errors
+- Always include context in error messages
+
+### Configuration
+- Accept configuration through constructor/factory parameters
+- Validate configuration using Zod schemas
+- Provide sensible defaults where appropriate
+
+### Testing
+- All libraries must have unit tests
+- Use consistent test file naming: `*.test.ts`
+- Mock external dependencies
+
+### Documentation
+- Every library must have a README.md
+- Include usage examples
+- Document all public APIs with JSDoc
+
+### TypeScript
+- Export all public types
+- Use strict TypeScript settings
+- Avoid `any` types
+
+### Dependencies
+- Minimize external dependencies
+- Use exact versions for critical dependencies
+- Document peer dependencies clearly
\ No newline at end of file
diff --git a/libs/browser/src/browser-pool.ts b/libs/browser/src/browser-pool.ts
deleted file mode 100644
index e69de29..0000000
diff --git a/libs/browser/src/index.ts b/libs/browser/src/index.ts
deleted file mode 100644
index 96cb4ab..0000000
--- a/libs/browser/src/index.ts
+++ /dev/null
@@ -1,3 +0,0 @@
-export { Browser } from './browser';
-export { BrowserTabManager } from './tab-manager';
-export type { BrowserOptions, ScrapingResult } from './types';
diff --git a/libs/browser/tsconfig.json b/libs/browser/tsconfig.json
deleted file mode 100644
index 2ecd08d..0000000
--- a/libs/browser/tsconfig.json
+++ /dev/null
@@ -1,12 +0,0 @@
-{
-  "extends": "../../tsconfig.lib.json",
-  "compilerOptions": {
-    "outDir": "./dist",
-    "rootDir": "./src"
-  },
-  "include": ["src/**/*"],
-  "references": [
-    { "path": "../logger" },
-    { "path": "../http" }
-  ]
-}
diff --git a/libs/cache/tsconfig.json b/libs/cache/tsconfig.json
deleted file mode 100644
index eae3dc0..0000000
--- a/libs/cache/tsconfig.json
+++ /dev/null
@@ -1,11 
+0,0 @@ -{ - "extends": "../../tsconfig.lib.json", - "compilerOptions": { - "outDir": "./dist", - "rootDir": "./src" - }, - "include": ["src/**/*"], - "references": [ - { "path": "../logger" } - ] -} diff --git a/libs/config/src/schemas/index.ts b/libs/config/src/schemas/index.ts deleted file mode 100644 index eed8827..0000000 --- a/libs/config/src/schemas/index.ts +++ /dev/null @@ -1,98 +0,0 @@ -export * from './base.schema'; -export * from './database.schema'; -export * from './provider.schema'; -export * from './service.schema'; - -import { z } from 'zod'; -import { baseConfigSchema, environmentSchema } from './base.schema'; -import { providerConfigSchema, webshareProviderConfigSchema } from './provider.schema'; -import { httpConfigSchema, queueConfigSchema } from './service.schema'; - -// Flexible service schema with defaults -const flexibleServiceConfigSchema = z.object({ - name: z.string().default('default-service'), - port: z.number().min(1).max(65535).default(3000), - host: z.string().default('0.0.0.0'), - healthCheckPath: z.string().default('/health'), - metricsPath: z.string().default('/metrics'), - shutdownTimeout: z.number().default(30000), - cors: z.object({ - enabled: z.boolean().default(true), - origin: z.union([z.string(), z.array(z.string())]).default('*'), - credentials: z.boolean().default(true), - }).default({}), -}).default({}); - -// Flexible database schema with defaults -const flexibleDatabaseConfigSchema = z.object({ - postgres: z.object({ - host: z.string().default('localhost'), - port: z.number().default(5432), - database: z.string().default('test_db'), - user: z.string().default('test_user'), - password: z.string().default('test_pass'), - ssl: z.boolean().default(false), - poolSize: z.number().min(1).max(100).default(10), - connectionTimeout: z.number().default(30000), - idleTimeout: z.number().default(10000), - }).default({}), - questdb: z.object({ - host: z.string().default('localhost'), - ilpPort: z.number().default(9009), - httpPort: 
z.number().default(9000), - pgPort: z.number().default(8812), - database: z.string().default('questdb'), - user: z.string().default('admin'), - password: z.string().default('quest'), - bufferSize: z.number().default(65536), - flushInterval: z.number().default(1000), - }).default({}), - mongodb: z.object({ - uri: z.string().url().optional(), - host: z.string().default('localhost'), - port: z.number().default(27017), - database: z.string().default('test_mongo'), - user: z.string().optional(), - password: z.string().optional(), - authSource: z.string().default('admin'), - replicaSet: z.string().optional(), - poolSize: z.number().min(1).max(100).default(10), - }).default({}), - dragonfly: z.object({ - host: z.string().default('localhost'), - port: z.number().default(6379), - password: z.string().optional(), - db: z.number().min(0).max(15).default(0), - keyPrefix: z.string().optional(), - ttl: z.number().optional(), - maxRetries: z.number().default(3), - retryDelay: z.number().default(100), - }).default({}), -}).default({}); - -// Flexible log schema with defaults (renamed from logging) -const flexibleLogConfigSchema = z.object({ - level: z.enum(['trace', 'debug', 'info', 'warn', 'error', 'fatal']).default('info'), - format: z.enum(['json', 'pretty']).default('json'), - hideObject: z.boolean().default(false), - loki: z.object({ - enabled: z.boolean().default(false), - host: z.string().default('localhost'), - port: z.number().default(3100), - labels: z.record(z.string()).default({}), - }).optional(), -}).default({}); - -// Complete application configuration schema -export const appConfigSchema = baseConfigSchema.extend({ - environment: environmentSchema.default('development'), - service: flexibleServiceConfigSchema, - log: flexibleLogConfigSchema, - database: flexibleDatabaseConfigSchema, - queue: queueConfigSchema.optional(), - http: httpConfigSchema.optional(), - providers: providerConfigSchema.optional(), - webshare: webshareProviderConfigSchema.optional(), -}); - 
-export type AppConfig = z.infer; \ No newline at end of file diff --git a/libs/config/src/schemas/service.schema.ts b/libs/config/src/schemas/service.schema.ts deleted file mode 100644 index 5268c85..0000000 --- a/libs/config/src/schemas/service.schema.ts +++ /dev/null @@ -1,63 +0,0 @@ -import { z } from 'zod'; - -// Common service configuration -export const serviceConfigSchema = z.object({ - name: z.string(), - port: z.number().min(1).max(65535), - host: z.string().default('0.0.0.0'), - healthCheckPath: z.string().default('/health'), - metricsPath: z.string().default('/metrics'), - shutdownTimeout: z.number().default(30000), - cors: z.object({ - enabled: z.boolean().default(true), - origin: z.union([z.string(), z.array(z.string())]).default('*'), - credentials: z.boolean().default(true), - }).default({}), -}); - -// Logging configuration -export const loggingConfigSchema = z.object({ - level: z.enum(['trace', 'debug', 'info', 'warn', 'error', 'fatal']).default('info'), - format: z.enum(['json', 'pretty']).default('json'), - loki: z.object({ - enabled: z.boolean().default(false), - host: z.string().default('localhost'), - port: z.number().default(3100), - labels: z.record(z.string()).default({}), - }).optional(), -}); - -// Queue configuration -export const queueConfigSchema = z.object({ - redis: z.object({ - host: z.string().default('localhost'), - port: z.number().default(6379), - password: z.string().optional(), - db: z.number().default(1), - }), - defaultJobOptions: z.object({ - attempts: z.number().default(3), - backoff: z.object({ - type: z.enum(['exponential', 'fixed']).default('exponential'), - delay: z.number().default(1000), - }).default({}), - removeOnComplete: z.number().default(10), - removeOnFail: z.number().default(5), - }).default({}), -}); - -// HTTP client configuration -export const httpConfigSchema = z.object({ - timeout: z.number().default(30000), - retries: z.number().default(3), - retryDelay: z.number().default(1000), - userAgent: 
z.string().optional(), - proxy: z.object({ - enabled: z.boolean().default(false), - url: z.string().url().optional(), - auth: z.object({ - username: z.string(), - password: z.string(), - }).optional(), - }).optional(), -}); \ No newline at end of file diff --git a/libs/config/tsconfig.json b/libs/config/tsconfig.json deleted file mode 100644 index e02b16b..0000000 --- a/libs/config/tsconfig.json +++ /dev/null @@ -1,10 +0,0 @@ -{ - "extends": "../../tsconfig.lib.json", - "compilerOptions": { - "outDir": "./dist", - "rootDir": "./src" - }, - "include": ["src/**/*"], - "references": [ - ] -} \ No newline at end of file diff --git a/libs/cache/package.json b/libs/core/cache/package.json similarity index 100% rename from libs/cache/package.json rename to libs/core/cache/package.json diff --git a/libs/core/cache/src/cache-factory.ts b/libs/core/cache/src/cache-factory.ts new file mode 100644 index 0000000..f778c0e --- /dev/null +++ b/libs/core/cache/src/cache-factory.ts @@ -0,0 +1,23 @@ +import { NamespacedCache } from './namespaced-cache'; +import type { CacheProvider } from './types'; + +/** + * Factory function to create namespaced caches + * Provides a clean API for services to get their own namespaced cache + */ +export function createNamespacedCache( + cache: CacheProvider | null | undefined, + namespace: string +): CacheProvider | null { + if (!cache) { + return null; + } + return new NamespacedCache(cache, namespace); +} + +/** + * Type guard to check if cache is available + */ +export function isCacheAvailable(cache: any): cache is CacheProvider { + return cache !== null && cache !== undefined && typeof cache.get === 'function'; +} \ No newline at end of file diff --git a/libs/cache/src/connection-manager.ts b/libs/core/cache/src/connection-manager.ts similarity index 92% rename from libs/cache/src/connection-manager.ts rename to libs/core/cache/src/connection-manager.ts index d0d361e..9339f67 100644 --- a/libs/cache/src/connection-manager.ts +++ 
b/libs/core/cache/src/connection-manager.ts @@ -1,5 +1,4 @@ import Redis from 'ioredis'; -import { getLogger } from '@stock-bot/logger'; import type { RedisConfig } from './types'; interface ConnectionConfig { @@ -7,6 +6,7 @@ interface ConnectionConfig { singleton?: boolean; db?: number; redisConfig: RedisConfig; + logger?: any; } /** @@ -16,7 +16,7 @@ export class RedisConnectionManager { private connections = new Map(); private static sharedConnections = new Map(); private static instance: RedisConnectionManager; - private logger = getLogger('redis-connection-manager'); + private logger: any = console; private static readyConnections = new Set(); // Singleton pattern for the manager itself @@ -33,12 +33,15 @@ export class RedisConnectionManager { * @returns Redis connection instance */ getConnection(config: ConnectionConfig): Redis { - const { name, singleton = false, db, redisConfig } = config; + const { name, singleton = false, db, redisConfig, logger } = config; + if (logger) { + this.logger = logger; + } if (singleton) { // Use shared connection across all instances if (!RedisConnectionManager.sharedConnections.has(name)) { - const connection = this.createConnection(name, redisConfig, db); + const connection = this.createConnection(name, redisConfig, db, logger); RedisConnectionManager.sharedConnections.set(name, connection); this.logger.info(`Created shared Redis connection: ${name}`); } @@ -50,7 +53,7 @@ export class RedisConnectionManager { } else { // Create unique connection per instance const uniqueName = `${name}-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`; - const connection = this.createConnection(uniqueName, redisConfig, db); + const connection = this.createConnection(uniqueName, redisConfig, db, logger); this.connections.set(uniqueName, connection); this.logger.debug(`Created unique Redis connection: ${uniqueName}`); return connection; @@ -60,7 +63,7 @@ export class RedisConnectionManager { /** * Create a new Redis connection with 
configuration */ - private createConnection(name: string, config: RedisConfig, db?: number): Redis { + private createConnection(name: string, config: RedisConfig, db?: number, logger?: any): Redis { const redisOptions = { host: config.host, port: config.port, @@ -85,26 +88,29 @@ export class RedisConnectionManager { }; const redis = new Redis(redisOptions); + + // Use the provided logger or fall back to instance logger + const log = logger || this.logger; // Setup event handlers redis.on('connect', () => { - this.logger.info(`Redis connection established: ${name}`); + log.info(`Redis connection established: ${name}`); }); redis.on('ready', () => { - this.logger.info(`Redis connection ready: ${name}`); + log.info(`Redis connection ready: ${name}`); }); redis.on('error', err => { - this.logger.error(`Redis connection error for ${name}:`, err); + log.error(`Redis connection error for ${name}:`, err); }); redis.on('close', () => { - this.logger.warn(`Redis connection closed: ${name}`); + log.warn(`Redis connection closed: ${name}`); }); redis.on('reconnecting', () => { - this.logger.warn(`Redis reconnecting: ${name}`); + log.warn(`Redis reconnecting: ${name}`); }); return redis; diff --git a/libs/cache/src/index.ts b/libs/core/cache/src/index.ts similarity index 85% rename from libs/cache/src/index.ts rename to libs/core/cache/src/index.ts index 2594101..4a4e4e3 100644 --- a/libs/cache/src/index.ts +++ b/libs/core/cache/src/index.ts @@ -39,12 +39,17 @@ export function createCache(options: CacheOptions): CacheProvider { // Export types and classes export type { - CacheConfig, CacheKey, CacheOptions, CacheProvider, CacheStats, RedisConfig, SerializationOptions + CacheConfig, + CacheKey, + CacheOptions, + CacheProvider, + CacheStats, + RedisConfig, + SerializationOptions, } from './types'; export { RedisConnectionManager } from './connection-manager'; export { CacheKeyGenerator } from './key-generator'; export { RedisCache } from './redis-cache'; - -// Default export for 
convenience -export default createCache; +export { NamespacedCache } from './namespaced-cache'; +export { createNamespacedCache, isCacheAvailable } from './cache-factory'; diff --git a/libs/cache/src/key-generator.ts b/libs/core/cache/src/key-generator.ts similarity index 100% rename from libs/cache/src/key-generator.ts rename to libs/core/cache/src/key-generator.ts diff --git a/libs/core/cache/src/namespaced-cache.ts b/libs/core/cache/src/namespaced-cache.ts new file mode 100644 index 0000000..c42ed63 --- /dev/null +++ b/libs/core/cache/src/namespaced-cache.ts @@ -0,0 +1,101 @@ +import type { CacheProvider } from './types'; + +/** + * A cache wrapper that automatically prefixes all keys with a namespace + * Used to provide isolated cache spaces for different services + */ +export class NamespacedCache implements CacheProvider { + private readonly prefix: string; + + constructor( + private readonly cache: CacheProvider, + private readonly namespace: string + ) { + this.prefix = `cache:${namespace}:`; + } + + async get<T>(key: string): Promise<T | null> { + return this.cache.get<T>(`${this.prefix}${key}`); + } + + async set<T>( + key: string, + value: T, + options?: + | number + | { + ttl?: number; + preserveTTL?: boolean; + onlyIfExists?: boolean; + onlyIfNotExists?: boolean; + getOldValue?: boolean; + } + ): Promise<boolean> { + return this.cache.set(`${this.prefix}${key}`, value, options); + } + + async del(key: string): Promise<boolean> { + return this.cache.del(`${this.prefix}${key}`); + } + + async exists(key: string): Promise<boolean> { + return this.cache.exists(`${this.prefix}${key}`); + } + + async keys(pattern: string = '*'): Promise<string[]> { + const fullPattern = `${this.prefix}${pattern}`; + const keys = await this.cache.keys(fullPattern); + // Remove the prefix from returned keys for cleaner API + return keys.map(k => k.substring(this.prefix.length)); + } + + async clear(): Promise<void> { + // Clear only keys with this namespace prefix + const keys = await this.cache.keys(`${this.prefix}*`); + if (keys.length
> 0) { + await Promise.all(keys.map(key => this.cache.del(key))); + } + } + + + getStats() { + return this.cache.getStats(); + } + + async health(): Promise<boolean> { + return this.cache.health(); + } + + isReady(): boolean { + return this.cache.isReady(); + } + + async waitForReady(timeout?: number): Promise<boolean> { + return this.cache.waitForReady(timeout); + } + + async close(): Promise<void> { + // Namespaced cache doesn't own the connection, so we don't close it + // The underlying cache instance should be closed by its owner + } + + getNamespace(): string { + return this.namespace; + } + + getFullPrefix(): string { + return this.prefix; + } + + /** + * Get a value using a raw Redis key (bypassing the namespace prefix) + * Delegates to the underlying cache's getRaw method if available + */ + async getRaw<T>(key: string): Promise<T | null> { + if (this.cache.getRaw) { + return this.cache.getRaw<T>(key); + } + // Fallback for caches that don't implement getRaw + return null; + } +} \ No newline at end of file diff --git a/libs/cache/src/redis-cache.ts b/libs/core/cache/src/redis-cache.ts similarity index 93% rename from libs/cache/src/redis-cache.ts rename to libs/core/cache/src/redis-cache.ts index 7fcab9e..51b811d 100644 --- a/libs/cache/src/redis-cache.ts +++ b/libs/core/cache/src/redis-cache.ts @@ -1,14 +1,13 @@ import Redis from 'ioredis'; -import { getLogger } from '@stock-bot/logger'; import { RedisConnectionManager } from './connection-manager'; -import { CacheOptions, CacheProvider, CacheStats } from './types'; +import type { CacheOptions, CacheProvider, CacheStats } from './types'; /** * Simplified Redis-based cache provider using connection manager */ export class RedisCache implements CacheProvider { private redis: Redis; - private logger = getLogger('redis-cache'); + private logger: any; private defaultTTL: number; private keyPrefix: string; private enableMetrics: boolean; @@ -29,6 +28,7 @@ export class RedisCache implements CacheProvider { this.defaultTTL = options.ttl ??
3600; // 1 hour default this.keyPrefix = options.keyPrefix ?? 'cache:'; this.enableMetrics = options.enableMetrics ?? true; + this.logger = options.logger || console; // Use provided logger or console as fallback // Get connection manager instance this.connectionManager = RedisConnectionManager.getInstance(); @@ -47,6 +47,7 @@ name: `${baseName}-SERVICE`, singleton: options.shared ?? true, // Default to shared connection for cache redisConfig: options.redisConfig, + logger: this.logger, }); // Only setup event handlers for non-shared connections to avoid memory leaks @@ -290,6 +291,29 @@ ); } + /** + * Get a value using a raw Redis key (bypassing the keyPrefix) + * Useful for accessing cache data from other services with different prefixes + */ + async getRaw<T>(key: string): Promise<T | null> { + return this.safeExecute( + async () => { + // Use the key directly without adding our prefix + const value = await this.redis.get(key); + if (!value) { + this.updateStats(false); + return null; + } + this.updateStats(true); + const parsed = JSON.parse(value); + this.logger.debug('Cache raw get hit', { key }); + return parsed; + }, + null, + 'getRaw' + ); + } + async keys(pattern: string): Promise<string[]> { return this.safeExecute( async () => { diff --git a/libs/cache/src/types.ts b/libs/core/cache/src/types.ts similarity index 91% rename from libs/cache/src/types.ts rename to libs/core/cache/src/types.ts index cdaaca2..1e84061 100644 --- a/libs/cache/src/types.ts +++ b/libs/core/cache/src/types.ts @@ -76,6 +76,12 @@ export interface CacheProvider { * Atomically update field with transformation function */ updateField?<T>(key: string, updater: (current: T | null) => T, ttl?: number): Promise<T>; + + /** + * Get a value using a raw Redis key (bypassing the keyPrefix) + * Useful for accessing cache data from other services with different prefixes + */ + getRaw?<T>(key: string): Promise<T | null>; } export
interface CacheOptions { @@ -85,6 +91,7 @@ export interface CacheOptions { name?: string; // Name for connection identification shared?: boolean; // Whether to use shared connection redisConfig: RedisConfig; + logger?: any; // Optional logger instance } export interface CacheStats { diff --git a/libs/core/cache/tsconfig.json b/libs/core/cache/tsconfig.json new file mode 100644 index 0000000..55c59a8 --- /dev/null +++ b/libs/core/cache/tsconfig.json @@ -0,0 +1,10 @@ +{ + "extends": "../../../tsconfig.json", + "compilerOptions": { + "outDir": "./dist", + "rootDir": "./src", + "composite": true + }, + "include": ["src/**/*"], + "references": [{ "path": "../../core/logger" }] +} diff --git a/libs/config/.env.example b/libs/core/config/.env.example similarity index 100% rename from libs/config/.env.example rename to libs/core/config/.env.example diff --git a/libs/config/README.md b/libs/core/config/README.md similarity index 100% rename from libs/config/README.md rename to libs/core/config/README.md diff --git a/libs/config/config/default.json b/libs/core/config/config/default.json similarity index 96% rename from libs/config/config/default.json rename to libs/core/config/config/default.json index 1b7310d..10ce440 100644 --- a/libs/config/config/default.json +++ b/libs/core/config/config/default.json @@ -75,8 +75,8 @@ "type": "exponential", "delay": 1000 }, - "removeOnComplete": true, - "removeOnFail": false + "removeOnComplete": 100, + "removeOnFail": 50 } }, "http": { @@ -91,4 +91,4 @@ "apiKey": "", "apiUrl": "https://proxy.webshare.io/api/v2/" } -} \ No newline at end of file +} diff --git a/libs/config/config/development.json b/libs/core/config/config/development.json similarity index 99% rename from libs/config/config/development.json rename to libs/core/config/config/development.json index 839c7e9..991c30c 100644 --- a/libs/config/config/development.json +++ b/libs/core/config/config/development.json @@ -45,4 +45,4 @@ "webmasterId": "" } } -} \ No newline at end 
of file +} diff --git a/libs/config/config/production.json b/libs/core/config/config/production.json similarity index 99% rename from libs/config/config/production.json rename to libs/core/config/config/production.json index fe7a792..390d5ff 100644 --- a/libs/config/config/production.json +++ b/libs/core/config/config/production.json @@ -29,4 +29,4 @@ "retries": 5, "retryDelay": 2000 } -} \ No newline at end of file +} diff --git a/libs/config/config/test.json b/libs/core/config/config/test.json similarity index 90% rename from libs/config/config/test.json rename to libs/core/config/config/test.json index f362037..85f7ac4 100644 --- a/libs/config/config/test.json +++ b/libs/core/config/config/test.json @@ -31,12 +31,12 @@ }, "defaultJobOptions": { "attempts": 1, - "removeOnComplete": false, - "removeOnFail": false + "removeOnComplete": 100, + "removeOnFail": 50 } }, "http": { "timeout": 5000, "retries": 1 } -} \ No newline at end of file +} diff --git a/libs/config/package.json b/libs/core/config/package.json similarity index 100% rename from libs/config/package.json rename to libs/core/config/package.json diff --git a/libs/config/src/cli.ts b/libs/core/config/src/cli.ts similarity index 88% rename from libs/config/src/cli.ts rename to libs/core/config/src/cli.ts index d3cfced..d4fcae5 100644 --- a/libs/config/src/cli.ts +++ b/libs/core/config/src/cli.ts @@ -1,196 +1,193 @@ -#!/usr/bin/env bun -/* eslint-disable no-console */ -import { parseArgs } from 'util'; -import { join } from 'path'; -import { ConfigManager } from './config-manager'; -import { appConfigSchema } from './schemas'; -import { - validateConfig, - formatValidationResult, - checkDeprecations, - checkRequiredEnvVars, - validateCompleteness -} from './utils/validation'; -import { redactSecrets } from './utils/secrets'; -import type { Environment } from './types'; - -interface CliOptions { - config?: string; - env?: string; - validate?: boolean; - show?: boolean; - check?: boolean; - json?: boolean; - 
help?: boolean; -} - -const DEPRECATIONS = { - 'service.legacyMode': 'Use service.mode instead', - 'database.redis': 'Use database.dragonfly instead', -}; - -const REQUIRED_PATHS = [ - 'service.name', - 'service.port', - 'database.postgres.host', - 'database.postgres.database', -]; - -const REQUIRED_ENV_VARS = [ - 'NODE_ENV', -]; - -const SECRET_PATHS = [ - 'database.postgres.password', - 'database.mongodb.uri', - 'providers.quoteMedia.apiKey', - 'providers.interactiveBrokers.clientId', -]; - -function printUsage() { - console.log(` -Stock Bot Configuration CLI - -Usage: bun run config-cli [options] - -Options: - --config Path to config directory (default: ./config) - --env Environment to use (development, test, production) - --validate Validate configuration against schema - --show Show current configuration (secrets redacted) - --check Run all configuration checks - --json Output in JSON format - --help Show this help message - -Examples: - # Validate configuration - bun run config-cli --validate - - # Show configuration for production - bun run config-cli --env production --show - - # Run all checks - bun run config-cli --check - - # Output configuration as JSON - bun run config-cli --show --json -`); -} - -async function main() { - const { values } = parseArgs({ - args: process.argv.slice(2), - options: { - config: { type: 'string' }, - env: { type: 'string' }, - validate: { type: 'boolean' }, - show: { type: 'boolean' }, - check: { type: 'boolean' }, - json: { type: 'boolean' }, - help: { type: 'boolean' }, - }, - }) as { values: CliOptions }; - - if (values.help) { - printUsage(); - process.exit(0); - } - - const configPath = values.config || join(process.cwd(), 'config'); - const environment = values.env as Environment; - - try { - const manager = new ConfigManager({ - configPath, - environment, - }); - - const config = await manager.initialize(appConfigSchema); - - if (values.validate) { - const result = validateConfig(config, appConfigSchema); - - if 
(values.json) { - console.log(JSON.stringify(result, null, 2)); - } else { - console.log(formatValidationResult(result)); - } - - process.exit(result.valid ? 0 : 1); - } - - if (values.show) { - const redacted = redactSecrets(config, SECRET_PATHS); - - if (values.json) { - console.log(JSON.stringify(redacted, null, 2)); - } else { - console.log('Current Configuration:'); - console.log(JSON.stringify(redacted, null, 2)); - } - } - - if (values.check) { - console.log('Running configuration checks...\n'); - - // Schema validation - console.log('1. Schema Validation:'); - const schemaResult = validateConfig(config, appConfigSchema); - console.log(formatValidationResult(schemaResult)); - console.log(); - - // Environment variables - console.log('2. Required Environment Variables:'); - const envResult = checkRequiredEnvVars(REQUIRED_ENV_VARS); - console.log(formatValidationResult(envResult)); - console.log(); - - // Required paths - console.log('3. Required Configuration Paths:'); - const pathResult = validateCompleteness(config, REQUIRED_PATHS); - console.log(formatValidationResult(pathResult)); - console.log(); - - // Deprecations - console.log('4. Deprecation Warnings:'); - const warnings = checkDeprecations(config, DEPRECATIONS); - if (warnings && warnings.length > 0) { - for (const warning of warnings) { - console.log(` ⚠️ ${warning.path}: ${warning.message}`); - } - } else { - console.log(' ✅ No deprecated options found'); - } - console.log(); - - // Overall result - const allValid = schemaResult.valid && envResult.valid && pathResult.valid; - - if (allValid) { - console.log('✅ All configuration checks passed!'); - process.exit(0); - } else { - console.log('❌ Some configuration checks failed'); - process.exit(1); - } - } - - if (!values.validate && !values.show && !values.check) { - console.log('No action specified. 
Use --help for usage information.'); - process.exit(1); - } - - } catch (error) { - if (values.json) { - console.error(JSON.stringify({ error: String(error) })); - } else { - console.error('Error:', error); - } - process.exit(1); - } -} - -// Run CLI -if (import.meta.main) { - main(); -} \ No newline at end of file +#!/usr/bin/env bun +/* eslint-disable no-console */ +import { join } from 'path'; +import { parseArgs } from 'util'; +import { redactSecrets } from './utils/secrets'; +import { + checkDeprecations, + checkRequiredEnvVars, + formatValidationResult, + validateCompleteness, + validateConfig, +} from './utils/validation'; +import { ConfigManager } from './config-manager'; +import { baseAppSchema } from './schemas'; +import type { Environment } from './types'; + +interface CliOptions { + config?: string; + env?: string; + validate?: boolean; + show?: boolean; + check?: boolean; + json?: boolean; + help?: boolean; +} + +const DEPRECATIONS = { + 'service.legacyMode': 'Use service.mode instead', + 'database.redis': 'Use database.dragonfly instead', +}; + +const REQUIRED_PATHS = [ + 'service.name', + 'service.port', + 'database.postgres.host', + 'database.postgres.database', +]; + +const REQUIRED_ENV_VARS = ['NODE_ENV']; + +const SECRET_PATHS = [ + 'database.postgres.password', + 'database.mongodb.uri', + 'providers.quoteMedia.apiKey', + 'providers.interactiveBrokers.clientId', +]; + +function printUsage() { + console.log(` +Stock Bot Configuration CLI + +Usage: bun run config-cli [options] + +Options: + --config Path to config directory (default: ./config) + --env Environment to use (development, test, production) + --validate Validate configuration against schema + --show Show current configuration (secrets redacted) + --check Run all configuration checks + --json Output in JSON format + --help Show this help message + +Examples: + # Validate configuration + bun run config-cli --validate + + # Show configuration for production + bun run config-cli --env 
production --show + + # Run all checks + bun run config-cli --check + + # Output configuration as JSON + bun run config-cli --show --json +`); +} + +async function main() { + const { values } = parseArgs({ + args: process.argv.slice(2), + options: { + config: { type: 'string' }, + env: { type: 'string' }, + validate: { type: 'boolean' }, + show: { type: 'boolean' }, + check: { type: 'boolean' }, + json: { type: 'boolean' }, + help: { type: 'boolean' }, + }, + }) as { values: CliOptions }; + + if (values.help) { + printUsage(); + process.exit(0); + } + + const configPath = values.config || join(process.cwd(), 'config'); + const environment = values.env as Environment; + + try { + const manager = new ConfigManager({ + configPath, + environment, + }); + + const config = await manager.initialize(baseAppSchema); + + if (values.validate) { + const result = validateConfig(config, baseAppSchema); + + if (values.json) { + console.log(JSON.stringify(result, null, 2)); + } else { + console.log(formatValidationResult(result)); + } + + process.exit(result.valid ? 0 : 1); + } + + if (values.show) { + const redacted = redactSecrets(config, SECRET_PATHS); + + if (values.json) { + console.log(JSON.stringify(redacted, null, 2)); + } else { + console.log('Current Configuration:'); + console.log(JSON.stringify(redacted, null, 2)); + } + } + + if (values.check) { + console.log('Running configuration checks...\n'); + + // Schema validation + console.log('1. Schema Validation:'); + const schemaResult = validateConfig(config, baseAppSchema); + console.log(formatValidationResult(schemaResult)); + console.log(); + + // Environment variables + console.log('2. Required Environment Variables:'); + const envResult = checkRequiredEnvVars(REQUIRED_ENV_VARS); + console.log(formatValidationResult(envResult)); + console.log(); + + // Required paths + console.log('3. 
Required Configuration Paths:'); + const pathResult = validateCompleteness(config, REQUIRED_PATHS); + console.log(formatValidationResult(pathResult)); + console.log(); + + // Deprecations + console.log('4. Deprecation Warnings:'); + const warnings = checkDeprecations(config, DEPRECATIONS); + if (warnings && warnings.length > 0) { + for (const warning of warnings) { + console.log(` ⚠️ ${warning.path}: ${warning.message}`); + } + } else { + console.log(' ✅ No deprecated options found'); + } + console.log(); + + // Overall result + const allValid = schemaResult.valid && envResult.valid && pathResult.valid; + + if (allValid) { + console.log('✅ All configuration checks passed!'); + process.exit(0); + } else { + console.log('❌ Some configuration checks failed'); + process.exit(1); + } + } + + if (!values.validate && !values.show && !values.check) { + console.log('No action specified. Use --help for usage information.'); + process.exit(1); + } + } catch (error) { + if (values.json) { + console.error(JSON.stringify({ error: String(error) })); + } else { + console.error('Error:', error); + } + process.exit(1); + } +} + +// Run CLI +if (import.meta.main) { + main(); +} diff --git a/libs/config/src/config-manager.ts b/libs/core/config/src/config-manager.ts similarity index 92% rename from libs/config/src/config-manager.ts rename to libs/core/config/src/config-manager.ts index c2449a9..5da0b44 100644 --- a/libs/config/src/config-manager.ts +++ b/libs/core/config/src/config-manager.ts @@ -3,7 +3,8 @@ import { z } from 'zod'; import { EnvLoader } from './loaders/env.loader'; import { FileLoader } from './loaders/file.loader'; import { ConfigError, ConfigValidationError } from './errors'; -import { +import { getLogger } from '@stock-bot/logger'; +import type { ConfigLoader, ConfigManagerOptions, ConfigSchema, @@ -12,6 +13,7 @@ import { } from './types'; export class ConfigManager<T extends Record<string, unknown>> { + private readonly logger = getLogger('config-manager'); private config: T | null = null; private
loaders: ConfigLoader[]; private environment: Environment; @@ -73,6 +75,16 @@ export class ConfigManager> { this.config = this.schema.parse(mergedConfig) as T; } catch (error) { if (error instanceof z.ZodError) { + const errorDetails = error.errors.map(err => ({ + path: err.path.join('.'), + message: err.message, + code: err.code, + expected: (err as any).expected, + received: (err as any).received, + })); + + this.logger.error('Configuration validation failed:', errorDetails); + throw new ConfigValidationError('Configuration validation failed', error.errors); } throw error; diff --git a/libs/config/src/errors.ts b/libs/core/config/src/errors.ts similarity index 73% rename from libs/config/src/errors.ts rename to libs/core/config/src/errors.ts index a0d4bee..dd5fa31 100644 --- a/libs/config/src/errors.ts +++ b/libs/core/config/src/errors.ts @@ -6,15 +6,21 @@ export class ConfigError extends Error { } export class ConfigValidationError extends ConfigError { - constructor(message: string, public errors: unknown) { + constructor( + message: string, + public errors: unknown + ) { super(message); this.name = 'ConfigValidationError'; } } export class ConfigLoaderError extends ConfigError { - constructor(message: string, public loader: string) { + constructor( + message: string, + public loader: string + ) { super(`${loader}: ${message}`); this.name = 'ConfigLoaderError'; } -} \ No newline at end of file +} diff --git a/libs/config/src/index.ts b/libs/core/config/src/index.ts similarity index 71% rename from libs/config/src/index.ts rename to libs/core/config/src/index.ts index 1097816..ecf9268 100644 --- a/libs/config/src/index.ts +++ b/libs/core/config/src/index.ts @@ -1,11 +1,13 @@ -// Import necessary types for singleton +// Import necessary types import { EnvLoader } from './loaders/env.loader'; import { FileLoader } from './loaders/file.loader'; import { ConfigManager } from './config-manager'; -import { AppConfig, appConfigSchema } from './schemas'; +import type { 
BaseAppConfig } from './schemas'; +import { baseAppSchema } from './schemas'; +import { z } from 'zod'; -// Create singleton instance -let configInstance: ConfigManager | null = null; +// Legacy singleton instance for backward compatibility +let configInstance: ConfigManager | null = null; // Synchronously load critical env vars for early initialization function loadCriticalEnvVarsSync(): void { @@ -54,24 +56,6 @@ function loadCriticalEnvVarsSync(): void { // Load critical env vars immediately loadCriticalEnvVarsSync(); -/** - * Initialize the global configuration synchronously. - * - * This loads configuration from all sources in the correct hierarchy: - * 1. Schema defaults (lowest priority) - * 2. default.json - * 3. [environment].json (e.g., development.json) - * 4. .env file values - * 5. process.env values (highest priority) - */ -export function initializeConfig(configPath?: string): AppConfig { - if (!configInstance) { - configInstance = new ConfigManager({ - configPath, - }); - } - return configInstance.initialize(appConfigSchema); -} /** * Initialize configuration for a service in a monorepo. @@ -80,10 +64,10 @@ export function initializeConfig(configPath?: string): AppConfig { * 2. Service-specific config directory (./config) * 3. 
Environment variables */ -export function initializeServiceConfig(): AppConfig { +export function initializeServiceConfig(): BaseAppConfig { if (!configInstance) { const environment = process.env.NODE_ENV || 'development'; - configInstance = new ConfigManager({ + configInstance = new ConfigManager({ loaders: [ new FileLoader('../../config', environment), // Root config new FileLoader('./config', environment), // Service config @@ -91,13 +75,13 @@ export function initializeServiceConfig(): AppConfig { ], }); } - return configInstance.initialize(appConfigSchema); + return configInstance.initialize(baseAppSchema); } /** * Get the current configuration */ -export function getConfig(): AppConfig { +export function getConfig(): BaseAppConfig { if (!configInstance) { throw new Error('Configuration not initialized. Call initializeConfig() first.'); } @@ -107,7 +91,7 @@ export function getConfig(): AppConfig { /** * Get configuration manager instance */ -export function getConfigManager(): ConfigManager { +export function getConfigManager(): ConfigManager { if (!configInstance) { throw new Error('Configuration not initialized. 
Call initializeConfig() first.'); } @@ -137,17 +121,10 @@ export function getLogConfig() { return getConfig().log; } -// Deprecated alias for backward compatibility -export function getLoggingConfig() { - return getConfig().log; -} -export function getProviderConfig(provider: string) { - const providers = getConfig().providers; - if (!providers || !(provider in providers)) { - throw new Error(`Provider configuration not found: ${provider}`); - } - return (providers as Record<string, unknown>)[provider]; + +export function getQueueConfig() { + return getConfig().queue; } // Export environment helpers @@ -163,6 +140,39 @@ export function isTest(): boolean { return getConfig().environment === 'test'; } +/** + * Generic config builder for creating app-specific configurations + * @param schema - Zod schema for your app config + * @param options - Config manager options + * @returns Initialized config manager instance + */ +export function createAppConfig<T extends z.ZodType>( + schema: T, + options?: { + configPath?: string; + environment?: 'development' | 'test' | 'production'; + loaders?: any[]; + } +): ConfigManager<z.infer<T>> { + const manager = new ConfigManager<z.infer<T>>(options); + return manager; +} + +/** + * Create and initialize app config in one step + */ +export function initializeAppConfig<T extends z.ZodType>( + schema: T, + options?: { + configPath?: string; + environment?: 'development' | 'test' | 'production'; + loaders?: any[]; + } +): z.infer<T> { + const manager = createAppConfig(schema, options); + return manager.initialize(schema); +} + // Export all schemas export * from './schemas'; diff --git a/libs/config/src/loaders/env.loader.ts b/libs/core/config/src/loaders/env.loader.ts similarity index 98% rename from libs/config/src/loaders/env.loader.ts rename to libs/core/config/src/loaders/env.loader.ts index 3e703f8..ac6a0ee 100644 --- a/libs/config/src/loaders/env.loader.ts +++ b/libs/core/config/src/loaders/env.loader.ts @@ -1,6 +1,6 @@ import { readFileSync } from 'fs'; import { ConfigLoaderError } from '../errors'; -import {
ConfigLoader } from '../types'; +import type { ConfigLoader } from '../types'; export interface EnvLoaderOptions { convertCase?: boolean; @@ -28,7 +28,7 @@ export class EnvLoader implements ConfigLoader { load(): Record { try { // Load root .env file - try multiple possible locations - const possiblePaths = ['./.env', '../.env', '../../.env']; + const possiblePaths = ['./.env', '../.env', '../../.env', '../../../.env']; for (const path of possiblePaths) { this.loadEnvFile(path); } diff --git a/libs/config/src/loaders/file.loader.ts b/libs/core/config/src/loaders/file.loader.ts similarity index 97% rename from libs/config/src/loaders/file.loader.ts rename to libs/core/config/src/loaders/file.loader.ts index 0306054..ed3cdf6 100644 --- a/libs/config/src/loaders/file.loader.ts +++ b/libs/core/config/src/loaders/file.loader.ts @@ -1,7 +1,7 @@ import { existsSync, readFileSync } from 'fs'; import { join } from 'path'; import { ConfigLoaderError } from '../errors'; -import { ConfigLoader } from '../types'; +import type { ConfigLoader } from '../types'; export class FileLoader implements ConfigLoader { readonly priority = 50; // Medium priority diff --git a/libs/core/config/src/schemas/__tests__/unified-app.test.ts b/libs/core/config/src/schemas/__tests__/unified-app.test.ts new file mode 100644 index 0000000..aed96fa --- /dev/null +++ b/libs/core/config/src/schemas/__tests__/unified-app.test.ts @@ -0,0 +1,155 @@ +import { describe, expect, it } from 'bun:test'; +import { unifiedAppSchema, toUnifiedConfig, getStandardServiceName } from '../unified-app.schema'; + +describe('UnifiedAppConfig', () => { + describe('getStandardServiceName', () => { + it('should convert camelCase to kebab-case', () => { + expect(getStandardServiceName('dataIngestion')).toBe('data-ingestion'); + expect(getStandardServiceName('dataPipeline')).toBe('data-pipeline'); + expect(getStandardServiceName('webApi')).toBe('web-api'); + }); + + it('should handle already kebab-case names', () => { + 
expect(getStandardServiceName('data-ingestion')).toBe('data-ingestion'); + expect(getStandardServiceName('web-api')).toBe('web-api'); + }); + + it('should handle single word names', () => { + expect(getStandardServiceName('api')).toBe('api'); + expect(getStandardServiceName('worker')).toBe('worker'); + }); + }); + + describe('unifiedAppSchema transform', () => { + it('should set serviceName from name if not provided', () => { + const config = { + name: 'test-app', + version: '1.0.0', + service: { + name: 'webApi', + port: 3000, + }, + log: { level: 'info' }, + }; + + const result = unifiedAppSchema.parse(config); + expect(result.service.serviceName).toBe('web-api'); + }); + + it('should keep existing serviceName if provided', () => { + const config = { + name: 'test-app', + version: '1.0.0', + service: { + name: 'webApi', + serviceName: 'custom-name', + port: 3000, + }, + log: { level: 'info' }, + }; + + const result = unifiedAppSchema.parse(config); + expect(result.service.serviceName).toBe('custom-name'); + }); + + it('should sync nested and flat database configs', () => { + const config = { + name: 'test-app', + version: '1.0.0', + service: { name: 'test', port: 3000 }, + log: { level: 'info' }, + database: { + postgres: { + host: 'localhost', + port: 5432, + database: 'test', + user: 'user', + password: 'pass', + }, + mongodb: { + uri: 'mongodb://localhost:27017', + database: 'test', + }, + }, + }; + + const result = unifiedAppSchema.parse(config); + + // Should have both nested and flat structure + expect(result.postgres).toBeDefined(); + expect(result.mongodb).toBeDefined(); + expect(result.database?.postgres).toBeDefined(); + expect(result.database?.mongodb).toBeDefined(); + + // Values should match + expect(result.postgres?.host).toBe('localhost'); + expect(result.postgres?.port).toBe(5432); + expect(result.mongodb?.uri).toBe('mongodb://localhost:27017'); + }); + + it('should handle questdb ilpPort to influxPort mapping', () => { + const config = { + name: 
'test-app', + version: '1.0.0', + service: { name: 'test', port: 3000 }, + log: { level: 'info' }, + database: { + questdb: { + host: 'localhost', + ilpPort: 9009, + httpPort: 9000, + pgPort: 8812, + database: 'questdb', + }, + }, + }; + + const result = unifiedAppSchema.parse(config); + expect(result.questdb).toBeDefined(); + expect((result.questdb as any).influxPort).toBe(9009); + }); + }); + + describe('toUnifiedConfig', () => { + it('should convert StockBotAppConfig to UnifiedAppConfig', () => { + const stockBotConfig = { + name: 'stock-bot', + version: '1.0.0', + environment: 'development', + service: { + name: 'dataIngestion', + port: 3001, + host: '0.0.0.0', + }, + log: { + level: 'info', + format: 'json', + }, + database: { + postgres: { + enabled: true, + host: 'localhost', + port: 5432, + database: 'stock', + user: 'user', + password: 'pass', + }, + dragonfly: { + enabled: true, + host: 'localhost', + port: 6379, + db: 0, + }, + }, + }; + + const unified = toUnifiedConfig(stockBotConfig); + + expect(unified.service.serviceName).toBe('data-ingestion'); + expect(unified.redis).toBeDefined(); + expect(unified.redis?.host).toBe('localhost'); + expect(unified.postgres).toBeDefined(); + expect(unified.postgres?.host).toBe('localhost'); + }); + }); +}); \ No newline at end of file diff --git a/libs/core/config/src/schemas/base-app.schema.ts b/libs/core/config/src/schemas/base-app.schema.ts new file mode 100644 index 0000000..0167e35 --- /dev/null +++ b/libs/core/config/src/schemas/base-app.schema.ts @@ -0,0 +1,61 @@ +import { z } from 'zod'; +import { environmentSchema } from './base.schema'; +import { + postgresConfigSchema, + mongodbConfigSchema, + questdbConfigSchema, + dragonflyConfigSchema +} from './database.schema'; +import { + serviceConfigSchema, + loggingConfigSchema, + queueConfigSchema, + httpConfigSchema, + webshareConfigSchema, + browserConfigSchema, + proxyConfigSchema +} from './service.schema'; + +/** + * Generic base application schema that can 
be extended by specific apps + */ +export const baseAppSchema = z.object({ + // Basic app info + name: z.string(), + version: z.string(), + environment: environmentSchema.default('development'), + + // Service configuration + service: serviceConfigSchema, + + // Logging configuration + log: loggingConfigSchema, + + // Database configuration - apps can choose which databases they need + database: z.object({ + postgres: postgresConfigSchema.optional(), + mongodb: mongodbConfigSchema.optional(), + questdb: questdbConfigSchema.optional(), + dragonfly: dragonflyConfigSchema.optional(), + }).optional(), + + // Redis configuration (used for cache and queue) + redis: dragonflyConfigSchema.optional(), + + // Queue configuration + queue: queueConfigSchema.optional(), + + // HTTP client configuration + http: httpConfigSchema.optional(), + + // WebShare proxy configuration + webshare: webshareConfigSchema.optional(), + + // Browser configuration + browser: browserConfigSchema.optional(), + + // Proxy manager configuration + proxy: proxyConfigSchema.optional(), +}); + +export type BaseAppConfig = z.infer<typeof baseAppSchema>; \ No newline at end of file diff --git a/libs/config/src/schemas/base.schema.ts b/libs/core/config/src/schemas/base.schema.ts similarity index 98% rename from libs/config/src/schemas/base.schema.ts rename to libs/core/config/src/schemas/base.schema.ts index 2adb6bc..1695553 100644 --- a/libs/config/src/schemas/base.schema.ts +++ b/libs/core/config/src/schemas/base.schema.ts @@ -7,4 +7,4 @@ export const baseConfigSchema = z.object({ name: z.string().optional(), version: z.string().optional(), debug: z.boolean().default(false), -}); \ No newline at end of file +}); diff --git a/libs/config/src/schemas/database.schema.ts b/libs/core/config/src/schemas/database.schema.ts similarity index 70% rename from libs/config/src/schemas/database.schema.ts rename to libs/core/config/src/schemas/database.schema.ts index d0b1666..47fd1fd 100644 --- a/libs/config/src/schemas/database.schema.ts
+++ b/libs/core/config/src/schemas/database.schema.ts @@ -2,6 +2,7 @@ import { z } from 'zod'; // PostgreSQL configuration export const postgresConfigSchema = z.object({ + enabled: z.boolean().default(true), host: z.string().default('localhost'), port: z.number().default(5432), database: z.string(), @@ -15,32 +16,36 @@ export const postgresConfigSchema = z.object({ // QuestDB configuration export const questdbConfigSchema = z.object({ + enabled: z.boolean().default(true), host: z.string().default('localhost'), ilpPort: z.number().default(9009), httpPort: z.number().default(9000), pgPort: z.number().default(8812), database: z.string().default('questdb'), - user: z.string().default('admin'), - password: z.string().default('quest'), + user: z.string().optional(), // No default - QuestDB doesn't require auth by default + password: z.string().optional(), // No default - QuestDB doesn't require auth by default bufferSize: z.number().default(65536), flushInterval: z.number().default(1000), }); // MongoDB configuration export const mongodbConfigSchema = z.object({ - uri: z.string().url().optional(), - host: z.string().default('localhost'), - port: z.number().default(27017), - database: z.string(), + enabled: z.boolean().default(true), + uri: z.string().url(), // URI is required and contains all connection info + database: z.string(), // Database name for reference + poolSize: z.number().min(1).max(100).default(10), + // Optional fields for cases where URI parsing might fail + host: z.string().default('localhost').optional(), + port: z.number().default(27017).optional(), user: z.string().optional(), password: z.string().optional(), - authSource: z.string().default('admin'), + authSource: z.string().default('admin').optional(), replicaSet: z.string().optional(), - poolSize: z.number().min(1).max(100).default(10), }); // Dragonfly/Redis configuration export const dragonflyConfigSchema = z.object({ + enabled: z.boolean().default(true), host: z.string().default('localhost'), 
port: z.number().default(6379), password: z.string().optional(), @@ -57,4 +62,4 @@ export const databaseConfigSchema = z.object({ questdb: questdbConfigSchema, mongodb: mongodbConfigSchema, dragonfly: dragonflyConfigSchema, -}); \ No newline at end of file +}); diff --git a/libs/core/config/src/schemas/index.ts b/libs/core/config/src/schemas/index.ts new file mode 100644 index 0000000..23cf90d --- /dev/null +++ b/libs/core/config/src/schemas/index.ts @@ -0,0 +1,18 @@ +// Export all schema modules +export * from './base.schema'; +export * from './database.schema'; +export * from './service.schema'; +export * from './base-app.schema'; + +// Export provider schemas temporarily for backward compatibility +// These will be moved to stock-specific config +export * from './provider.schema'; + +// Re-export commonly used schemas for convenience +export { baseAppSchema } from './base-app.schema'; +export type { BaseAppConfig } from './base-app.schema'; + +// Export unified schema for standardized configuration +export { unifiedAppSchema, toUnifiedConfig, getStandardServiceName } from './unified-app.schema'; +export type { UnifiedAppConfig } from './unified-app.schema'; + diff --git a/libs/config/src/schemas/provider.schema.ts b/libs/core/config/src/schemas/provider.schema.ts similarity index 93% rename from libs/config/src/schemas/provider.schema.ts rename to libs/core/config/src/schemas/provider.schema.ts index b4bdbce..62ccf72 100644 --- a/libs/config/src/schemas/provider.schema.ts +++ b/libs/core/config/src/schemas/provider.schema.ts @@ -5,10 +5,12 @@ export const baseProviderConfigSchema = z.object({ name: z.string(), enabled: z.boolean().default(true), priority: z.number().default(0), - rateLimit: z.object({ - maxRequests: z.number().default(100), - windowMs: z.number().default(60000), - }).optional(), + rateLimit: z + .object({ + maxRequests: z.number().default(100), + windowMs: z.number().default(60000), + }) + .optional(), timeout: z.number().default(30000), 
retries: z.number().default(3), }); @@ -71,4 +73,4 @@ export const providerSchemas = { qm: qmProviderConfigSchema, yahoo: yahooProviderConfigSchema, webshare: webshareProviderConfigSchema, -} as const; \ No newline at end of file +} as const; diff --git a/libs/core/config/src/schemas/service.schema.ts b/libs/core/config/src/schemas/service.schema.ts new file mode 100644 index 0000000..5ff474e --- /dev/null +++ b/libs/core/config/src/schemas/service.schema.ts @@ -0,0 +1,107 @@ +import { z } from 'zod'; + +// Common service configuration +export const serviceConfigSchema = z.object({ + name: z.string(), + serviceName: z.string().optional(), // Standard service name (kebab-case) + port: z.number().min(1).max(65535), + host: z.string().default('0.0.0.0'), + healthCheckPath: z.string().default('/health'), + metricsPath: z.string().default('/metrics'), + shutdownTimeout: z.number().default(30000), + cors: z + .object({ + enabled: z.boolean().default(true), + origin: z.union([z.string(), z.array(z.string())]).default('*'), + credentials: z.boolean().default(true), + }) + .default({}), +}); + +// Logging configuration +export const loggingConfigSchema = z.object({ + level: z.enum(['trace', 'debug', 'info', 'warn', 'error', 'fatal']).default('info'), + format: z.enum(['json', 'pretty']).default('json'), + hideObject: z.boolean().default(false), + loki: z + .object({ + enabled: z.boolean().default(false), + host: z.string().default('localhost'), + port: z.number().default(3100), + labels: z.record(z.string()).default({}), + }) + .optional(), +}); + +// Queue configuration +export const queueConfigSchema = z.object({ + enabled: z.boolean().default(true), + redis: z.object({ + host: z.string().default('localhost'), + port: z.number().default(6379), + password: z.string().optional(), + db: z.number().default(1), + }), + workers: z.number().default(1), + concurrency: z.number().default(1), + enableScheduledJobs: z.boolean().default(true), + delayWorkerStart: 
z.boolean().default(false), + defaultJobOptions: z + .object({ + attempts: z.number().default(3), + backoff: z + .object({ + type: z.enum(['exponential', 'fixed']).default('exponential'), + delay: z.number().default(1000), + }) + .default({}), + removeOnComplete: z.number().default(100), + removeOnFail: z.number().default(50), + timeout: z.number().optional(), + }) + .default({}), +}); + +// HTTP client configuration +export const httpConfigSchema = z.object({ + timeout: z.number().default(30000), + retries: z.number().default(3), + retryDelay: z.number().default(1000), + userAgent: z.string().optional(), + proxy: z + .object({ + enabled: z.boolean().default(false), + url: z.string().url().optional(), + auth: z + .object({ + username: z.string(), + password: z.string(), + }) + .optional(), + }) + .optional(), +}); + +// WebShare proxy service configuration +export const webshareConfigSchema = z.object({ + apiKey: z.string().optional(), + apiUrl: z.string().default('https://proxy.webshare.io/api/v2/'), + enabled: z.boolean().default(true), +}); + +// Browser configuration +export const browserConfigSchema = z.object({ + headless: z.boolean().default(true), + timeout: z.number().default(30000), +}); + +// Proxy manager configuration +export const proxyConfigSchema = z.object({ + enabled: z.boolean().default(false), + cachePrefix: z.string().default('proxy:'), + ttl: z.number().default(3600), + webshare: z.object({ + apiKey: z.string(), + apiUrl: z.string().default('https://proxy.webshare.io/api/v2/'), + }).optional(), +}); diff --git a/libs/core/config/src/schemas/unified-app.schema.ts b/libs/core/config/src/schemas/unified-app.schema.ts new file mode 100644 index 0000000..9fcf40c --- /dev/null +++ b/libs/core/config/src/schemas/unified-app.schema.ts @@ -0,0 +1,76 @@ +import { z } from 'zod'; +import { baseAppSchema } from './base-app.schema'; +import { + postgresConfigSchema, + mongodbConfigSchema, + questdbConfigSchema, + dragonflyConfigSchema +} from 
'./database.schema'; + +/** + * Unified application configuration schema that provides both nested and flat access + * to database configurations for backward compatibility while maintaining a clean structure + */ +export const unifiedAppSchema = baseAppSchema.extend({ + // Flat database configs for DI system (these take precedence) + redis: dragonflyConfigSchema.optional(), + mongodb: mongodbConfigSchema.optional(), + postgres: postgresConfigSchema.optional(), + questdb: questdbConfigSchema.optional(), +}).transform((data) => { + // Ensure service.serviceName is set from service.name if not provided + if (data.service && !data.service.serviceName) { + data.service.serviceName = data.service.name.replace(/([A-Z])/g, '-$1').toLowerCase().replace(/^-/, ''); + } + + // If flat configs exist, ensure they're also in the nested database object + if (data.redis || data.mongodb || data.postgres || data.questdb) { + data.database = { + ...data.database, + dragonfly: data.redis || data.database?.dragonfly, + mongodb: data.mongodb || data.database?.mongodb, + postgres: data.postgres || data.database?.postgres, + questdb: data.questdb || data.database?.questdb, + }; + } + + // If nested configs exist but flat ones don't, copy them to flat structure + if (data.database) { + if (data.database.dragonfly && !data.redis) { + data.redis = data.database.dragonfly; + } + if (data.database.mongodb && !data.mongodb) { + data.mongodb = data.database.mongodb; + } + if (data.database.postgres && !data.postgres) { + data.postgres = data.database.postgres; + } + if (data.database.questdb && !data.questdb) { + // Handle the ilpPort -> influxPort mapping for DI system + const questdbConfig = { ...data.database.questdb }; + if ('ilpPort' in questdbConfig && !('influxPort' in questdbConfig)) { + (questdbConfig as any).influxPort = questdbConfig.ilpPort; + } + data.questdb = questdbConfig; + } + } + + return data; +}); + +export type UnifiedAppConfig = z.infer<typeof unifiedAppSchema>; + +/** + * Helper to convert
StockBotAppConfig to UnifiedAppConfig + */ +export function toUnifiedConfig(config: any): UnifiedAppConfig { + return unifiedAppSchema.parse(config); +} + +/** + * Helper to get standardized service name + */ +export function getStandardServiceName(serviceName: string): string { + // Convert camelCase to kebab-case + return serviceName.replace(/([A-Z])/g, '-$1').toLowerCase().replace(/^-/, ''); +} \ No newline at end of file diff --git a/libs/config/src/types/index.ts b/libs/core/config/src/types/index.ts similarity index 100% rename from libs/config/src/types/index.ts rename to libs/core/config/src/types/index.ts diff --git a/libs/config/src/utils/secrets.ts b/libs/core/config/src/utils/secrets.ts similarity index 89% rename from libs/config/src/utils/secrets.ts rename to libs/core/config/src/utils/secrets.ts index b4c57a2..d12fc86 100644 --- a/libs/config/src/utils/secrets.ts +++ b/libs/core/config/src/utils/secrets.ts @@ -1,183 +1,178 @@ -import { z } from 'zod'; - -/** - * Secret value wrapper to prevent accidental logging - */ -export class SecretValue<T = string> { - private readonly value: T; - private readonly masked: string; - - constructor(value: T, mask: string = '***') { - this.value = value; - this.masked = mask; - } - - /** - * Get the actual secret value - * @param reason - Required reason for accessing the secret - */ - reveal(reason: string): T { - if (!reason) { - throw new Error('Reason required for revealing secret value'); - } - return this.value; - } - - /** - * Get masked representation - */ - toString(): string { - return this.masked; - } - - /** - * Prevent JSON serialization of actual value - */ - toJSON(): string { - return this.masked; - } - - /** - * Check if value matches without revealing it - */ - equals(other: T): boolean { - return this.value === other; - } - - /** - * Transform the secret value - */ - map<R>(fn: (value: T) => R, reason: string): SecretValue<R> { - return new SecretValue(fn(this.reveal(reason))); - } -} - -/** - * Zod schema for
secret values - */ -export const secretSchema = <T extends z.ZodTypeAny>(_schema: T) => { - return z.custom<SecretValue<z.infer<T>>>( - (val) => val instanceof SecretValue, - { - message: 'Expected SecretValue instance', - } - ); -}; - -/** - * Transform string to SecretValue in Zod schema - */ -export const secretStringSchema = z - .string() - .transform((val) => new SecretValue(val)); - -/** - * Create a secret value - */ -export function secret<T>(value: T, mask?: string): SecretValue<T> { - return new SecretValue(value, mask); } - -/** - * Check if a value is a secret - */ -export function isSecret(value: unknown): value is SecretValue { - return value instanceof SecretValue; -} - -/** - * Redact secrets from an object - */ -export function redactSecrets<T extends Record<string, unknown>>( - obj: T, - secretPaths: string[] = [] -): T { - const result = { ...obj }; - - // Redact known secret paths - for (const path of secretPaths) { - const keys = path.split('.'); - let current: any = result; - - for (let i = 0; i < keys.length - 1; i++) { - const key = keys[i]; - if (key && current[key] && typeof current[key] === 'object') { - current = current[key]; - } else { - break; - } - } - - const lastKey = keys[keys.length - 1]; - if (current && lastKey && lastKey in current) { - current[lastKey] = '***REDACTED***'; - } - } - - // Recursively redact SecretValue instances - function redactSecretValues(obj: any): any { - if (obj === null || obj === undefined) { - return obj; - } - - if (isSecret(obj)) { - return obj.toString(); - } - - if (Array.isArray(obj)) { - return obj.map(redactSecretValues); - } - - if (typeof obj === 'object') { - const result: any = {}; - for (const [key, value] of Object.entries(obj)) { - result[key] = redactSecretValues(value); - } - return result; - } - - return obj; - } - - return redactSecretValues(result); -} - -/** - * Environment variable names that should be treated as secrets - */ -export const COMMON_SECRET_PATTERNS = [ - /password/i, - /secret/i, - /key/i, - /token/i, - /credential/i, - /private/i, - /auth/i, -
/api[-_]?key/i, -]; - -/** - * Check if an environment variable name indicates a secret - */ -export function isSecretEnvVar(name: string): boolean { - return COMMON_SECRET_PATTERNS.some(pattern => pattern.test(name)); } - -/** - * Wrap environment variables that look like secrets - */ -export function wrapSecretEnvVars( - env: Record<string, string | undefined> -): Record<string, string | SecretValue | undefined> { - const result: Record<string, string | SecretValue | undefined> = {}; - - for (const [key, value] of Object.entries(env)) { - if (value !== undefined && isSecretEnvVar(key)) { - result[key] = new SecretValue(value, `***${key}***`); - } else { - result[key] = value; - } - } - - return result; -} \ No newline at end of file +import { z } from 'zod'; + +/** + * Secret value wrapper to prevent accidental logging + */ +export class SecretValue<T = string> { + private readonly value: T; + private readonly masked: string; + + constructor(value: T, mask: string = '***') { + this.value = value; + this.masked = mask; + } + + /** + * Get the actual secret value + * @param reason - Required reason for accessing the secret + */ + reveal(reason: string): T { + if (!reason) { + throw new Error('Reason required for revealing secret value'); + } + return this.value; + } + + /** + * Get masked representation + */ + toString(): string { + return this.masked; + } + + /** + * Prevent JSON serialization of actual value + */ + toJSON(): string { + return this.masked; + } + + /** + * Check if value matches without revealing it + */ + equals(other: T): boolean { + return this.value === other; + } + + /** + * Transform the secret value + */ + map<R>(fn: (value: T) => R, reason: string): SecretValue<R> { + return new SecretValue(fn(this.reveal(reason))); + } +} + +/** + * Zod schema for secret values + */ +export const secretSchema = <T extends z.ZodTypeAny>(_schema: T) => { + return z.custom<SecretValue<z.infer<T>>>(val => val instanceof SecretValue, { + message: 'Expected SecretValue instance', + }); +}; + +/** + * Transform string to SecretValue in Zod schema + */ +export const secretStringSchema = z.string().transform(val => new SecretValue(val));
+ +/** + * Create a secret value + */ +export function secret<T>(value: T, mask?: string): SecretValue<T> { + return new SecretValue(value, mask); +} + +/** + * Check if a value is a secret + */ +export function isSecret(value: unknown): value is SecretValue { + return value instanceof SecretValue; +} + +/** + * Redact secrets from an object + */ +export function redactSecrets<T extends Record<string, unknown>>( + obj: T, + secretPaths: string[] = [] +): T { + const result = { ...obj }; + + // Redact known secret paths + for (const path of secretPaths) { + const keys = path.split('.'); + let current: any = result; + + for (let i = 0; i < keys.length - 1; i++) { + const key = keys[i]; + if (key && current[key] && typeof current[key] === 'object') { + current = current[key]; + } else { + break; + } + } + + const lastKey = keys[keys.length - 1]; + if (current && lastKey && lastKey in current) { + current[lastKey] = '***REDACTED***'; + } + } + + // Recursively redact SecretValue instances + function redactSecretValues(obj: any): any { + if (obj === null || obj === undefined) { + return obj; + } + + if (isSecret(obj)) { + return obj.toString(); + } + + if (Array.isArray(obj)) { + return obj.map(redactSecretValues); + } + + if (typeof obj === 'object') { + const result: any = {}; + for (const [key, value] of Object.entries(obj)) { + result[key] = redactSecretValues(value); + } + return result; + } + + return obj; + } + + return redactSecretValues(result); +} + +/** + * Environment variable names that should be treated as secrets + */ +export const COMMON_SECRET_PATTERNS = [ + /password/i, + /secret/i, + /key/i, + /token/i, + /credential/i, + /private/i, + /auth/i, + /api[-_]?key/i, +]; + +/** + * Check if an environment variable name indicates a secret + */ +export function isSecretEnvVar(name: string): boolean { + return COMMON_SECRET_PATTERNS.some(pattern => pattern.test(name)); +} + +/** + * Wrap environment variables that look like secrets + */ +export function wrapSecretEnvVars( + env: Record<string, string | undefined> +): Record<string, string | SecretValue | undefined> {
+ const result: Record<string, string | SecretValue | undefined> = {}; + + for (const [key, value] of Object.entries(env)) { + if (value !== undefined && isSecretEnvVar(key)) { + result[key] = new SecretValue(value, `***${key}***`); + } else { + result[key] = value; + } + } + + return result; +} diff --git a/libs/config/src/utils/validation.ts b/libs/core/config/src/utils/validation.ts similarity index 88% rename from libs/config/src/utils/validation.ts rename to libs/core/config/src/utils/validation.ts index 5b61b82..ece59c7 100644 --- a/libs/config/src/utils/validation.ts +++ b/libs/core/config/src/utils/validation.ts @@ -1,195 +1,188 @@ -import { z } from 'zod'; - -export interface ValidationResult { - valid: boolean; - errors?: Array<{ - path: string; - message: string; - expected?: string; - received?: string; - }>; - warnings?: Array<{ - path: string; - message: string; - }>; -} - -/** - * Validate configuration against a schema - */ -export function validateConfig( - config: unknown, - schema: z.ZodSchema -): ValidationResult { - try { - schema.parse(config); - return { valid: true }; - } catch (error) { - if (error instanceof z.ZodError) { - const errors = error.errors.map(err => ({ - path: err.path.join('.'), - message: err.message, - expected: 'expected' in err ? String(err.expected) : undefined, - received: 'received' in err ?
String(err.received) : undefined, - })); - - return { valid: false, errors }; - } - - throw error; - } -} - -/** - * Check for deprecated configuration options - */ -export function checkDeprecations( - config: Record<string, unknown>, - deprecations: Record<string, string> -): ValidationResult['warnings'] { - const warnings: ValidationResult['warnings'] = []; - - function checkObject(obj: Record<string, unknown>, path: string[] = []): void { - for (const [key, value] of Object.entries(obj)) { - const currentPath = [...path, key]; - const pathStr = currentPath.join('.'); - - if (pathStr in deprecations) { - const deprecationMessage = deprecations[pathStr]; - if (deprecationMessage) { - warnings?.push({ - path: pathStr, - message: deprecationMessage, - }); - } - } - - if (value && typeof value === 'object' && !Array.isArray(value)) { - checkObject(value as Record<string, unknown>, currentPath); - } - } - } - - checkObject(config); - return warnings; -} - -/** - * Check for required environment variables - */ -export function checkRequiredEnvVars( - required: string[] -): ValidationResult { - const errors: ValidationResult['errors'] = []; - - for (const envVar of required) { - if (!process.env[envVar]) { - errors.push({ - path: `env.${envVar}`, - message: `Required environment variable ${envVar} is not set`, - }); - } - } - - return { - valid: errors.length === 0, - errors: errors.length > 0 ?
errors : undefined, - }; -} - -/** - * Validate configuration completeness - */ -export function validateCompleteness( - config: Record<string, unknown>, - required: string[] -): ValidationResult { - const errors: ValidationResult['errors'] = []; - - for (const path of required) { - const keys = path.split('.'); - let current: any = config; - let found = true; - - for (const key of keys) { - if (current && typeof current === 'object' && key in current) { - current = current[key]; - } else { - found = false; - break; - } - } - - if (!found || current === undefined || current === null) { - errors.push({ - path, - message: `Required configuration value is missing`, - }); - } - } - - return { - valid: errors.length === 0, - errors: errors.length > 0 ? errors : undefined, - }; -} - -/** - * Format validation result for display - */ -export function formatValidationResult(result: ValidationResult): string { - const lines: string[] = []; - - if (result.valid) { - lines.push('✅ Configuration is valid'); - } else { - lines.push('❌ Configuration validation failed'); - } - - if (result.errors && result.errors.length > 0) { - lines.push('\nErrors:'); - for (const error of result.errors) { - lines.push(` - ${error.path}: ${error.message}`); - if (error.expected && error.received) { - lines.push(` Expected: ${error.expected}, Received: ${error.received}`); - } - } - } - - if (result.warnings && result.warnings.length > 0) { - lines.push('\nWarnings:'); - for (const warning of result.warnings) { - lines.push(` - ${warning.path}: ${warning.message}`); - } - } - - return lines.join('\n'); -} - -/** - * Create a strict schema that doesn't allow extra properties - */ -export function createStrictSchema<T extends z.ZodRawShape>( - shape: T -): z.ZodObject<T> { - return z.object(shape).strict(); -} - -/** - * Merge multiple schemas - */ -export function mergeSchemas<T extends z.ZodTypeAny[]>( - ...schemas: T -): z.ZodIntersection<z.ZodTypeAny, z.ZodTypeAny> { - if (schemas.length < 2) { - throw new Error('At least two schemas required for merge'); - } - - let result =
schemas[0]!.and(schemas[1]!); - - for (let i = 2; i < schemas.length; i++) { - result = result.and(schemas[i]!) as any; - } - - return result as any; -} \ No newline at end of file +import { z } from 'zod'; + +export interface ValidationResult { + valid: boolean; + errors?: Array<{ + path: string; + message: string; + expected?: string; + received?: string; + }>; + warnings?: Array<{ + path: string; + message: string; + }>; +} + +/** + * Validate configuration against a schema + */ +export function validateConfig(config: unknown, schema: z.ZodSchema): ValidationResult { + try { + schema.parse(config); + return { valid: true }; + } catch (error) { + if (error instanceof z.ZodError) { + const errors = error.errors.map(err => ({ + path: err.path.join('.'), + message: err.message, + expected: 'expected' in err ? String(err.expected) : undefined, + received: 'received' in err ? String(err.received) : undefined, + })); + + return { valid: false, errors }; + } + + throw error; + } +} + +/** + * Check for deprecated configuration options + */ +export function checkDeprecations( + config: Record<string, unknown>, + deprecations: Record<string, string> +): ValidationResult['warnings'] { + const warnings: ValidationResult['warnings'] = []; + + function checkObject(obj: Record<string, unknown>, path: string[] = []): void { + for (const [key, value] of Object.entries(obj)) { + const currentPath = [...path, key]; + const pathStr = currentPath.join('.'); + + if (pathStr in deprecations) { + const deprecationMessage = deprecations[pathStr]; + if (deprecationMessage) { + warnings?.push({ + path: pathStr, + message: deprecationMessage, + }); + } + } + + if (value && typeof value === 'object' && !Array.isArray(value)) { + checkObject(value as Record<string, unknown>, currentPath); + } + } + } + + checkObject(config); + return warnings; +} + +/** + * Check for required environment variables + */ +export function checkRequiredEnvVars(required: string[]): ValidationResult { + const errors: ValidationResult['errors'] = []; + + for (const envVar of
required) { + if (!process.env[envVar]) { + errors.push({ + path: `env.${envVar}`, + message: `Required environment variable ${envVar} is not set`, + }); + } + } + + return { + valid: errors.length === 0, + errors: errors.length > 0 ? errors : undefined, + }; +} + +/** + * Validate configuration completeness + */ +export function validateCompleteness( + config: Record<string, unknown>, + required: string[] +): ValidationResult { + const errors: ValidationResult['errors'] = []; + + for (const path of required) { + const keys = path.split('.'); + let current: any = config; + let found = true; + + for (const key of keys) { + if (current && typeof current === 'object' && key in current) { + current = current[key]; + } else { + found = false; + break; + } + } + + if (!found || current === undefined || current === null) { + errors.push({ + path, + message: `Required configuration value is missing`, + }); + } + } + + return { + valid: errors.length === 0, + errors: errors.length > 0 ? errors : undefined, + }; +} + +/** + * Format validation result for display + */ +export function formatValidationResult(result: ValidationResult): string { + const lines: string[] = []; + + if (result.valid) { + lines.push('✅ Configuration is valid'); + } else { + lines.push('❌ Configuration validation failed'); + } + + if (result.errors && result.errors.length > 0) { + lines.push('\nErrors:'); + for (const error of result.errors) { + lines.push(` - ${error.path}: ${error.message}`); + if (error.expected && error.received) { + lines.push(` Expected: ${error.expected}, Received: ${error.received}`); + } + } + } + + if (result.warnings && result.warnings.length > 0) { + lines.push('\nWarnings:'); + for (const warning of result.warnings) { + lines.push(` - ${warning.path}: ${warning.message}`); + } + } + + return lines.join('\n'); +} + +/** + * Create a strict schema that doesn't allow extra properties + */ +export function createStrictSchema<T extends z.ZodRawShape>(shape: T): z.ZodObject<T> { + return z.object(shape).strict(); +} +
+/** + * Merge multiple schemas + */ +export function mergeSchemas<T extends z.ZodTypeAny[]>( + ...schemas: T +): z.ZodIntersection<z.ZodTypeAny, z.ZodTypeAny> { + if (schemas.length < 2) { + throw new Error('At least two schemas required for merge'); + } + + let result = schemas[0]!.and(schemas[1]!); + + for (let i = 2; i < schemas.length; i++) { + result = result.and(schemas[i]!) as any; + } + + return result as any; +} diff --git a/libs/config/test/config-manager.test.ts b/libs/core/config/test/config-manager.test.ts similarity index 86% rename from libs/config/test/config-manager.test.ts rename to libs/core/config/test/config-manager.test.ts index bce9edb..62f04e7 100644 --- a/libs/config/test/config-manager.test.ts +++ b/libs/core/config/test/config-manager.test.ts @@ -1,215 +1,221 @@ -import { describe, test, expect, beforeEach } from 'bun:test'; -import { z } from 'zod'; -import { ConfigManager } from '../src/config-manager'; -import { ConfigLoader } from '../src/types'; -import { ConfigValidationError } from '../src/errors'; - -// Mock loader for testing -class MockLoader implements ConfigLoader { - priority = 0; - - constructor( - private data: Record<string, unknown>, - public override priority: number = 0 - ) {} - - async load(): Promise<Record<string, unknown>> { - return this.data; - } -} - -// Test schema -const testSchema = z.object({ - app: z.object({ - name: z.string(), - version: z.string(), - port: z.number().positive(), - }), - database: z.object({ - host: z.string(), - port: z.number(), - }), - environment: z.enum(['development', 'test', 'production']), -}); - -type TestConfig = z.infer<typeof testSchema>; - -describe('ConfigManager', () => { - let manager: ConfigManager<TestConfig>; - - beforeEach(() => { - manager = new ConfigManager({ - loaders: [ - new MockLoader({ - app: { - name: 'test-app', - version: '1.0.0', - port: 3000, - }, - database: { - host: 'localhost', - port: 5432, - }, - }), - ], - environment: 'test', - }); - }); - - test('should initialize configuration', async () => { - const config = await manager.initialize(testSchema); - -
expect(config.app.name).toBe('test-app'); - expect(config.app.version).toBe('1.0.0'); - expect(config.environment).toBe('test'); - }); - - test('should merge multiple loaders by priority', async () => { - manager = new ConfigManager({ - loaders: [ - new MockLoader({ app: { name: 'base', port: 3000 } }, 0), - new MockLoader({ app: { name: 'override', version: '2.0.0' } }, 10), - new MockLoader({ database: { host: 'prod-db' } }, 5), - ], - environment: 'test', - }); - - const config = await manager.initialize(); - - expect(config.app.name).toBe('override'); - expect(config.app.version).toBe('2.0.0'); - expect(config.app.port).toBe(3000); - expect(config.database.host).toBe('prod-db'); - }); - - test('should validate configuration with schema', async () => { - manager = new ConfigManager({ - loaders: [ - new MockLoader({ - app: { - name: 'test-app', - version: '1.0.0', - port: 'invalid', // Should be number - }, - }), - ], - }); - - await expect(manager.initialize(testSchema)).rejects.toThrow(ConfigValidationError); - }); - - test('should get configuration value by path', async () => { - await manager.initialize(testSchema); - - expect(manager.getValue('app.name')).toBe('test-app'); - expect(manager.getValue('database.port')).toBe(5432); - }); - - test('should check if configuration path exists', async () => { - await manager.initialize(testSchema); - - expect(manager.has('app.name')).toBe(true); - expect(manager.has('app.nonexistent')).toBe(false); - }); - - test('should update configuration at runtime', async () => { - await manager.initialize(testSchema); - - manager.set({ - app: { - name: 'updated-app', - }, - }); - - const config = manager.get(); - expect(config.app.name).toBe('updated-app'); - expect(config.app.version).toBe('1.0.0'); // Should preserve other values - }); - - test('should validate updates against schema', async () => { - await manager.initialize(testSchema); - - expect(() => { - manager.set({ - app: { - port: 'invalid' as any, - }, - }); - 
}).toThrow(ConfigValidationError); - }); - - test('should reset configuration', async () => { - await manager.initialize(testSchema); - manager.reset(); - - expect(() => manager.get()).toThrow('Configuration not initialized'); - }); - - test('should create typed getter', async () => { - await manager.initialize(testSchema); - - const appSchema = z.object({ - app: z.object({ - name: z.string(), - version: z.string(), - }), - }); - - const getAppConfig = manager.createTypedGetter(appSchema); - const appConfig = getAppConfig(); - - expect(appConfig.app.name).toBe('test-app'); - }); - - test('should detect environment correctly', () => { - const originalEnv = process.env.NODE_ENV; - - process.env.NODE_ENV = 'production'; - const prodManager = new ConfigManager({ loaders: [] }); - expect(prodManager.getEnvironment()).toBe('production'); - - process.env.NODE_ENV = 'test'; - const testManager = new ConfigManager({ loaders: [] }); - expect(testManager.getEnvironment()).toBe('test'); - - process.env.NODE_ENV = originalEnv; - }); - - test('should handle deep merge correctly', async () => { - manager = new ConfigManager({ - loaders: [ - new MockLoader({ - app: { - settings: { - feature1: true, - feature2: false, - nested: { - value: 'base', - }, - }, - }, - }, 0), - new MockLoader({ - app: { - settings: { - feature2: true, - feature3: true, - nested: { - value: 'override', - extra: 'new', - }, - }, - }, - }, 10), - ], - }); - - const config = await manager.initialize(); - - expect(config.app.settings.feature1).toBe(true); - expect(config.app.settings.feature2).toBe(true); - expect(config.app.settings.feature3).toBe(true); - expect(config.app.settings.nested.value).toBe('override'); - expect(config.app.settings.nested.extra).toBe('new'); - }); -}); \ No newline at end of file +import { beforeEach, describe, expect, test } from 'bun:test'; +import { z } from 'zod'; +import { ConfigManager } from '../src/config-manager'; +import { ConfigValidationError } from '../src/errors'; 
+import { ConfigLoader } from '../src/types'; + +// Mock loader for testing +class MockLoader implements ConfigLoader { + constructor( + private data: Record<string, unknown>, + public priority: number = 0 + ) {} + + async load(): Promise<Record<string, unknown>> { + return this.data; + } +} + +// Test schema +const testSchema = z.object({ + app: z.object({ + name: z.string(), + version: z.string(), + port: z.number().positive(), + }), + database: z.object({ + host: z.string(), + port: z.number(), + }), + environment: z.enum(['development', 'test', 'production']), +}); + +type TestConfig = z.infer<typeof testSchema>; + +describe('ConfigManager', () => { + let manager: ConfigManager<TestConfig>; + + beforeEach(() => { + manager = new ConfigManager({ + loaders: [ + new MockLoader({ + app: { + name: 'test-app', + version: '1.0.0', + port: 3000, + }, + database: { + host: 'localhost', + port: 5432, + }, + }), + ], + environment: 'test', + }); + }); + + test('should initialize configuration', async () => { + const config = await manager.initialize(testSchema); + + expect(config.app.name).toBe('test-app'); + expect(config.app.version).toBe('1.0.0'); + expect(config.environment).toBe('test'); + }); + + test('should merge multiple loaders by priority', async () => { + manager = new ConfigManager({ + loaders: [ + new MockLoader({ app: { name: 'base', port: 3000 } }, 0), + new MockLoader({ app: { name: 'override', version: '2.0.0' } }, 10), + new MockLoader({ database: { host: 'prod-db' } }, 5), + ], + environment: 'test', + }); + + const config = await manager.initialize(); + + expect(config.app.name).toBe('override'); + expect(config.app.version).toBe('2.0.0'); + expect(config.app.port).toBe(3000); + expect(config.database.host).toBe('prod-db'); + }); + + test('should validate configuration with schema', async () => { + manager = new ConfigManager({ + loaders: [ + new MockLoader({ + app: { + name: 'test-app', + version: '1.0.0', + port: 'invalid', // Should be number + }, + }), + ], + }); + + await 
expect(manager.initialize(testSchema)).rejects.toThrow(ConfigValidationError); + }); + + test('should get configuration value by path', async () => { + await manager.initialize(testSchema); + + expect(manager.getValue('app.name')).toBe('test-app'); + expect(manager.getValue('database.port')).toBe(5432); + }); + + test('should check if configuration path exists', async () => { + await manager.initialize(testSchema); + + expect(manager.has('app.name')).toBe(true); + expect(manager.has('app.nonexistent')).toBe(false); + }); + + test('should update configuration at runtime', async () => { + await manager.initialize(testSchema); + + manager.set({ + app: { + name: 'updated-app', + }, + }); + + const config = manager.get(); + expect(config.app.name).toBe('updated-app'); + expect(config.app.version).toBe('1.0.0'); // Should preserve other values + }); + + test('should validate updates against schema', async () => { + await manager.initialize(testSchema); + + expect(() => { + manager.set({ + app: { + port: 'invalid' as any, + }, + }); + }).toThrow(ConfigValidationError); + }); + + test('should reset configuration', async () => { + await manager.initialize(testSchema); + manager.reset(); + + expect(() => manager.get()).toThrow('Configuration not initialized'); + }); + + test('should create typed getter', async () => { + await manager.initialize(testSchema); + + const appSchema = z.object({ + app: z.object({ + name: z.string(), + version: z.string(), + }), + }); + + const getAppConfig = manager.createTypedGetter(appSchema); + const appConfig = getAppConfig(); + + expect(appConfig.app.name).toBe('test-app'); + }); + + test('should detect environment correctly', () => { + const originalEnv = process.env.NODE_ENV; + + process.env.NODE_ENV = 'production'; + const prodManager = new ConfigManager({ loaders: [] }); + expect(prodManager.getEnvironment()).toBe('production'); + + process.env.NODE_ENV = 'test'; + const testManager = new ConfigManager({ loaders: [] }); + 
expect(testManager.getEnvironment()).toBe('test'); + + process.env.NODE_ENV = originalEnv; + }); + + test('should handle deep merge correctly', async () => { + manager = new ConfigManager({ + loaders: [ + new MockLoader( + { + app: { + settings: { + feature1: true, + feature2: false, + nested: { + value: 'base', + }, + }, + }, + }, + 0 + ), + new MockLoader( + { + app: { + settings: { + feature2: true, + feature3: true, + nested: { + value: 'override', + extra: 'new', + }, + }, + }, + }, + 10 + ), + ], + }); + + const config = await manager.initialize(); + + expect(config.app.settings.feature1).toBe(true); + expect(config.app.settings.feature2).toBe(true); + expect(config.app.settings.feature3).toBe(true); + expect(config.app.settings.nested.value).toBe('override'); + expect(config.app.settings.nested.extra).toBe('new'); + }); +}); diff --git a/libs/config/test/dynamic-location.test.ts b/libs/core/config/test/dynamic-location.test.ts similarity index 91% rename from libs/config/test/dynamic-location.test.ts rename to libs/core/config/test/dynamic-location.test.ts index b632d3d..9bbfa02 100644 --- a/libs/config/test/dynamic-location.test.ts +++ b/libs/core/config/test/dynamic-location.test.ts @@ -1,9 +1,7 @@ -import { describe, test, expect, beforeEach, afterEach } from 'bun:test'; +import { existsSync, mkdirSync, rmSync, writeFileSync } from 'fs'; import { join } from 'path'; -import { mkdirSync, writeFileSync, rmSync, existsSync } from 'fs'; +import { afterEach, beforeEach, describe, expect, test } from 'bun:test'; import { ConfigManager } from '../src/config-manager'; -import { FileLoader } from '../src/loaders/file.loader'; -import { EnvLoader } from '../src/loaders/env.loader'; import { initializeConfig, initializeServiceConfig, resetConfig } from '../src/index'; import { appConfigSchema } from '../src/schemas'; @@ -23,33 +21,33 @@ describe('Dynamic Location Config Loading', () => { if (existsSync(TEST_ROOT)) { rmSync(TEST_ROOT, { recursive: true, force: true 
}); } - + // Reset config singleton resetConfig(); - + // Create test directory structure setupTestScenarios(); }); - + afterEach(() => { // Clean up test directories if (existsSync(TEST_ROOT)) { rmSync(TEST_ROOT, { recursive: true, force: true }); } - + // Reset config singleton resetConfig(); }); test('should load config from monorepo root', async () => { const originalCwd = process.cwd(); - + try { // Change to monorepo root process.chdir(SCENARIOS.monorepoRoot); - + const config = await initializeConfig(); - + expect(config.name).toBe('monorepo-root'); expect(config.version).toBe('1.0.0'); expect(config.database.postgres.host).toBe('localhost'); @@ -60,13 +58,13 @@ describe('Dynamic Location Config Loading', () => { test('should load config from app service directory', async () => { const originalCwd = process.cwd(); - + try { // Change to app service directory process.chdir(SCENARIOS.appService); - + const config = await initializeServiceConfig(); - + // Should inherit from root + override with service config expect(config.name).toBe('test-service'); // Overridden by service expect(config.version).toBe('1.0.0'); // From root @@ -79,13 +77,13 @@ describe('Dynamic Location Config Loading', () => { test('should load config from lib directory', async () => { const originalCwd = process.cwd(); - + try { // Change to lib directory process.chdir(SCENARIOS.libService); - + const config = await initializeServiceConfig(); - + // Should inherit from root + override with lib config expect(config.name).toBe('test-lib'); // Overridden by lib expect(config.version).toBe('2.0.0'); // Overridden by lib @@ -98,13 +96,13 @@ describe('Dynamic Location Config Loading', () => { test('should load config from deeply nested service', async () => { const originalCwd = process.cwd(); - + try { // Change to nested service directory process.chdir(SCENARIOS.nestedService); - + const config = await initializeServiceConfig(); - + // Should inherit from root + override with nested service 
config expect(config.name).toBe('deep-service'); // Overridden by nested service // NOTE: Version inheritance doesn't work for deeply nested services (3+ levels) @@ -119,13 +117,13 @@ describe('Dynamic Location Config Loading', () => { test('should load config from standalone project', async () => { const originalCwd = process.cwd(); - + try { // Change to standalone directory process.chdir(SCENARIOS.standalone); - + const config = await initializeConfig(); - + expect(config.name).toBe('standalone-app'); expect(config.version).toBe('0.1.0'); expect(config.database.postgres.host).toBe('standalone-db'); @@ -136,16 +134,16 @@ describe('Dynamic Location Config Loading', () => { test('should handle missing config files gracefully', async () => { const originalCwd = process.cwd(); - + try { // Change to directory with no config files const emptyDir = join(TEST_ROOT, 'empty'); mkdirSync(emptyDir, { recursive: true }); process.chdir(emptyDir); - + // Should not throw but use defaults and env vars const config = await initializeConfig(); - + // Should have default values from schema expect(config.environment).toBe('test'); // Tests run with NODE_ENV=test expect(typeof config.service).toBe('object'); @@ -157,18 +155,18 @@ describe('Dynamic Location Config Loading', () => { test('should prioritize environment variables over file configs', async () => { const originalCwd = process.cwd(); const originalEnv = { ...process.env }; - + try { // Set environment variables process.env.NAME = 'env-override'; process.env.VERSION = '3.0.0'; process.env.DATABASE_POSTGRES_HOST = 'env-db'; - + process.chdir(SCENARIOS.appService); - + resetConfig(); // Reset to test env override const config = await initializeServiceConfig(); - + // Environment variables should override file configs expect(config.name).toBe('env-override'); expect(config.version).toBe('3.0.0'); @@ -181,18 +179,18 @@ describe('Dynamic Location Config Loading', () => { test('should work with custom config paths', async () => { 
const originalCwd = process.cwd(); - + try { process.chdir(SCENARIOS.monorepoRoot); - + // Initialize with custom config path resetConfig(); const manager = new ConfigManager({ - configPath: join(SCENARIOS.appService, 'config') + configPath: join(SCENARIOS.appService, 'config'), }); - + const config = await manager.initialize(appConfigSchema); - + // Should load from the custom path expect(config.name).toBe('test-service'); expect(config.service.port).toBe(4000); @@ -217,7 +215,7 @@ function setupTestScenarios() { version: '1.0.0', service: { name: 'monorepo-root', - port: 3000 + port: 3000, }, database: { postgres: { @@ -225,32 +223,32 @@ function setupTestScenarios() { port: 5432, database: 'test_db', user: 'test_user', - password: 'test_pass' + password: 'test_pass', }, questdb: { host: 'localhost', - ilpPort: 9009 + ilpPort: 9009, }, mongodb: { host: 'localhost', port: 27017, - database: 'test_mongo' + database: 'test_mongo', }, dragonfly: { host: 'localhost', - port: 6379 - } + port: 6379, + }, }, logging: { - level: 'info' - } + level: 'info', + }, }; - + writeFileSync( join(SCENARIOS.monorepoRoot, 'config', 'development.json'), JSON.stringify(rootConfig, null, 2) ); - + writeFileSync( join(SCENARIOS.monorepoRoot, 'config', 'test.json'), JSON.stringify(rootConfig, null, 2) @@ -261,20 +259,20 @@ function setupTestScenarios() { name: 'test-service', database: { postgres: { - host: 'service-db' - } + host: 'service-db', + }, }, service: { name: 'test-service', - port: 4000 - } + port: 4000, + }, }; - + writeFileSync( join(SCENARIOS.appService, 'config', 'development.json'), JSON.stringify(appServiceConfig, null, 2) ); - + writeFileSync( join(SCENARIOS.appService, 'config', 'test.json'), JSON.stringify(appServiceConfig, null, 2) @@ -286,15 +284,15 @@ function setupTestScenarios() { version: '2.0.0', service: { name: 'test-lib', - port: 5000 - } + port: 5000, + }, }; - + writeFileSync( join(SCENARIOS.libService, 'config', 'development.json'), 
JSON.stringify(libServiceConfig, null, 2) ); - + writeFileSync( join(SCENARIOS.libService, 'config', 'test.json'), JSON.stringify(libServiceConfig, null, 2) @@ -305,20 +303,20 @@ function setupTestScenarios() { name: 'deep-service', database: { postgres: { - host: 'deep-db' - } + host: 'deep-db', + }, }, service: { name: 'deep-service', - port: 6000 - } + port: 6000, + }, }; - + writeFileSync( join(SCENARIOS.nestedService, 'config', 'development.json'), JSON.stringify(nestedServiceConfig, null, 2) ); - + writeFileSync( join(SCENARIOS.nestedService, 'config', 'test.json'), JSON.stringify(nestedServiceConfig, null, 2) @@ -330,7 +328,7 @@ function setupTestScenarios() { version: '0.1.0', service: { name: 'standalone-app', - port: 7000 + port: 7000, }, database: { postgres: { @@ -338,32 +336,32 @@ function setupTestScenarios() { port: 5432, database: 'standalone_db', user: 'standalone_user', - password: 'standalone_pass' + password: 'standalone_pass', }, questdb: { host: 'localhost', - ilpPort: 9009 + ilpPort: 9009, }, mongodb: { host: 'localhost', port: 27017, - database: 'standalone_mongo' + database: 'standalone_mongo', }, dragonfly: { host: 'localhost', - port: 6379 - } + port: 6379, + }, }, logging: { - level: 'debug' - } + level: 'debug', + }, }; - + writeFileSync( join(SCENARIOS.standalone, 'config', 'development.json'), JSON.stringify(standaloneConfig, null, 2) ); - + writeFileSync( join(SCENARIOS.standalone, 'config', 'test.json'), JSON.stringify(standaloneConfig, null, 2) @@ -383,4 +381,4 @@ DEBUG=true APP_EXTRA_FEATURE=enabled ` ); -} \ No newline at end of file +} diff --git a/libs/config/test/edge-cases.test.ts b/libs/core/config/test/edge-cases.test.ts similarity index 85% rename from libs/config/test/edge-cases.test.ts rename to libs/core/config/test/edge-cases.test.ts index bae771a..df6976e 100644 --- a/libs/config/test/edge-cases.test.ts +++ b/libs/core/config/test/edge-cases.test.ts @@ -1,12 +1,12 @@ -import { describe, test, expect, beforeEach, 
afterEach } from 'bun:test'; +import { chmodSync, existsSync, mkdirSync, rmSync, writeFileSync } from 'fs'; import { join } from 'path'; -import { mkdirSync, writeFileSync, rmSync, existsSync, chmodSync } from 'fs'; +import { afterEach, beforeEach, describe, expect, test } from 'bun:test'; import { ConfigManager } from '../src/config-manager'; -import { FileLoader } from '../src/loaders/file.loader'; +import { ConfigValidationError } from '../src/errors'; +import { initializeConfig, resetConfig } from '../src/index'; import { EnvLoader } from '../src/loaders/env.loader'; -import { initializeConfig, initializeServiceConfig, resetConfig } from '../src/index'; +import { FileLoader } from '../src/loaders/file.loader'; import { appConfigSchema } from '../src/schemas'; -import { ConfigError, ConfigValidationError } from '../src/errors'; const TEST_DIR = join(__dirname, 'edge-case-tests'); @@ -17,9 +17,9 @@ describe('Edge Cases and Error Handling', () => { beforeEach(() => { originalEnv = { ...process.env }; originalCwd = process.cwd(); - + resetConfig(); - + if (existsSync(TEST_DIR)) { rmSync(TEST_DIR, { recursive: true, force: true }); } @@ -30,7 +30,7 @@ describe('Edge Cases and Error Handling', () => { process.env = originalEnv; process.chdir(originalCwd); resetConfig(); - + if (existsSync(TEST_DIR)) { rmSync(TEST_DIR, { recursive: true, force: true }); } @@ -39,7 +39,7 @@ describe('Edge Cases and Error Handling', () => { test('should handle missing .env files gracefully', async () => { // No .env file exists const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); // Should not throw even without .env file @@ -50,15 +50,12 @@ describe('Edge Cases and Error Handling', () => { test('should handle corrupted JSON config files', async () => { const configDir = join(TEST_DIR, 'config'); mkdirSync(configDir, { recursive: true }); - + // Create corrupted JSON file - writeFileSync( - join(configDir, 'development.json'), - '{ 
"app": { "name": "test", invalid json }' - ); + writeFileSync(join(configDir, 'development.json'), '{ "app": { "name": "test", invalid json }'); const manager = new ConfigManager({ - loaders: [new FileLoader(configDir, 'development')] + loaders: [new FileLoader(configDir, 'development')], }); // Should throw error for invalid JSON @@ -67,9 +64,9 @@ describe('Edge Cases and Error Handling', () => { test('should handle missing config directories', async () => { const nonExistentDir = join(TEST_DIR, 'nonexistent'); - + const manager = new ConfigManager({ - loaders: [new FileLoader(nonExistentDir, 'development')] + loaders: [new FileLoader(nonExistentDir, 'development')], }); // Should not throw, should return empty config @@ -80,16 +77,16 @@ describe('Edge Cases and Error Handling', () => { test('should handle permission denied on config files', async () => { const configDir = join(TEST_DIR, 'config'); mkdirSync(configDir, { recursive: true }); - + const configFile = join(configDir, 'development.json'); writeFileSync(configFile, JSON.stringify({ app: { name: 'test' } })); - + // Make file unreadable (this might not work on all systems) try { chmodSync(configFile, 0o000); - + const manager = new ConfigManager({ - loaders: [new FileLoader(configDir, 'development')] + loaders: [new FileLoader(configDir, 'development')], }); // Should handle permission error gracefully @@ -109,26 +106,23 @@ describe('Edge Cases and Error Handling', () => { // This tests deep merge with potential circular references const configDir = join(TEST_DIR, 'config'); mkdirSync(configDir, { recursive: true }); - + writeFileSync( join(configDir, 'development.json'), JSON.stringify({ app: { name: 'test', settings: { - ref: 'settings' - } - } + ref: 'settings', + }, + }, }) ); process.env.APP_SETTINGS_NESTED_VALUE = 'deep-value'; const manager = new ConfigManager({ - loaders: [ - new FileLoader(configDir, 'development'), - new EnvLoader('') - ] + loaders: [new FileLoader(configDir, 'development'), new 
EnvLoader('')], }); const config = await manager.initialize(appConfigSchema); @@ -138,13 +132,13 @@ describe('Edge Cases and Error Handling', () => { test('should handle extremely deep nesting in environment variables', async () => { // Test very deep nesting process.env.LEVEL1_LEVEL2_LEVEL3_LEVEL4_LEVEL5_VALUE = 'deep-value'; - + const manager = new ConfigManager({ - loaders: [new EnvLoader('', { nestedDelimiter: '_' })] + loaders: [new EnvLoader('', { nestedDelimiter: '_' })], }); const config = await manager.initialize(); - + // Should create nested structure expect((config as any).level1?.level2?.level3?.level4?.level5?.value).toBe('deep-value'); }); @@ -152,15 +146,15 @@ describe('Edge Cases and Error Handling', () => { test('should handle conflicting data types in config merging', async () => { const configDir = join(TEST_DIR, 'config'); mkdirSync(configDir, { recursive: true }); - + // File config has object writeFileSync( join(configDir, 'development.json'), JSON.stringify({ database: { host: 'localhost', - port: 5432 - } + port: 5432, + }, }) ); @@ -168,14 +162,11 @@ describe('Edge Cases and Error Handling', () => { process.env.DATABASE = 'simple-string'; const manager = new ConfigManager({ - loaders: [ - new FileLoader(configDir, 'development'), - new EnvLoader('') - ] + loaders: [new FileLoader(configDir, 'development'), new EnvLoader('')], }); const config = await manager.initialize(appConfigSchema); - + // Environment variable should win expect(config.database).toBe('simple-string'); }); @@ -184,15 +175,15 @@ describe('Edge Cases and Error Handling', () => { // Create multiple config setups in different directories const dir1 = join(TEST_DIR, 'dir1'); const dir2 = join(TEST_DIR, 'dir2'); - + mkdirSync(join(dir1, 'config'), { recursive: true }); mkdirSync(join(dir2, 'config'), { recursive: true }); - + writeFileSync( join(dir1, 'config', 'development.json'), JSON.stringify({ app: { name: 'dir1-app' } }) ); - + writeFileSync( join(dir2, 'config', 
'development.json'), JSON.stringify({ app: { name: 'dir2-app' } }) @@ -229,13 +220,13 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}} ); process.chdir(TEST_DIR); - + const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); - const config = await manager.initialize(); - + const _config = await manager.initialize(); + // Should handle valid entries expect(process.env.VALID_KEY).toBe('valid_value'); expect(process.env.KEY_WITH_QUOTES).toBe('quoted value'); @@ -245,12 +236,12 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}} test('should handle empty config files', async () => { const configDir = join(TEST_DIR, 'config'); mkdirSync(configDir, { recursive: true }); - + // Create empty JSON file writeFileSync(join(configDir, 'development.json'), '{}'); - + const manager = new ConfigManager({ - loaders: [new FileLoader(configDir, 'development')] + loaders: [new FileLoader(configDir, 'development')], }); const config = await manager.initialize(appConfigSchema); @@ -260,7 +251,7 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}} test('should handle config initialization without schema', async () => { const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); // Initialize without schema @@ -271,7 +262,7 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}} test('should handle accessing config before initialization', () => { const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); // Should throw error when accessing uninitialized config @@ -282,15 +273,15 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}} test('should handle invalid config paths in getValue', async () => { const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); - const config = await manager.initialize(appConfigSchema); - + const _config = await 
manager.initialize(appConfigSchema); + // Should throw for invalid paths expect(() => manager.getValue('nonexistent.path')).toThrow('Configuration key not found'); expect(() => manager.getValue('app.nonexistent')).toThrow('Configuration key not found'); - + // Should work for valid paths expect(() => manager.getValue('environment')).not.toThrow(); }); @@ -301,11 +292,11 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}} process.env.EMPTY_VALUE = ''; const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); const config = await manager.initialize(); - + expect((config as any).null_value).toBe(null); expect((config as any).undefined_value).toBe(undefined); expect((config as any).empty_value).toBe(''); @@ -318,7 +309,7 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}} process.env.SERVICE_PORT = 'not-a-number'; // This should cause validation to fail const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); await expect(manager.initialize(appConfigSchema)).rejects.toThrow(ConfigValidationError); @@ -326,7 +317,7 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}} test('should handle config updates with invalid schema', async () => { const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); await manager.initialize(appConfigSchema); @@ -335,8 +326,8 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}} expect(() => { manager.set({ service: { - port: 'invalid-port' as any - } + port: 'invalid-port' as any, + }, }); }).toThrow(ConfigValidationError); }); @@ -344,7 +335,7 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}} test('should handle loader priority conflicts', async () => { const configDir = join(TEST_DIR, 'config'); mkdirSync(configDir, { recursive: true }); - + writeFileSync( join(configDir, 'development.json'), JSON.stringify({ app: { name: 'file-config' } }) @@ -356,12 
+347,12 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}} const manager = new ConfigManager({ loaders: [ new FileLoader(configDir, 'development'), // priority 50 - new EnvLoader('') // priority 100 - ] + new EnvLoader(''), // priority 100 + ], }); const config = await manager.initialize(appConfigSchema); - + // Environment should win due to higher priority expect(config.app.name).toBe('env-config'); }); @@ -369,16 +360,16 @@ JSON_VALUE={"key": "value", "nested": {"array": [1, 2, 3]}} test('should handle readonly environment variables', async () => { // Some system environment variables might be readonly const originalPath = process.env.PATH; - + // This should not cause the loader to fail const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); const config = await manager.initialize(); expect(config).toBeDefined(); - + // PATH should not be modified expect(process.env.PATH).toBe(originalPath); }); -}); \ No newline at end of file +}); diff --git a/libs/config/test/index.test.ts b/libs/core/config/test/index.test.ts similarity index 88% rename from libs/config/test/index.test.ts rename to libs/core/config/test/index.test.ts index 215bb64..bc509f1 100644 --- a/libs/config/test/index.test.ts +++ b/libs/core/config/test/index.test.ts @@ -1,208 +1,202 @@ -import { describe, test, expect, beforeEach, afterEach } from 'bun:test'; -import { writeFileSync, mkdirSync, rmSync } from 'fs'; -import { join } from 'path'; -import { - initializeConfig, - getConfig, - getConfigManager, - resetConfig, - getDatabaseConfig, - getServiceConfig, - getLoggingConfig, - getProviderConfig, - isDevelopment, - isProduction, - isTest, -} from '../src'; - -describe('Config Module', () => { - const testConfigDir = join(process.cwd(), 'test-config-module'); - const originalEnv = { ...process.env }; - - beforeEach(() => { - resetConfig(); - mkdirSync(testConfigDir, { recursive: true }); - - // Create test configuration files - const 
config = { - name: 'test-app', - version: '1.0.0', - service: { - name: 'test-service', - port: 3000, - }, - database: { - postgres: { - host: 'localhost', - port: 5432, - database: 'testdb', - user: 'testuser', - password: 'testpass', - }, - questdb: { - host: 'localhost', - httpPort: 9000, - pgPort: 8812, - }, - mongodb: { - host: 'localhost', - port: 27017, - database: 'testdb', - }, - dragonfly: { - host: 'localhost', - port: 6379, - }, - }, - logging: { - level: 'info', - format: 'json', - }, - providers: { - yahoo: { - enabled: true, - rateLimit: 5, - }, - qm: { - enabled: false, - apiKey: 'test-key', - }, - }, - environment: 'test', - }; - - writeFileSync( - join(testConfigDir, 'default.json'), - JSON.stringify(config, null, 2) - ); - }); - - afterEach(() => { - resetConfig(); - rmSync(testConfigDir, { recursive: true, force: true }); - process.env = { ...originalEnv }; - }); - - test('should initialize configuration', async () => { - const config = await initializeConfig(testConfigDir); - - expect(config.app.name).toBe('test-app'); - expect(config.service.port).toBe(3000); - expect(config.environment).toBe('test'); - }); - - test('should get configuration after initialization', async () => { - await initializeConfig(testConfigDir); - const config = getConfig(); - - expect(config.app.name).toBe('test-app'); - expect(config.database.postgres.host).toBe('localhost'); - }); - - test('should throw if getting config before initialization', () => { - expect(() => getConfig()).toThrow('Configuration not initialized'); - }); - - test('should get config manager instance', async () => { - await initializeConfig(testConfigDir); - const manager = getConfigManager(); - - expect(manager).toBeDefined(); - expect(manager.get().app.name).toBe('test-app'); - }); - - test('should get database configuration', async () => { - await initializeConfig(testConfigDir); - const dbConfig = getDatabaseConfig(); - - expect(dbConfig.postgres.host).toBe('localhost'); - 
expect(dbConfig.questdb.httpPort).toBe(9000); - expect(dbConfig.mongodb.database).toBe('testdb'); - }); - - test('should get service configuration', async () => { - await initializeConfig(testConfigDir); - const serviceConfig = getServiceConfig(); - - expect(serviceConfig.name).toBe('test-service'); - expect(serviceConfig.port).toBe(3000); - }); - - test('should get logging configuration', async () => { - await initializeConfig(testConfigDir); - const loggingConfig = getLoggingConfig(); - - expect(loggingConfig.level).toBe('info'); - expect(loggingConfig.format).toBe('json'); - }); - - test('should get provider configuration', async () => { - await initializeConfig(testConfigDir); - - const yahooConfig = getProviderConfig('yahoo'); - expect(yahooConfig.enabled).toBe(true); - expect(yahooConfig.rateLimit).toBe(5); - - const qmConfig = getProviderConfig('quoteMedia'); - expect(qmConfig.enabled).toBe(false); - expect(qmConfig.apiKey).toBe('test-key'); - }); - - test('should throw for non-existent provider', async () => { - await initializeConfig(testConfigDir); - - expect(() => getProviderConfig('nonexistent')).toThrow( - 'Provider configuration not found: nonexistent' - ); - }); - - test('should check environment correctly', async () => { - await initializeConfig(testConfigDir); - - expect(isTest()).toBe(true); - expect(isDevelopment()).toBe(false); - expect(isProduction()).toBe(false); - }); - - test('should handle environment overrides', async () => { - process.env.NODE_ENV = 'production'; - process.env.STOCKBOT_APP__NAME = 'env-override-app'; - process.env.STOCKBOT_DATABASE__POSTGRES__HOST = 'prod-db'; - - const prodConfig = { - database: { - postgres: { - host: 'prod-host', - port: 5432, - }, - }, - }; - - writeFileSync( - join(testConfigDir, 'production.json'), - JSON.stringify(prodConfig, null, 2) - ); - - const config = await initializeConfig(testConfigDir); - - expect(config.environment).toBe('production'); - expect(config.app.name).toBe('env-override-app'); 
- expect(config.database.postgres.host).toBe('prod-db'); - expect(isProduction()).toBe(true); - }); - - test('should reset configuration', async () => { - await initializeConfig(testConfigDir); - expect(() => getConfig()).not.toThrow(); - - resetConfig(); - expect(() => getConfig()).toThrow('Configuration not initialized'); - }); - - test('should maintain singleton instance', async () => { - const config1 = await initializeConfig(testConfigDir); - const config2 = await initializeConfig(testConfigDir); - - expect(config1).toBe(config2); - }); -}); \ No newline at end of file +import { mkdirSync, rmSync, writeFileSync } from 'fs'; +import { join } from 'path'; +import { afterEach, beforeEach, describe, expect, test } from 'bun:test'; +import { + getConfig, + getConfigManager, + getDatabaseConfig, + getLoggingConfig, + getProviderConfig, + getServiceConfig, + initializeConfig, + isDevelopment, + isProduction, + isTest, + resetConfig, +} from '../src'; + +describe('Config Module', () => { + const testConfigDir = join(process.cwd(), 'test-config-module'); + const originalEnv = { ...process.env }; + + beforeEach(() => { + resetConfig(); + mkdirSync(testConfigDir, { recursive: true }); + + // Create test configuration files + const config = { + name: 'test-app', + version: '1.0.0', + service: { + name: 'test-service', + port: 3000, + }, + database: { + postgres: { + host: 'localhost', + port: 5432, + database: 'testdb', + user: 'testuser', + password: 'testpass', + }, + questdb: { + host: 'localhost', + httpPort: 9000, + pgPort: 8812, + }, + mongodb: { + host: 'localhost', + port: 27017, + database: 'testdb', + }, + dragonfly: { + host: 'localhost', + port: 6379, + }, + }, + logging: { + level: 'info', + format: 'json', + }, + providers: { + yahoo: { + enabled: true, + rateLimit: 5, + }, + qm: { + enabled: false, + apiKey: 'test-key', + }, + }, + environment: 'test', + }; + + writeFileSync(join(testConfigDir, 'default.json'), JSON.stringify(config, null, 2)); + }); + + 
afterEach(() => { + resetConfig(); + rmSync(testConfigDir, { recursive: true, force: true }); + process.env = { ...originalEnv }; + }); + + test('should initialize configuration', async () => { + const config = await initializeConfig(testConfigDir); + + expect(config.app.name).toBe('test-app'); + expect(config.service.port).toBe(3000); + expect(config.environment).toBe('test'); + }); + + test('should get configuration after initialization', async () => { + await initializeConfig(testConfigDir); + const config = getConfig(); + + expect(config.app.name).toBe('test-app'); + expect(config.database.postgres.host).toBe('localhost'); + }); + + test('should throw if getting config before initialization', () => { + expect(() => getConfig()).toThrow('Configuration not initialized'); + }); + + test('should get config manager instance', async () => { + await initializeConfig(testConfigDir); + const manager = getConfigManager(); + + expect(manager).toBeDefined(); + expect(manager.get().app.name).toBe('test-app'); + }); + + test('should get database configuration', async () => { + await initializeConfig(testConfigDir); + const dbConfig = getDatabaseConfig(); + + expect(dbConfig.postgres.host).toBe('localhost'); + expect(dbConfig.questdb.httpPort).toBe(9000); + expect(dbConfig.mongodb.database).toBe('testdb'); + }); + + test('should get service configuration', async () => { + await initializeConfig(testConfigDir); + const serviceConfig = getServiceConfig(); + + expect(serviceConfig.name).toBe('test-service'); + expect(serviceConfig.port).toBe(3000); + }); + + test('should get logging configuration', async () => { + await initializeConfig(testConfigDir); + const loggingConfig = getLoggingConfig(); + + expect(loggingConfig.level).toBe('info'); + expect(loggingConfig.format).toBe('json'); + }); + + test('should get provider configuration', async () => { + await initializeConfig(testConfigDir); + + const yahooConfig = getProviderConfig('yahoo'); + 
expect(yahooConfig.enabled).toBe(true); + expect(yahooConfig.rateLimit).toBe(5); + + const qmConfig = getProviderConfig('qm'); + expect(qmConfig.enabled).toBe(false); + expect(qmConfig.apiKey).toBe('test-key'); + }); + + test('should throw for non-existent provider', async () => { + await initializeConfig(testConfigDir); + + expect(() => getProviderConfig('nonexistent')).toThrow( + 'Provider configuration not found: nonexistent' + ); + }); + + test('should check environment correctly', async () => { + await initializeConfig(testConfigDir); + + expect(isTest()).toBe(true); + expect(isDevelopment()).toBe(false); + expect(isProduction()).toBe(false); + }); + + test('should handle environment overrides', async () => { + process.env.NODE_ENV = 'production'; + process.env.STOCKBOT_APP__NAME = 'env-override-app'; + process.env.STOCKBOT_DATABASE__POSTGRES__HOST = 'prod-db'; + + const prodConfig = { + database: { + postgres: { + host: 'prod-host', + port: 5432, + }, + }, + }; + + writeFileSync(join(testConfigDir, 'production.json'), JSON.stringify(prodConfig, null, 2)); + + const config = await initializeConfig(testConfigDir); + + expect(config.environment).toBe('production'); + expect(config.app.name).toBe('env-override-app'); + expect(config.database.postgres.host).toBe('prod-db'); + expect(isProduction()).toBe(true); + }); + + test('should reset configuration', async () => { + await initializeConfig(testConfigDir); + expect(() => getConfig()).not.toThrow(); + + resetConfig(); + expect(() => getConfig()).toThrow('Configuration not initialized'); + }); + + test('should maintain singleton instance', async () => { + const config1 = await initializeConfig(testConfigDir); + const config2 = await initializeConfig(testConfigDir); + + expect(config1).toBe(config2); + }); +}); diff --git a/libs/config/test/loaders.test.ts b/libs/core/config/test/loaders.test.ts similarity index 77% rename from libs/config/test/loaders.test.ts rename to libs/core/config/test/loaders.test.ts
index 40a484c..3f51003 100644 --- a/libs/config/test/loaders.test.ts +++ b/libs/core/config/test/loaders.test.ts @@ -1,181 +1,166 @@ -import { describe, test, expect, beforeEach, afterEach } from 'bun:test'; -import { writeFileSync, mkdirSync, rmSync } from 'fs'; -import { join } from 'path'; -import { EnvLoader } from '../src/loaders/env.loader'; -import { FileLoader } from '../src/loaders/file.loader'; - -describe('EnvLoader', () => { - const originalEnv = { ...process.env }; - - afterEach(() => { - // Restore original environment - process.env = { ...originalEnv }; - }); - - test('should load environment variables with prefix', async () => { - process.env.TEST_APP_NAME = 'env-app'; - process.env.TEST_APP_VERSION = '1.0.0'; - process.env.TEST_DATABASE_HOST = 'env-host'; - process.env.TEST_DATABASE_PORT = '5432'; - process.env.OTHER_VAR = 'should-not-load'; - - const loader = new EnvLoader('TEST_', { convertCase: false, nestedDelimiter: null }); - const config = await loader.load(); - - expect(config.APP_NAME).toBe('env-app'); - expect(config.APP_VERSION).toBe('1.0.0'); - expect(config.DATABASE_HOST).toBe('env-host'); - expect(config.DATABASE_PORT).toBe(5432); // Should be parsed as number - expect(config.OTHER_VAR).toBeUndefined(); - }); - - test('should convert snake_case to camelCase', async () => { - process.env.TEST_DATABASE_CONNECTION_STRING = 'postgres://localhost'; - process.env.TEST_API_KEY_SECRET = 'secret123'; - - const loader = new EnvLoader('TEST_', { convertCase: true }); - const config = await loader.load(); - - expect(config.databaseConnectionString).toBe('postgres://localhost'); - expect(config.apiKeySecret).toBe('secret123'); - }); - - test('should parse JSON values', async () => { - process.env.TEST_SETTINGS = '{"feature": true, "limit": 100}'; - process.env.TEST_NUMBERS = '[1, 2, 3]'; - - const loader = new EnvLoader('TEST_', { parseJson: true }); - const config = await loader.load(); - - expect(config.SETTINGS).toEqual({ feature: true, limit: 
100 }); - expect(config.NUMBERS).toEqual([1, 2, 3]); - }); - - test('should parse boolean and number values', async () => { - process.env.TEST_ENABLED = 'true'; - process.env.TEST_DISABLED = 'false'; - process.env.TEST_PORT = '3000'; - process.env.TEST_RATIO = '0.75'; - - const loader = new EnvLoader('TEST_', { parseValues: true }); - const config = await loader.load(); - - expect(config.ENABLED).toBe(true); - expect(config.DISABLED).toBe(false); - expect(config.PORT).toBe(3000); - expect(config.RATIO).toBe(0.75); - }); - - test('should handle nested object structure', async () => { - process.env.TEST_APP__NAME = 'nested-app'; - process.env.TEST_APP__SETTINGS__ENABLED = 'true'; - process.env.TEST_DATABASE__HOST = 'localhost'; - - const loader = new EnvLoader('TEST_', { - parseValues: true, - nestedDelimiter: '__' - }); - const config = await loader.load(); - - expect(config.APP).toEqual({ - NAME: 'nested-app', - SETTINGS: { - ENABLED: true - } - }); - expect(config.DATABASE).toEqual({ - HOST: 'localhost' - }); - }); -}); - -describe('FileLoader', () => { - const testDir = join(process.cwd(), 'test-config'); - - beforeEach(() => { - mkdirSync(testDir, { recursive: true }); - }); - - afterEach(() => { - rmSync(testDir, { recursive: true, force: true }); - }); - - test('should load JSON configuration file', async () => { - const config = { - app: { name: 'file-app', version: '1.0.0' }, - database: { host: 'localhost', port: 5432 } - }; - - writeFileSync( - join(testDir, 'default.json'), - JSON.stringify(config, null, 2) - ); - - const loader = new FileLoader(testDir); - const loaded = await loader.load(); - - expect(loaded).toEqual(config); - }); - - test('should load environment-specific configuration', async () => { - const defaultConfig = { - app: { name: 'app', port: 3000 }, - database: { host: 'localhost' } - }; - - const prodConfig = { - app: { port: 8080 }, - database: { host: 'prod-db' } - }; - - writeFileSync( - join(testDir, 'default.json'), - 
JSON.stringify(defaultConfig, null, 2) - ); - - writeFileSync( - join(testDir, 'production.json'), - JSON.stringify(prodConfig, null, 2) - ); - - const loader = new FileLoader(testDir, 'production'); - const loaded = await loader.load(); - - expect(loaded).toEqual({ - app: { name: 'app', port: 8080 }, - database: { host: 'prod-db' } - }); - }); - - test('should handle missing configuration files gracefully', async () => { - const loader = new FileLoader(testDir); - const loaded = await loader.load(); - - expect(loaded).toEqual({}); - }); - - test('should throw on invalid JSON', async () => { - writeFileSync( - join(testDir, 'default.json'), - 'invalid json content' - ); - - const loader = new FileLoader(testDir); - - await expect(loader.load()).rejects.toThrow(); - }); - - test('should support custom configuration', async () => { - const config = { custom: 'value' }; - - writeFileSync( - join(testDir, 'custom.json'), - JSON.stringify(config, null, 2) - ); - - const loader = new FileLoader(testDir); - const loaded = await loader.loadFile('custom.json'); - - expect(loaded).toEqual(config); - }); -}); \ No newline at end of file +import { mkdirSync, rmSync, writeFileSync } from 'fs'; +import { join } from 'path'; +import { afterEach, beforeEach, describe, expect, test } from 'bun:test'; +import { EnvLoader } from '../src/loaders/env.loader'; +import { FileLoader } from '../src/loaders/file.loader'; + +describe('EnvLoader', () => { + const originalEnv = { ...process.env }; + + afterEach(() => { + // Restore original environment + process.env = { ...originalEnv }; + }); + + test('should load environment variables with prefix', async () => { + process.env.TEST_APP_NAME = 'env-app'; + process.env.TEST_APP_VERSION = '1.0.0'; + process.env.TEST_DATABASE_HOST = 'env-host'; + process.env.TEST_DATABASE_PORT = '5432'; + process.env.OTHER_VAR = 'should-not-load'; + + const loader = new EnvLoader('TEST_', { convertCase: false, nestedDelimiter: null }); + const config = await 
loader.load(); + + expect(config.APP_NAME).toBe('env-app'); + expect(config.APP_VERSION).toBe('1.0.0'); + expect(config.DATABASE_HOST).toBe('env-host'); + expect(config.DATABASE_PORT).toBe(5432); // Should be parsed as number + expect(config.OTHER_VAR).toBeUndefined(); + }); + + test('should convert snake_case to camelCase', async () => { + process.env.TEST_DATABASE_CONNECTION_STRING = 'postgres://localhost'; + process.env.TEST_API_KEY_SECRET = 'secret123'; + + const loader = new EnvLoader('TEST_', { convertCase: true }); + const config = await loader.load(); + + expect(config.databaseConnectionString).toBe('postgres://localhost'); + expect(config.apiKeySecret).toBe('secret123'); + }); + + test('should parse JSON values', async () => { + process.env.TEST_SETTINGS = '{"feature": true, "limit": 100}'; + process.env.TEST_NUMBERS = '[1, 2, 3]'; + + const loader = new EnvLoader('TEST_', { parseJson: true }); + const config = await loader.load(); + + expect(config.SETTINGS).toEqual({ feature: true, limit: 100 }); + expect(config.NUMBERS).toEqual([1, 2, 3]); + }); + + test('should parse boolean and number values', async () => { + process.env.TEST_ENABLED = 'true'; + process.env.TEST_DISABLED = 'false'; + process.env.TEST_PORT = '3000'; + process.env.TEST_RATIO = '0.75'; + + const loader = new EnvLoader('TEST_', { parseValues: true }); + const config = await loader.load(); + + expect(config.ENABLED).toBe(true); + expect(config.DISABLED).toBe(false); + expect(config.PORT).toBe(3000); + expect(config.RATIO).toBe(0.75); + }); + + test('should handle nested object structure', async () => { + process.env.TEST_APP__NAME = 'nested-app'; + process.env.TEST_APP__SETTINGS__ENABLED = 'true'; + process.env.TEST_DATABASE__HOST = 'localhost'; + + const loader = new EnvLoader('TEST_', { + parseValues: true, + nestedDelimiter: '__', + }); + const config = await loader.load(); + + expect(config.APP).toEqual({ + NAME: 'nested-app', + SETTINGS: { + ENABLED: true, + }, + }); + 
expect(config.DATABASE).toEqual({ + HOST: 'localhost', + }); + }); +}); + +describe('FileLoader', () => { + const testDir = join(process.cwd(), 'test-config'); + + beforeEach(() => { + mkdirSync(testDir, { recursive: true }); + }); + + afterEach(() => { + rmSync(testDir, { recursive: true, force: true }); + }); + + test('should load JSON configuration file', async () => { + const config = { + app: { name: 'file-app', version: '1.0.0' }, + database: { host: 'localhost', port: 5432 }, + }; + + writeFileSync(join(testDir, 'default.json'), JSON.stringify(config, null, 2)); + + const loader = new FileLoader(testDir); + const loaded = await loader.load(); + + expect(loaded).toEqual(config); + }); + + test('should load environment-specific configuration', async () => { + const defaultConfig = { + app: { name: 'app', port: 3000 }, + database: { host: 'localhost' }, + }; + + const prodConfig = { + app: { port: 8080 }, + database: { host: 'prod-db' }, + }; + + writeFileSync(join(testDir, 'default.json'), JSON.stringify(defaultConfig, null, 2)); + + writeFileSync(join(testDir, 'production.json'), JSON.stringify(prodConfig, null, 2)); + + const loader = new FileLoader(testDir, 'production'); + const loaded = await loader.load(); + + expect(loaded).toEqual({ + app: { name: 'app', port: 8080 }, + database: { host: 'prod-db' }, + }); + }); + + test('should handle missing configuration files gracefully', async () => { + const loader = new FileLoader(testDir); + const loaded = await loader.load(); + + expect(loaded).toEqual({}); + }); + + test('should throw on invalid JSON', async () => { + writeFileSync(join(testDir, 'default.json'), 'invalid json content'); + + const loader = new FileLoader(testDir); + + await expect(loader.load()).rejects.toThrow(); + }); + + test('should support custom configuration', async () => { + const config = { custom: 'value' }; + + writeFileSync(join(testDir, 'custom.json'), JSON.stringify(config, null, 2)); + + const loader = new FileLoader(testDir); + 
const loaded = await loader.loadFile('custom.json'); + + expect(loaded).toEqual(config); + }); +}); diff --git a/libs/config/test/provider-config.test.ts b/libs/core/config/test/provider-config.test.ts similarity index 89% rename from libs/config/test/provider-config.test.ts rename to libs/core/config/test/provider-config.test.ts index 444aeec..0ed2365 100644 --- a/libs/config/test/provider-config.test.ts +++ b/libs/core/config/test/provider-config.test.ts @@ -1,11 +1,11 @@ -import { describe, test, expect, beforeEach, afterEach } from 'bun:test'; +import { existsSync, mkdirSync, rmSync, writeFileSync } from 'fs'; +import { join } from 'path'; +import { afterEach, beforeEach, describe, expect, test } from 'bun:test'; import { ConfigManager } from '../src/config-manager'; +import { getProviderConfig, resetConfig } from '../src/index'; import { EnvLoader } from '../src/loaders/env.loader'; import { FileLoader } from '../src/loaders/file.loader'; import { appConfigSchema } from '../src/schemas'; -import { resetConfig, getProviderConfig } from '../src/index'; -import { join } from 'path'; -import { mkdirSync, writeFileSync, rmSync, existsSync } from 'fs'; const TEST_DIR = join(__dirname, 'provider-tests'); @@ -15,10 +15,10 @@ describe('Provider Configuration Tests', () => { beforeEach(() => { // Save original environment originalEnv = { ...process.env }; - + // Reset config singleton resetConfig(); - + // Clean up test directory if (existsSync(TEST_DIR)) { rmSync(TEST_DIR, { recursive: true, force: true }); @@ -29,7 +29,7 @@ describe('Provider Configuration Tests', () => { afterEach(() => { // Restore original environment process.env = originalEnv; - + // Clean up resetConfig(); if (existsSync(TEST_DIR)) { @@ -44,7 +44,7 @@ describe('Provider Configuration Tests', () => { process.env.WEBSHARE_ENABLED = 'true'; const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); const config = await manager.initialize(appConfigSchema); 
@@ -64,7 +64,7 @@ describe('Provider Configuration Tests', () => { process.env.EOD_PRIORITY = '10'; const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); const config = await manager.initialize(appConfigSchema); @@ -88,7 +88,7 @@ describe('Provider Configuration Tests', () => { process.env.IB_PRIORITY = '5'; const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); const config = await manager.initialize(appConfigSchema); @@ -113,7 +113,7 @@ describe('Provider Configuration Tests', () => { process.env.QM_PRIORITY = '15'; const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); const config = await manager.initialize(appConfigSchema); @@ -136,7 +136,7 @@ describe('Provider Configuration Tests', () => { process.env.YAHOO_PRIORITY = '20'; const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); const config = await manager.initialize(appConfigSchema); @@ -153,27 +153,31 @@ describe('Provider Configuration Tests', () => { // Create a config file const configDir = join(TEST_DIR, 'config'); mkdirSync(configDir, { recursive: true }); - + writeFileSync( join(configDir, 'development.json'), - JSON.stringify({ - providers: { - eod: { - name: 'EOD Historical Data', - apiKey: 'file-eod-key', - baseUrl: 'https://file.eod.com/api', - tier: 'free', - enabled: false, - priority: 1 + JSON.stringify( + { + providers: { + eod: { + name: 'EOD Historical Data', + apiKey: 'file-eod-key', + baseUrl: 'https://file.eod.com/api', + tier: 'free', + enabled: false, + priority: 1, + }, + yahoo: { + name: 'Yahoo Finance', + baseUrl: 'https://file.yahoo.com', + enabled: true, + priority: 2, + }, }, - yahoo: { - name: 'Yahoo Finance', - baseUrl: 'https://file.yahoo.com', - enabled: true, - priority: 2 - } - } - }, null, 2) + }, + null, + 2 + ) ); // Set environment variables that should override file config @@ 
-183,10 +187,7 @@ describe('Provider Configuration Tests', () => { process.env.YAHOO_PRIORITY = '25'; const manager = new ConfigManager({ - loaders: [ - new FileLoader(configDir, 'development'), - new EnvLoader('') - ] + loaders: [new FileLoader(configDir, 'development'), new EnvLoader('')], }); const config = await manager.initialize(appConfigSchema); @@ -211,7 +212,7 @@ describe('Provider Configuration Tests', () => { process.env.IB_GATEWAY_PORT = 'not-a-number'; // Should be a number const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); // Should throw validation error @@ -226,7 +227,7 @@ describe('Provider Configuration Tests', () => { process.env.WEBSHARE_ENABLED = 'true'; const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); await manager.initialize(appConfigSchema); @@ -241,7 +242,9 @@ describe('Provider Configuration Tests', () => { expect((webshareConfig as any).apiKey).toBe('test-webshare-key'); // Test non-existent provider - expect(() => getProviderConfig('nonexistent')).toThrow('Provider configuration not found: nonexistent'); + expect(() => getProviderConfig('nonexistent')).toThrow( + 'Provider configuration not found: nonexistent' + ); }); test('should handle boolean string parsing correctly', async () => { @@ -253,7 +256,7 @@ describe('Provider Configuration Tests', () => { process.env.WEBSHARE_ENABLED = 'yes'; // Should be treated as string, not boolean const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); const config = await manager.initialize(appConfigSchema); @@ -272,7 +275,7 @@ describe('Provider Configuration Tests', () => { process.env.IB_GATEWAY_CLIENT_ID = '999'; const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); const config = await manager.initialize(appConfigSchema); @@ -300,9 +303,9 @@ YAHOO_BASE_URL=https://env-file.yahoo.com const 
originalCwd = process.cwd(); try { process.chdir(TEST_DIR); - + const manager = new ConfigManager({ - loaders: [new EnvLoader('')] + loaders: [new EnvLoader('')], }); const config = await manager.initialize(appConfigSchema); @@ -317,4 +320,4 @@ YAHOO_BASE_URL=https://env-file.yahoo.com process.chdir(originalCwd); } }); -}); \ No newline at end of file +}); diff --git a/libs/config/test/real-usage.test.ts b/libs/core/config/test/real-usage.test.ts similarity index 75% rename from libs/config/test/real-usage.test.ts rename to libs/core/config/test/real-usage.test.ts index 1111fa0..e8f42b0 100644 --- a/libs/config/test/real-usage.test.ts +++ b/libs/core/config/test/real-usage.test.ts @@ -1,18 +1,17 @@ -import { describe, test, expect, beforeEach, afterEach } from 'bun:test'; +import { existsSync, mkdirSync, rmSync, writeFileSync } from 'fs'; import { join } from 'path'; -import { mkdirSync, writeFileSync, rmSync, existsSync } from 'fs'; -import { - initializeConfig, - initializeServiceConfig, - getConfig, +import { afterEach, beforeEach, describe, expect, test } from 'bun:test'; +import { + getConfig, getDatabaseConfig, - getServiceConfig, getLoggingConfig, getProviderConfig, + getServiceConfig, + initializeServiceConfig, isDevelopment, isProduction, isTest, - resetConfig + resetConfig, } from '../src/index'; const TEST_DIR = join(__dirname, 'real-usage-tests'); @@ -24,13 +23,13 @@ describe('Real Usage Scenarios', () => { beforeEach(() => { originalEnv = { ...process.env }; originalCwd = process.cwd(); - + resetConfig(); - + if (existsSync(TEST_DIR)) { rmSync(TEST_DIR, { recursive: true, force: true }); } - + setupRealUsageScenarios(); }); @@ -38,34 +37,34 @@ describe('Real Usage Scenarios', () => { process.env = originalEnv; process.chdir(originalCwd); resetConfig(); - + if (existsSync(TEST_DIR)) { rmSync(TEST_DIR, { recursive: true, force: true }); } }); - test('should work like real data-service usage', async () => { - const dataServiceDir = join(TEST_DIR, 'apps', 
'data-service'); + test('should work like real data-ingestion usage', async () => { + const dataServiceDir = join(TEST_DIR, 'apps', 'data-ingestion'); process.chdir(dataServiceDir); - // Simulate how data-service would initialize config + // Simulate how data-ingestion would initialize config const config = await initializeServiceConfig(); - // Test typical data-service config access patterns - expect(config.app.name).toBe('data-service'); + // Test typical data-ingestion config access patterns + expect(config.app.name).toBe('data-ingestion'); expect(config.service.port).toBe(3001); - + // Test database config access const dbConfig = getDatabaseConfig(); expect(dbConfig.postgres.host).toBe('localhost'); expect(dbConfig.postgres.port).toBe(5432); expect(dbConfig.questdb.host).toBe('localhost'); - + // Test provider access const yahooConfig = getProviderConfig('yahoo'); expect(yahooConfig).toBeDefined(); expect((yahooConfig as any).enabled).toBe(true); - + // Test environment helpers expect(isDevelopment()).toBe(true); expect(isProduction()).toBe(false); @@ -79,11 +78,11 @@ describe('Real Usage Scenarios', () => { expect(config.app.name).toBe('web-api'); expect(config.service.port).toBe(4000); - + // Web API should have access to all the same configs const serviceConfig = getServiceConfig(); expect(serviceConfig.name).toBe('web-api'); - + const loggingConfig = getLoggingConfig(); expect(loggingConfig.level).toBe('info'); }); @@ -97,7 +96,7 @@ describe('Real Usage Scenarios', () => { // Libraries should inherit from root config expect(config.app.name).toBe('cache-lib'); expect(config.app.version).toBe('1.0.0'); // From root - + // Should have access to cache config const dbConfig = getDatabaseConfig(); expect(dbConfig.dragonfly).toBeDefined(); @@ -107,8 +106,8 @@ describe('Real Usage Scenarios', () => { test('should handle production environment correctly', async () => { process.env.NODE_ENV = 'production'; - - const dataServiceDir = join(TEST_DIR, 'apps', 
'data-service'); + + const dataServiceDir = join(TEST_DIR, 'apps', 'data-ingestion'); process.chdir(dataServiceDir); resetConfig(); @@ -116,15 +115,15 @@ describe('Real Usage Scenarios', () => { expect(config.environment).toBe('production'); expect(config.logging.level).toBe('warn'); // Production should use different log level - + expect(isProduction()).toBe(true); expect(isDevelopment()).toBe(false); }); test('should handle test environment correctly', async () => { process.env.NODE_ENV = 'test'; - - const dataServiceDir = join(TEST_DIR, 'apps', 'data-service'); + + const dataServiceDir = join(TEST_DIR, 'apps', 'data-ingestion'); process.chdir(dataServiceDir); resetConfig(); @@ -132,7 +131,7 @@ describe('Real Usage Scenarios', () => { expect(config.environment).toBe('test'); expect(config.logging.level).toBe('debug'); // Test should use debug level - + expect(isTest()).toBe(true); expect(isDevelopment()).toBe(false); }); @@ -144,33 +143,35 @@ describe('Real Usage Scenarios', () => { process.env.EOD_API_KEY = 'prod-eod-key'; process.env.SERVICE_PORT = '8080'; - const dataServiceDir = join(TEST_ROOT, 'apps', 'data-service'); + const dataServiceDir = join(TEST_ROOT, 'apps', 'data-ingestion'); process.chdir(dataServiceDir); resetConfig(); - const config = await initializeServiceConfig(); + const _config = await initializeServiceConfig(); // Environment variables should override file configs const dbConfig = getDatabaseConfig(); expect(dbConfig.postgres.host).toBe('prod-db.example.com'); expect(dbConfig.postgres.port).toBe(5433); - + const serviceConfig = getServiceConfig(); expect(serviceConfig.port).toBe(8080); - + const eodConfig = getProviderConfig('eod'); expect((eodConfig as any).apiKey).toBe('prod-eod-key'); }); test('should handle missing provider configurations gracefully', async () => { - const dataServiceDir = join(TEST_DIR, 'apps', 'data-service'); + const dataServiceDir = join(TEST_DIR, 'apps', 'data-ingestion'); process.chdir(dataServiceDir); - const 
config = await initializeServiceConfig(); + const _config = await initializeServiceConfig(); // Should throw for non-existent providers - expect(() => getProviderConfig('nonexistent')).toThrow('Provider configuration not found: nonexistent'); - + expect(() => getProviderConfig('nonexistent')).toThrow( + 'Provider configuration not found: nonexistent' + ); + // Should work for providers that exist but might not be configured // (they should have defaults from schema) const yahooConfig = getProviderConfig('yahoo'); @@ -178,22 +179,22 @@ describe('Real Usage Scenarios', () => { }); test('should support dynamic config access patterns', async () => { - const dataServiceDir = join(TEST_DIR, 'apps', 'data-service'); + const dataServiceDir = join(TEST_DIR, 'apps', 'data-ingestion'); process.chdir(dataServiceDir); - const config = await initializeServiceConfig(); - + const _config = await initializeServiceConfig(); + // Test various access patterns used in real applications const configManager = (await import('../src/index')).getConfigManager(); - + // Direct path access - expect(configManager.getValue('app.name')).toBe('data-service'); + expect(configManager.getValue('app.name')).toBe('data-ingestion'); expect(configManager.getValue('service.port')).toBe(3001); - + // Check if paths exist expect(configManager.has('app.name')).toBe(true); expect(configManager.has('nonexistent.path')).toBe(false); - + // Typed access const port = configManager.getValue('service.port'); expect(typeof port).toBe('number'); @@ -201,44 +202,44 @@ describe('Real Usage Scenarios', () => { }); test('should handle config updates at runtime', async () => { - const dataServiceDir = join(TEST_DIR, 'apps', 'data-service'); + const dataServiceDir = join(TEST_DIR, 'apps', 'data-ingestion'); process.chdir(dataServiceDir); await initializeServiceConfig(); const configManager = (await import('../src/index')).getConfigManager(); - + // Update config at runtime (useful for testing) configManager.set({ service: 
{ - port: 9999 - } + port: 9999, + }, }); - + const updatedConfig = getConfig(); expect(updatedConfig.service.port).toBe(9999); - + // Other values should be preserved - expect(updatedConfig.app.name).toBe('data-service'); + expect(updatedConfig.app.name).toBe('data-ingestion'); }); test('should work across multiple service initializations', async () => { // Simulate multiple services in the same process (like tests) - + // First service - const dataServiceDir = join(TEST_DIR, 'apps', 'data-service'); + const dataServiceDir = join(TEST_DIR, 'apps', 'data-ingestion'); process.chdir(dataServiceDir); - + let config = await initializeServiceConfig(); - expect(config.app.name).toBe('data-service'); - - // Reset and switch to another service + expect(config.app.name).toBe('data-ingestion'); + + // Reset and switch to another service resetConfig(); const webApiDir = join(TEST_DIR, 'apps', 'web-api'); process.chdir(webApiDir); - + config = await initializeServiceConfig(); expect(config.app.name).toBe('web-api'); - + // Each service should get its own config expect(config.service.port).toBe(4000); // web-api port }); @@ -249,7 +250,7 @@ const TEST_ROOT = TEST_DIR; function setupRealUsageScenarios() { const scenarios = { root: TEST_ROOT, - dataService: join(TEST_ROOT, 'apps', 'data-service'), + dataService: join(TEST_ROOT, 'apps', 'data-ingestion'), webApi: join(TEST_ROOT, 'apps', 'web-api'), cacheLib: join(TEST_ROOT, 'libs', 'cache'), }; @@ -264,7 +265,7 @@ function setupRealUsageScenarios() { development: { app: { name: 'stock-bot-monorepo', - version: '1.0.0' + version: '1.0.0', }, database: { postgres: { @@ -272,116 +273,125 @@ function setupRealUsageScenarios() { port: 5432, database: 'trading_bot', username: 'trading_user', - password: 'trading_pass_dev' + password: 'trading_pass_dev', }, questdb: { host: 'localhost', port: 9009, - database: 'questdb' + database: 'questdb', }, mongodb: { host: 'localhost', port: 27017, - database: 'stock' + database: 'stock', }, 
dragonfly: { host: 'localhost', - port: 6379 - } + port: 6379, + }, }, logging: { level: 'info', - format: 'json' + format: 'json', }, providers: { yahoo: { name: 'Yahoo Finance', enabled: true, priority: 1, - baseUrl: 'https://query1.finance.yahoo.com' + baseUrl: 'https://query1.finance.yahoo.com', }, eod: { name: 'EOD Historical Data', enabled: false, priority: 2, apiKey: 'demo-api-key', - baseUrl: 'https://eodhistoricaldata.com/api' - } - } + baseUrl: 'https://eodhistoricaldata.com/api', + }, + }, }, production: { logging: { - level: 'warn' + level: 'warn', }, database: { postgres: { host: 'prod-postgres.internal', - port: 5432 - } - } + port: 5432, + }, + }, }, test: { logging: { - level: 'debug' + level: 'debug', }, database: { postgres: { - database: 'trading_bot_test' - } - } - } + database: 'trading_bot_test', + }, + }, + }, }; Object.entries(rootConfigs).forEach(([env, config]) => { - writeFileSync( - join(scenarios.root, 'config', `${env}.json`), - JSON.stringify(config, null, 2) - ); + writeFileSync(join(scenarios.root, 'config', `${env}.json`), JSON.stringify(config, null, 2)); }); // Data service config writeFileSync( join(scenarios.dataService, 'config', 'development.json'), - JSON.stringify({ - app: { - name: 'data-service' + JSON.stringify( + { + app: { + name: 'data-ingestion', + }, + service: { + name: 'data-ingestion', + port: 3001, + workers: 2, + }, }, - service: { - name: 'data-service', - port: 3001, - workers: 2 - } - }, null, 2) + null, + 2 + ) ); // Web API config writeFileSync( join(scenarios.webApi, 'config', 'development.json'), - JSON.stringify({ - app: { - name: 'web-api' + JSON.stringify( + { + app: { + name: 'web-api', + }, + service: { + name: 'web-api', + port: 4000, + cors: { + origin: ['http://localhost:3000', 'http://localhost:4200'], + }, + }, }, - service: { - name: 'web-api', - port: 4000, - cors: { - origin: ['http://localhost:3000', 'http://localhost:4200'] - } - } - }, null, 2) + null, + 2 + ) ); // Cache lib config 
writeFileSync( join(scenarios.cacheLib, 'config', 'development.json'), - JSON.stringify({ - app: { - name: 'cache-lib' + JSON.stringify( + { + app: { + name: 'cache-lib', + }, + service: { + name: 'cache-lib', + }, }, - service: { - name: 'cache-lib' - } - }, null, 2) + null, + 2 + ) ); // Root .env file @@ -402,4 +412,4 @@ WEBSHARE_API_KEY=demo-webshare-key DATA_SERVICE_RATE_LIMIT=1000 ` ); -} \ No newline at end of file +} diff --git a/libs/core/config/tsconfig.json b/libs/core/config/tsconfig.json new file mode 100644 index 0000000..9405533 --- /dev/null +++ b/libs/core/config/tsconfig.json @@ -0,0 +1,10 @@ +{ + "extends": "../../../tsconfig.json", + "compilerOptions": { + "outDir": "./dist", + "rootDir": "./src", + "composite": true + }, + "include": ["src/**/*"], + "references": [] +} diff --git a/libs/config/turbo.json b/libs/core/config/turbo.json similarity index 100% rename from libs/config/turbo.json rename to libs/core/config/turbo.json diff --git a/libs/core/di/package.json b/libs/core/di/package.json new file mode 100644 index 0000000..0b65964 --- /dev/null +++ b/libs/core/di/package.json @@ -0,0 +1,18 @@ +{ + "name": "@stock-bot/di", + "version": "1.0.0", + "main": "./src/index.ts", + "types": "./src/index.ts", + "scripts": { + "build": "tsc", + "clean": "rm -rf dist" + }, + "dependencies": { + "@stock-bot/config": "workspace:*", + "@stock-bot/logger": "workspace:*", + "@stock-bot/types": "workspace:*" + }, + "devDependencies": { + "@types/pg": "^8.10.7" + } +} diff --git a/libs/core/di/src/awilix-container.ts b/libs/core/di/src/awilix-container.ts new file mode 100644 index 0000000..21ba2a5 --- /dev/null +++ b/libs/core/di/src/awilix-container.ts @@ -0,0 +1,64 @@ +/** + * Awilix DI Container Setup + * Creates a decoupled, reusable dependency injection container + */ + +import type { Browser } from '@stock-bot/browser'; +import type { CacheProvider } from '@stock-bot/cache'; +import type { BaseAppConfig as StockBotAppConfig } from '@stock-bot/config'; 
+import type { IServiceContainer } from '@stock-bot/types'; +import type { Logger } from '@stock-bot/logger'; +import type { MongoDBClient } from '@stock-bot/mongodb'; +import type { PostgreSQLClient } from '@stock-bot/postgres'; +import type { ProxyManager } from '@stock-bot/proxy'; +import type { QuestDBClient } from '@stock-bot/questdb'; +import type { QueueManager } from '@stock-bot/queue'; +import { type AwilixContainer } from 'awilix'; +import type { AppConfig } from './config/schemas'; + +// Re-export for backward compatibility +export type { AppConfig }; + +/** + * Service type definitions for type-safe resolution + */ +export interface ServiceDefinitions { + // Configuration + config: AppConfig; + logger: Logger; + + // Core services + cache: CacheProvider | null; + proxyManager: ProxyManager | null; + browser: Browser; + queueManager: QueueManager | null; + + // Database clients + mongoClient: MongoDBClient | null; + postgresClient: PostgreSQLClient | null; + questdbClient: QuestDBClient | null; + + // Aggregate service container + serviceContainer: IServiceContainer; +} + + + +// Export typed container +export type ServiceContainer = AwilixContainer<ServiceDefinitions>; +export type ServiceCradle = ServiceDefinitions; + +/** + * Service-specific options for container creation + */ +export interface ServiceContainerOptions { + enableQuestDB?: boolean; + enableMongoDB?: boolean; + enablePostgres?: boolean; + enableCache?: boolean; + enableQueue?: boolean; + enableBrowser?: boolean; + enableProxy?: boolean; +} + + diff --git a/libs/core/di/src/config/schemas/index.ts b/libs/core/di/src/config/schemas/index.ts new file mode 100644 index 0000000..bb6f6e6 --- /dev/null +++ b/libs/core/di/src/config/schemas/index.ts @@ -0,0 +1,30 @@ +import { z } from 'zod'; +import { redisConfigSchema } from './redis.schema'; +import { mongodbConfigSchema } from './mongodb.schema'; +import { postgresConfigSchema } from './postgres.schema'; +import { questdbConfigSchema } from './questdb.schema';
+import { proxyConfigSchema, browserConfigSchema, queueConfigSchema } from './service.schema'; + +export const appConfigSchema = z.object({ + redis: redisConfigSchema, + mongodb: mongodbConfigSchema, + postgres: postgresConfigSchema, + questdb: questdbConfigSchema.optional(), + proxy: proxyConfigSchema.optional(), + browser: browserConfigSchema.optional(), + queue: queueConfigSchema.optional(), + service: z.object({ + name: z.string(), + serviceName: z.string().optional(), // Standard kebab-case service name + port: z.number().optional(), + }).optional(), +}); + +export type AppConfig = z.infer<typeof appConfigSchema>; + +// Re-export individual schemas and types +export * from './redis.schema'; +export * from './mongodb.schema'; +export * from './postgres.schema'; +export * from './questdb.schema'; +export * from './service.schema'; \ No newline at end of file diff --git a/libs/core/di/src/config/schemas/mongodb.schema.ts b/libs/core/di/src/config/schemas/mongodb.schema.ts new file mode 100644 index 0000000..b05cee5 --- /dev/null +++ b/libs/core/di/src/config/schemas/mongodb.schema.ts @@ -0,0 +1,9 @@ +import { z } from 'zod'; + +export const mongodbConfigSchema = z.object({ + enabled: z.boolean().optional().default(true), + uri: z.string(), + database: z.string(), +}); + +export type MongoDBConfig = z.infer<typeof mongodbConfigSchema>; \ No newline at end of file diff --git a/libs/core/di/src/config/schemas/postgres.schema.ts b/libs/core/di/src/config/schemas/postgres.schema.ts new file mode 100644 index 0000000..ecb3e93 --- /dev/null +++ b/libs/core/di/src/config/schemas/postgres.schema.ts @@ -0,0 +1,12 @@ +import { z } from 'zod'; + +export const postgresConfigSchema = z.object({ + enabled: z.boolean().optional().default(true), + host: z.string().default('localhost'), + port: z.number().default(5432), + database: z.string(), + user: z.string(), + password: z.string(), +}); + +export type PostgresConfig = z.infer<typeof postgresConfigSchema>; \ No newline at end of file diff --git a/libs/core/di/src/config/schemas/questdb.schema.ts
b/libs/core/di/src/config/schemas/questdb.schema.ts new file mode 100644 index 0000000..cff9160 --- /dev/null +++ b/libs/core/di/src/config/schemas/questdb.schema.ts @@ -0,0 +1,12 @@ +import { z } from 'zod'; + +export const questdbConfigSchema = z.object({ + enabled: z.boolean().optional().default(true), + host: z.string().default('localhost'), + httpPort: z.number().optional().default(9000), + pgPort: z.number().optional().default(8812), + influxPort: z.number().optional().default(9009), + database: z.string().optional().default('questdb'), +}); + +export type QuestDBConfig = z.infer<typeof questdbConfigSchema>; \ No newline at end of file diff --git a/libs/core/di/src/config/schemas/redis.schema.ts b/libs/core/di/src/config/schemas/redis.schema.ts new file mode 100644 index 0000000..79b057f --- /dev/null +++ b/libs/core/di/src/config/schemas/redis.schema.ts @@ -0,0 +1,12 @@ +import { z } from 'zod'; + +export const redisConfigSchema = z.object({ + enabled: z.boolean().optional().default(true), + host: z.string().default('localhost'), + port: z.number().default(6379), + password: z.string().optional(), + username: z.string().optional(), + db: z.number().optional().default(0), +}); + +export type RedisConfig = z.infer<typeof redisConfigSchema>; \ No newline at end of file diff --git a/libs/core/di/src/config/schemas/service.schema.ts b/libs/core/di/src/config/schemas/service.schema.ts new file mode 100644 index 0000000..47420af --- /dev/null +++ b/libs/core/di/src/config/schemas/service.schema.ts @@ -0,0 +1,38 @@ +import { z } from 'zod'; + +export const proxyConfigSchema = z.object({ + enabled: z.boolean().default(false), + cachePrefix: z.string().optional().default('proxy:'), + ttl: z.number().optional().default(3600), + webshare: z.object({ + apiKey: z.string(), + apiUrl: z.string().default('https://proxy.webshare.io/api/v2/'), + }).optional(), +}); + +export const browserConfigSchema = z.object({ + headless: z.boolean().optional().default(true), + timeout: z.number().optional().default(30000), +}); + +export
const queueConfigSchema = z.object({ + enabled: z.boolean().optional().default(true), + workers: z.number().optional().default(1), + concurrency: z.number().optional().default(1), + enableScheduledJobs: z.boolean().optional().default(true), + delayWorkerStart: z.boolean().optional().default(false), + defaultJobOptions: z.object({ + attempts: z.number().default(3), + backoff: z.object({ + type: z.enum(['exponential', 'fixed']).default('exponential'), + delay: z.number().default(1000), + }).default({}), + removeOnComplete: z.number().default(100), + removeOnFail: z.number().default(50), + timeout: z.number().optional(), + }).optional().default({}), +}); + +export type ProxyConfig = z.infer<typeof proxyConfigSchema>; +export type BrowserConfig = z.infer<typeof browserConfigSchema>; +export type QueueConfig = z.infer<typeof queueConfigSchema>; \ No newline at end of file diff --git a/libs/core/di/src/container/README.md b/libs/core/di/src/container/README.md new file mode 100644 index 0000000..f22d082 --- /dev/null +++ b/libs/core/di/src/container/README.md @@ -0,0 +1,106 @@ +# DI Container - Modular Structure + +## Overview + +The DI container has been refactored into a modular structure for better organization and maintainability.
+ +## Directory Structure + +``` +├── container/ # Core container logic +│ ├── builder.ts # Fluent API for building containers +│ ├── factory.ts # Factory functions (legacy compatibility) +│ └── types.ts # Type definitions +├── registrations/ # Service registration modules +│ ├── core.ts # Core services (config, logger) +│ ├── cache.ts # Cache services +│ ├── database.ts # Database clients +│ └── service.ts # Application services +├── config/ # Configuration management +│ └── schemas/ # Zod schemas for validation +├── factories/ # Service factories +│ └── cache.factory.ts # Cache factory utilities +└── utils/ # Utilities + └── lifecycle.ts # Service lifecycle management +``` + +## Usage Examples + +### Using the Builder Pattern (Recommended) + +```typescript +import { ServiceContainerBuilder } from '@stock-bot/di'; + +// Create container with fluent API +const container = await new ServiceContainerBuilder() + .withConfig({ + redis: { host: 'localhost', port: 6379 }, + mongodb: { uri: 'mongodb://localhost', database: 'mydb' }, + postgres: { host: 'localhost', database: 'mydb', user: 'user', password: 'pass' } + }) + .enableService('enableQueue', false) // Disable queue service + .enableService('enableBrowser', false) // Disable browser service + .build(); + +// Services are automatically initialized +const cache = container.cradle.cache; +const mongoClient = container.cradle.mongoClient; +``` + +### Creating Namespaced Caches + +```typescript +import { CacheFactory } from '@stock-bot/di'; + +// Create a cache for a specific service +const serviceCache = CacheFactory.createCacheForService(container, 'myservice'); + +// Create a cache for a handler +const handlerCache = CacheFactory.createCacheForHandler(container, 'myhandler'); + +// Create a cache with custom prefix +const customCache = CacheFactory.createCacheWithPrefix(container, 'custom'); +``` + +### Manual Service Lifecycle + +```typescript +import { ServiceContainerBuilder, ServiceLifecycleManager } from 
'@stock-bot/di'; + +// Create container without auto-initialization +const container = await new ServiceContainerBuilder() + .withConfig(config) + .skipInitialization() + .build(); + +// Manually initialize services +const lifecycle = new ServiceLifecycleManager(); +await lifecycle.initializeServices(container); + +// ... use services ... + +// Manually shutdown services +await lifecycle.shutdownServices(container); +``` + +### Legacy API (Backward Compatible) + +```typescript +import { createServiceContainerFromConfig } from '@stock-bot/di'; + +// Old way still works +const container = createServiceContainerFromConfig(appConfig, { + enableQueue: true, + enableCache: true +}); + +// Manual initialization required with legacy API +await initializeServices(container); +``` + +## Migration Guide + +1. Replace direct container creation with `ServiceContainerBuilder` +2. Use `CacheFactory` instead of manually creating `NamespacedCache` +3. Let the builder handle service initialization automatically +4. 
Use typed configuration schemas for better validation \ No newline at end of file diff --git a/libs/core/di/src/container/builder.ts b/libs/core/di/src/container/builder.ts new file mode 100644 index 0000000..abb20e3 --- /dev/null +++ b/libs/core/di/src/container/builder.ts @@ -0,0 +1,180 @@ +import { createContainer, InjectionMode, asFunction, type AwilixContainer } from 'awilix'; +import type { BaseAppConfig as StockBotAppConfig, UnifiedAppConfig } from '@stock-bot/config'; +import { appConfigSchema, type AppConfig } from '../config/schemas'; +import { toUnifiedConfig } from '@stock-bot/config'; +import { + registerCoreServices, + registerCacheServices, + registerDatabaseServices, + registerApplicationServices +} from '../registrations'; +import { ServiceLifecycleManager } from '../utils/lifecycle'; +import type { ServiceDefinitions, ContainerBuildOptions } from './types'; + +export class ServiceContainerBuilder { + private config: Partial<AppConfig> = {}; + private unifiedConfig: UnifiedAppConfig | null = null; + private options: ContainerBuildOptions = { + enableCache: true, + enableQueue: true, + enableMongoDB: true, + enablePostgres: true, + enableQuestDB: true, + enableBrowser: true, + enableProxy: true, + skipInitialization: false, + initializationTimeout: 30000, + }; + + withConfig(config: AppConfig | StockBotAppConfig | UnifiedAppConfig): this { + // Convert to unified config format + this.unifiedConfig = toUnifiedConfig(config); + this.config = this.transformStockBotConfig(this.unifiedConfig); + return this; + } + + withOptions(options: Partial<ContainerBuildOptions>): this { + Object.assign(this.options, options); + return this; + } + + enableService(service: keyof Omit<ContainerBuildOptions, 'skipInitialization' | 'initializationTimeout'>, enabled = true): this { + this.options[service] = enabled; + return this; + } + + skipInitialization(skip = true): this { + this.options.skipInitialization = skip; + return this; + } + + async build(): Promise<AwilixContainer<ServiceDefinitions>> { + // Validate and prepare config + const validatedConfig = this.prepareConfig(); + + // Create container +
const container = createContainer<ServiceDefinitions>({ + injectionMode: InjectionMode.PROXY, + strict: true, + }); + + // Register services + this.registerServices(container, validatedConfig); + + // Initialize services if not skipped + if (!this.options.skipInitialization) { + const lifecycleManager = new ServiceLifecycleManager(); + await lifecycleManager.initializeServices(container, this.options.initializationTimeout); + } + + return container; + } + + private prepareConfig(): AppConfig { + const finalConfig = this.applyServiceOptions(this.config); + return appConfigSchema.parse(finalConfig); + } + + private applyServiceOptions(config: Partial<AppConfig>): AppConfig { + // Ensure questdb config has the right field names for DI + const questdbConfig = config.questdb ? { + ...config.questdb, + influxPort: (config.questdb as any).influxPort || (config.questdb as any).ilpPort || 9009, + } : { + enabled: true, + host: 'localhost', + httpPort: 9000, + pgPort: 8812, + influxPort: 9009, + database: 'questdb', + }; + + return { + redis: config.redis || { + enabled: this.options.enableCache ?? true, + host: 'localhost', + port: 6379, + db: 0, + }, + mongodb: config.mongodb || { + enabled: this.options.enableMongoDB ?? true, + uri: '', + database: '', + }, + postgres: config.postgres || { + enabled: this.options.enablePostgres ?? true, + host: 'localhost', + port: 5432, + database: 'postgres', + user: 'postgres', + password: 'postgres', + }, + questdb: this.options.enableQuestDB ? questdbConfig : undefined, + proxy: this.options.enableProxy ? (config.proxy || { enabled: false, cachePrefix: 'proxy:', ttl: 3600 }) : undefined, + browser: this.options.enableBrowser ? (config.browser || { headless: true, timeout: 30000 }) : undefined, + queue: this.options.enableQueue ?
(config.queue || { + enabled: true, + workers: 1, + concurrency: 1, + enableScheduledJobs: true, + delayWorkerStart: false, + defaultJobOptions: { + attempts: 3, + backoff: { type: 'exponential' as const, delay: 1000 }, + removeOnComplete: 100, + removeOnFail: 50, + } + }) : undefined, + service: config.service, + }; + } + + private registerServices(container: AwilixContainer<ServiceDefinitions>, config: AppConfig): void { + registerCoreServices(container, config); + registerCacheServices(container, config); + registerDatabaseServices(container, config); + registerApplicationServices(container, config); + + // Register service container aggregate + container.register({ + serviceContainer: asFunction(({ + config: _config, logger, cache, globalCache, proxyManager, browser, + queueManager, mongoClient, postgresClient, questdbClient + }) => ({ + logger, + cache, + globalCache, + proxy: proxyManager, // Map proxyManager to proxy + browser, + queue: queueManager, // Map queueManager to queue + mongodb: mongoClient, // Map mongoClient to mongodb + postgres: postgresClient, // Map postgresClient to postgres + questdb: questdbClient, // Map questdbClient to questdb + })).singleton(), + }); + } + + private transformStockBotConfig(config: UnifiedAppConfig): Partial<AppConfig> { + // Unified config already has flat structure, just extract what we need + // Handle questdb field name mapping + const questdb = config.questdb ?
{ + enabled: config.questdb.enabled ?? true, + host: config.questdb.host || 'localhost', + httpPort: config.questdb.httpPort || 9000, + pgPort: config.questdb.pgPort || 8812, + influxPort: (config.questdb as any).influxPort || (config.questdb as any).ilpPort || 9009, + database: config.questdb.database || 'questdb', + } : undefined; + + return { + redis: config.redis, + mongodb: config.mongodb, + postgres: config.postgres, + questdb, + queue: config.queue, + browser: config.browser, + proxy: config.proxy, + service: config.service, + }; + } +} \ No newline at end of file diff --git a/libs/core/di/src/container/factory.ts b/libs/core/di/src/container/factory.ts new file mode 100644 index 0000000..d045742 --- /dev/null +++ b/libs/core/di/src/container/factory.ts @@ -0,0 +1,20 @@ +import type { AwilixContainer } from 'awilix'; +import type { BaseAppConfig as StockBotAppConfig } from '@stock-bot/config'; +import { ServiceContainerBuilder } from './builder'; +import type { ServiceDefinitions, ServiceContainerOptions } from './types'; + + +/** + * Modern async factory for creating service containers + */ +export async function createServiceContainerAsync( + config: StockBotAppConfig, + options: ServiceContainerOptions = {} +): Promise<AwilixContainer<ServiceDefinitions>> { + const builder = new ServiceContainerBuilder(); + return builder + .withConfig(config) + .withOptions(options) + .build(); +} + diff --git a/libs/core/di/src/container/types.ts b/libs/core/di/src/container/types.ts new file mode 100644 index 0000000..70fe371 --- /dev/null +++ b/libs/core/di/src/container/types.ts @@ -0,0 +1,48 @@ +import type { Browser } from '@stock-bot/browser'; +import type { CacheProvider } from '@stock-bot/cache'; +import type { IServiceContainer } from '@stock-bot/types'; +import type { Logger } from '@stock-bot/logger'; +import type { MongoDBClient } from '@stock-bot/mongodb'; +import type { PostgreSQLClient } from '@stock-bot/postgres'; +import type { ProxyManager } from '@stock-bot/proxy'; +import type {
QuestDBClient } from '@stock-bot/questdb'; +import type { SmartQueueManager } from '@stock-bot/queue'; +import type { AppConfig } from '../config/schemas'; + +export interface ServiceDefinitions { + // Configuration + config: AppConfig; + logger: Logger; + + // Core services + cache: CacheProvider | null; + globalCache: CacheProvider | null; + proxyManager: ProxyManager | null; + browser: Browser; + queueManager: SmartQueueManager | null; + + // Database clients + mongoClient: MongoDBClient | null; + postgresClient: PostgreSQLClient | null; + questdbClient: QuestDBClient | null; + + // Aggregate service container + serviceContainer: IServiceContainer; +} + +export type ServiceCradle = ServiceDefinitions; + +export interface ServiceContainerOptions { + enableQuestDB?: boolean; + enableMongoDB?: boolean; + enablePostgres?: boolean; + enableCache?: boolean; + enableQueue?: boolean; + enableBrowser?: boolean; + enableProxy?: boolean; +} + +export interface ContainerBuildOptions extends ServiceContainerOptions { + skipInitialization?: boolean; + initializationTimeout?: number; +} \ No newline at end of file diff --git a/libs/core/di/src/factories/cache.factory.ts b/libs/core/di/src/factories/cache.factory.ts new file mode 100644 index 0000000..819bfb3 --- /dev/null +++ b/libs/core/di/src/factories/cache.factory.ts @@ -0,0 +1,44 @@ +import type { AwilixContainer } from 'awilix'; +import { NamespacedCache, type CacheProvider } from '@stock-bot/cache'; +import type { ServiceDefinitions } from '../container/types'; + +export class CacheFactory { + static createNamespacedCache( + baseCache: CacheProvider, + namespace: string + ): NamespacedCache { + return new NamespacedCache(baseCache, namespace); + } + + static createCacheForService( + container: AwilixContainer<ServiceDefinitions>, + serviceName: string + ): CacheProvider | null { + const baseCache = container.cradle.cache; + if (!baseCache) {return null;} + + return this.createNamespacedCache(baseCache, serviceName); + } + + static
createCacheForHandler( + container: AwilixContainer<ServiceDefinitions>, + handlerName: string + ): CacheProvider | null { + const baseCache = container.cradle.cache; + if (!baseCache) {return null;} + + return this.createNamespacedCache(baseCache, `handler:${handlerName}`); + } + + static createCacheWithPrefix( + container: AwilixContainer<ServiceDefinitions>, + prefix: string + ): CacheProvider | null { + const baseCache = container.cradle.cache; + if (!baseCache) {return null;} + + // Remove 'cache:' prefix if already included + const cleanPrefix = prefix.replace(/^cache:/, ''); + return this.createNamespacedCache(baseCache, cleanPrefix); + } +} \ No newline at end of file diff --git a/libs/core/di/src/factories/index.ts b/libs/core/di/src/factories/index.ts new file mode 100644 index 0000000..83df8b5 --- /dev/null +++ b/libs/core/di/src/factories/index.ts @@ -0,0 +1 @@ +export { CacheFactory } from './cache.factory'; \ No newline at end of file diff --git a/libs/core/di/src/index.ts b/libs/core/di/src/index.ts new file mode 100644 index 0000000..a00bf2d --- /dev/null +++ b/libs/core/di/src/index.ts @@ -0,0 +1,38 @@ +// Export all dependency injection components +export * from './operation-context'; +export * from './pool-size-calculator'; +export * from './types'; + +// Re-export IServiceContainer from types for convenience +export type { IServiceContainer } from '@stock-bot/types'; + +// Type exports from awilix-container +export { + type AppConfig, + type ServiceCradle, + type ServiceContainer, + type ServiceContainerOptions, +} from './awilix-container'; + +// New modular structure exports +export * from './container/types'; +export { ServiceContainerBuilder } from './container/builder'; +export { + createServiceContainerAsync +} from './container/factory'; + +// Configuration exports
 +export * from './config/schemas'; + +// Factory exports +export * from './factories'; + +// Utility exports +export { ServiceLifecycleManager } from './utils/lifecycle'; + +// Service application framework +export {
+ ServiceApplication, + type ServiceApplicationConfig, + type ServiceLifecycleHooks, +} from './service-application'; diff --git a/libs/core/di/src/operation-context.ts b/libs/core/di/src/operation-context.ts new file mode 100644 index 0000000..6aba882 --- /dev/null +++ b/libs/core/di/src/operation-context.ts @@ -0,0 +1,144 @@ +/** + * OperationContext - Unified context for handler operations + */ + +import { getLogger, type Logger } from '@stock-bot/logger'; + +interface ServiceResolver { + resolve<T>(serviceName: string): T; + resolveAsync<T>(serviceName: string): Promise<T>; +} + +export interface OperationContextOptions { + handlerName: string; + operationName: string; + parentLogger?: Logger; + container?: ServiceResolver; + metadata?: Record<string, any>; + traceId?: string; +} + +export class OperationContext { + public readonly logger: Logger; + public readonly traceId: string; + public readonly metadata: Record<string, any>; + private readonly container?: ServiceResolver; + private readonly startTime: Date; + + constructor(options: OperationContextOptions) { + this.container = options.container; + this.metadata = options.metadata || {}; + this.traceId = options.traceId || this.generateTraceId(); + this.startTime = new Date(); + + this.logger = + options.parentLogger || + getLogger(`${options.handlerName}:${options.operationName}`, { + traceId: this.traceId, + metadata: this.metadata, + }); + } + + /** + * Creates a new OperationContext with automatic resource management + */ + static create( + handlerName: string, + operationName: string, + options: { + container?: ServiceResolver; + parentLogger?: Logger; + metadata?: Record<string, any>; + traceId?: string; + } = {} + ): OperationContext { + return new OperationContext({ + handlerName, + operationName, + ...options, + }); + } + + /** + * Resolve a service from the container + */ + resolve<T>(serviceName: string): T { + if (!this.container) { + throw new Error('No service container available'); + } + return this.container.resolve<T>(serviceName); + } + + /**
+ * Resolve a service asynchronously from the container + */ + async resolveAsync<T>(serviceName: string): Promise<T> { + if (!this.container) { + throw new Error('No service container available'); + } + return this.container.resolveAsync<T>(serviceName); + } + + /** + * Add metadata to the context + */ + addMetadata(key: string, value: any): void { + this.metadata[key] = value; + } + + /** + * Get execution time in milliseconds + */ + getExecutionTime(): number { + return Date.now() - this.startTime.getTime(); + } + + /** + * Log operation completion with metrics + */ + logCompletion(success: boolean, error?: Error): void { + const executionTime = this.getExecutionTime(); + + if (success) { + this.logger.info('Operation completed successfully', { + executionTime, + metadata: this.metadata, + }); + } else { + this.logger.error('Operation failed', { + executionTime, + error: error?.message, + stack: error?.stack, + metadata: this.metadata, + }); + } + } + + /** + * Cleanup method + */ + async dispose(): Promise<void> { + this.logCompletion(true); + } + + /** + * Create child context + */ + createChild(operationName: string, metadata?: Record<string, any>): OperationContext { + return new OperationContext({ + handlerName: 'child', + operationName, + parentLogger: this.logger, + container: this.container, + traceId: this.traceId, + metadata: { ...this.metadata, ...metadata }, + }); + } + + /** + * Generate a unique trace ID + */ + private generateTraceId(): string { + return `${Date.now()}-${Math.random().toString(36).substr(2, 9)}`; + } +} diff --git a/libs/core/di/src/pool-size-calculator.ts b/libs/core/di/src/pool-size-calculator.ts new file mode 100644 index 0000000..53654e2 --- /dev/null +++ b/libs/core/di/src/pool-size-calculator.ts @@ -0,0 +1,82 @@ +import type { ConnectionPoolConfig } from './types'; + +export interface PoolSizeRecommendation { + min: number; + max: number; + idle: number; +} + +export class PoolSizeCalculator { + private static readonly DEFAULT_SIZES: Record<string, PoolSizeRecommendation> = { + //
Service-level defaults + 'data-ingestion': { min: 5, max: 50, idle: 10 }, + 'data-pipeline': { min: 3, max: 30, idle: 5 }, + 'processing-service': { min: 2, max: 20, idle: 3 }, + 'web-api': { min: 2, max: 10, idle: 2 }, + 'portfolio-service': { min: 2, max: 15, idle: 3 }, + 'strategy-service': { min: 3, max: 25, idle: 5 }, + 'execution-service': { min: 2, max: 10, idle: 2 }, + + // Handler-level defaults + 'batch-import': { min: 10, max: 100, idle: 20 }, + 'real-time': { min: 2, max: 10, idle: 3 }, + analytics: { min: 5, max: 30, idle: 10 }, + reporting: { min: 3, max: 20, idle: 5 }, + }; + + static calculate( + serviceName: string, + handlerName?: string, + customConfig?: Partial + ): PoolSizeRecommendation { + // Check for custom configuration first + if (customConfig?.minConnections && customConfig?.maxConnections) { + return { + min: customConfig.minConnections, + max: customConfig.maxConnections, + idle: Math.floor((customConfig.minConnections + customConfig.maxConnections) / 4), + }; + } + + // Try handler-specific sizes first, then service-level + const key = handlerName || serviceName; + const recommendation = this.DEFAULT_SIZES[key] || this.DEFAULT_SIZES[serviceName]; + + if (recommendation) { + return { ...recommendation }; + } + + // Fall back to generic defaults + return { + min: 2, + max: 10, + idle: 3, + }; + } + + static getOptimalPoolSize( + expectedConcurrency: number, + averageQueryTimeMs: number, + targetLatencyMs: number + ): number { + // Little's Law: L = λ * W + // L = number of connections needed + // λ = arrival rate (requests per second) + // W = average time in system (seconds) + + const requestsPerSecond = expectedConcurrency; + const averageTimeInSystem = averageQueryTimeMs / 1000; + + const minConnections = Math.ceil(requestsPerSecond * averageTimeInSystem); + + // Add buffer for burst traffic (20% overhead) + const recommendedSize = Math.ceil(minConnections * 1.2); + + // Ensure we meet target latency + const latencyBasedSize = 
Math.ceil( + expectedConcurrency * (averageQueryTimeMs / targetLatencyMs) + ); + + return Math.max(recommendedSize, latencyBasedSize, 2); // Minimum 2 connections + } +} diff --git a/libs/core/di/src/registrations/cache.registration.ts b/libs/core/di/src/registrations/cache.registration.ts new file mode 100644 index 0000000..aa9c44c --- /dev/null +++ b/libs/core/di/src/registrations/cache.registration.ts @@ -0,0 +1,43 @@ +import { asFunction, asValue, type AwilixContainer } from 'awilix'; +import type { AppConfig } from '../config/schemas'; +import type { ServiceDefinitions } from '../container/types'; + +export function registerCacheServices( + container: AwilixContainer<ServiceDefinitions>, + config: AppConfig +): void { + if (config.redis.enabled) { + container.register({ + cache: asFunction(({ logger }) => { + const { createServiceCache } = require('@stock-bot/queue'); + // Get standardized service name from config + const serviceName = config.service?.serviceName || config.service?.name || 'unknown'; + + // Create service-specific cache that uses the service's Redis DB + return createServiceCache(serviceName, { + host: config.redis.host, + port: config.redis.port, + password: config.redis.password, + db: config.redis.db, // This will be overridden by ServiceCache + }, { logger }); + }).singleton(), + + // Also provide global cache for shared data + globalCache: asFunction(({ logger }) => { + const { createServiceCache } = require('@stock-bot/queue'); + const serviceName = config.service?.serviceName || config.service?.name || 'unknown'; + + return createServiceCache(serviceName, { + host: config.redis.host, + port: config.redis.port, + password: config.redis.password, + }, { global: true, logger }); + }).singleton(), + }); + } else { + container.register({ + cache: asValue(null), + globalCache: asValue(null), + }); + } +} \ No newline at end of file diff --git a/libs/core/di/src/registrations/core.registration.ts b/libs/core/di/src/registrations/core.registration.ts new file mode
100644 index 0000000..26600c8 --- /dev/null +++ b/libs/core/di/src/registrations/core.registration.ts @@ -0,0 +1,14 @@ +import { asValue, type AwilixContainer } from 'awilix'; +import { getLogger } from '@stock-bot/logger'; +import type { AppConfig } from '../config/schemas'; +import type { ServiceDefinitions } from '../container/types'; + +export function registerCoreServices( + container: AwilixContainer<ServiceDefinitions>, + config: AppConfig +): void { + container.register({ + config: asValue(config), + logger: asValue(getLogger('di-container')), + }); +} \ No newline at end of file diff --git a/libs/core/di/src/registrations/database.registration.ts b/libs/core/di/src/registrations/database.registration.ts new file mode 100644 index 0000000..e479213 --- /dev/null +++ b/libs/core/di/src/registrations/database.registration.ts @@ -0,0 +1,82 @@ +import { MongoDBClient } from '@stock-bot/mongodb'; +import { PostgreSQLClient } from '@stock-bot/postgres'; +import { QuestDBClient } from '@stock-bot/questdb'; +import { asFunction, asValue, type AwilixContainer } from 'awilix'; +import type { AppConfig } from '../config/schemas'; +import type { ServiceDefinitions } from '../container/types'; + +export function registerDatabaseServices( + container: AwilixContainer<ServiceDefinitions>, + config: AppConfig +): void { + // MongoDB + if (config.mongodb.enabled) { + container.register({ + mongoClient: asFunction(({ logger }) => { + // Parse MongoDB URI to extract components + const uriMatch = config.mongodb.uri.match(/mongodb:\/\/(?:([^:]+):([^@]+)@)?([^:/]+):(\d+)\/([^?]+)(?:\?authSource=(.+))?/); + const mongoConfig = { + host: uriMatch?.[3] || 'localhost', + port: parseInt(uriMatch?.[4] || '27017'), + database: config.mongodb.database, + username: uriMatch?.[1], + password: uriMatch?.[2] ?
String(uriMatch?.[2]) : undefined, + authSource: uriMatch?.[6] || 'admin', + uri: config.mongodb.uri, + }; + return new MongoDBClient(mongoConfig, logger); + }).singleton(), + }); + } else { + container.register({ + mongoClient: asValue(null), + }); + } + + // PostgreSQL + if (config.postgres.enabled) { + container.register({ + postgresClient: asFunction(({ logger }) => { + const pgConfig = { + host: config.postgres.host, + port: config.postgres.port, + database: config.postgres.database, + username: config.postgres.user, + password: String(config.postgres.password), // Ensure password is a string + }; + + logger.debug('PostgreSQL config:', { + ...pgConfig, + password: pgConfig.password ? '***' : 'NO_PASSWORD', + }); + return new PostgreSQLClient(pgConfig, logger); + }).singleton(), + }); + } else { + container.register({ + postgresClient: asValue(null), + }); + } + + // QuestDB + if (config.questdb?.enabled) { + container.register({ + questdbClient: asFunction(({ logger }) => { + return new QuestDBClient( + { + host: config.questdb!.host, + httpPort: config.questdb!.httpPort, + pgPort: config.questdb!.pgPort, + influxPort: config.questdb!.influxPort, + database: config.questdb!.database, + }, + logger + ); + }).singleton(), + }); + } else { + container.register({ + questdbClient: asValue(null), + }); + } +} \ No newline at end of file diff --git a/libs/core/di/src/registrations/index.ts b/libs/core/di/src/registrations/index.ts new file mode 100644 index 0000000..db37593 --- /dev/null +++ b/libs/core/di/src/registrations/index.ts @@ -0,0 +1,4 @@ +export { registerCoreServices } from './core.registration'; +export { registerCacheServices } from './cache.registration'; +export { registerDatabaseServices } from './database.registration'; +export { registerApplicationServices } from './service.registration'; \ No newline at end of file diff --git a/libs/core/di/src/registrations/service.registration.ts b/libs/core/di/src/registrations/service.registration.ts new file 
mode 100644 index 0000000..6fe294f --- /dev/null +++ b/libs/core/di/src/registrations/service.registration.ts @@ -0,0 +1,91 @@ +import { asClass, asFunction, asValue, type AwilixContainer } from 'awilix'; +import { Browser } from '@stock-bot/browser'; +import { ProxyManager } from '@stock-bot/proxy'; +import { NamespacedCache } from '@stock-bot/cache'; +import type { AppConfig } from '../config/schemas'; +import type { ServiceDefinitions } from '../container/types'; + +export function registerApplicationServices( + container: AwilixContainer<ServiceDefinitions>, + config: AppConfig +): void { + // Browser + if (config.browser) { + container.register({ + browser: asClass(Browser) + .singleton() + .inject(() => ({ + options: { + headless: config.browser!.headless, + timeout: config.browser!.timeout, + }, + })), + }); + } else { + container.register({ + browser: asValue(null as any), // Required field + }); + } + + // Proxy Manager + if (config.proxy && config.redis.enabled) { + container.register({ + proxyManager: asFunction(({ logger }) => { + // Create a separate cache instance for proxy with global prefix + const { createCache } = require('@stock-bot/cache'); + const proxyCache = createCache({ + redisConfig: { + host: config.redis.host, + port: config.redis.port, + password: config.redis.password, + db: 1, // Use cache DB (usually DB 1) + }, + keyPrefix: 'cache:proxy:', + ttl: 86400, // 24 hours default + enableMetrics: true, + logger, + }); + + const proxyManager = new ProxyManager(proxyCache, config.proxy, logger); + + // Note: Initialization will be handled by the lifecycle manager + return proxyManager; + }).singleton(), + }); + } else { + container.register({ + proxyManager: asValue(null), + }); + } + + // Queue Manager + if (config.queue?.enabled && config.redis.enabled) { + container.register({ + queueManager: asFunction(({ logger }) => { + const { SmartQueueManager } = require('@stock-bot/queue'); + const queueConfig = { + serviceName: config.service?.serviceName ||
config.service?.name || 'unknown', + redis: { + host: config.redis.host, + port: config.redis.port, + password: config.redis.password, + db: config.redis.db, + }, + defaultQueueOptions: { + workers: config.queue!.workers || 1, + concurrency: config.queue!.concurrency || 1, + defaultJobOptions: config.queue!.defaultJobOptions, + }, + enableScheduledJobs: config.queue!.enableScheduledJobs ?? true, + delayWorkerStart: config.queue!.delayWorkerStart ?? false, + autoDiscoverHandlers: true, + }; + return new SmartQueueManager(queueConfig, logger); + }).singleton(), + }); + } else { + container.register({ + queueManager: asValue(null), + }); + } +} \ No newline at end of file diff --git a/libs/core/di/src/service-application.ts b/libs/core/di/src/service-application.ts new file mode 100644 index 0000000..1b2ce3c --- /dev/null +++ b/libs/core/di/src/service-application.ts @@ -0,0 +1,426 @@ +/** + * ServiceApplication - Common service initialization and lifecycle management + * Encapsulates common patterns for Hono-based microservices + */ + +import { Hono } from 'hono'; +import { cors } from 'hono/cors'; +import { getLogger, setLoggerConfig, shutdownLoggers, type Logger } from '@stock-bot/logger'; +import { Shutdown } from '@stock-bot/shutdown'; +import type { BaseAppConfig as StockBotAppConfig, UnifiedAppConfig } from '@stock-bot/config'; +import { toUnifiedConfig } from '@stock-bot/config'; +import type { IServiceContainer } from '@stock-bot/types'; +import type { ServiceContainer } from './awilix-container'; + +/** + * Configuration for ServiceApplication + */ +export interface ServiceApplicationConfig { + /** Service name for logging and identification */ + serviceName: string; + + /** CORS configuration - if not provided, uses permissive defaults */ + corsConfig?: Parameters<typeof cors>[0]; + + /** Whether to enable handler initialization */ + enableHandlers?: boolean; + + /** Whether to enable scheduled job creation */ + enableScheduledJobs?: boolean; + + /** Custom shutdown
timeout in milliseconds */ + shutdownTimeout?: number; + + /** Service metadata for info endpoint */ + serviceMetadata?: { + version?: string; + description?: string; + endpoints?: Record<string, string>; + }; + + /** Whether to add a basic info endpoint at root */ + addInfoEndpoint?: boolean; +} + +/** + * Lifecycle hooks for service customization + */ +export interface ServiceLifecycleHooks { + /** Called after container is created but before routes */ + onContainerReady?: (container: IServiceContainer) => Promise<void> | void; + + /** Called after app is created but before routes are mounted */ + onAppReady?: (app: Hono, container: IServiceContainer) => Promise<void> | void; + + /** Called after routes are mounted but before server starts */ + onBeforeStart?: (app: Hono, container: IServiceContainer) => Promise<void> | void; + + /** Called after successful server startup */ + onStarted?: (port: number) => Promise<void> | void; + + /** Called during shutdown before cleanup */ + onBeforeShutdown?: () => Promise<void> | void; +} + +/** + * ServiceApplication - Manages the complete lifecycle of a microservice + */ +export class ServiceApplication { + private config: UnifiedAppConfig; + private serviceConfig: ServiceApplicationConfig; + private hooks: ServiceLifecycleHooks; + private logger: Logger; + + private container: ServiceContainer | null = null; + private serviceContainer: IServiceContainer | null = null; + private app: Hono | null = null; + private server: ReturnType<typeof Bun.serve> | null = null; + private shutdown: Shutdown; + + constructor( + config: StockBotAppConfig | UnifiedAppConfig, + serviceConfig: ServiceApplicationConfig, + hooks: ServiceLifecycleHooks = {} + ) { + // Convert to unified config + this.config = toUnifiedConfig(config); + + // Ensure service name is set in config + if (!this.config.service.serviceName) { + this.config.service.serviceName = serviceConfig.serviceName; + } + + this.serviceConfig = { + shutdownTimeout: 15000, + enableHandlers: false, + enableScheduledJobs: false, + addInfoEndpoint:
true, + ...serviceConfig, + }; + this.hooks = hooks; + + // Initialize logger configuration + this.configureLogger(); + this.logger = getLogger(this.serviceConfig.serviceName); + + // Initialize shutdown manager + this.shutdown = Shutdown.getInstance({ + timeout: this.serviceConfig.shutdownTimeout + }); + } + + /** + * Configure logger based on application config + */ + private configureLogger(): void { + if (this.config.log) { + setLoggerConfig({ + logLevel: this.config.log.level, + logConsole: true, + logFile: false, + environment: this.config.environment, + hideObject: this.config.log.hideObject, + }); + } + } + + /** + * Create and configure Hono application with CORS + */ + private createApp(): Hono { + const app = new Hono(); + + // Add CORS middleware with service-specific or default configuration + const corsConfig = this.serviceConfig.corsConfig || { + origin: '*', + allowMethods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS', 'PATCH'], + allowHeaders: ['Content-Type', 'Authorization'], + credentials: false, + }; + + app.use('*', cors(corsConfig)); + + // Add basic info endpoint if enabled + if (this.serviceConfig.addInfoEndpoint) { + const metadata = this.serviceConfig.serviceMetadata || {}; + app.get('/', c => { + return c.json({ + name: this.serviceConfig.serviceName, + version: metadata.version || '1.0.0', + description: metadata.description, + status: 'running', + timestamp: new Date().toISOString(), + endpoints: metadata.endpoints || {}, + }); + }); + } + + return app; + } + + /** + * Register graceful shutdown handlers + */ + private registerShutdownHandlers(): void { + // Priority 1: Queue system (highest priority) + if (this.serviceConfig.enableScheduledJobs) { + this.shutdown.onShutdownHigh(async () => { + this.logger.info('Shutting down queue system...'); + try { + const queueManager = this.container?.resolve('queueManager'); + if (queueManager) { + await queueManager.shutdown(); + } + this.logger.info('Queue system shut down'); + } catch (error) { 
+ this.logger.error('Error shutting down queue system', { error }); + } + }, 'Queue System'); + } + + // Priority 1: HTTP Server (high priority) + this.shutdown.onShutdownHigh(async () => { + if (this.server) { + this.logger.info('Stopping HTTP server...'); + try { + this.server.stop(); + this.logger.info('HTTP server stopped'); + } catch (error) { + this.logger.error('Error stopping HTTP server', { error }); + } + } + }, 'HTTP Server'); + + // Custom shutdown hook + if (this.hooks.onBeforeShutdown) { + this.shutdown.onShutdownHigh(async () => { + try { + await this.hooks.onBeforeShutdown!(); + } catch (error) { + this.logger.error('Error in custom shutdown hook', { error }); + } + }, 'Custom Shutdown'); + } + + // Priority 2: Services and connections (medium priority) + this.shutdown.onShutdownMedium(async () => { + this.logger.info('Disposing services and connections...'); + try { + if (this.container) { + // Disconnect database clients + const mongoClient = this.container.resolve('mongoClient'); + if (mongoClient?.disconnect) { + await mongoClient.disconnect(); + } + + const postgresClient = this.container.resolve('postgresClient'); + if (postgresClient?.disconnect) { + await postgresClient.disconnect(); + } + + const questdbClient = this.container.resolve('questdbClient'); + if (questdbClient?.disconnect) { + await questdbClient.disconnect(); + } + + this.logger.info('All services disposed successfully'); + } + } catch (error) { + this.logger.error('Error disposing services', { error }); + } + }, 'Services'); + + // Priority 3: Logger shutdown (lowest priority - runs last) + this.shutdown.onShutdownLow(async () => { + try { + this.logger.info('Shutting down loggers...'); + await shutdownLoggers(); + // Don't log after shutdown + } catch { + // Silently ignore logger shutdown errors + } + }, 'Loggers'); + } + + /** + * Start the service with full initialization + */ + async start( + containerFactory: (config: UnifiedAppConfig) => Promise<ServiceContainer>, + routeFactory:
(container: IServiceContainer) => Hono, + handlerInitializer?: (container: IServiceContainer) => Promise<void> + ): Promise<void> { + this.logger.info(`Initializing ${this.serviceConfig.serviceName} service...`); + + try { + // Create and initialize container + this.logger.debug('Creating DI container...'); + // Config already has service name from constructor + this.container = await containerFactory(this.config); + this.serviceContainer = this.container.resolve('serviceContainer'); + this.logger.info('DI container created and initialized'); + + // Call container ready hook + if (this.hooks.onContainerReady) { + await this.hooks.onContainerReady(this.serviceContainer); + } + + // Create Hono application + this.app = this.createApp(); + + // Call app ready hook + if (this.hooks.onAppReady) { + await this.hooks.onAppReady(this.app, this.serviceContainer); + } + + // Initialize handlers if enabled + if (this.serviceConfig.enableHandlers && handlerInitializer) { + this.logger.debug('Initializing handlers...'); + await handlerInitializer(this.serviceContainer); + this.logger.info('Handlers initialized'); + } + + // Create and mount routes + const routes = routeFactory(this.serviceContainer); + this.app.route('/', routes); + + // Initialize scheduled jobs if enabled + if (this.serviceConfig.enableScheduledJobs) { + await this.initializeScheduledJobs(); + } + + // Call before start hook + if (this.hooks.onBeforeStart) { + await this.hooks.onBeforeStart(this.app, this.serviceContainer); + } + + // Register shutdown handlers + this.registerShutdownHandlers(); + + // Start HTTP server + const port = this.config.service.port; + this.server = Bun.serve({ + port, + fetch: this.app.fetch, + development: this.config.environment === 'development', + }); + + this.logger.info(`${this.serviceConfig.serviceName} service started on port ${port}`); + + // Call started hook + if (this.hooks.onStarted) { + await this.hooks.onStarted(port); + } + + } catch (error) { + this.logger.error('DETAILED
ERROR:', error); + this.logger.error('Failed to start service', { + error: error instanceof Error ? error.message : String(error), + stack: error instanceof Error ? error.stack : undefined, + details: JSON.stringify(error, null, 2), + }); + throw error; + } + } + + /** + * Initialize scheduled jobs from handler registry + */ + private async initializeScheduledJobs(): Promise<void> { + if (!this.container) { + throw new Error('Container not initialized'); + } + + this.logger.debug('Creating scheduled jobs from registered handlers...'); + const { handlerRegistry } = await import('@stock-bot/handlers'); + const allHandlers = handlerRegistry.getAllHandlersWithSchedule(); + + let totalScheduledJobs = 0; + for (const [handlerName, config] of allHandlers) { + if (config.scheduledJobs && config.scheduledJobs.length > 0) { + // Check if this handler belongs to the current service + const ownerService = handlerRegistry.getHandlerService(handlerName); + + if (ownerService !== this.config.service.serviceName) { + this.logger.trace('Skipping scheduled jobs for handler from different service', { + handler: handlerName, + ownerService, + currentService: this.config.service.serviceName, + }); + continue; + } + + const queueManager = this.container.resolve('queueManager'); + if (!queueManager) { + this.logger.error('Queue manager is not initialized, cannot create scheduled jobs'); + continue; + } + const queue = queueManager.getQueue(handlerName); + + for (const scheduledJob of config.scheduledJobs) { + // Include handler and operation info in job data + const jobData = { + handler: handlerName, + operation: scheduledJob.operation, + payload: scheduledJob.payload, + }; + + // Build job options from scheduled job config + const jobOptions = { + priority: scheduledJob.priority, + delay: scheduledJob.delay, + repeat: { + immediately: scheduledJob.immediately, + }, + }; + + await queue.addScheduledJob( + scheduledJob.operation, + jobData, + scheduledJob.cronPattern, + jobOptions + ); +
totalScheduledJobs++; + this.logger.debug('Scheduled job created', { + handler: handlerName, + operation: scheduledJob.operation, + cronPattern: scheduledJob.cronPattern, + immediately: scheduledJob.immediately, + priority: scheduledJob.priority, + }); + } + } + } + this.logger.info('Scheduled jobs created', { totalJobs: totalScheduledJobs }); + + // Start queue workers + this.logger.debug('Starting queue workers...'); + const queueManager = this.container.resolve('queueManager'); + if (queueManager) { + queueManager.startAllWorkers(); + this.logger.info('Queue workers started'); + } + } + + /** + * Stop the service gracefully + */ + async stop(): Promise<void> { + this.logger.info(`Stopping ${this.serviceConfig.serviceName} service...`); + await this.shutdown.shutdown(); + } + + /** + * Get the service container (for testing or advanced use cases) + */ + getServiceContainer(): IServiceContainer | null { + return this.serviceContainer; + } + + /** + * Get the Hono app (for testing or advanced use cases) + */ + getApp(): Hono | null { + return this.app; + } +} \ No newline at end of file diff --git a/libs/core/di/src/types.ts b/libs/core/di/src/types.ts new file mode 100644 index 0000000..2cf2b0f --- /dev/null +++ b/libs/core/di/src/types.ts @@ -0,0 +1,71 @@ +// Generic types to avoid circular dependencies +export interface GenericClientConfig { + [key: string]: any; +} + +export interface ConnectionPoolConfig { + name: string; + poolSize?: number; + minConnections?: number; + maxConnections?: number; + idleTimeoutMillis?: number; + connectionTimeoutMillis?: number; + enableMetrics?: boolean; +} + +export interface MongoDBPoolConfig extends ConnectionPoolConfig { + config: GenericClientConfig; +} + +export interface PostgreSQLPoolConfig extends ConnectionPoolConfig { + config: GenericClientConfig; +} + +export interface CachePoolConfig extends ConnectionPoolConfig { + config: GenericClientConfig; +} + +export interface QueuePoolConfig extends ConnectionPoolConfig { +
config: GenericClientConfig; +} + +export interface ConnectionFactoryConfig { + service: string; + environment: 'development' | 'production' | 'test'; + pools?: { + mongodb?: Partial<MongoDBPoolConfig>; + postgres?: Partial<PostgreSQLPoolConfig>; + cache?: Partial<CachePoolConfig>; + queue?: Partial<QueuePoolConfig>; + }; +} + +export interface ConnectionPool<T> { + name: string; + client: T; + metrics: PoolMetrics; + health(): Promise<boolean>; + dispose(): Promise<void>; +} + +export interface PoolMetrics { + created: Date; + totalConnections: number; + activeConnections: number; + idleConnections: number; + waitingRequests: number; + errors: number; +} + +export interface ConnectionFactory { + createMongoDB(config: MongoDBPoolConfig): Promise<ConnectionPool<unknown>>; + createPostgreSQL(config: PostgreSQLPoolConfig): Promise<ConnectionPool<unknown>>; + createCache(config: CachePoolConfig): Promise<ConnectionPool<unknown>>; + createQueue(config: QueuePoolConfig): Promise<ConnectionPool<unknown>>; + getPool( + type: 'mongodb' | 'postgres' | 'cache' | 'queue', + name: string + ): ConnectionPool<unknown> | undefined; + listPools(): Array<{ type: string; name: string; metrics: PoolMetrics }>; + disposeAll(): Promise<void>; +} diff --git a/libs/core/di/src/utils/lifecycle.ts b/libs/core/di/src/utils/lifecycle.ts new file mode 100644 index 0000000..b415b85 --- /dev/null +++ b/libs/core/di/src/utils/lifecycle.ts @@ -0,0 +1,99 @@ +import type { AwilixContainer } from 'awilix'; +import type { ServiceDefinitions } from '../container/types'; +import { getLogger } from '@stock-bot/logger'; + +interface ServiceWithLifecycle { + connect?: () => Promise<void>; + disconnect?: () => Promise<void>; + close?: () => Promise<void>; + initialize?: () => Promise<void>; + shutdown?: () => Promise<void>; +} + +export class ServiceLifecycleManager { + private readonly logger = getLogger('service-lifecycle'); + private readonly services = [ + { name: 'cache', key: 'cache' as const }, + { name: 'mongoClient', key: 'mongoClient' as const }, + { name: 'postgresClient', key: 'postgresClient' as const }, + { name: 'questdbClient', key: 'questdbClient' as const }, + { name: 'proxyManager', key: 'proxyManager' as const }, + {
name: 'queueManager', key: 'queueManager' as const }, + ]; + + async initializeServices( + container: AwilixContainer<ServiceDefinitions>, + timeout = 30000 + ): Promise<void> { + const initPromises: Promise<void>[] = []; + + for (const { name, key } of this.services) { + const service = container.cradle[key] as ServiceWithLifecycle | null; + + if (service) { + const initPromise = this.initializeService(name, service); + initPromises.push( + Promise.race([ + initPromise, + this.createTimeoutPromise(timeout, `${name} initialization timed out after ${timeout}ms`), + ]) + ); + } + } + + await Promise.all(initPromises); + this.logger.info('All services initialized successfully'); + } + + async shutdownServices(container: AwilixContainer<ServiceDefinitions>): Promise<void> { + const shutdownPromises: Promise<void>[] = []; + + // Shutdown in reverse order + for (const { name, key } of [...this.services].reverse()) { + const service = container.cradle[key] as ServiceWithLifecycle | null; + + if (service) { + shutdownPromises.push(this.shutdownService(name, service)); + } + } + + await Promise.allSettled(shutdownPromises); + this.logger.info('All services shut down'); + } + + private async initializeService(name: string, service: ServiceWithLifecycle): Promise<void> { + try { + if (typeof service.connect === 'function') { + await service.connect(); + this.logger.info(`${name} connected`); + } else if (typeof service.initialize === 'function') { + await service.initialize(); + this.logger.info(`${name} initialized`); + } + } catch (error) { + this.logger.error(`Failed to initialize ${name}:`, error); + throw error; + } + } + + private async shutdownService(name: string, service: ServiceWithLifecycle): Promise<void> { + try { + if (typeof service.disconnect === 'function') { + await service.disconnect(); + } else if (typeof service.close === 'function') { + await service.close(); + } else if (typeof service.shutdown === 'function') { + await service.shutdown(); + } + this.logger.info(`${name} shut down`); + } catch (error) { +
this.logger.error(`Error shutting down ${name}:`, error); + } + } + + private createTimeoutPromise(timeout: number, message: string): Promise<never> { + return new Promise((_, reject) => { + setTimeout(() => reject(new Error(message)), timeout); + }); + } +} diff --git a/libs/core/di/test/di.test.ts b/libs/core/di/test/di.test.ts new file mode 100644 index 0000000..3e7bc1f --- /dev/null +++ b/libs/core/di/test/di.test.ts @@ -0,0 +1,183 @@ +/** + * Test DI library functionality + */ +import { describe, expect, test } from 'bun:test'; +import { + ConnectionFactory, + OperationContext, + PoolSizeCalculator, + ServiceContainer, +} from '../src/index'; + +describe('DI Library', () => { + test('ServiceContainer - sync resolution', () => { + const container = new ServiceContainer('test'); + + container.register({ + name: 'testService', + factory: () => ({ value: 'test' }), + singleton: true, + }); + + const service = container.resolve<{ value: string }>('testService'); + expect(service.value).toBe('test'); + }); + + test('ServiceContainer - async resolution', async () => { + const container = new ServiceContainer('test'); + + container.register({ + name: 'asyncService', + factory: async () => ({ value: 'async-test' }), + singleton: true, + }); + + const service = await container.resolveAsync<{ value: string }>('asyncService'); + expect(service.value).toBe('async-test'); + }); + + test('ServiceContainer - scoped container', () => { + const container = new ServiceContainer('test'); + + container.register({ + name: 'testService', + factory: () => ({ value: 'test' }), + singleton: true, + }); + + const scopedContainer = container.createScope(); + const service = scopedContainer.resolve<{ value: string }>('testService'); + expect(service.value).toBe('test'); + }); + + test('ServiceContainer - error on unregistered service', () => { + const container = new ServiceContainer('test'); + + expect(() => { + container.resolve('nonexistent'); + }).toThrow('Service nonexistent not
registered'); + }); + + test('ServiceContainer - async service throws error on sync resolve', () => { + const container = new ServiceContainer('test'); + + container.register({ + name: 'asyncService', + factory: async () => ({ value: 'async' }), + singleton: true, + }); + + expect(() => { + container.resolve('asyncService'); + }).toThrow('Service asyncService is async. Use resolveAsync() instead.'); + }); + + test('ServiceContainer - disposal', async () => { + const container = new ServiceContainer('test'); + let disposed = false; + + container.register({ + name: 'disposableService', + factory: () => ({ value: 'test' }), + singleton: true, + dispose: async () => { + disposed = true; + }, + }); + + // Create instance + container.resolve('disposableService'); + + // Dispose container + await container.dispose(); + expect(disposed).toBe(true); + }); + + test('OperationContext - enhanced functionality', () => { + const container = new ServiceContainer('test'); + const context = OperationContext.create('test-handler', 'test-operation', { + container, + metadata: { userId: '123' }, + }); + + expect(context).toBeDefined(); + expect(context.logger).toBeDefined(); + expect(context.traceId).toBeDefined(); + expect(context.metadata.userId).toBe('123'); + expect(context.getExecutionTime()).toBeGreaterThanOrEqual(0); + }); + + test('OperationContext - service resolution', () => { + const container = new ServiceContainer('test'); + + container.register({ + name: 'testService', + factory: () => ({ value: 'resolved' }), + singleton: true, + }); + + const context = OperationContext.create('test-handler', 'test-operation', { + container, + }); + + const service = context.resolve<{ value: string }>('testService'); + expect(service.value).toBe('resolved'); + }); + + test('ConnectionFactory - creation', () => { + const factory = new ConnectionFactory({ + service: 'test', + environment: 'development', + }); + + expect(factory).toBeDefined(); + expect(factory.listPools()).toEqual([]); + 
}); + + test('OperationContext - creation', () => { + const container = new ServiceContainer('test'); + const context = OperationContext.create('test-handler', 'test-operation', { + container, + }); + + expect(context).toBeDefined(); + expect(context.logger).toBeDefined(); + }); + + test('OperationContext - child context', () => { + const context = OperationContext.create('test-handler', 'test-operation'); + const child = context.createChild('child-operation'); + + expect(child).toBeDefined(); + expect(child.logger).toBeDefined(); + }); + + test('PoolSizeCalculator - service defaults', () => { + const poolSize = PoolSizeCalculator.calculate('data-ingestion'); + expect(poolSize).toEqual({ min: 5, max: 50, idle: 10 }); + }); + + test('PoolSizeCalculator - handler defaults', () => { + const poolSize = PoolSizeCalculator.calculate('unknown-service', 'batch-import'); + expect(poolSize).toEqual({ min: 10, max: 100, idle: 20 }); + }); + + test('PoolSizeCalculator - fallback defaults', () => { + const poolSize = PoolSizeCalculator.calculate('unknown-service', 'unknown-handler'); + expect(poolSize).toEqual({ min: 2, max: 10, idle: 3 }); + }); + + test('PoolSizeCalculator - custom config', () => { + const poolSize = PoolSizeCalculator.calculate('test-service', undefined, { + minConnections: 5, + maxConnections: 15, + }); + expect(poolSize).toEqual({ min: 5, max: 15, idle: 5 }); + }); + + test('PoolSizeCalculator - optimal size calculation', () => { + const optimalSize = PoolSizeCalculator.getOptimalPoolSize(10, 100, 50); + expect(optimalSize).toBeGreaterThan(0); + expect(typeof optimalSize).toBe('number'); + }); +}); diff --git a/libs/core/di/tsconfig.json b/libs/core/di/tsconfig.json new file mode 100644 index 0000000..cd2db06 --- /dev/null +++ b/libs/core/di/tsconfig.json @@ -0,0 +1,14 @@ +{ + "extends": "../../../tsconfig.json", + "compilerOptions": { + "rootDir": "./src", + "outDir": "./dist", + "composite": true, + "declaration": true, + "declarationMap": true, + 
"types": ["node", "bun-types"] + }, + "include": ["src/**/*.ts"], + "exclude": ["node_modules", "dist", "test"], + "references": [{ "path": "../config" }, { "path": "../logger" }, { "path": "../queue" }] +} diff --git a/libs/event-bus/README.md b/libs/core/event-bus/README.md similarity index 99% rename from libs/event-bus/README.md rename to libs/core/event-bus/README.md index a6c03e3..4846088 100644 --- a/libs/event-bus/README.md +++ b/libs/core/event-bus/README.md @@ -28,7 +28,7 @@ bun add @stock-bot/event-bus import { createEventBus, TradingEventType } from '@stock-bot/event-bus'; const eventBus = createEventBus({ - serviceName: 'data-service', + serviceName: 'data-ingestion', redisConfig: { host: 'localhost', port: 6379, diff --git a/libs/event-bus/package.json b/libs/core/event-bus/package.json similarity index 100% rename from libs/event-bus/package.json rename to libs/core/event-bus/package.json diff --git a/libs/event-bus/src/event-bus.ts b/libs/core/event-bus/src/event-bus.ts similarity index 93% rename from libs/event-bus/src/event-bus.ts rename to libs/core/event-bus/src/event-bus.ts index 749f613..2a40b3a 100644 --- a/libs/event-bus/src/event-bus.ts +++ b/libs/core/event-bus/src/event-bus.ts @@ -1,12 +1,7 @@ import { EventEmitter } from 'eventemitter3'; import Redis from 'ioredis'; import { getLogger } from '@stock-bot/logger'; -import type { - EventBusConfig, - EventBusMessage, - EventHandler, - EventSubscription, -} from './types'; +import type { EventBusConfig, EventBusMessage, EventHandler, EventSubscription } from './types'; /** * Lightweight Event Bus for inter-service communication @@ -52,7 +47,7 @@ export class EventBus extends EventEmitter { this.isConnected = true; }); - this.publisher.on('error', (error) => { + this.publisher.on('error', error => { this.logger.error('Publisher Redis error:', error); }); @@ -63,7 +58,7 @@ export class EventBus extends EventEmitter { this.resubscribeAll(); }); - this.subscriber.on('error', (error) => { + 
this.subscriber.on('error', error => { this.logger.error('Subscriber Redis error:', error); }); @@ -89,7 +84,7 @@ export class EventBus extends EventEmitter { // Call registered handler if exists const subscription = this.subscriptions.get(eventType); if (subscription?.handler) { - Promise.resolve(subscription.handler(eventMessage)).catch((error) => { + Promise.resolve(subscription.handler(eventMessage)).catch(error => { this.logger.error(`Handler error for event ${eventType}:`, error); }); } @@ -103,11 +98,7 @@ export class EventBus extends EventEmitter { /** * Publish an event */ - async publish<T>( - type: string, - data: T, - metadata?: Record<string, unknown> - ): Promise<void> { + async publish<T>(type: string, data: T, metadata?: Record<string, unknown>): Promise<void> { const message: EventBusMessage = { id: this.generateId(), type, @@ -199,11 +190,11 @@ export class EventBus extends EventEmitter { */ async waitForConnection(timeout: number = 5000): Promise<void> { const startTime = Date.now(); - + while (!this.isConnected && Date.now() - startTime < timeout) { await new Promise(resolve => setTimeout(resolve, 100)); } - + if (!this.isConnected) { throw new Error(`Failed to connect to Redis within ${timeout}ms`); } @@ -220,10 +211,7 @@ export class EventBus extends EventEmitter { this.removeAllListeners(); // Close Redis connections - await Promise.all([ - this.publisher.quit(), - this.subscriber.quit(), - ]); + await Promise.all([this.publisher.quit(), this.subscriber.quit()]); this.logger.info('Event bus closed'); } @@ -248,4 +236,4 @@ export class EventBus extends EventEmitter { get service(): string { return this.serviceName; } -} \ No newline at end of file +} diff --git a/libs/event-bus/src/index.ts b/libs/core/event-bus/src/index.ts similarity index 86% rename from libs/event-bus/src/index.ts rename to libs/core/event-bus/src/index.ts index d92979e..87dfbd2 100644 --- a/libs/event-bus/src/index.ts +++ b/libs/core/event-bus/src/index.ts @@ -11,6 +11,3 @@ export function createEventBus(config: EventBusConfig):
EventBus { // Re-export everything export { EventBus } from './event-bus'; export * from './types'; - -// Default export -export default createEventBus; \ No newline at end of file diff --git a/libs/event-bus/src/types.ts b/libs/core/event-bus/src/types.ts similarity index 99% rename from libs/event-bus/src/types.ts rename to libs/core/event-bus/src/types.ts index d07d569..07b8f53 100644 --- a/libs/event-bus/src/types.ts +++ b/libs/core/event-bus/src/types.ts @@ -33,27 +33,27 @@ export enum TradingEventType { PRICE_UPDATE = 'market.price.update', ORDERBOOK_UPDATE = 'market.orderbook.update', TRADE_EXECUTED = 'market.trade.executed', - + // Order events ORDER_CREATED = 'order.created', ORDER_FILLED = 'order.filled', ORDER_CANCELLED = 'order.cancelled', ORDER_REJECTED = 'order.rejected', - + // Position events POSITION_OPENED = 'position.opened', POSITION_CLOSED = 'position.closed', POSITION_UPDATED = 'position.updated', - + // Strategy events STRATEGY_SIGNAL = 'strategy.signal', STRATEGY_STARTED = 'strategy.started', STRATEGY_STOPPED = 'strategy.stopped', - + // Risk events RISK_LIMIT_BREACH = 'risk.limit.breach', RISK_WARNING = 'risk.warning', - + // System events SERVICE_STARTED = 'system.service.started', SERVICE_STOPPED = 'system.service.stopped', @@ -108,4 +108,4 @@ export interface RiskEvent { portfolioId?: string; strategyId?: string; message: string; -} \ No newline at end of file +} diff --git a/libs/core/event-bus/tsconfig.json b/libs/core/event-bus/tsconfig.json new file mode 100644 index 0000000..55c59a8 --- /dev/null +++ b/libs/core/event-bus/tsconfig.json @@ -0,0 +1,10 @@ +{ + "extends": "../../../tsconfig.json", + "compilerOptions": { + "outDir": "./dist", + "rootDir": "./src", + "composite": true + }, + "include": ["src/**/*"], + "references": [{ "path": "../../core/logger" }] +} diff --git a/libs/core/handlers/package.json b/libs/core/handlers/package.json new file mode 100644 index 0000000..3314981 --- /dev/null +++ 
b/libs/core/handlers/package.json @@ -0,0 +1,24 @@ +{ + "name": "@stock-bot/handlers", + "version": "1.0.0", + "description": "Universal handler system for queue and event-driven operations", + "main": "./src/index.ts", + "types": "./src/index.ts", + "scripts": { + "build": "tsc", + "clean": "rimraf dist", + "test": "bun test" + }, + "dependencies": { + "@stock-bot/config": "workspace:*", + "@stock-bot/logger": "workspace:*", + "@stock-bot/types": "workspace:*", + "@stock-bot/cache": "workspace:*", + "@stock-bot/utils": "workspace:*" + }, + "devDependencies": { + "@types/node": "^20.11.0", + "typescript": "^5.3.0", + "bun-types": "^1.2.15" + } +} diff --git a/libs/core/handlers/src/base/BaseHandler.ts b/libs/core/handlers/src/base/BaseHandler.ts new file mode 100644 index 0000000..015ef92 --- /dev/null +++ b/libs/core/handlers/src/base/BaseHandler.ts @@ -0,0 +1,383 @@ +import type { Collection } from 'mongodb'; +import { getLogger } from '@stock-bot/logger'; +import type { + HandlerConfigWithSchedule, + IServiceContainer, + ExecutionContext, + IHandler +} from '@stock-bot/types'; +import { fetch } from '@stock-bot/utils'; +import { createNamespacedCache } from '@stock-bot/cache'; +import { handlerRegistry } from '../registry/handler-registry'; +import { createJobHandler } from '../utils/create-job-handler'; + +/** + * Job scheduling options + */ +export interface JobScheduleOptions { + delay?: number; + priority?: number; + attempts?: number; + removeOnComplete?: number; + removeOnFail?: number; + backoff?: { + type: 'exponential' | 'fixed'; + delay: number; + }; + repeat?: { + pattern?: string; + key?: string; + limit?: number; + every?: number; + immediately?: boolean; + }; +} + +/** + * Abstract base class for all handlers with improved DI + * Provides common functionality and structure for queue/event operations + */ +export abstract class BaseHandler implements IHandler { + // Direct service properties - flattened for cleaner access + readonly logger; + 
readonly cache; + readonly globalCache; + readonly queue; + readonly proxy; + readonly browser; + readonly mongodb; + readonly postgres; + readonly questdb; + + private handlerName: string; + + constructor(services: IServiceContainer, handlerName?: string) { + // Flatten all services onto the handler instance + this.logger = getLogger(this.constructor.name); + this.cache = services.cache; + this.globalCache = services.globalCache; + this.queue = services.queue; + this.proxy = services.proxy; + this.browser = services.browser; + this.mongodb = services.mongodb; + this.postgres = services.postgres; + this.questdb = services.questdb; + + // Read handler name from decorator first, then fall back to parameter or class name + const constructor = this.constructor as any; + this.handlerName = + constructor.__handlerName || handlerName || this.constructor.name.toLowerCase(); + } + + /** + * Main execution method - automatically routes to decorated methods + * Works with queue (events commented for future) + */ + async execute(operation: string, input: unknown, context: ExecutionContext): Promise<unknown> { + const constructor = this.constructor as any; + const operations = constructor.__operations || []; + + // Debug logging + this.logger.debug('Handler execute called', { + handler: this.handlerName, + operation, + availableOperations: operations.map((op: any) => ({ name: op.name, method: op.method })), + }); + + // Find the operation metadata + const operationMeta = operations.find((op: any) => op.name === operation); + if (!operationMeta) { + this.logger.error('Operation not found', { + requestedOperation: operation, + availableOperations: operations.map((op: any) => op.name), + }); + throw new Error(`Unknown operation: ${operation}`); + } + + // Get the method from the instance and call it + const method = (this as any)[operationMeta.method]; + if (typeof method !== 'function') { + throw new Error(`Operation method '${operationMeta.method}' not found on handler`); + } + + 
this.logger.debug('Executing operation method', { + operation, + method: operationMeta.method, + }); + + return await method.call(this, input, context); + } + + async scheduleOperation( + operation: string, + payload: unknown, + options?: JobScheduleOptions + ): Promise<void> { + if (!this.queue) { + throw new Error('Queue service is not available'); + } + const queue = this.queue.getQueue(this.handlerName); + const jobData = { + handler: this.handlerName, + operation, + payload, + }; + + await queue.add(operation, jobData, options || {}); + } + + /** + * Create execution context for operations + */ + protected createExecutionContext( + type: 'http' | 'queue' | 'scheduled', + metadata: Record<string, unknown> = {} + ): ExecutionContext { + return { + type, + metadata: { + ...metadata, + timestamp: Date.now(), + traceId: `${this.constructor.name}-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`, + }, + }; + } + + /** + * Helper methods for common operations + */ + + /** + * Get a MongoDB collection with type safety + */ + protected collection(name: string): Collection { + if (!this.mongodb) { + throw new Error('MongoDB service is not available'); + } + return this.mongodb.collection(name); + } + + /** + * Create a sub-namespaced cache for specific operations + * Example: handler 'webshare' creates namespace 'webshare:api' -> keys will be 'cache:data-ingestion:webshare:api:*' + */ + protected createNamespacedCache(subNamespace: string) { + return createNamespacedCache(this.cache, `${this.handlerName}:${subNamespace}`); + } + + /** + * Set cache with handler-prefixed key + */ + protected async cacheSet(key: string, value: any, ttl?: number): Promise<void> { + if (!this.cache) { + return; + } + // Don't add 'cache:' prefix since the cache already has its own prefix + return this.cache.set(`${this.handlerName}:${key}`, value, ttl); + } + + /** + * Get cache with handler-prefixed key + */ + protected async cacheGet<T = unknown>(key: string): Promise<T | null> { + if (!this.cache) { + return null; + } + // Don't
add 'cache:' prefix since the cache already has its own prefix + return this.cache.get(`${this.handlerName}:${key}`); + } + + /** + * Delete cache with handler-prefixed key + */ + protected async cacheDel(key: string): Promise<void> { + if (!this.cache) { + return; + } + // Don't add 'cache:' prefix since the cache already has its own prefix + return this.cache.del(`${this.handlerName}:${key}`); + } + + /** + * Set global cache with key + */ + protected async globalCacheSet(key: string, value: any, ttl?: number): Promise<void> { + if (!this.globalCache) { + return; + } + return this.globalCache.set(key, value, ttl); + } + + /** + * Get global cache with key + */ + protected async globalCacheGet<T = unknown>(key: string): Promise<T | null> { + if (!this.globalCache) { + return null; + } + return this.globalCache.get(key); + } + + /** + * Delete global cache with key + */ + protected async globalCacheDel(key: string): Promise<void> { + if (!this.globalCache) { + return; + } + return this.globalCache.del(key); + } + /** + * Schedule operation with delay in seconds + */ + protected async scheduleIn( + operation: string, + payload: unknown, + delaySeconds: number, + additionalOptions?: Omit<JobScheduleOptions, 'delay'> + ): Promise<void> { + return this.scheduleOperation(operation, payload, { + delay: delaySeconds * 1000, + ...additionalOptions + }); + } + + /** + * Log with handler context + */ + protected log(level: 'info' | 'warn' | 'error' | 'debug', message: string, meta?: any): void { + this.logger[level](message, { handler: this.handlerName, ...meta }); + } + + /** + * HTTP client helper using fetch from utils + */ + protected get http() { + return { + get: (url: string, options?: any) => + fetch(url, { ...options, method: 'GET', logger: this.logger }), + post: (url: string, data?: any, options?: any) => + fetch(url, { + ...options, + method: 'POST', + body: JSON.stringify(data), + headers: { 'Content-Type': 'application/json', ...options?.headers }, + logger: this.logger, + }), + put: (url: string, data?: any, options?: any) => 
fetch(url, { + ...options, + method: 'PUT', + body: JSON.stringify(data), + headers: { 'Content-Type': 'application/json', ...options?.headers }, + logger: this.logger, + }), + delete: (url: string, options?: any) => + fetch(url, { ...options, method: 'DELETE', logger: this.logger }), + }; + } + + /** + * Check if a service is available + */ + protected hasService(name: keyof IServiceContainer): boolean { + const service = this[name as keyof this]; + return service !== null; + } + + /** + * Event methods - commented for future + */ + // protected async publishEvent(eventName: string, payload: unknown): Promise<void> { + // const eventBus = await this.container.resolveAsync('eventBus'); + // await eventBus.publish(eventName, payload); + // } + + /** + * Register this handler using decorator metadata + * Automatically reads @Handler, @Operation, and @QueueSchedule decorators + */ + register(serviceName?: string): void { + const constructor = this.constructor as any; + const handlerName = constructor.__handlerName || this.handlerName; + const operations = constructor.__operations || []; + const schedules = constructor.__schedules || []; + + // Create operation handlers from decorator metadata + const operationHandlers: HandlerConfigWithSchedule['operations'] = {}; + for (const op of operations) { + operationHandlers[op.name] = createJobHandler(async payload => { + const context: ExecutionContext = { + type: 'queue', + metadata: { source: 'queue', timestamp: Date.now() }, + }; + return await this.execute(op.name, payload, context); + }); + } + + // Create scheduled jobs from decorator metadata + const scheduledJobs = schedules.map((schedule: any) => { + // Find the operation name from the method name + const operation = operations.find((op: any) => op.method === schedule.operation); + return { + type: `${handlerName}-${schedule.operation}`, + operation: operation?.name || schedule.operation, + cronPattern: schedule.cronPattern, + priority: schedule.priority || 5, + immediately: schedule.immediately || 
false, + description: schedule.description || `${handlerName} ${schedule.operation}`, + payload: this.getScheduledJobPayload?.(schedule.operation), + }; + }); + + const config: HandlerConfigWithSchedule = { + name: handlerName, + operations: operationHandlers, + scheduledJobs, + }; + + handlerRegistry.registerWithSchedule(config, serviceName); + this.logger.info('Handler registered using decorator metadata', { + handlerName, + service: serviceName, + operations: operations.map((op: any) => ({ name: op.name, method: op.method })), + scheduledJobs: scheduledJobs.map((job: any) => ({ + operation: job.operation, + cronPattern: job.cronPattern, + immediately: job.immediately, + })), + }); + } + + /** + * Override this method to provide payloads for scheduled jobs + * @param operation The operation name that needs a payload + * @returns The payload for the scheduled job, or undefined + */ + protected getScheduledJobPayload?(operation: string): any; + + /** + * Lifecycle hooks - can be overridden by subclasses + */ + async onInit?(): Promise<void>; + async onStart?(): Promise<void>; + async onStop?(): Promise<void>; + async onDispose?(): Promise<void>; +} + +/** + * Specialized handler for operations that have scheduled jobs + */ +export abstract class ScheduledHandler extends BaseHandler { + /** + * Get scheduled job configurations for this handler + * Override in subclasses to define schedules + */ + getScheduledJobs?(): Array<{ + operation: string; + cronPattern: string; + priority?: number; + immediately?: boolean; + description?: string; + }>; +} diff --git a/libs/core/handlers/src/decorators/decorators.ts b/libs/core/handlers/src/decorators/decorators.ts new file mode 100644 index 0000000..102bfa2 --- /dev/null +++ b/libs/core/handlers/src/decorators/decorators.ts @@ -0,0 +1,130 @@ +// Bun-compatible decorators (hybrid approach) + +/** + * Handler decorator - marks a class as a handler + * @param name Handler name for registration + */ +export function Handler(name: string) { + return 
function <T extends { new (...args: any[]): any }>(target: T, _context?: any) { + // Store handler name on the constructor + (target as any).__handlerName = name; + (target as any).__needsAutoRegistration = true; + + return target; + }; +} + +/** + * Operation decorator - marks a method as an operation + * @param name Operation name + */ +export function Operation(name: string): any { + return function (target: any, methodName: string, descriptor?: PropertyDescriptor): any { + // Store metadata directly on the class constructor + const constructor = target.constructor; + + if (!constructor.__operations) { + constructor.__operations = []; + } + constructor.__operations.push({ + name, + method: methodName, + }); + + return descriptor; + }; +} + +/** + * Queue schedule decorator - marks an operation as scheduled + * @param cronPattern Cron pattern for scheduling + * @param options Additional scheduling options + */ +export function QueueSchedule( + cronPattern: string, + options?: { + priority?: number; + immediately?: boolean; + description?: string; + } +): any { + return function (target: any, methodName: string, descriptor?: PropertyDescriptor): any { + // Store metadata directly on the class constructor + const constructor = target.constructor; + + if (!constructor.__schedules) { + constructor.__schedules = []; + } + constructor.__schedules.push({ + operation: methodName, + cronPattern, + ...options, + }); + + return descriptor; + }; +} + +/** + * Disabled decorator - marks a handler as disabled for auto-registration + * Handlers marked with @Disabled() will be skipped during auto-registration + */ +export function Disabled() { + return function <T extends { new (...args: any[]): any }>(target: T, _context?: any) { + // Store disabled flag on the constructor + (target as any).__disabled = true; + + return target; + }; +} + +/** + * Combined decorator for scheduled operations + * Automatically creates both an operation and a schedule + * @param name Operation name + * @param cronPattern Cron pattern for scheduling + * @param options Schedule 
options + */ +export function ScheduledOperation( + name: string, + cronPattern: string, + options?: { + priority?: number; + immediately?: boolean; + description?: string; + } +): any { + return function (target: any, methodName: string, descriptor?: PropertyDescriptor): any { + // Apply both decorators + Operation(name)(target, methodName, descriptor); + QueueSchedule(cronPattern, options)(target, methodName, descriptor); + return descriptor; + }; +} + +// Future event decorators - commented for now +// export function EventListener(eventName: string) { +// return function (target: any, propertyName: string, descriptor: PropertyDescriptor) { +// if (!target.constructor.__eventListeners) { +// target.constructor.__eventListeners = []; +// } +// target.constructor.__eventListeners.push({ +// eventName, +// method: propertyName, +// }); +// return descriptor; +// }; +// } + +// export function EventPublisher(eventName: string) { +// return function (target: any, propertyName: string, descriptor: PropertyDescriptor) { +// if (!target.constructor.__eventPublishers) { +// target.constructor.__eventPublishers = []; +// } +// target.constructor.__eventPublishers.push({ +// eventName, +// method: propertyName, +// }); +// return descriptor; +// }; +// } diff --git a/libs/core/handlers/src/index.ts b/libs/core/handlers/src/index.ts new file mode 100644 index 0000000..f531f38 --- /dev/null +++ b/libs/core/handlers/src/index.ts @@ -0,0 +1,38 @@ +// Base handler classes +export { BaseHandler, ScheduledHandler } from './base/BaseHandler'; +export type { JobScheduleOptions } from './base/BaseHandler'; + +// Handler registry +export { handlerRegistry } from './registry/handler-registry'; + +// Utilities +export { createJobHandler } from './utils/create-job-handler'; + +// Re-export types from types package for convenience +export type { + ExecutionContext, + IHandler, + JobHandler, + ScheduledJob, + HandlerConfig, + HandlerConfigWithSchedule, + TypedJobHandler, + 
HandlerMetadata, + OperationMetadata, + IServiceContainer, +} from '@stock-bot/types'; + +// Decorators +export { + Handler, + Operation, + QueueSchedule, + ScheduledOperation, + Disabled, +} from './decorators/decorators'; + +// Auto-registration utilities +export { autoRegisterHandlers, createAutoHandlerRegistry } from './registry/auto-register'; + +// Future exports - commented for now +// export { EventListener, EventPublisher } from './decorators/decorators'; diff --git a/libs/core/handlers/src/registry/auto-register.ts b/libs/core/handlers/src/registry/auto-register.ts new file mode 100644 index 0000000..7337875 --- /dev/null +++ b/libs/core/handlers/src/registry/auto-register.ts @@ -0,0 +1,191 @@ +/** + * Auto-registration utilities for handlers + * Automatically discovers and registers handlers based on file patterns + */ + +import { readdirSync, statSync } from 'fs'; +import { join, relative } from 'path'; +import { getLogger } from '@stock-bot/logger'; +import { BaseHandler } from '../base/BaseHandler'; +import type { IServiceContainer } from '@stock-bot/types'; + +const logger = getLogger('handler-auto-register'); + +/** + * Recursively find all handler files in a directory + */ +function findHandlerFiles(dir: string, pattern = '.handler.'): string[] { + const files: string[] = []; + + function scan(currentDir: string) { + const entries = readdirSync(currentDir); + + for (const entry of entries) { + const fullPath = join(currentDir, entry); + const stat = statSync(fullPath); + + if (stat.isDirectory() && !entry.startsWith('.') && entry !== 'node_modules') { + scan(fullPath); + } else if (stat.isFile() && entry.includes(pattern) && entry.endsWith('.ts')) { + files.push(fullPath); + } + } + } + + scan(dir); + return files; +} + +/** + * Extract handler classes from a module + */ +function extractHandlerClasses( + module: any +): Array<new (services: IServiceContainer) => BaseHandler> { + const handlers: Array<new (services: IServiceContainer) => BaseHandler> = []; + + for (const key of Object.keys(module)) { + const exported = 
module[key]; + + // Check if it's a class that extends BaseHandler + if ( + typeof exported === 'function' && + exported.prototype && + exported.prototype instanceof BaseHandler + ) { + handlers.push(exported); + } + } + + return handlers; +} + +/** + * Auto-register all handlers in a directory + * @param directory The directory to scan for handlers + * @param services The service container to inject into handlers + * @param options Configuration options + */ +export async function autoRegisterHandlers( + directory: string, + services: IServiceContainer, + options: { + pattern?: string; + exclude?: string[]; + dryRun?: boolean; + serviceName?: string; + } = {} +): Promise<{ registered: string[]; failed: string[] }> { + const { pattern = '.handler.', exclude = [], dryRun = false, serviceName } = options; + const registered: string[] = []; + const failed: string[] = []; + + try { + logger.info('Starting auto-registration of handlers', { directory, pattern }); + + // Find all handler files + const handlerFiles = findHandlerFiles(directory, pattern); + logger.debug(`Found ${handlerFiles.length} handler files`, { files: handlerFiles }); + + // Process each handler file + for (const file of handlerFiles) { + const relativePath = relative(directory, file); + + // Skip excluded files + if (exclude.some(ex => relativePath.includes(ex))) { + logger.debug(`Skipping excluded file: ${relativePath}`); + continue; + } + + try { + // Import the module + const module = await import(file); + const handlerClasses = extractHandlerClasses(module); + + if (handlerClasses.length === 0) { + logger.warn(`No handler classes found in ${relativePath}`); + continue; + } + + // Register each handler class + for (const HandlerClass of handlerClasses) { + const handlerName = HandlerClass.name; + + // Check if handler is disabled + if ((HandlerClass as any).__disabled) { + logger.info(`Skipping disabled handler: ${handlerName} from ${relativePath}`); + continue; + } + + if (dryRun) { + 
logger.info(`[DRY RUN] Would register handler: ${handlerName} from ${relativePath}`); + registered.push(handlerName); + } else { + logger.info(`Registering handler: ${handlerName} from ${relativePath}`); + + // Create instance and register + const handler = new HandlerClass(services); + handler.register(serviceName); + + // No need to set service ownership separately - it's done in register() + + registered.push(handlerName); + logger.info(`Successfully registered handler: ${handlerName}`, { service: serviceName }); + } + } + } catch (error) { + logger.error(`Failed to process handler file: ${relativePath}`, { error }); + failed.push(relativePath); + } + } + + logger.info('Auto-registration complete', { + totalFiles: handlerFiles.length, + registered: registered.length, + failed: failed.length, + }); + + return { registered, failed }; + } catch (error) { + logger.error('Auto-registration failed', { error }); + throw error; + } +} + +/** + * Create a handler registry that auto-discovers handlers + */ +export function createAutoHandlerRegistry(services: IServiceContainer) { + return { + /** + * Register all handlers from a directory + */ + async registerDirectory( + directory: string, + options?: Parameters<typeof autoRegisterHandlers>[2] + ) { + return autoRegisterHandlers(directory, services, options); + }, + + /** + * Register handlers from multiple directories + */ + async registerDirectories( + directories: string[], + options?: Parameters<typeof autoRegisterHandlers>[2] + ) { + const results = { + registered: [] as string[], + failed: [] as string[], + }; + + for (const dir of directories) { + const result = await autoRegisterHandlers(dir, services, options); + results.registered.push(...result.registered); + results.failed.push(...result.failed); + } + + return results; + }, + }; +} diff --git a/libs/core/handlers/src/registry/handler-registry.ts b/libs/core/handlers/src/registry/handler-registry.ts new file mode 100644 index 0000000..19401b5 --- /dev/null +++ b/libs/core/handlers/src/registry/handler-registry.ts @@ 
-0,0 +1,184 @@ +/** + * Handler Registry - Runtime registry for queue handlers + * Properly located in handlers package instead of types + */ + +import type { + HandlerConfig, + HandlerConfigWithSchedule, + JobHandler, + ScheduledJob, +} from '@stock-bot/types'; +import { getLogger } from '@stock-bot/logger'; + +class HandlerRegistry { + private readonly logger = getLogger('handler-registry'); + private handlers = new Map<string, HandlerConfig>(); + private handlerSchedules = new Map<string, ScheduledJob[]>(); + private handlerServices = new Map<string, string>(); // Maps handler to service name + + /** + * Register a handler with its operations (simple config) + */ + register(handlerName: string, config: HandlerConfig, serviceName?: string): void { + this.logger.info(`Registering handler: ${handlerName}`, { + operations: Object.keys(config), + service: serviceName, + }); + + this.handlers.set(handlerName, config); + + // Track service ownership if provided + if (serviceName) { + this.handlerServices.set(handlerName, serviceName); + } + } + + /** + * Register a handler with scheduled jobs (enhanced config) + */ + registerWithSchedule(config: HandlerConfigWithSchedule, serviceName?: string): void { + this.logger.info(`Registering handler with schedule: ${config.name}`, { + operations: Object.keys(config.operations), + scheduledJobs: config.scheduledJobs?.length || 0, + service: serviceName, + }); + + this.handlers.set(config.name, config.operations); + + if (config.scheduledJobs && config.scheduledJobs.length > 0) { + this.handlerSchedules.set(config.name, config.scheduledJobs); + } + + // Track service ownership if provided + if (serviceName) { + this.handlerServices.set(config.name, serviceName); + } + } + + /** + * Get a specific handler's configuration + */ + getHandler(handlerName: string): HandlerConfig | undefined { + return this.handlers.get(handlerName); + } + + /** + * Get all registered handlers + */ + getAllHandlers(): Map<string, HandlerConfig> { + return new Map(this.handlers); + } + + /** + * Get scheduled jobs for a handler + */ + 
getScheduledJobs(handlerName: string): ScheduledJob[] { + return this.handlerSchedules.get(handlerName) || []; + } + + /** + * Get all handlers with their scheduled jobs + */ + getAllHandlersWithSchedule(): Map< + string, + { operations: HandlerConfig; scheduledJobs: ScheduledJob[] } + > { + const result = new Map(); + + for (const [name, operations] of this.handlers) { + result.set(name, { + operations, + scheduledJobs: this.handlerSchedules.get(name) || [], + }); + } + + return result; + } + + /** + * Get a specific operation from a handler + */ + getOperation(handlerName: string, operationName: string): JobHandler | undefined { + const handler = this.handlers.get(handlerName); + if (!handler) { + return undefined; + } + return handler[operationName]; + } + + /** + * Check if a handler is registered + */ + hasHandler(handlerName: string): boolean { + return this.handlers.has(handlerName); + } + + /** + * Get list of all registered handler names + */ + getHandlerNames(): string[] { + return Array.from(this.handlers.keys()); + } + + /** + * Get registry statistics + */ + getStats(): { handlers: number; operations: number; scheduledJobs: number } { + let operationCount = 0; + let scheduledJobCount = 0; + + for (const [_, config] of this.handlers) { + operationCount += Object.keys(config).length; + } + + for (const [_, jobs] of this.handlerSchedules) { + scheduledJobCount += jobs.length; + } + + return { + handlers: this.handlers.size, + operations: operationCount, + scheduledJobs: scheduledJobCount, + }; + } + + /** + * Get the service that owns a handler + */ + getHandlerService(handlerName: string): string | undefined { + return this.handlerServices.get(handlerName); + } + + /** + * Get all handlers for a specific service + */ + getServiceHandlers(serviceName: string): string[] { + const handlers: string[] = []; + for (const [handler, service] of this.handlerServices) { + if (service === serviceName) { + handlers.push(handler); + } + } + return handlers; + } + + 
/** + * Set service ownership for a handler (used during auto-discovery) + */ + setHandlerService(handlerName: string, serviceName: string): void { + this.handlerServices.set(handlerName, serviceName); + } + + /** + * Clear all registrations (useful for testing) + */ + clear(): void { + this.handlers.clear(); + this.handlerSchedules.clear(); + this.handlerServices.clear(); + } +} + +// Export singleton instance +export const handlerRegistry = new HandlerRegistry(); \ No newline at end of file diff --git a/libs/core/handlers/src/utils/create-job-handler.ts b/libs/core/handlers/src/utils/create-job-handler.ts new file mode 100644 index 0000000..7f5012e --- /dev/null +++ b/libs/core/handlers/src/utils/create-job-handler.ts @@ -0,0 +1,16 @@ +/** + * Utility for creating typed job handlers + */ + +import type { JobHandler, TypedJobHandler } from '@stock-bot/types'; + +/** + * Create a typed job handler with validation + */ +export function createJobHandler<TPayload = unknown, TResult = unknown>( + handler: TypedJobHandler<TPayload, TResult> +): JobHandler { + return async (payload: unknown): Promise<TResult> => { + return handler(payload as TPayload); + }; +} \ No newline at end of file diff --git a/libs/core/handlers/tsconfig.json b/libs/core/handlers/tsconfig.json new file mode 100644 index 0000000..6007caa --- /dev/null +++ b/libs/core/handlers/tsconfig.json @@ -0,0 +1,16 @@ +{ + "extends": "../../../tsconfig.json", + "compilerOptions": { + "outDir": "./dist", + "rootDir": "./src", + "composite": true + }, + "include": ["src/**/*"], + "references": [ + { "path": "../config" }, + { "path": "../logger" }, + { "path": "../cache" }, + { "path": "../types" }, + { "path": "../../utils" } + ] +} diff --git a/libs/logger/README.md b/libs/core/logger/README.md similarity index 100% rename from libs/logger/README.md rename to libs/core/logger/README.md diff --git a/libs/logger/bunfig.toml b/libs/core/logger/bunfig.toml similarity index 100% rename from libs/logger/bunfig.toml rename to libs/core/logger/bunfig.toml diff --git 
a/libs/logger/package.json b/libs/core/logger/package.json similarity index 100% rename from libs/logger/package.json rename to libs/core/logger/package.json diff --git a/libs/logger/src/index.ts b/libs/core/logger/src/index.ts similarity index 82% rename from libs/logger/src/index.ts rename to libs/core/logger/src/index.ts index db74377..81d9113 100644 --- a/libs/logger/src/index.ts +++ b/libs/core/logger/src/index.ts @@ -9,6 +9,3 @@ export { Logger, getLogger, shutdownLoggers, setLoggerConfig } from './logger'; // Type definitions export type { LogLevel, LogContext, LogMetadata, LoggerConfig } from './types'; - -// Default export -export { getLogger as default } from './logger'; diff --git a/libs/logger/src/logger.ts b/libs/core/logger/src/logger.ts similarity index 96% rename from libs/logger/src/logger.ts rename to libs/core/logger/src/logger.ts index 97253c9..58b8c7b 100644 --- a/libs/logger/src/logger.ts +++ b/libs/core/logger/src/logger.ts @@ -58,12 +58,12 @@ function createDestination( // Console: In-process pretty stream for dev (fast shutdown) if (config.logConsole && config.environment !== 'production') { const prettyStream = pretty({ - sync: true, // IMPORTANT: Make async to prevent blocking the event loop + sync: true, // IMPORTANT: synchronous writes so dev logs flush fully before shutdown colorize: true, translateTime: 'yyyy-mm-dd HH:MM:ss.l', messageFormat: '[{service}{childName}] {msg}', - singleLine: false, // This was causing logs to be on one line - hideObject: false, // Hide metadata objects + singleLine: false, // Multi-line output keeps object metadata readable + hideObject: false, // Show objects here; hiding is handled per-level in Logger.log ignore: 'pid,hostname,service,environment,version,childName', errorLikeObjectKeys: ['err', 'error'], errorProps: 'message,stack,name,code', @@ -177,15 +177,15 @@ export class Logger { let data = { ...this.context, ...metadata }; - // Hide all metadata if hideObject is enabled - if (globalConfig.hideObject) { + // Hide all metadata if hideObject is enabled, 
EXCEPT for error and fatal levels + if (globalConfig.hideObject && level !== 'error' && level !== 'fatal') { data = {}; // Clear all metadata } if (typeof message === 'string') { (this.pino as any)[level](data, message); } else { - if (globalConfig.hideObject) { + if (globalConfig.hideObject && level !== 'error' && level !== 'fatal') { (this.pino as any)[level]({}, `Object logged (hidden)`); } else { (this.pino as any)[level]({ ...data, data: message }, 'Object logged'); @@ -193,7 +193,6 @@ export class Logger { } } - // Simple log level methods trace(message: string | object, metadata?: LogMetadata): void { this.log('trace', message, metadata); diff --git a/libs/logger/src/types.ts b/libs/core/logger/src/types.ts similarity index 100% rename from libs/logger/src/types.ts rename to libs/core/logger/src/types.ts diff --git a/libs/logger/test/advanced.test.ts b/libs/core/logger/test/advanced.test.ts similarity index 100% rename from libs/logger/test/advanced.test.ts rename to libs/core/logger/test/advanced.test.ts diff --git a/libs/logger/test/basic.test.ts b/libs/core/logger/test/basic.test.ts similarity index 100% rename from libs/logger/test/basic.test.ts rename to libs/core/logger/test/basic.test.ts diff --git a/libs/logger/test/integration.test.ts b/libs/core/logger/test/integration.test.ts similarity index 100% rename from libs/logger/test/integration.test.ts rename to libs/core/logger/test/integration.test.ts diff --git a/libs/logger/test/setup.ts b/libs/core/logger/test/setup.ts similarity index 100% rename from libs/logger/test/setup.ts rename to libs/core/logger/test/setup.ts diff --git a/libs/core/logger/tsconfig.json b/libs/core/logger/tsconfig.json new file mode 100644 index 0000000..9405533 --- /dev/null +++ b/libs/core/logger/tsconfig.json @@ -0,0 +1,10 @@ +{ + "extends": "../../../tsconfig.json", + "compilerOptions": { + "outDir": "./dist", + "rootDir": "./src", + "composite": true + }, + "include": ["src/**/*"], + "references": [] +} diff --git 
a/libs/queue/README.md b/libs/core/queue/README.md
similarity index 89%
rename from libs/queue/README.md
rename to libs/core/queue/README.md
index 8ea23c4..ab38abc 100644
--- a/libs/queue/README.md
+++ b/libs/core/queue/README.md
@@ -22,21 +22,24 @@ npm install @stock-bot/queue
 ### Basic Queue Setup
 
 ```typescript
-import { QueueManager, providerRegistry } from '@stock-bot/queue';
+import { QueueManager, handlerRegistry } from '@stock-bot/queue';
 
-// Initialize queue manager
+// Initialize queue manager (typically done via dependency injection)
 const queueManager = new QueueManager({
-  queueName: 'my-service-queue',
-  workers: 5,
-  concurrency: 20,
   redis: {
     host: 'localhost',
     port: 6379,
   },
 });
 
-// Register providers
-providerRegistry.register('market-data', {
+// Get or create a queue
+const queue = queueManager.getQueue('my-service-queue', {
+  workers: 5,
+  concurrency: 20,
+});
+
+// Register handlers
+handlerRegistry.register('market-data', {
   'fetch-price': async (payload) => {
     // Handle price fetching
     return { price: 100, symbol: payload.symbol };
@@ -47,8 +50,7 @@ providerRegistry.register('market-data', {
   },
 });
 
-// Initialize
-await queueManager.initialize();
+// Queue is ready to use - no initialization needed
 ```
 
 ### Batch Processing
@@ -242,8 +244,10 @@ If you're migrating from an existing queue implementation:
    await queueService.initialize();
 
    // After
-   const queueManager = new QueueManager();
-   await queueManager.initialize();
+   const queueManager = new QueueManager({
+     redis: { host: 'localhost', port: 6379 }
+   });
+   // No initialization needed
    ```
 
 3. **Update provider registration**:
@@ -252,7 +256,7 @@ If you're migrating from an existing queue implementation:
    providerRegistry.register('provider', config);
 
    // After
-   queueManager.registerProvider('provider', config);
+   handlerRegistry.register('provider', config);
    ```
 
 ## Examples
 
@@ -281,12 +285,14 @@ See the `/examples` directory for complete implementation examples:
 4. **Clean up periodically**:
    ```typescript
-   await queueManager.clean(24 * 60 * 60 * 1000); // Clean jobs older than 24h
+   const queue = queueManager.getQueue('my-queue');
+   await queue.clean(24 * 60 * 60 * 1000); // Clean jobs older than 24h
    ```
 
 5. **Monitor queue stats**:
    ```typescript
-   const stats = await queueManager.getStats();
+   const queue = queueManager.getQueue('my-queue');
+   const stats = await queue.getStats();
    console.log('Queue status:', stats);
    ```
diff --git a/libs/queue/package.json b/libs/core/queue/package.json
similarity index 91%
rename from libs/queue/package.json
rename to libs/core/queue/package.json
index 5f10b43..db2c01e 100644
--- a/libs/queue/package.json
+++ b/libs/core/queue/package.json
@@ -15,7 +15,8 @@
     "rate-limiter-flexible": "^3.0.0",
     "@stock-bot/cache": "*",
     "@stock-bot/logger": "*",
-    "@stock-bot/types": "*"
+    "@stock-bot/types": "*",
+    "@stock-bot/handlers": "*"
   },
   "devDependencies": {
     "typescript": "^5.3.0",
diff --git a/libs/queue/src/batch-processor.ts b/libs/core/queue/src/batch-processor.ts
similarity index 81%
rename from libs/queue/src/batch-processor.ts
rename to libs/core/queue/src/batch-processor.ts
index 8d822a4..a69116e 100644
--- a/libs/queue/src/batch-processor.ts
+++ b/libs/core/queue/src/batch-processor.ts
@@ -1,9 +1,6 @@
-import { getLogger } from '@stock-bot/logger';
 import { QueueManager } from './queue-manager';
 import type { BatchJobData, BatchResult, JobData, ProcessOptions } from './types';
 
-const logger = getLogger('batch-processor');
-
 /**
  * Main function - processes items either directly or in batches
  * Each item becomes payload: item (no processing needed)
@@ -11,10 +8,15 @@ const logger = getLogger('batch-processor');
 export async function processItems<T>(
   items: T[],
   queueName: string,
-  options: ProcessOptions
+  options: ProcessOptions,
+  queueManager: QueueManager
 ): Promise<BatchResult> {
-  const queueManager = QueueManager.getInstance();
-  queueManager.getQueue(queueName);
+  const queue =
queueManager.getQueue(queueName); + const logger = queue.createChildLogger('batch-processor', { + queueName, + totalItems: items.length, + mode: options.useBatching ? 'batch' : 'direct', + }); const startTime = Date.now(); if (items.length === 0) { @@ -35,8 +37,8 @@ export async function processItems( try { const result = options.useBatching - ? await processBatched(items, queueName, options) - : await processDirect(items, queueName, options); + ? await processBatched(items, queueName, options, queueManager) + : await processDirect(items, queueName, options, queueManager); const duration = Date.now() - startTime; @@ -47,7 +49,7 @@ export async function processItems( return { ...result, duration }; } catch (error) { - logger.error('Batch processing failed', error); + logger.error('Batch processing failed', { error }); throw error; } } @@ -58,10 +60,14 @@ export async function processItems( async function processDirect( items: T[], queueName: string, - options: ProcessOptions + options: ProcessOptions, + queueManager: QueueManager ): Promise> { - const queueManager = QueueManager.getInstance(); - queueManager.getQueue(queueName); + const queue = queueManager.getQueue(queueName); + const logger = queue.createChildLogger('batch-direct', { + queueName, + totalItems: items.length, + }); const totalDelayMs = options.totalDelayHours * 60 * 60 * 1000; // Convert hours to milliseconds const delayPerItem = totalDelayMs / items.length; @@ -87,7 +93,7 @@ async function processDirect( }, })); - const createdJobs = await addJobsInChunks(queueName, jobs); + const createdJobs = await addJobsInChunks(queueName, jobs, queueManager); return { totalItems: items.length, @@ -102,10 +108,14 @@ async function processDirect( async function processBatched( items: T[], queueName: string, - options: ProcessOptions + options: ProcessOptions, + queueManager: QueueManager ): Promise> { - const queueManager = QueueManager.getInstance(); - queueManager.getQueue(queueName); + const queue = 
queueManager.getQueue(queueName); + const logger = queue.createChildLogger('batch-batched', { + queueName, + totalItems: items.length, + }); const batchSize = options.batchSize || 100; const batches = createBatches(items, batchSize); const totalDelayMs = options.totalDelayHours * 60 * 60 * 1000; // Convert hours to milliseconds @@ -121,7 +131,7 @@ async function processBatched( const batchJobs = await Promise.all( batches.map(async (batch, batchIndex) => { // Just store the items directly - no processing needed - const payloadKey = await storeItems(batch, queueName, options); + const payloadKey = await storeItems(batch, queueName, options, queueManager); return { name: 'process-batch', @@ -148,7 +158,7 @@ async function processBatched( }) ); - const createdJobs = await addJobsInChunks(queueName, batchJobs); + const createdJobs = await addJobsInChunks(queueName, batchJobs, queueManager); return { totalItems: items.length, @@ -161,12 +171,16 @@ async function processBatched( /** * Process a batch job - loads items and creates individual jobs */ -export async function processBatchJob(jobData: BatchJobData, queueName: string): Promise { - const queueManager = QueueManager.getInstance(); - queueManager.getQueue(queueName); +export async function processBatchJob(jobData: BatchJobData, queueName: string, queueManager: QueueManager): Promise { + const queue = queueManager.getQueue(queueName); + const logger = queue.createChildLogger('batch-job', { + queueName, + batchIndex: jobData.batchIndex, + payloadKey: jobData.payloadKey, + }); const { payloadKey, batchIndex, totalBatches, itemCount, totalDelayHours } = jobData; - logger.trace('Processing batch job', { + logger.debug('Processing batch job', { batchIndex, totalBatches, itemCount, @@ -174,7 +188,7 @@ export async function processBatchJob(jobData: BatchJobData, queueName: string): }); try { - const payload = await loadPayload(payloadKey, queueName); + const payload = await loadPayload(payloadKey, queueName, 
queueManager); if (!payload || !payload.items || !payload.options) { logger.error('Invalid payload data', { payloadKey, payload }); throw new Error(`Invalid payload data for key: ${payloadKey}`); @@ -187,7 +201,7 @@ export async function processBatchJob(jobData: BatchJobData, queueName: string): const delayPerBatch = totalDelayMs / totalBatches; // Time allocated for each batch const delayPerItem = delayPerBatch / items.length; // Distribute items evenly within batch window - logger.trace('Calculating job delays', { + logger.debug('Calculating job delays', { batchIndex, delayPerBatch: `${(delayPerBatch / 1000 / 60).toFixed(2)} minutes`, delayPerItem: `${(delayPerItem / 1000).toFixed(2)} seconds`, @@ -210,10 +224,10 @@ export async function processBatchJob(jobData: BatchJobData, queueName: string): }, })); - const createdJobs = await addJobsInChunks(queueName, jobs); + const createdJobs = await addJobsInChunks(queueName, jobs, queueManager); // Cleanup payload after successful processing - await cleanupPayload(payloadKey, queueName); + await cleanupPayload(payloadKey, queueName, queueManager); return { batchIndex, @@ -239,9 +253,9 @@ function createBatches(items: T[], batchSize: number): T[][] { async function storeItems( items: T[], queueName: string, - options: ProcessOptions + options: ProcessOptions, + queueManager: QueueManager ): Promise { - const queueManager = QueueManager.getInstance(); const cache = queueManager.getCache(queueName); const payloadKey = `payload:${Date.now()}:${Math.random().toString(36).substr(2, 9)}`; @@ -265,7 +279,8 @@ async function storeItems( async function loadPayload( key: string, - queueName: string + queueName: string, + queueManager: QueueManager ): Promise<{ items: T[]; options: { @@ -276,7 +291,6 @@ async function loadPayload( operation: string; }; } | null> { - const queueManager = QueueManager.getInstance(); const cache = queueManager.getCache(queueName); return (await cache.get(key)) as { items: T[]; @@ -290,8 +304,7 @@ 
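The delay arithmetic threaded through `processDirect` and `processBatchJob` above — a `totalDelayHours` window converted to milliseconds and divided evenly across items (or across batch windows, then across items within each window) — can be sketched on its own. The helper name is hypothetical; only the arithmetic mirrors the diff.

```typescript
// Sketch of the delay-spreading arithmetic from the batch-processor diff:
// the total window (in hours) becomes ms, divided evenly per item,
// so item i is scheduled at i * delayPerItem from now.
function computeItemDelays(itemCount: number, totalDelayHours: number): number[] {
  const totalDelayMs = totalDelayHours * 60 * 60 * 1000; // Convert hours to milliseconds
  const delayPerItem = totalDelayMs / itemCount;
  return Array.from({ length: itemCount }, (_, i) => Math.round(i * delayPerItem));
}
```

For batched mode the same division happens twice: `delayPerBatch = totalDelayMs / totalBatches`, then `delayPerItem = delayPerBatch / items.length` inside each batch window.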
async function loadPayload( } | null; } -async function cleanupPayload(key: string, queueName: string): Promise { - const queueManager = QueueManager.getInstance(); +async function cleanupPayload(key: string, queueName: string, queueManager: QueueManager): Promise { const cache = queueManager.getCache(queueName); await cache.del(key); } @@ -299,10 +312,14 @@ async function cleanupPayload(key: string, queueName: string): Promise { async function addJobsInChunks( queueName: string, jobs: Array<{ name: string; data: JobData; opts?: Record }>, + queueManager: QueueManager, chunkSize = 100 ): Promise { - const queueManager = QueueManager.getInstance(); const queue = queueManager.getQueue(queueName); + const logger = queue.createChildLogger('batch-chunk', { + queueName, + totalJobs: jobs.length, + }); const allCreatedJobs = []; for (let i = 0; i < jobs.length; i += chunkSize) { diff --git a/libs/queue/src/dlq-handler.ts b/libs/core/queue/src/dlq-handler.ts similarity index 81% rename from libs/queue/src/dlq-handler.ts rename to libs/core/queue/src/dlq-handler.ts index 76b2a2d..28fbc08 100644 --- a/libs/queue/src/dlq-handler.ts +++ b/libs/core/queue/src/dlq-handler.ts @@ -1,251 +1,257 @@ -import { Queue, type Job } from 'bullmq'; -import { getLogger } from '@stock-bot/logger'; -import type { DLQConfig, RedisConfig } from './types'; -import { getRedisConnection } from './utils'; - -const logger = getLogger('dlq-handler'); - -export class DeadLetterQueueHandler { - private dlq: Queue; - private config: Required; - private failureCount = new Map(); - - constructor( - private mainQueue: Queue, - connection: RedisConfig, - config: DLQConfig = {} - ) { - this.config = { - maxRetries: config.maxRetries ?? 3, - retryDelay: config.retryDelay ?? 60000, // 1 minute - alertThreshold: config.alertThreshold ?? 100, - cleanupAge: config.cleanupAge ?? 
168, // 7 days - }; - - // Create DLQ with same name but -dlq suffix - const dlqName = `${mainQueue.name}-dlq`; - this.dlq = new Queue(dlqName, { connection: getRedisConnection(connection) }); - } - - /** - * Process a failed job - either retry or move to DLQ - */ - async handleFailedJob(job: Job, error: Error): Promise { - const jobKey = `${job.name}:${job.id}`; - const currentFailures = (this.failureCount.get(jobKey) || 0) + 1; - this.failureCount.set(jobKey, currentFailures); - - logger.warn('Job failed', { - jobId: job.id, - jobName: job.name, - attempt: job.attemptsMade, - maxAttempts: job.opts.attempts, - error: error.message, - failureCount: currentFailures, - }); - - // Check if job should be moved to DLQ - if (job.attemptsMade >= (job.opts.attempts || this.config.maxRetries)) { - await this.moveToDeadLetterQueue(job, error); - this.failureCount.delete(jobKey); - } - } - - /** - * Move job to dead letter queue - */ - private async moveToDeadLetterQueue(job: Job, error: Error): Promise { - try { - const dlqData = { - originalJob: { - id: job.id, - name: job.name, - data: job.data, - opts: job.opts, - attemptsMade: job.attemptsMade, - failedReason: job.failedReason, - processedOn: job.processedOn, - timestamp: job.timestamp, - }, - error: { - message: error.message, - stack: error.stack, - name: error.name, - }, - movedToDLQAt: new Date().toISOString(), - }; - - await this.dlq.add('failed-job', dlqData, { - removeOnComplete: false, - removeOnFail: false, - }); - - logger.error('Job moved to DLQ', { - jobId: job.id, - jobName: job.name, - error: error.message, - }); - - // Check if we need to alert - await this.checkAlertThreshold(); - } catch (dlqError) { - logger.error('Failed to move job to DLQ', { - jobId: job.id, - error: dlqError, - }); - } - } - - /** - * Retry jobs from DLQ - */ - async retryDLQJobs(limit = 10): Promise { - const jobs = await this.dlq.getCompleted(0, limit); - let retriedCount = 0; - - for (const dlqJob of jobs) { - try { - const { 
originalJob } = dlqJob.data; - - // Re-add to main queue with delay - await this.mainQueue.add( - originalJob.name, - originalJob.data, - { - ...originalJob.opts, - delay: this.config.retryDelay, - attempts: this.config.maxRetries, - } - ); - - // Remove from DLQ - await dlqJob.remove(); - retriedCount++; - - logger.info('Job retried from DLQ', { - originalJobId: originalJob.id, - jobName: originalJob.name, - }); - } catch (error) { - logger.error('Failed to retry DLQ job', { - dlqJobId: dlqJob.id, - error, - }); - } - } - - return retriedCount; - } - - /** - * Get DLQ statistics - */ - async getStats(): Promise<{ - total: number; - recent: number; - byJobName: Record; - oldestJob: Date | null; - }> { - const [completed, failed, waiting] = await Promise.all([ - this.dlq.getCompleted(), - this.dlq.getFailed(), - this.dlq.getWaiting(), - ]); - - const allJobs = [...completed, ...failed, ...waiting]; - const byJobName: Record = {}; - let oldestTimestamp: number | null = null; - - for (const job of allJobs) { - const jobName = job.data.originalJob?.name || 'unknown'; - byJobName[jobName] = (byJobName[jobName] || 0) + 1; - - if (!oldestTimestamp || job.timestamp < oldestTimestamp) { - oldestTimestamp = job.timestamp; - } - } - - // Count recent jobs (last 24 hours) - const oneDayAgo = Date.now() - 24 * 60 * 60 * 1000; - const recent = allJobs.filter(job => job.timestamp > oneDayAgo).length; - - return { - total: allJobs.length, - recent, - byJobName, - oldestJob: oldestTimestamp ? 
new Date(oldestTimestamp) : null, - }; - } - - /** - * Clean up old DLQ entries - */ - async cleanup(): Promise { - const ageInMs = this.config.cleanupAge * 60 * 60 * 1000; - const cutoffTime = Date.now() - ageInMs; - - const jobs = await this.dlq.getCompleted(); - let removedCount = 0; - - for (const job of jobs) { - if (job.timestamp < cutoffTime) { - await job.remove(); - removedCount++; - } - } - - logger.info('DLQ cleanup completed', { - removedCount, - cleanupAge: `${this.config.cleanupAge} hours`, - }); - - return removedCount; - } - - /** - * Check if alert threshold is exceeded - */ - private async checkAlertThreshold(): Promise { - const stats = await this.getStats(); - - if (stats.total >= this.config.alertThreshold) { - logger.error('DLQ alert threshold exceeded', { - threshold: this.config.alertThreshold, - currentCount: stats.total, - byJobName: stats.byJobName, - }); - // In a real implementation, this would trigger alerts - } - } - - /** - * Get failed jobs for inspection - */ - async inspectFailedJobs(limit = 10): Promise> { - const jobs = await this.dlq.getCompleted(0, limit); - - return jobs.map(job => ({ - id: job.data.originalJob.id, - name: job.data.originalJob.name, - data: job.data.originalJob.data, - error: job.data.error, - failedAt: job.data.movedToDLQAt, - attempts: job.data.originalJob.attemptsMade, - })); - } - - /** - * Shutdown DLQ handler - */ - async shutdown(): Promise { - await this.dlq.close(); - this.failureCount.clear(); - } -} \ No newline at end of file +import { Queue, type Job } from 'bullmq'; +import type { DLQConfig, RedisConfig } from './types'; +import { getRedisConnection } from './utils'; + +// Logger interface for type safety +interface Logger { + info(message: string, meta?: Record): void; + error(message: string, meta?: Record): void; + warn(message: string, meta?: Record): void; + debug(message: string, meta?: Record): void; +} + +export class DeadLetterQueueHandler { + private dlq: Queue; + private config: 
Required; + private failureCount = new Map(); + private readonly logger: Logger; + + constructor( + private mainQueue: Queue, + connection: RedisConfig, + config: DLQConfig = {}, + logger?: Logger + ) { + this.logger = logger || console; + this.config = { + maxRetries: config.maxRetries ?? 3, + retryDelay: config.retryDelay ?? 60000, // 1 minute + alertThreshold: config.alertThreshold ?? 100, + cleanupAge: config.cleanupAge ?? 168, // 7 days + }; + + // Create DLQ with same name but -dlq suffix + const dlqName = `${mainQueue.name}-dlq`; + this.dlq = new Queue(dlqName, { connection: getRedisConnection(connection) }); + } + + /** + * Process a failed job - either retry or move to DLQ + */ + async handleFailedJob(job: Job, error: Error): Promise { + const jobKey = `${job.name}:${job.id}`; + const currentFailures = (this.failureCount.get(jobKey) || 0) + 1; + this.failureCount.set(jobKey, currentFailures); + + this.logger.warn('Job failed', { + jobId: job.id, + jobName: job.name, + attempt: job.attemptsMade, + maxAttempts: job.opts.attempts, + error: error.message, + failureCount: currentFailures, + }); + + // Check if job should be moved to DLQ + if (job.attemptsMade >= (job.opts.attempts || this.config.maxRetries)) { + await this.moveToDeadLetterQueue(job, error); + this.failureCount.delete(jobKey); + } + } + + /** + * Move job to dead letter queue + */ + private async moveToDeadLetterQueue(job: Job, error: Error): Promise { + try { + const dlqData = { + originalJob: { + id: job.id, + name: job.name, + data: job.data, + opts: job.opts, + attemptsMade: job.attemptsMade, + failedReason: job.failedReason, + processedOn: job.processedOn, + timestamp: job.timestamp, + }, + error: { + message: error.message, + stack: error.stack, + name: error.name, + }, + movedToDLQAt: new Date().toISOString(), + }; + + await this.dlq.add('failed-job', dlqData, { + removeOnComplete: 100, + removeOnFail: 50, + }); + + this.logger.error('Job moved to DLQ', { + jobId: job.id, + jobName: 
job.name, + error: error.message, + }); + + // Check if we need to alert + await this.checkAlertThreshold(); + } catch (dlqError) { + this.logger.error('Failed to move job to DLQ', { + jobId: job.id, + error: dlqError, + }); + } + } + + /** + * Retry jobs from DLQ + */ + async retryDLQJobs(limit = 10): Promise { + const jobs = await this.dlq.getCompleted(0, limit); + let retriedCount = 0; + + for (const dlqJob of jobs) { + try { + const { originalJob } = dlqJob.data; + + // Re-add to main queue with delay + await this.mainQueue.add(originalJob.name, originalJob.data, { + ...originalJob.opts, + delay: this.config.retryDelay, + attempts: this.config.maxRetries, + }); + + // Remove from DLQ + await dlqJob.remove(); + retriedCount++; + + this.logger.info('Job retried from DLQ', { + originalJobId: originalJob.id, + jobName: originalJob.name, + }); + } catch (error) { + this.logger.error('Failed to retry DLQ job', { + dlqJobId: dlqJob.id, + error, + }); + } + } + + return retriedCount; + } + + /** + * Get DLQ statistics + */ + async getStats(): Promise<{ + total: number; + recent: number; + byJobName: Record; + oldestJob: Date | null; + }> { + const [completed, failed, waiting] = await Promise.all([ + this.dlq.getCompleted(), + this.dlq.getFailed(), + this.dlq.getWaiting(), + ]); + + const allJobs = [...completed, ...failed, ...waiting]; + const byJobName: Record = {}; + let oldestTimestamp: number | null = null; + + for (const job of allJobs) { + const jobName = job.data.originalJob?.name || 'unknown'; + byJobName[jobName] = (byJobName[jobName] || 0) + 1; + + if (!oldestTimestamp || job.timestamp < oldestTimestamp) { + oldestTimestamp = job.timestamp; + } + } + + // Count recent jobs (last 24 hours) + const oneDayAgo = Date.now() - 24 * 60 * 60 * 1000; + const recent = allJobs.filter(job => job.timestamp > oneDayAgo).length; + + return { + total: allJobs.length, + recent, + byJobName, + oldestJob: oldestTimestamp ? 
new Date(oldestTimestamp) : null, + }; + } + + /** + * Clean up old DLQ entries + */ + async cleanup(): Promise { + const ageInMs = this.config.cleanupAge * 60 * 60 * 1000; + const cutoffTime = Date.now() - ageInMs; + + const jobs = await this.dlq.getCompleted(); + let removedCount = 0; + + for (const job of jobs) { + if (job.timestamp < cutoffTime) { + await job.remove(); + removedCount++; + } + } + + this.logger.info('DLQ cleanup completed', { + removedCount, + cleanupAge: `${this.config.cleanupAge} hours`, + }); + + return removedCount; + } + + /** + * Check if alert threshold is exceeded + */ + private async checkAlertThreshold(): Promise { + const stats = await this.getStats(); + + if (stats.total >= this.config.alertThreshold) { + this.logger.error('DLQ alert threshold exceeded', { + threshold: this.config.alertThreshold, + currentCount: stats.total, + byJobName: stats.byJobName, + }); + // In a real implementation, this would trigger alerts + } + } + + /** + * Get failed jobs for inspection + */ + async inspectFailedJobs(limit = 10): Promise< + Array<{ + id: string; + name: string; + data: unknown; + error: unknown; + failedAt: string; + attempts: number; + }> + > { + const jobs = await this.dlq.getCompleted(0, limit); + + return jobs.map(job => ({ + id: job.data.originalJob.id, + name: job.data.originalJob.name, + data: job.data.originalJob.data, + error: job.data.error, + failedAt: job.data.movedToDLQAt, + attempts: job.data.originalJob.attemptsMade, + })); + } + + /** + * Shutdown DLQ handler + */ + async shutdown(): Promise { + await this.dlq.close(); + this.failureCount.clear(); + } +} diff --git a/libs/queue/src/index.ts b/libs/core/queue/src/index.ts similarity index 60% rename from libs/queue/src/index.ts rename to libs/core/queue/src/index.ts index fe606e8..2f986af 100644 --- a/libs/queue/src/index.ts +++ b/libs/core/queue/src/index.ts @@ -1,17 +1,24 @@ // Core exports -export { Queue, type QueueWorkerConfig } from './queue'; +export { Queue } from 
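The rewritten `DeadLetterQueueHandler` above swaps the module-level `getLogger` import for a structural `Logger` interface injected through the constructor, falling back to `console`. A minimal sketch of that pattern follows — the `FailureTracker` class is illustrative, not part of the library; only the injection shape matches the diff.

```typescript
interface Logger {
  info(message: string, meta?: Record<string, unknown>): void;
  warn(message: string, meta?: Record<string, unknown>): void;
  error(message: string, meta?: Record<string, unknown>): void;
  debug(message: string, meta?: Record<string, unknown>): void;
}

// Sketch: accept any structural Logger, defaulting to console.
// `console` satisfies the interface structurally, so no adapter is needed.
class FailureTracker {
  private readonly logger: Logger;
  private failureCount = new Map<string, number>();

  constructor(logger?: Logger) {
    this.logger = logger ?? console;
  }

  recordFailure(jobKey: string): number {
    const count = (this.failureCount.get(jobKey) ?? 0) + 1;
    this.failureCount.set(jobKey, count);
    this.logger.warn('Job failed', { jobKey, failureCount: count });
    return count;
  }
}
```

This keeps the handler testable (pass a spy logger) and breaks the compile-time dependency on `@stock-bot/logger`, consistent with the PR's move away from module-level singletons.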
'./queue'; export { QueueManager } from './queue-manager'; -export { handlerRegistry } from './handler-registry'; -export { createJobHandler } from './types'; +export { SmartQueueManager } from './smart-queue-manager'; +export { ServiceCache, createServiceCache } from './service-cache'; +// Service utilities +export { + normalizeServiceName, + generateCachePrefix, + getFullQueueName, + parseQueueName +} from './service-utils'; + +// Re-export handler registry and utilities from handlers package +export { handlerRegistry, createJobHandler } from '@stock-bot/handlers'; // Batch processing export { processBatchJob, processItems } from './batch-processor'; -// Queue factory functions -// QueueFactory removed - use QueueManager directly - // DLQ handling -export { DeadLetterQueueHandler, DeadLetterQueueHandler as DLQHandler } from './dlq-handler'; +export { DeadLetterQueueHandler } from './dlq-handler'; // Metrics export { QueueMetricsCollector } from './queue-metrics'; @@ -25,38 +32,42 @@ export type { JobData, JobOptions, QueueOptions, - QueueStats, GlobalStats, - + // Batch processing types BatchResult, ProcessOptions, BatchJobData, - + // Handler types JobHandler, TypedJobHandler, HandlerConfig, - TypedHandlerConfig, HandlerConfigWithSchedule, - TypedHandlerConfigWithSchedule, HandlerInitializer, - + QueueStats, + QueueWorkerConfig, + // Configuration types RedisConfig, QueueConfig, QueueManagerConfig, - + // Rate limiting types RateLimitConfig, RateLimitRule, - + // DLQ types DLQConfig, DLQJobInfo, - + // Scheduled job types ScheduledJob, ScheduleConfig, + + // Smart Queue types + SmartQueueConfig, + QueueRoute, + } from './types'; diff --git a/libs/queue/src/queue-manager.ts b/libs/core/queue/src/queue-manager.ts similarity index 75% rename from libs/queue/src/queue-manager.ts rename to libs/core/queue/src/queue-manager.ts index ee2ac14..ad18385 100644 --- a/libs/queue/src/queue-manager.ts +++ b/libs/core/queue/src/queue-manager.ts @@ -1,5 +1,5 @@ -import { 
CacheProvider, createCache } from '@stock-bot/cache'; -import { getLogger } from '@stock-bot/logger'; +import { createCache } from '@stock-bot/cache'; +import type { CacheProvider } from '@stock-bot/cache'; import { Queue, type QueueWorkerConfig } from './queue'; import { QueueRateLimiter } from './rate-limiter'; import type { @@ -8,17 +8,25 @@ import type { QueueOptions, QueueStats, RateLimitRule, + RedisConfig, } from './types'; import { getRedisConnection } from './utils'; -const logger = getLogger('queue-manager'); +// Logger interface for type safety +interface Logger { + info(message: string, meta?: Record): void; + error(message: string, meta?: Record): void; + warn(message: string, meta?: Record): void; + debug(message: string, meta?: Record): void; + trace(message: string, meta?: Record): void; + child?(name: string, context?: Record): Logger; +} /** - * Singleton QueueManager that provides unified queue and cache management + * QueueManager provides unified queue and cache management * Main entry point for all queue operations with getQueue() method */ export class QueueManager { - private static instance: QueueManager | null = null; private queues = new Map(); private caches = new Map(); private rateLimiter?: QueueRateLimiter; @@ -26,14 +34,16 @@ export class QueueManager { private isShuttingDown = false; private shutdownPromise: Promise | null = null; private config: QueueManagerConfig; + private readonly logger: Logger; - private constructor(config: QueueManagerConfig) { + constructor(config: QueueManagerConfig, logger?: Logger) { this.config = config; + this.logger = logger || console; this.redisConnection = getRedisConnection(config.redis); // Initialize rate limiter if rules are provided if (config.rateLimitRules && config.rateLimitRules.length > 0) { - this.rateLimiter = new QueueRateLimiter(this.redisConnection); + this.rateLimiter = new QueueRateLimiter(this.redisConnection, this.logger); config.rateLimitRules.forEach(rule => { if 
(this.rateLimiter) { this.rateLimiter.addRule(rule); @@ -41,71 +51,11 @@ export class QueueManager { }); } - logger.info('QueueManager singleton initialized', { + this.logger.info('QueueManager initialized', { redis: `${config.redis.host}:${config.redis.port}`, }); } - /** - * Get the singleton instance - * @throws Error if not initialized - use initialize() first - */ - static getInstance(): QueueManager { - if (!QueueManager.instance) { - throw new Error('QueueManager not initialized. Call QueueManager.initialize(config) first.'); - } - return QueueManager.instance; - } - - /** - * Initialize the singleton with config - * Must be called before getInstance() - */ - static initialize(config: QueueManagerConfig): QueueManager { - if (QueueManager.instance) { - logger.warn('QueueManager already initialized, returning existing instance'); - return QueueManager.instance; - } - QueueManager.instance = new QueueManager(config); - return QueueManager.instance; - } - - /** - * Get or initialize the singleton - * Convenience method that combines initialize and getInstance - */ - static getOrInitialize(config?: QueueManagerConfig): QueueManager { - if (QueueManager.instance) { - return QueueManager.instance; - } - - if (!config) { - throw new Error( - 'QueueManager not initialized and no config provided. ' + - 'Either call initialize(config) first or provide config to getOrInitialize(config).' 
- ); - } - - return QueueManager.initialize(config); - } - - /** - * Check if the QueueManager is initialized - */ - static isInitialized(): boolean { - return QueueManager.instance !== null; - } - - /** - * Reset the singleton (mainly for testing) - */ - static async reset(): Promise { - if (QueueManager.instance) { - await QueueManager.instance.shutdown(); - QueueManager.instance = null; - } - } - /** * Get or create a queue - unified method that handles both scenarios * This is the main method for accessing queues @@ -126,17 +76,21 @@ export class QueueManager { }; // Prepare queue configuration + const workers = mergedOptions.workers ?? this.config.defaultQueueOptions?.workers ?? 1; + const concurrency = mergedOptions.concurrency ?? this.config.defaultQueueOptions?.concurrency ?? 1; + const queueConfig: QueueWorkerConfig = { - workers: mergedOptions.workers, - concurrency: mergedOptions.concurrency, - startWorker: !!mergedOptions.workers && mergedOptions.workers > 0 && !this.config.delayWorkerStart, + workers, + concurrency, + startWorker: workers > 0 && !this.config.delayWorkerStart, }; const queue = new Queue( queueName, this.config.redis, mergedOptions.defaultJobOptions || {}, - queueConfig + queueConfig, + this.logger ); // Store the queue @@ -156,10 +110,10 @@ export class QueueManager { }); } - logger.info('Queue created with batch cache', { + this.logger.info('Queue created with batch cache', { queueName, - workers: mergedOptions.workers || 0, - concurrency: mergedOptions.concurrency || 1, + workers: workers, + concurrency: concurrency, }); return queue; @@ -189,9 +143,10 @@ export class QueueManager { keyPrefix: `batch:${queueName}:`, ttl: 86400, // 24 hours default enableMetrics: true, + logger: this.logger, }); this.caches.set(queueName, cacheProvider); - logger.trace('Cache created for queue', { queueName }); + this.logger.trace('Cache created for queue', { queueName }); } const cache = this.caches.get(queueName); if (!cache) { @@ -206,7 +161,7 @@ 
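The hunks above delete the entire singleton surface of `QueueManager` (`getInstance`, `initialize`, `getOrInitialize`, `isInitialized`, `reset`) in favor of a plain public constructor. The before/after can be sketched generically — the class and function names here are illustrative, not the library's code:

```typescript
// Before (singleton): hidden global state; config after the first
// initialize() call is silently ignored, and tests need a reset hook.
class ManagerSingleton {
  private static instance: ManagerSingleton | null = null;
  private constructor(readonly config: { host: string }) {}
  static initialize(config: { host: string }): ManagerSingleton {
    ManagerSingleton.instance ??= new ManagerSingleton(config);
    return ManagerSingleton.instance;
  }
}

// After (DI): construct explicitly and hand the instance to collaborators,
// as the refactored batch-processor functions now take a queueManager parameter.
class Manager {
  constructor(readonly config: { host: string }) {}
}

function doWork(manager: Manager): string {
  return manager.config.host;
}
```

The test below demonstrates the singleton's silently-ignored-config pitfall that motivates the removal.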
export class QueueManager { async initializeCache(queueName: string): Promise { const cache = this.getCache(queueName); await cache.waitForReady(10000); - logger.info('Cache initialized for queue', { queueName }); + this.logger.info('Cache initialized for queue', { queueName }); } /** @@ -216,9 +171,17 @@ export class QueueManager { private initializeBatchCacheSync(queueName: string): void { // Just create the cache - it will connect automatically when first used this.getCache(queueName); - logger.trace('Batch cache initialized synchronously for queue', { queueName }); + this.logger.trace('Batch cache initialized synchronously for queue', { queueName }); } + /** + * Get the queues map (for subclasses) + */ + protected getQueues(): Map { + return this.queues; + } + + /** * Get statistics for all queues */ @@ -259,7 +222,7 @@ export class QueueManager { */ addRateLimitRule(rule: RateLimitRule): void { if (!this.rateLimiter) { - this.rateLimiter = new QueueRateLimiter(this.redisConnection); + this.rateLimiter = new QueueRateLimiter(this.redisConnection, this.logger); } this.rateLimiter.addRule(rule); } @@ -305,7 +268,7 @@ export class QueueManager { async pauseAll(): Promise { const pausePromises = Array.from(this.queues.values()).map(queue => queue.pause()); await Promise.all(pausePromises); - logger.info('All queues paused'); + this.logger.info('All queues paused'); } /** @@ -314,7 +277,7 @@ export class QueueManager { async resumeAll(): Promise { const resumePromises = Array.from(this.queues.values()).map(queue => queue.resume()); await Promise.all(resumePromises); - logger.info('All queues resumed'); + this.logger.info('All queues resumed'); } /** @@ -349,7 +312,7 @@ export class QueueManager { async drainAll(delayed = false): Promise { const drainPromises = Array.from(this.queues.values()).map(queue => queue.drain(delayed)); await Promise.all(drainPromises); - logger.info('All queues drained', { delayed }); + this.logger.info('All queues drained', { delayed }); } 
/** @@ -364,7 +327,7 @@ export class QueueManager { queue.clean(grace, limit, type) ); await Promise.all(cleanPromises); - logger.info('All queues cleaned', { type, grace, limit }); + this.logger.info('All queues cleaned', { type, grace, limit }); } /** @@ -381,7 +344,7 @@ export class QueueManager { } this.isShuttingDown = true; - logger.info('Shutting down QueueManager...'); + this.logger.info('Shutting down QueueManager...'); // Create shutdown promise this.shutdownPromise = this.performShutdown(); @@ -404,7 +367,7 @@ export class QueueManager { // await Promise.race([closePromise, timeoutPromise]); } catch (error) { - logger.warn('Error closing queue', { error: (error as Error).message }); + this.logger.warn('Error closing queue', { error: (error as Error).message }); } }); @@ -416,7 +379,7 @@ export class QueueManager { // Clear cache before shutdown await cache.clear(); } catch (error) { - logger.warn('Error clearing cache', { error: (error as Error).message }); + this.logger.warn('Error clearing cache', { error: (error as Error).message }); } }); @@ -426,9 +389,9 @@ export class QueueManager { this.queues.clear(); this.caches.clear(); - logger.info('QueueManager shutdown complete'); + this.logger.info('QueueManager shutdown complete'); } catch (error) { - logger.error('Error during shutdown', { error: (error as Error).message }); + this.logger.error('Error during shutdown', { error: (error as Error).message }); throw error; } finally { // Reset shutdown state @@ -442,7 +405,9 @@ export class QueueManager { */ startAllWorkers(): void { if (!this.config.delayWorkerStart) { - logger.warn('startAllWorkers() called but delayWorkerStart is not enabled'); + this.logger.info( + 'startAllWorkers() called but workers already started automatically (delayWorkerStart is false)' + ); return; } @@ -450,17 +415,17 @@ export class QueueManager { for (const queue of this.queues.values()) { const workerCount = this.config.defaultQueueOptions?.workers || 1; const concurrency = 
this.config.defaultQueueOptions?.concurrency || 1; - + if (workerCount > 0) { queue.startWorkersManually(workerCount, concurrency); workersStarted++; } } - logger.info('All workers started', { + this.logger.info('All workers started', { totalQueues: this.queues.size, queuesWithWorkers: workersStarted, - delayWorkerStart: this.config.delayWorkerStart + delayWorkerStart: this.config.delayWorkerStart, }); } @@ -485,4 +450,4 @@ export class QueueManager { getConfig(): Readonly { return { ...this.config }; } -} +} \ No newline at end of file diff --git a/libs/queue/src/queue-metrics.ts b/libs/core/queue/src/queue-metrics.ts similarity index 92% rename from libs/queue/src/queue-metrics.ts rename to libs/core/queue/src/queue-metrics.ts index c74ada5..e45477d 100644 --- a/libs/queue/src/queue-metrics.ts +++ b/libs/core/queue/src/queue-metrics.ts @@ -1,314 +1,318 @@ -import { Queue, QueueEvents } from 'bullmq'; -// import { getLogger } from '@stock-bot/logger'; - -// const logger = getLogger('queue-metrics'); - -export interface QueueMetrics { - // Job counts - waiting: number; - active: number; - completed: number; - failed: number; - delayed: number; - paused?: number; - - // Performance metrics - processingTime: { - avg: number; - min: number; - max: number; - p95: number; - p99: number; - }; - - // Throughput - throughput: { - completedPerMinute: number; - failedPerMinute: number; - totalPerMinute: number; - }; - - // Job age - oldestWaitingJob: Date | null; - - // Health - isHealthy: boolean; - healthIssues: string[]; -} - -export class QueueMetricsCollector { - private processingTimes: number[] = []; - private completedTimestamps: number[] = []; - private failedTimestamps: number[] = []; - private jobStartTimes = new Map(); - private readonly maxSamples = 1000; - private readonly metricsInterval = 60000; // 1 minute - - constructor( - private queue: Queue, - private queueEvents: QueueEvents - ) { - this.setupEventListeners(); - } - - /** - * Setup event listeners for 
metrics collection - */ - private setupEventListeners(): void { - this.queueEvents.on('completed', () => { - // Record completion - this.completedTimestamps.push(Date.now()); - this.cleanupOldTimestamps(); - }); - - this.queueEvents.on('failed', () => { - // Record failure - this.failedTimestamps.push(Date.now()); - this.cleanupOldTimestamps(); - }); - - // Track processing times - this.queueEvents.on('active', ({ jobId }) => { - this.jobStartTimes.set(jobId, Date.now()); - }); - - this.queueEvents.on('completed', ({ jobId }) => { - const startTime = this.jobStartTimes.get(jobId); - if (startTime) { - const processingTime = Date.now() - startTime; - this.recordProcessingTime(processingTime); - this.jobStartTimes.delete(jobId); - } - }); - } - - /** - * Record processing time - */ - private recordProcessingTime(time: number): void { - this.processingTimes.push(time); - - // Keep only recent samples - if (this.processingTimes.length > this.maxSamples) { - this.processingTimes = this.processingTimes.slice(-this.maxSamples); - } - } - - /** - * Clean up old timestamps - */ - private cleanupOldTimestamps(): void { - const cutoff = Date.now() - this.metricsInterval; - - this.completedTimestamps = this.completedTimestamps.filter(ts => ts > cutoff); - this.failedTimestamps = this.failedTimestamps.filter(ts => ts > cutoff); - } - - /** - * Collect current metrics - */ - async collect(): Promise { - // Get job counts - const [waiting, active, completed, failed, delayed] = await Promise.all([ - this.queue.getWaitingCount(), - this.queue.getActiveCount(), - this.queue.getCompletedCount(), - this.queue.getFailedCount(), - this.queue.getDelayedCount(), - ]); - - // BullMQ doesn't have getPausedCount, check if queue is paused - const paused = await this.queue.isPaused() ? 
waiting : 0; - - // Calculate processing time metrics - const processingTime = this.calculateProcessingTimeMetrics(); - - // Calculate throughput - const throughput = this.calculateThroughput(); - - // Get oldest waiting job - const oldestWaitingJob = await this.getOldestWaitingJob(); - - // Check health - const { isHealthy, healthIssues } = this.checkHealth({ - waiting, - active, - failed, - processingTime, - }); - - return { - waiting, - active, - completed, - failed, - delayed, - paused, - processingTime, - throughput, - oldestWaitingJob, - isHealthy, - healthIssues, - }; - } - - /** - * Calculate processing time metrics - */ - private calculateProcessingTimeMetrics(): QueueMetrics['processingTime'] { - if (this.processingTimes.length === 0) { - return { avg: 0, min: 0, max: 0, p95: 0, p99: 0 }; - } - - const sorted = [...this.processingTimes].sort((a, b) => a - b); - const sum = sorted.reduce((acc, val) => acc + val, 0); - - return { - avg: sorted.length > 0 ? Math.round(sum / sorted.length) : 0, - min: sorted[0] || 0, - max: sorted[sorted.length - 1] || 0, - p95: sorted[Math.floor(sorted.length * 0.95)] || 0, - p99: sorted[Math.floor(sorted.length * 0.99)] || 0, - }; - } - - /** - * Calculate throughput metrics - */ - private calculateThroughput(): QueueMetrics['throughput'] { - const now = Date.now(); - const oneMinuteAgo = now - 60000; - - const completedPerMinute = this.completedTimestamps.filter(ts => ts > oneMinuteAgo).length; - const failedPerMinute = this.failedTimestamps.filter(ts => ts > oneMinuteAgo).length; - - return { - completedPerMinute, - failedPerMinute, - totalPerMinute: completedPerMinute + failedPerMinute, - }; - } - - /** - * Get oldest waiting job - */ - private async getOldestWaitingJob(): Promise { - const waitingJobs = await this.queue.getWaiting(0, 1); - - if (waitingJobs.length > 0) { - return new Date(waitingJobs[0].timestamp); - } - - return null; - } - - /** - * Check queue health - */ - private checkHealth(metrics: { - waiting: 
number; - active: number; - failed: number; - processingTime: QueueMetrics['processingTime']; - }): { isHealthy: boolean; healthIssues: string[] } { - const issues: string[] = []; - - // Check for high failure rate - const failureRate = metrics.failed / (metrics.failed + this.completedTimestamps.length); - if (failureRate > 0.1) { - issues.push(`High failure rate: ${(failureRate * 100).toFixed(1)}%`); - } - - // Check for queue backlog - if (metrics.waiting > 1000) { - issues.push(`Large queue backlog: ${metrics.waiting} jobs waiting`); - } - - // Check for slow processing - if (metrics.processingTime.avg > 30000) { // 30 seconds - issues.push(`Slow average processing time: ${(metrics.processingTime.avg / 1000).toFixed(1)}s`); - } - - // Check for stalled active jobs - if (metrics.active > 100) { - issues.push(`High number of active jobs: ${metrics.active}`); - } - - return { - isHealthy: issues.length === 0, - healthIssues: issues, - }; - } - - /** - * Get formatted metrics report - */ - async getReport(): Promise { - const metrics = await this.collect(); - - return ` -Queue Metrics Report -=================== -Status: ${metrics.isHealthy ? '✅ Healthy' : '⚠️ Issues Detected'} - -Job Counts: -- Waiting: ${metrics.waiting} -- Active: ${metrics.active} -- Completed: ${metrics.completed} -- Failed: ${metrics.failed} -- Delayed: ${metrics.delayed} -- Paused: ${metrics.paused} - -Performance: -- Avg Processing Time: ${(metrics.processingTime.avg / 1000).toFixed(2)}s -- Min/Max: ${(metrics.processingTime.min / 1000).toFixed(2)}s / ${(metrics.processingTime.max / 1000).toFixed(2)}s -- P95/P99: ${(metrics.processingTime.p95 / 1000).toFixed(2)}s / ${(metrics.processingTime.p99 / 1000).toFixed(2)}s - -Throughput: -- Completed/min: ${metrics.throughput.completedPerMinute} -- Failed/min: ${metrics.throughput.failedPerMinute} -- Total/min: ${metrics.throughput.totalPerMinute} - -${metrics.oldestWaitingJob ? 
`Oldest Waiting Job: ${metrics.oldestWaitingJob.toISOString()}` : 'No waiting jobs'} - -${metrics.healthIssues.length > 0 ? `\nHealth Issues:\n${metrics.healthIssues.map(issue => `- ${issue}`).join('\n')}` : ''} - `.trim(); - } - - /** - * Export metrics in Prometheus format - */ - async getPrometheusMetrics(): Promise { - const metrics = await this.collect(); - const queueName = this.queue.name; - - return ` -# HELP queue_jobs_total Total number of jobs by status -# TYPE queue_jobs_total gauge -queue_jobs_total{queue="${queueName}",status="waiting"} ${metrics.waiting} -queue_jobs_total{queue="${queueName}",status="active"} ${metrics.active} -queue_jobs_total{queue="${queueName}",status="completed"} ${metrics.completed} -queue_jobs_total{queue="${queueName}",status="failed"} ${metrics.failed} -queue_jobs_total{queue="${queueName}",status="delayed"} ${metrics.delayed} -queue_jobs_total{queue="${queueName}",status="paused"} ${metrics.paused} - -# HELP queue_processing_time_seconds Job processing time in seconds -# TYPE queue_processing_time_seconds summary -queue_processing_time_seconds{queue="${queueName}",quantile="0.5"} ${(metrics.processingTime.avg / 1000).toFixed(3)} -queue_processing_time_seconds{queue="${queueName}",quantile="0.95"} ${(metrics.processingTime.p95 / 1000).toFixed(3)} -queue_processing_time_seconds{queue="${queueName}",quantile="0.99"} ${(metrics.processingTime.p99 / 1000).toFixed(3)} -queue_processing_time_seconds_sum{queue="${queueName}"} ${(metrics.processingTime.avg * this.processingTimes.length / 1000).toFixed(3)} -queue_processing_time_seconds_count{queue="${queueName}"} ${this.processingTimes.length} - -# HELP queue_throughput_per_minute Jobs processed per minute -# TYPE queue_throughput_per_minute gauge -queue_throughput_per_minute{queue="${queueName}",status="completed"} ${metrics.throughput.completedPerMinute} -queue_throughput_per_minute{queue="${queueName}",status="failed"} ${metrics.throughput.failedPerMinute} 
-queue_throughput_per_minute{queue="${queueName}",status="total"} ${metrics.throughput.totalPerMinute} - -# HELP queue_health Queue health status -# TYPE queue_health gauge -queue_health{queue="${queueName}"} ${metrics.isHealthy ? 1 : 0} - `.trim(); - } -} \ No newline at end of file +import { Queue, QueueEvents } from 'bullmq'; + +// import { getLogger } from '@stock-bot/logger'; + +// const logger = getLogger('queue-metrics'); + +export interface QueueMetrics { + // Job counts + waiting: number; + active: number; + completed: number; + failed: number; + delayed: number; + paused?: number; + + // Performance metrics + processingTime: { + avg: number; + min: number; + max: number; + p95: number; + p99: number; + }; + + // Throughput + throughput: { + completedPerMinute: number; + failedPerMinute: number; + totalPerMinute: number; + }; + + // Job age + oldestWaitingJob: Date | null; + + // Health + isHealthy: boolean; + healthIssues: string[]; +} + +export class QueueMetricsCollector { + private processingTimes: number[] = []; + private completedTimestamps: number[] = []; + private failedTimestamps: number[] = []; + private jobStartTimes = new Map(); + private readonly maxSamples = 1000; + private readonly metricsInterval = 60000; // 1 minute + + constructor( + private queue: Queue, + private queueEvents: QueueEvents + ) { + this.setupEventListeners(); + } + + /** + * Setup event listeners for metrics collection + */ + private setupEventListeners(): void { + this.queueEvents.on('completed', () => { + // Record completion + this.completedTimestamps.push(Date.now()); + this.cleanupOldTimestamps(); + }); + + this.queueEvents.on('failed', () => { + // Record failure + this.failedTimestamps.push(Date.now()); + this.cleanupOldTimestamps(); + }); + + // Track processing times + this.queueEvents.on('active', ({ jobId }) => { + this.jobStartTimes.set(jobId, Date.now()); + }); + + this.queueEvents.on('completed', ({ jobId }) => { + const startTime = 
this.jobStartTimes.get(jobId); + if (startTime) { + const processingTime = Date.now() - startTime; + this.recordProcessingTime(processingTime); + this.jobStartTimes.delete(jobId); + } + }); + } + + /** + * Record processing time + */ + private recordProcessingTime(time: number): void { + this.processingTimes.push(time); + + // Keep only recent samples + if (this.processingTimes.length > this.maxSamples) { + this.processingTimes = this.processingTimes.slice(-this.maxSamples); + } + } + + /** + * Clean up old timestamps + */ + private cleanupOldTimestamps(): void { + const cutoff = Date.now() - this.metricsInterval; + + this.completedTimestamps = this.completedTimestamps.filter(ts => ts > cutoff); + this.failedTimestamps = this.failedTimestamps.filter(ts => ts > cutoff); + } + + /** + * Collect current metrics + */ + async collect(): Promise<QueueMetrics> { + // Get job counts + const [waiting, active, completed, failed, delayed] = await Promise.all([ + this.queue.getWaitingCount(), + this.queue.getActiveCount(), + this.queue.getCompletedCount(), + this.queue.getFailedCount(), + this.queue.getDelayedCount(), + ]); + + // BullMQ doesn't have getPausedCount, check if queue is paused + const paused = (await this.queue.isPaused()) ? 
waiting : 0; + + // Calculate processing time metrics + const processingTime = this.calculateProcessingTimeMetrics(); + + // Calculate throughput + const throughput = this.calculateThroughput(); + + // Get oldest waiting job + const oldestWaitingJob = await this.getOldestWaitingJob(); + + // Check health + const { isHealthy, healthIssues } = this.checkHealth({ + waiting, + active, + failed, + processingTime, + }); + + return { + waiting, + active, + completed, + failed, + delayed, + paused, + processingTime, + throughput, + oldestWaitingJob, + isHealthy, + healthIssues, + }; + } + + /** + * Calculate processing time metrics + */ + private calculateProcessingTimeMetrics(): QueueMetrics['processingTime'] { + if (this.processingTimes.length === 0) { + return { avg: 0, min: 0, max: 0, p95: 0, p99: 0 }; + } + + const sorted = [...this.processingTimes].sort((a, b) => a - b); + const sum = sorted.reduce((acc, val) => acc + val, 0); + + return { + avg: sorted.length > 0 ? Math.round(sum / sorted.length) : 0, + min: sorted[0] || 0, + max: sorted[sorted.length - 1] || 0, + p95: sorted[Math.floor(sorted.length * 0.95)] || 0, + p99: sorted[Math.floor(sorted.length * 0.99)] || 0, + }; + } + + /** + * Calculate throughput metrics + */ + private calculateThroughput(): QueueMetrics['throughput'] { + const now = Date.now(); + const oneMinuteAgo = now - 60000; + + const completedPerMinute = this.completedTimestamps.filter(ts => ts > oneMinuteAgo).length; + const failedPerMinute = this.failedTimestamps.filter(ts => ts > oneMinuteAgo).length; + + return { + completedPerMinute, + failedPerMinute, + totalPerMinute: completedPerMinute + failedPerMinute, + }; + } + + /** + * Get oldest waiting job + */ + private async getOldestWaitingJob(): Promise<Date | null> { + const waitingJobs = await this.queue.getWaiting(0, 1); + + if (waitingJobs.length > 0) { + return new Date(waitingJobs[0].timestamp); + } + + return null; + } + + /** + * Check queue health + */ + private checkHealth(metrics: { + waiting: 
number; + active: number; + failed: number; + processingTime: QueueMetrics['processingTime']; + }): { isHealthy: boolean; healthIssues: string[] } { + const issues: string[] = []; + + // Check for high failure rate + const failureRate = metrics.failed / (metrics.failed + this.completedTimestamps.length); + if (failureRate > 0.1) { + issues.push(`High failure rate: ${(failureRate * 100).toFixed(1)}%`); + } + + // Check for queue backlog + if (metrics.waiting > 1000) { + issues.push(`Large queue backlog: ${metrics.waiting} jobs waiting`); + } + + // Check for slow processing + if (metrics.processingTime.avg > 30000) { + // 30 seconds + issues.push( + `Slow average processing time: ${(metrics.processingTime.avg / 1000).toFixed(1)}s` + ); + } + + // Check for stalled active jobs + if (metrics.active > 100) { + issues.push(`High number of active jobs: ${metrics.active}`); + } + + return { + isHealthy: issues.length === 0, + healthIssues: issues, + }; + } + + /** + * Get formatted metrics report + */ + async getReport(): Promise<string> { + const metrics = await this.collect(); + + return ` +Queue Metrics Report +=================== +Status: ${metrics.isHealthy ? '✅ Healthy' : '⚠️ Issues Detected'} + +Job Counts: +- Waiting: ${metrics.waiting} +- Active: ${metrics.active} +- Completed: ${metrics.completed} +- Failed: ${metrics.failed} +- Delayed: ${metrics.delayed} +- Paused: ${metrics.paused} + +Performance: +- Avg Processing Time: ${(metrics.processingTime.avg / 1000).toFixed(2)}s +- Min/Max: ${(metrics.processingTime.min / 1000).toFixed(2)}s / ${(metrics.processingTime.max / 1000).toFixed(2)}s +- P95/P99: ${(metrics.processingTime.p95 / 1000).toFixed(2)}s / ${(metrics.processingTime.p99 / 1000).toFixed(2)}s + +Throughput: +- Completed/min: ${metrics.throughput.completedPerMinute} +- Failed/min: ${metrics.throughput.failedPerMinute} +- Total/min: ${metrics.throughput.totalPerMinute} + +${metrics.oldestWaitingJob ? 
`Oldest Waiting Job: ${metrics.oldestWaitingJob.toISOString()}` : 'No waiting jobs'} + +${metrics.healthIssues.length > 0 ? `\nHealth Issues:\n${metrics.healthIssues.map(issue => `- ${issue}`).join('\n')}` : ''} + `.trim(); + } + + /** + * Export metrics in Prometheus format + */ + async getPrometheusMetrics(): Promise<string> { + const metrics = await this.collect(); + const queueName = this.queue.name; + + return ` +# HELP queue_jobs_total Total number of jobs by status +# TYPE queue_jobs_total gauge +queue_jobs_total{queue="${queueName}",status="waiting"} ${metrics.waiting} +queue_jobs_total{queue="${queueName}",status="active"} ${metrics.active} +queue_jobs_total{queue="${queueName}",status="completed"} ${metrics.completed} +queue_jobs_total{queue="${queueName}",status="failed"} ${metrics.failed} +queue_jobs_total{queue="${queueName}",status="delayed"} ${metrics.delayed} +queue_jobs_total{queue="${queueName}",status="paused"} ${metrics.paused} + +# HELP queue_processing_time_seconds Job processing time in seconds +# TYPE queue_processing_time_seconds summary +queue_processing_time_seconds{queue="${queueName}",quantile="0.5"} ${(metrics.processingTime.avg / 1000).toFixed(3)} +queue_processing_time_seconds{queue="${queueName}",quantile="0.95"} ${(metrics.processingTime.p95 / 1000).toFixed(3)} +queue_processing_time_seconds{queue="${queueName}",quantile="0.99"} ${(metrics.processingTime.p99 / 1000).toFixed(3)} +queue_processing_time_seconds_sum{queue="${queueName}"} ${((metrics.processingTime.avg * this.processingTimes.length) / 1000).toFixed(3)} +queue_processing_time_seconds_count{queue="${queueName}"} ${this.processingTimes.length} + +# HELP queue_throughput_per_minute Jobs processed per minute +# TYPE queue_throughput_per_minute gauge +queue_throughput_per_minute{queue="${queueName}",status="completed"} ${metrics.throughput.completedPerMinute} +queue_throughput_per_minute{queue="${queueName}",status="failed"} ${metrics.throughput.failedPerMinute} 
+queue_throughput_per_minute{queue="${queueName}",status="total"} ${metrics.throughput.totalPerMinute} + +# HELP queue_health Queue health status +# TYPE queue_health gauge +queue_health{queue="${queueName}"} ${metrics.isHealthy ? 1 : 0} + `.trim(); + } +} diff --git a/libs/queue/src/queue.ts b/libs/core/queue/src/queue.ts similarity index 72% rename from libs/queue/src/queue.ts rename to libs/core/queue/src/queue.ts index efc4cc1..e8abd24 100644 --- a/libs/queue/src/queue.ts +++ b/libs/core/queue/src/queue.ts @@ -1,372 +1,394 @@ -import { Queue as BullQueue, QueueEvents, Worker, type Job } from 'bullmq'; -import { getLogger } from '@stock-bot/logger'; -import { handlerRegistry } from './handler-registry'; -import type { JobData, JobOptions, QueueStats, RedisConfig } from './types'; -import { getRedisConnection } from './utils'; - -const logger = getLogger('queue'); - -export interface QueueWorkerConfig { - workers?: number; - concurrency?: number; - startWorker?: boolean; -} - -/** - * Consolidated Queue class that handles both job operations and optional worker management - * Can be used as a simple job queue or with workers for automatic processing - */ -export class Queue { - private bullQueue: BullQueue; - private workers: Worker[] = []; - private queueEvents?: QueueEvents; - private queueName: string; - private redisConfig: RedisConfig; - - constructor( - queueName: string, - redisConfig: RedisConfig, - defaultJobOptions: JobOptions = {}, - config: QueueWorkerConfig = {} - ) { - this.queueName = queueName; - this.redisConfig = redisConfig; - - const connection = getRedisConnection(redisConfig); - - // Initialize BullMQ queue - this.bullQueue = new BullQueue(`{${queueName}}`, { - connection, - defaultJobOptions: { - removeOnComplete: 10, - removeOnFail: 5, - attempts: 3, - backoff: { - type: 'exponential', - delay: 1000, - }, - ...defaultJobOptions, - }, - }); - - // Initialize queue events if workers will be used - if (config.workers && config.workers > 0) { 
- this.queueEvents = new QueueEvents(`{${queueName}}`, { connection }); - } - - // Start workers if requested and not explicitly disabled - if (config.workers && config.workers > 0 && config.startWorker !== false) { - this.startWorkers(config.workers, config.concurrency || 1); - } - - logger.trace('Queue created', { - queueName, - workers: config.workers || 0, - concurrency: config.concurrency || 1, - }); - } - - /** - * Get the queue name - */ - getName(): string { - return this.queueName; - } - - /** - * Add a single job to the queue - */ - async add(name: string, data: JobData, options: JobOptions = {}): Promise { - logger.trace('Adding job', { queueName: this.queueName, jobName: name }); - return await this.bullQueue.add(name, data, options); - } - - /** - * Add multiple jobs to the queue in bulk - */ - async addBulk(jobs: Array<{ name: string; data: JobData; opts?: JobOptions }>): Promise { - logger.trace('Adding bulk jobs', { - queueName: this.queueName, - jobCount: jobs.length, - }); - return await this.bullQueue.addBulk(jobs); - } - - /** - * Add a scheduled job with cron-like pattern - */ - async addScheduledJob( - name: string, - data: JobData, - cronPattern: string, - options: JobOptions = {} - ): Promise { - const scheduledOptions: JobOptions = { - ...options, - repeat: { - pattern: cronPattern, - // Use job name as repeat key to prevent duplicates - key: `${this.queueName}:${name}`, - ...options.repeat, - }, - }; - - logger.info('Adding scheduled job', { - queueName: this.queueName, - jobName: name, - cronPattern, - repeatKey: scheduledOptions.repeat?.key, - immediately: scheduledOptions.repeat?.immediately, - }); - - return await this.bullQueue.add(name, data, scheduledOptions); - } - - /** - * Get queue statistics - */ - async getStats(): Promise { - const [waiting, active, completed, failed, delayed] = await Promise.all([ - this.bullQueue.getWaiting(), - this.bullQueue.getActive(), - this.bullQueue.getCompleted(), - this.bullQueue.getFailed(), - 
this.bullQueue.getDelayed(), - ]); - - const isPaused = await this.bullQueue.isPaused(); - - return { - waiting: waiting.length, - active: active.length, - completed: completed.length, - failed: failed.length, - delayed: delayed.length, - paused: isPaused, - workers: this.workers.length, - }; - } - - /** - * Get a specific job by ID - */ - async getJob(jobId: string): Promise { - return await this.bullQueue.getJob(jobId); - } - - /** - * Get jobs by state - */ - async getJobs( - states: Array<'waiting' | 'active' | 'completed' | 'failed' | 'delayed'>, - start = 0, - end = 100 - ): Promise { - return await this.bullQueue.getJobs(states, start, end); - } - - /** - * Pause the queue (stops processing new jobs) - */ - async pause(): Promise { - await this.bullQueue.pause(); - logger.info('Queue paused', { queueName: this.queueName }); - } - - /** - * Resume the queue - */ - async resume(): Promise { - await this.bullQueue.resume(); - logger.info('Queue resumed', { queueName: this.queueName }); - } - - /** - * Drain the queue (remove all jobs) - */ - async drain(delayed = false): Promise { - await this.bullQueue.drain(delayed); - logger.info('Queue drained', { queueName: this.queueName, delayed }); - } - - /** - * Clean completed and failed jobs - */ - async clean( - grace: number = 0, - limit: number = 100, - type: 'completed' | 'failed' = 'completed' - ): Promise { - await this.bullQueue.clean(grace, limit, type); - logger.debug('Queue cleaned', { queueName: this.queueName, type, grace, limit }); - } - - /** - * Wait until the queue is ready - */ - async waitUntilReady(): Promise { - await this.bullQueue.waitUntilReady(); - } - - /** - * Close the queue (cleanup resources) - */ - /** - * Close the queue (cleanup resources) - */ - async close(): Promise { - try { - // Close the queue itself - await this.bullQueue.close(); - logger.info('Queue closed', { queueName: this.queueName }); - - // Close queue events - if (this.queueEvents) { - await this.queueEvents.close(); - 
logger.debug('Queue events closed', { queueName: this.queueName }); - } - - // Close workers first - if (this.workers.length > 0) { - await Promise.all( - this.workers.map(async worker => { - return await worker.close(); - }) - ); - this.workers = []; - logger.debug('Workers closed', { queueName: this.queueName }); - } - } catch (error) { - logger.error('Error closing queue', { queueName: this.queueName, error }); - throw error; - } - } - - /** - * Start workers for this queue - */ - private startWorkers(workerCount: number, concurrency: number): void { - const connection = getRedisConnection(this.redisConfig); - - for (let i = 0; i < workerCount; i++) { - const worker = new Worker(`{${this.queueName}}`, this.processJob.bind(this), { - connection, - concurrency, - maxStalledCount: 3, - stalledInterval: 30000, - }); - - // Setup worker event handlers - worker.on('completed', job => { - logger.trace('Job completed', { - queueName: this.queueName, - jobId: job.id, - handler: job.data?.handler, - operation: job.data?.operation, - }); - }); - - worker.on('failed', (job, err) => { - logger.error('Job failed', { - queueName: this.queueName, - jobId: job?.id, - handler: job?.data?.handler, - operation: job?.data?.operation, - error: err.message, - }); - }); - - worker.on('error', error => { - logger.error('Worker error', { - queueName: this.queueName, - workerId: i, - error: error.message, - }); - }); - - this.workers.push(worker); - } - - logger.info('Workers started', { - queueName: this.queueName, - workerCount, - concurrency, - }); - } - - /** - * Process a job using the handler registry - */ - private async processJob(job: Job): Promise { - const { handler, operation, payload }: JobData = job.data; - - logger.trace('Processing job', { - id: job.id, - handler, - operation, - queueName: this.queueName, - }); - - try { - // Look up handler in registry - const jobHandler = handlerRegistry.getHandler(handler, operation); - - if (!jobHandler) { - throw new Error(`No handler 
found for ${handler}:${operation}`); - } - - const result = await jobHandler(payload); - - logger.trace('Job completed successfully', { - id: job.id, - handler, - operation, - queueName: this.queueName, - }); - - return result; - } catch (error) { - logger.error('Job processing failed', { - id: job.id, - handler, - operation, - queueName: this.queueName, - error: error instanceof Error ? error.message : String(error), - }); - throw error; - } - } - - /** - * Start workers manually (for delayed initialization) - */ - startWorkersManually(workerCount: number, concurrency: number = 1): void { - if (this.workers.length > 0) { - logger.warn('Workers already started for queue', { queueName: this.queueName }); - return; - } - - // Initialize queue events if not already done - if (!this.queueEvents) { - const connection = getRedisConnection(this.redisConfig); - this.queueEvents = new QueueEvents(`{${this.queueName}}`, { connection }); - } - - this.startWorkers(workerCount, concurrency); - } - - /** - * Get the number of active workers - */ - getWorkerCount(): number { - return this.workers.length; - } - - /** - * Get the underlying BullMQ queue (for advanced operations) - * @deprecated Use direct methods instead - */ - getBullQueue(): BullQueue { - return this.bullQueue; - } -} +import { Queue as BullQueue, QueueEvents, Worker, type Job } from 'bullmq'; +import { handlerRegistry } from '@stock-bot/handlers'; +import type { JobData, JobOptions, ExtendedJobOptions, QueueStats, RedisConfig } from './types'; +import { getRedisConnection } from './utils'; + +// Logger interface for type safety +interface Logger { + info(message: string, meta?: Record<string, unknown>): void; + error(message: string, meta?: Record<string, unknown>): void; + warn(message: string, meta?: Record<string, unknown>): void; + debug(message: string, meta?: Record<string, unknown>): void; + trace(message: string, meta?: Record<string, unknown>): void; + child?(name: string, context?: Record<string, unknown>): Logger; +} + +export interface QueueWorkerConfig { + workers?: number; + concurrency?: number; + 
startWorker?: boolean; +} + +/** + * Consolidated Queue class that handles both job operations and optional worker management + * Can be used as a simple job queue or with workers for automatic processing + */ +export class Queue { + private bullQueue: BullQueue; + private workers: Worker[] = []; + private queueEvents?: QueueEvents; + private queueName: string; + private redisConfig: RedisConfig; + private readonly logger: Logger; + + constructor( + queueName: string, + redisConfig: RedisConfig, + defaultJobOptions: JobOptions = {}, + config: QueueWorkerConfig = {}, + logger?: Logger + ) { + this.queueName = queueName; + this.redisConfig = redisConfig; + this.logger = logger || console; + + const connection = getRedisConnection(redisConfig); + + // Initialize BullMQ queue + this.bullQueue = new BullQueue(queueName, { + connection, + defaultJobOptions: { + removeOnComplete: 10, + removeOnFail: 5, + attempts: 3, + backoff: { + type: 'exponential', + delay: 1000, + }, + ...defaultJobOptions, + }, + }); + + // Initialize queue events if workers will be used + if (config.workers && config.workers > 0) { + this.queueEvents = new QueueEvents(queueName, { connection }); + } + + // Start workers if requested and not explicitly disabled + if (config.workers && config.workers > 0 && config.startWorker !== false) { + this.startWorkers(config.workers, config.concurrency || 1); + } + + this.logger.trace('Queue created', { + queueName, + workers: config.workers || 0, + concurrency: config.concurrency || 1, + }); + } + + /** + * Get the queue name + */ + getName(): string { + return this.queueName; + } + + /** + * Get the underlying BullMQ queue instance (for monitoring/admin purposes) + */ + getBullQueue(): BullQueue { + return this.bullQueue; + } + + /** + * Add a single job to the queue + */ + async add(name: string, data: JobData, options: JobOptions = {}): Promise<Job> { + this.logger.trace('Adding job', { queueName: this.queueName, jobName: name }); + return await 
this.bullQueue.add(name, data, options); + } + + /** + * Add multiple jobs to the queue in bulk + */ + async addBulk(jobs: Array<{ name: string; data: JobData; opts?: JobOptions }>): Promise<Job[]> { + this.logger.trace('Adding bulk jobs', { + queueName: this.queueName, + jobCount: jobs.length, + }); + return await this.bullQueue.addBulk(jobs); + } + + /** + * Add a scheduled job with cron-like pattern + */ + async addScheduledJob( + name: string, + data: JobData, + cronPattern: string, + options: ExtendedJobOptions = {} + ): Promise<Job> { + const scheduledOptions: ExtendedJobOptions = { + ...options, + repeat: { + pattern: cronPattern, + // Use job name as repeat key to prevent duplicates + key: `${this.queueName}:${name}`, + ...options.repeat, + }, + }; + + this.logger.info('Adding scheduled job', { + queueName: this.queueName, + jobName: name, + cronPattern, + repeatKey: scheduledOptions.repeat?.key, + immediately: scheduledOptions.repeat?.immediately, + }); + + return await this.bullQueue.add(name, data, scheduledOptions); + } + + /** + * Get queue statistics + */ + async getStats(): Promise<QueueStats> { + const [waiting, active, completed, failed, delayed] = await Promise.all([ + this.bullQueue.getWaiting(), + this.bullQueue.getActive(), + this.bullQueue.getCompleted(), + this.bullQueue.getFailed(), + this.bullQueue.getDelayed(), + ]); + + const isPaused = await this.bullQueue.isPaused(); + + return { + waiting: waiting.length, + active: active.length, + completed: completed.length, + failed: failed.length, + delayed: delayed.length, + paused: isPaused, + workers: this.workers.length, + }; + } + + /** + * Get a specific job by ID + */ + async getJob(jobId: string): Promise<Job | undefined> { + return await this.bullQueue.getJob(jobId); + } + + /** + * Get jobs by state + */ + async getJobs( + states: Array<'waiting' | 'active' | 'completed' | 'failed' | 'delayed'>, + start = 0, + end = 100 + ): Promise<Job[]> { + return await this.bullQueue.getJobs(states, start, end); + } + + /** + * Pause the queue 
(stops processing new jobs) + */ + async pause(): Promise<void> { + await this.bullQueue.pause(); + this.logger.info('Queue paused', { queueName: this.queueName }); + } + + /** + * Resume the queue + */ + async resume(): Promise<void> { + await this.bullQueue.resume(); + this.logger.info('Queue resumed', { queueName: this.queueName }); + } + + /** + * Drain the queue (remove all jobs) + */ + async drain(delayed = false): Promise<void> { + await this.bullQueue.drain(delayed); + this.logger.info('Queue drained', { queueName: this.queueName, delayed }); + } + + /** + * Clean completed and failed jobs + */ + async clean( + grace: number = 0, + limit: number = 100, + type: 'completed' | 'failed' = 'completed' + ): Promise<void> { + await this.bullQueue.clean(grace, limit, type); + this.logger.debug('Queue cleaned', { queueName: this.queueName, type, grace, limit }); + } + + /** + * Wait until the queue is ready + */ + async waitUntilReady(): Promise<void> { + await this.bullQueue.waitUntilReady(); + } + + /** + * Close the queue (cleanup resources) + */ + async close(): Promise<void> { + try { + // Close workers first so in-flight jobs can finish + if (this.workers.length > 0) { + await Promise.all(this.workers.map(worker => worker.close())); + this.workers = []; + this.logger.debug('Workers closed', { queueName: this.queueName }); + } + + // Then close queue events + if (this.queueEvents) { + await this.queueEvents.close(); + this.logger.debug('Queue events closed', { queueName: this.queueName }); + } + + // Finally close the queue itself + await this.bullQueue.close(); + this.logger.info('Queue closed', { queueName: this.queueName }); + } catch (error) { + this.logger.error('Error closing queue', { queueName: this.queueName, error }); + throw error; + } + } + + /** + * Create a child logger with additional context + * Useful for batch processing and other queue operations + */ + createChildLogger(name: string, context?: Record<string, unknown>) { + if (this.logger && typeof
this.logger.child === 'function') { + return this.logger.child(name, context); + } + // Fallback to main logger if child not supported (e.g., console) + return this.logger; + } + + /** + * Start workers for this queue + */ + private startWorkers(workerCount: number, concurrency: number): void { + const connection = getRedisConnection(this.redisConfig); + + for (let i = 0; i < workerCount; i++) { + const worker = new Worker(this.queueName, this.processJob.bind(this), { + connection, + concurrency, + maxStalledCount: 3, + stalledInterval: 30000, + }); + + // Setup worker event handlers + worker.on('completed', job => { + this.logger.trace('Job completed', { + queueName: this.queueName, + jobId: job.id, + handler: job.data?.handler, + operation: job.data?.operation, + }); + }); + + worker.on('failed', (job, err) => { + this.logger.error('Job failed', { + queueName: this.queueName, + jobId: job?.id, + handler: job?.data?.handler, + operation: job?.data?.operation, + error: err.message, + }); + }); + + worker.on('error', error => { + this.logger.error('Worker error', { + queueName: this.queueName, + workerId: i, + error: error.message, + }); + }); + + this.workers.push(worker); + } + + this.logger.info('Workers started', { + queueName: this.queueName, + workerCount, + concurrency, + }); + } + + /** + * Process a job using the handler registry + */ + private async processJob(job: Job): Promise { + const { handler, operation, payload }: JobData = job.data; + + this.logger.trace('Processing job', { + id: job.id, + handler, + operation, + queueName: this.queueName, + }); + + try { + // Look up handler in registry + const jobHandler = handlerRegistry.getOperation(handler, operation); + + if (!jobHandler) { + throw new Error(`No handler found for ${handler}:${operation}`); + } + + const result = await jobHandler(payload); + + this.logger.trace('Job completed successfully', { + id: job.id, + handler, + operation, + queueName: this.queueName, + }); + + return result; + } catch 
(error) { + this.logger.error('Job processing failed', { + id: job.id, + handler, + operation, + queueName: this.queueName, + error: error instanceof Error ? error.message : String(error), + }); + throw error; + } + } + + /** + * Start workers manually (for delayed initialization) + */ + startWorkersManually(workerCount: number, concurrency: number = 1): void { + if (this.workers.length > 0) { + this.logger.warn('Workers already started for queue', { queueName: this.queueName }); + return; + } + + // Initialize queue events if not already done + if (!this.queueEvents) { + const connection = getRedisConnection(this.redisConfig); + this.queueEvents = new QueueEvents(this.queueName, { connection }); + } + + this.startWorkers(workerCount, concurrency); + } + + /** + * Get the number of active workers + */ + getWorkerCount(): number { + return this.workers.length; + } + +} diff --git a/libs/queue/src/rate-limiter.ts b/libs/core/queue/src/rate-limiter.ts similarity index 68% rename from libs/queue/src/rate-limiter.ts rename to libs/core/queue/src/rate-limiter.ts index f8cf62a..06ba222 100644 --- a/libs/queue/src/rate-limiter.ts +++ b/libs/core/queue/src/rate-limiter.ts @@ -1,294 +1,338 @@ -import { RateLimiterRedis, RateLimiterRes } from 'rate-limiter-flexible'; -import { getLogger } from '@stock-bot/logger'; -import type { RateLimitConfig as BaseRateLimitConfig, RateLimitRule } from './types'; - -const logger = getLogger('rate-limiter'); - -// Extend the base config to add rate-limiter specific fields -export interface RateLimitConfig extends BaseRateLimitConfig { - keyPrefix?: string; -} - -export class QueueRateLimiter { - private limiters = new Map(); - private rules: RateLimitRule[] = []; - - constructor(private redisClient: ReturnType) {} - - /** - * Add a rate limit rule - */ - addRule(rule: RateLimitRule): void { - this.rules.push(rule); - - const key = this.getRuleKey(rule.level, rule.queueName, rule.handler, rule.operation); - const limiter = new 
RateLimiterRedis({ - storeClient: this.redisClient, - keyPrefix: `rl:${key}`, - points: rule.config.points, - duration: rule.config.duration, - blockDuration: rule.config.blockDuration || 0, - }); - - this.limiters.set(key, limiter); - - logger.info('Rate limit rule added', { - level: rule.level, - queueName: rule.queueName, - handler: rule.handler, - operation: rule.operation, - points: rule.config.points, - duration: rule.config.duration, - }); - } - - /** - * Check if a job can be processed based on rate limits - * Uses hierarchical precedence: operation > handler > queue > global - * The most specific matching rule takes precedence - */ - async checkLimit(queueName: string, handler: string, operation: string): Promise<{ - allowed: boolean; - retryAfter?: number; - remainingPoints?: number; - appliedRule?: RateLimitRule; - }> { - const applicableRule = this.getMostSpecificRule(queueName, handler, operation); - - if (!applicableRule) { - return { allowed: true }; - } - - const key = this.getRuleKey(applicableRule.level, applicableRule.queueName, applicableRule.handler, applicableRule.operation); - const limiter = this.limiters.get(key); - - if (!limiter) { - logger.warn('Rate limiter not found for rule', { key, rule: applicableRule }); - return { allowed: true }; - } - - try { - const result = await this.consumePoint(limiter, this.getConsumerKey(queueName, handler, operation)); - - return { - ...result, - appliedRule: applicableRule, - }; - } catch (error) { - logger.error('Rate limit check failed', { queueName, handler, operation, error }); - // On error, allow the request to proceed - return { allowed: true }; - } - } - - /** - * Get the most specific rule that applies to this job - * Precedence: operation > handler > queue > global - */ - private getMostSpecificRule(queueName: string, handler: string, operation: string): RateLimitRule | undefined { - // 1. 
Check for operation-specific rule (most specific) - let rule = this.rules.find(r => - r.level === 'operation' && - r.queueName === queueName && - r.handler === handler && - r.operation === operation - ); - if (rule) {return rule;} - - // 2. Check for handler-specific rule - rule = this.rules.find(r => - r.level === 'handler' && - r.queueName === queueName && - r.handler === handler - ); - if (rule) {return rule;} - - // 3. Check for queue-specific rule - rule = this.rules.find(r => - r.level === 'queue' && - r.queueName === queueName - ); - if (rule) {return rule;} - - // 4. Check for global rule (least specific) - rule = this.rules.find(r => r.level === 'global'); - return rule; - } - - /** - * Consume a point from the rate limiter - */ - private async consumePoint( - limiter: RateLimiterRedis, - key: string - ): Promise<{ allowed: boolean; retryAfter?: number; remainingPoints?: number }> { - try { - const result = await limiter.consume(key); - return { - allowed: true, - remainingPoints: result.remainingPoints, - }; - } catch (rejRes) { - if (rejRes instanceof RateLimiterRes) { - logger.warn('Rate limit exceeded', { - key, - retryAfter: rejRes.msBeforeNext, - }); - - return { - allowed: false, - retryAfter: rejRes.msBeforeNext, - remainingPoints: rejRes.remainingPoints, - }; - } - throw rejRes; - } - } - - /** - * Get rule key for storing rate limiter - */ - private getRuleKey(level: string, queueName?: string, handler?: string, operation?: string): string { - switch (level) { - case 'global': - return 'global'; - case 'queue': - return `queue:${queueName}`; - case 'handler': - return `handler:${queueName}:${handler}`; - case 'operation': - return `operation:${queueName}:${handler}:${operation}`; - default: - return level; - } - } - - /** - * Get consumer key for rate limiting (what gets counted) - */ - private getConsumerKey(queueName: string, handler: string, operation: string): string { - return `${queueName}:${handler}:${operation}`; - } - - /** - * Get 
current rate limit status for a queue/handler/operation - */ - async getStatus(queueName: string, handler: string, operation: string): Promise<{ - queueName: string; - handler: string; - operation: string; - appliedRule?: RateLimitRule; - limit?: { - level: string; - points: number; - duration: number; - remaining: number; - resetIn: number; - }; - }> { - const applicableRule = this.getMostSpecificRule(queueName, handler, operation); - - if (!applicableRule) { - return { - queueName, - handler, - operation, - }; - } - - const key = this.getRuleKey(applicableRule.level, applicableRule.queueName, applicableRule.handler, applicableRule.operation); - const limiter = this.limiters.get(key); - - if (!limiter) { - return { - queueName, - handler, - operation, - appliedRule: applicableRule, - }; - } - - try { - const consumerKey = this.getConsumerKey(queueName, handler, operation); - const result = await limiter.get(consumerKey); - - const limit = { - level: applicableRule.level, - points: limiter.points, - duration: limiter.duration, - remaining: result?.remainingPoints ?? limiter.points, - resetIn: result?.msBeforeNext ?? 
0, - }; - - return { - queueName, - handler, - operation, - appliedRule: applicableRule, - limit, - }; - } catch (error) { - logger.error('Failed to get rate limit status', { queueName, handler, operation, error }); - return { - queueName, - handler, - operation, - appliedRule: applicableRule, - }; - } - } - - /** - * Reset rate limits for a specific consumer - */ - async reset(queueName: string, handler?: string, operation?: string): Promise { - if (handler && operation) { - // Reset specific operation - const consumerKey = this.getConsumerKey(queueName, handler, operation); - const rule = this.getMostSpecificRule(queueName, handler, operation); - - if (rule) { - const key = this.getRuleKey(rule.level, rule.queueName, rule.handler, rule.operation); - const limiter = this.limiters.get(key); - if (limiter) { - await limiter.delete(consumerKey); - } - } - } else { - // Reset broader scope - this is more complex with the new hierarchy - logger.warn('Broad reset not implemented yet', { queueName, handler, operation }); - } - - logger.info('Rate limits reset', { queueName, handler, operation }); - } - - /** - * Get all configured rate limit rules - */ - getRules(): RateLimitRule[] { - return [...this.rules]; - } - - /** - * Remove a rate limit rule - */ - removeRule(level: string, queueName?: string, handler?: string, operation?: string): boolean { - const key = this.getRuleKey(level, queueName, handler, operation); - const ruleIndex = this.rules.findIndex(r => - r.level === level && - (!queueName || r.queueName === queueName) && - (!handler || r.handler === handler) && - (!operation || r.operation === operation) - ); - - if (ruleIndex >= 0) { - this.rules.splice(ruleIndex, 1); - this.limiters.delete(key); - - logger.info('Rate limit rule removed', { level, queueName, handler, operation }); - return true; - } - - return false; - } -} \ No newline at end of file +import { RateLimiterRedis, RateLimiterRes } from 'rate-limiter-flexible'; +import type { RateLimitConfig as 
BaseRateLimitConfig, RateLimitRule } from './types'; + +// Logger interface for type safety +interface Logger { + info(message: string, meta?: Record): void; + error(message: string, meta?: Record): void; + warn(message: string, meta?: Record): void; + debug(message: string, meta?: Record): void; +} + +// Extend the base config to add rate-limiter specific fields +export interface RateLimitConfig extends BaseRateLimitConfig { + keyPrefix?: string; +} + +export class QueueRateLimiter { + private limiters = new Map(); + private rules: RateLimitRule[] = []; + private readonly logger: Logger; + + constructor( + private redisClient: ReturnType, + logger?: Logger + ) { + this.logger = logger || console; + } + + /** + * Add a rate limit rule + */ + addRule(rule: RateLimitRule): void { + this.rules.push(rule); + + const key = this.getRuleKey(rule.level, rule.queueName, rule.handler, rule.operation); + const limiter = new RateLimiterRedis({ + storeClient: this.redisClient, + keyPrefix: `rl:${key}`, + points: rule.config.points, + duration: rule.config.duration, + blockDuration: rule.config.blockDuration || 0, + }); + + this.limiters.set(key, limiter); + + this.logger.info('Rate limit rule added', { + level: rule.level, + queueName: rule.queueName, + handler: rule.handler, + operation: rule.operation, + points: rule.config.points, + duration: rule.config.duration, + }); + } + + /** + * Check if a job can be processed based on rate limits + * Uses hierarchical precedence: operation > handler > queue > global + * The most specific matching rule takes precedence + */ + async checkLimit( + queueName: string, + handler: string, + operation: string + ): Promise<{ + allowed: boolean; + retryAfter?: number; + remainingPoints?: number; + appliedRule?: RateLimitRule; + }> { + const applicableRule = this.getMostSpecificRule(queueName, handler, operation); + + if (!applicableRule) { + return { allowed: true }; + } + + const key = this.getRuleKey( + applicableRule.level, + 
applicableRule.queueName, + applicableRule.handler, + applicableRule.operation + ); + const limiter = this.limiters.get(key); + + if (!limiter) { + this.logger.warn('Rate limiter not found for rule', { key, rule: applicableRule }); + return { allowed: true }; + } + + try { + const result = await this.consumePoint( + limiter, + this.getConsumerKey(queueName, handler, operation) + ); + + return { + ...result, + appliedRule: applicableRule, + }; + } catch (error) { + this.logger.error('Rate limit check failed', { queueName, handler, operation, error }); + // On error, allow the request to proceed + return { allowed: true }; + } + } + + /** + * Get the most specific rule that applies to this job + * Precedence: operation > handler > queue > global + */ + private getMostSpecificRule( + queueName: string, + handler: string, + operation: string + ): RateLimitRule | undefined { + // 1. Check for operation-specific rule (most specific) + let rule = this.rules.find( + r => + r.level === 'operation' && + r.queueName === queueName && + r.handler === handler && + r.operation === operation + ); + if (rule) { + return rule; + } + + // 2. Check for handler-specific rule + rule = this.rules.find( + r => r.level === 'handler' && r.queueName === queueName && r.handler === handler + ); + if (rule) { + return rule; + } + + // 3. Check for queue-specific rule + rule = this.rules.find(r => r.level === 'queue' && r.queueName === queueName); + if (rule) { + return rule; + } + + // 4. 
Check for global rule (least specific) + rule = this.rules.find(r => r.level === 'global'); + return rule; + } + + /** + * Consume a point from the rate limiter + */ + private async consumePoint( + limiter: RateLimiterRedis, + key: string + ): Promise<{ allowed: boolean; retryAfter?: number; remainingPoints?: number }> { + try { + const result = await limiter.consume(key); + return { + allowed: true, + remainingPoints: result.remainingPoints, + }; + } catch (rejRes) { + if (rejRes instanceof RateLimiterRes) { + this.logger.warn('Rate limit exceeded', { + key, + retryAfter: rejRes.msBeforeNext, + }); + + return { + allowed: false, + retryAfter: rejRes.msBeforeNext, + remainingPoints: rejRes.remainingPoints, + }; + } + throw rejRes; + } + } + + /** + * Get rule key for storing rate limiter + */ + private getRuleKey( + level: string, + queueName?: string, + handler?: string, + operation?: string + ): string { + switch (level) { + case 'global': + return 'global'; + case 'queue': + return `queue:${queueName}`; + case 'handler': + return `handler:${queueName}:${handler}`; + case 'operation': + return `operation:${queueName}:${handler}:${operation}`; + default: + return level; + } + } + + /** + * Get consumer key for rate limiting (what gets counted) + */ + private getConsumerKey(queueName: string, handler: string, operation: string): string { + return `${queueName}:${handler}:${operation}`; + } + + /** + * Get current rate limit status for a queue/handler/operation + */ + async getStatus( + queueName: string, + handler: string, + operation: string + ): Promise<{ + queueName: string; + handler: string; + operation: string; + appliedRule?: RateLimitRule; + limit?: { + level: string; + points: number; + duration: number; + remaining: number; + resetIn: number; + }; + }> { + const applicableRule = this.getMostSpecificRule(queueName, handler, operation); + + if (!applicableRule) { + return { + queueName, + handler, + operation, + }; + } + + const key = this.getRuleKey( + 
applicableRule.level, + applicableRule.queueName, + applicableRule.handler, + applicableRule.operation + ); + const limiter = this.limiters.get(key); + + if (!limiter) { + return { + queueName, + handler, + operation, + appliedRule: applicableRule, + }; + } + + try { + const consumerKey = this.getConsumerKey(queueName, handler, operation); + const result = await limiter.get(consumerKey); + + const limit = { + level: applicableRule.level, + points: limiter.points, + duration: limiter.duration, + remaining: result?.remainingPoints ?? limiter.points, + resetIn: result?.msBeforeNext ?? 0, + }; + + return { + queueName, + handler, + operation, + appliedRule: applicableRule, + limit, + }; + } catch (error) { + this.logger.error('Failed to get rate limit status', { queueName, handler, operation, error }); + return { + queueName, + handler, + operation, + appliedRule: applicableRule, + }; + } + } + + /** + * Reset rate limits for a specific consumer + */ + async reset(queueName: string, handler?: string, operation?: string): Promise { + if (handler && operation) { + // Reset specific operation + const consumerKey = this.getConsumerKey(queueName, handler, operation); + const rule = this.getMostSpecificRule(queueName, handler, operation); + + if (rule) { + const key = this.getRuleKey(rule.level, rule.queueName, rule.handler, rule.operation); + const limiter = this.limiters.get(key); + if (limiter) { + await limiter.delete(consumerKey); + } + } + } else { + // Reset broader scope - this is more complex with the new hierarchy + this.logger.warn('Broad reset not implemented yet', { queueName, handler, operation }); + } + + this.logger.info('Rate limits reset', { queueName, handler, operation }); + } + + /** + * Get all configured rate limit rules + */ + getRules(): RateLimitRule[] { + return [...this.rules]; + } + + /** + * Remove a rate limit rule + */ + removeRule(level: string, queueName?: string, handler?: string, operation?: string): boolean { + const key = 
this.getRuleKey(level, queueName, handler, operation); + const ruleIndex = this.rules.findIndex( + r => + r.level === level && + (!queueName || r.queueName === queueName) && + (!handler || r.handler === handler) && + (!operation || r.operation === operation) + ); + + if (ruleIndex >= 0) { + this.rules.splice(ruleIndex, 1); + this.limiters.delete(key); + + this.logger.info('Rate limit rule removed', { level, queueName, handler, operation }); + return true; + } + + return false; + } +} diff --git a/libs/core/queue/src/service-cache.ts b/libs/core/queue/src/service-cache.ts new file mode 100644 index 0000000..4ffb329 --- /dev/null +++ b/libs/core/queue/src/service-cache.ts @@ -0,0 +1,175 @@ +import { createCache, type CacheProvider, type CacheStats } from '@stock-bot/cache'; +import type { RedisConfig } from './types'; +import { generateCachePrefix } from './service-utils'; + +/** + * Service-aware cache that uses the service's Redis DB + * Automatically prefixes keys with the service's cache namespace + */ +export class ServiceCache implements CacheProvider { + private cache: CacheProvider; + private prefix: string; + + constructor( + serviceName: string, + redisConfig: RedisConfig, + isGlobalCache: boolean = false, + logger?: any + ) { + // Determine Redis DB and prefix + let db: number; + let prefix: string; + + if (isGlobalCache) { + // Global cache uses db:1 + db = 1; + prefix = 'stock-bot:shared'; + } else { + // Service cache also uses db:1 with service-specific prefix + db = 1; + prefix = generateCachePrefix(serviceName); + } + + // Create underlying cache with correct DB + const cacheConfig = { + redisConfig: { + ...redisConfig, + db, + }, + keyPrefix: prefix + ':', + logger, + }; + + this.cache = createCache(cacheConfig); + this.prefix = prefix; + } + + // Implement CacheProvider interface + async get(key: string): Promise { + return this.cache.get(key); + } + + async set( + key: string, + value: T, + options?: + | number + | { + ttl?: number; + 
preserveTTL?: boolean; + onlyIfExists?: boolean; + onlyIfNotExists?: boolean; + getOldValue?: boolean; + } + ): Promise { + return this.cache.set(key, value, options); + } + + async del(key: string): Promise { + return this.cache.del(key); + } + + async exists(key: string): Promise { + return this.cache.exists(key); + } + + async clear(): Promise { + return this.cache.clear(); + } + + async keys(pattern: string): Promise { + return this.cache.keys(pattern); + } + + getStats(): CacheStats { + return this.cache.getStats(); + } + + async health(): Promise { + return this.cache.health(); + } + + async waitForReady(timeout?: number): Promise { + return this.cache.waitForReady(timeout); + } + + isReady(): boolean { + return this.cache.isReady(); + } + + // Enhanced cache methods (delegate to underlying cache if available) + async update(key: string, value: T): Promise { + if (this.cache.update) { + return this.cache.update(key, value); + } + // Fallback implementation + return this.cache.set(key, value, { preserveTTL: true }); + } + + async setIfExists(key: string, value: T, ttl?: number): Promise { + if (this.cache.setIfExists) { + return this.cache.setIfExists(key, value, ttl); + } + // Fallback implementation + const result = await this.cache.set(key, value, { onlyIfExists: true, ttl }); + return result !== null; + } + + async setIfNotExists(key: string, value: T, ttl?: number): Promise { + if (this.cache.setIfNotExists) { + return this.cache.setIfNotExists(key, value, ttl); + } + // Fallback implementation + const result = await this.cache.set(key, value, { onlyIfNotExists: true, ttl }); + return result !== null; + } + + async replace(key: string, value: T, ttl?: number): Promise { + if (this.cache.replace) { + return this.cache.replace(key, value, ttl); + } + // Fallback implementation + return this.cache.set(key, value, ttl); + } + + async updateField(key: string, updater: (current: T | null) => T, ttl?: number): Promise { + if (this.cache.updateField) { + return 
this.cache.updateField(key, updater, ttl); + } + // Fallback implementation + const current = await this.cache.get(key); + const updated = updater(current); + return this.cache.set(key, updated, ttl); + } + + /** + * Get a value using a raw Redis key (bypassing the keyPrefix) + * Delegates to the underlying cache's getRaw method if available + */ + async getRaw(key: string): Promise { + if (this.cache.getRaw) { + return this.cache.getRaw(key); + } + // Fallback: if underlying cache doesn't support getRaw, return null + return null; + } + + /** + * Get the actual Redis key with prefix + */ + getKey(key: string): string { + return `${this.prefix}:${key}`; + } +} + + +/** + * Factory function to create service cache + */ +export function createServiceCache( + serviceName: string, + redisConfig: RedisConfig, + options: { global?: boolean; logger?: any } = {} +): ServiceCache { + return new ServiceCache(serviceName, redisConfig, options.global, options.logger); +} \ No newline at end of file diff --git a/libs/core/queue/src/service-utils.ts b/libs/core/queue/src/service-utils.ts new file mode 100644 index 0000000..d6b3a5e --- /dev/null +++ b/libs/core/queue/src/service-utils.ts @@ -0,0 +1,53 @@ +/** + * Service utilities for name normalization and auto-discovery + */ + +/** + * Normalize service name to kebab-case format + * Examples: + * - webApi -> web-api + * - dataIngestion -> data-ingestion + * - data-pipeline -> data-pipeline (unchanged) + */ +export function normalizeServiceName(serviceName: string): string { + // Handle camelCase to kebab-case conversion + const kebabCase = serviceName + .replace(/([a-z])([A-Z])/g, '$1-$2') + .toLowerCase(); + + return kebabCase; +} + +/** + * Generate cache prefix for a service + */ +export function generateCachePrefix(serviceName: string): string { + const normalized = normalizeServiceName(serviceName); + return `cache:${normalized}`; +} + +/** + * Generate full queue name with service namespace + */ +export function 
getFullQueueName(serviceName: string, handlerName: string): string { + const normalized = normalizeServiceName(serviceName); + // Use {service_handler} format for Dragonfly optimization and BullMQ compatibility + return `{${normalized}_${handlerName}}`; +} + +/** + * Parse a full queue name into service and handler + */ +export function parseQueueName(fullQueueName: string): { service: string; handler: string } | null { + // Match pattern {service_handler} + const match = fullQueueName.match(/^\{([^_]+)_([^}]+)\}$/); + + if (!match || !match[1] || !match[2]) { + return null; + } + + return { + service: match[1], + handler: match[2], + }; +} \ No newline at end of file diff --git a/libs/core/queue/src/smart-queue-manager.ts b/libs/core/queue/src/smart-queue-manager.ts new file mode 100644 index 0000000..29cd599 --- /dev/null +++ b/libs/core/queue/src/smart-queue-manager.ts @@ -0,0 +1,411 @@ +import { Queue as BullQueue, type Job } from 'bullmq'; +import { handlerRegistry } from '@stock-bot/handlers'; +import { getLogger, type Logger } from '@stock-bot/logger'; +import { QueueManager } from './queue-manager'; +import { Queue } from './queue'; +import type { + SmartQueueConfig, + QueueRoute, + JobData, + JobOptions, + RedisConfig +} from './types'; +import { getFullQueueName, parseQueueName } from './service-utils'; +import { getRedisConnection } from './utils'; + +/** + * Smart Queue Manager with automatic service discovery and routing + * Handles cross-service communication seamlessly + */ +export class SmartQueueManager extends QueueManager { + private serviceName: string; + private queueRoutes = new Map(); + private connections = new Map(); // Redis connections by DB + private producerQueues = new Map(); // For cross-service sending + private _logger: Logger; + + constructor(config: SmartQueueConfig, logger?: Logger) { + // Always use DB 0 for queues (unified queue database) + const modifiedConfig = { + ...config, + redis: { + ...config.redis, + db: 0, // All 
queues in DB 0
+      },
+    };
+
+    super(modifiedConfig, logger);
+
+    this.serviceName = config.serviceName;
+    this._logger = logger || getLogger('SmartQueueManager');
+
+    // Auto-discover routes if enabled
+    if (config.autoDiscoverHandlers !== false) {
+      this.discoverQueueRoutes();
+    }
+
+    this._logger.info('SmartQueueManager initialized', {
+      service: this.serviceName,
+      discoveredRoutes: this.queueRoutes.size,
+    });
+  }
+
+  /**
+   * Discover all available queue routes from the handler registry
+   */
+  private discoverQueueRoutes(): void {
+    try {
+      const handlers = handlerRegistry.getAllHandlers();
+      for (const [handlerName, handlerConfig] of handlers) {
+        // Get the service that registered this handler
+        const ownerService = handlerRegistry.getHandlerService(handlerName);
+        if (ownerService) {
+          const fullName = getFullQueueName(ownerService, handlerName);
+
+          this.queueRoutes.set(handlerName, {
+            fullName,
+            service: ownerService,
+            handler: handlerName,
+            db: 0, // All queues in DB 0
+            operations: Object.keys(handlerConfig.operations || {}),
+          });
+
+          this._logger.trace('Discovered queue route', {
+            handler: handlerName,
+            service: ownerService,
+            operations: Object.keys(handlerConfig.operations || {}).length,
+          });
+        } else {
+          this._logger.warn('Handler has no service ownership', { handlerName });
+        }
+      }
+
+      // Also discover handlers registered by the current service
+      const myHandlers = handlerRegistry.getServiceHandlers(this.serviceName);
+      for (const handlerName of myHandlers) {
+        if (!this.queueRoutes.has(handlerName)) {
+          const fullName = getFullQueueName(this.serviceName, handlerName);
+          this.queueRoutes.set(handlerName, {
+            fullName,
+            service: this.serviceName,
+            handler: handlerName,
+            db: 0, // All queues in DB 0
+          });
+        }
+      }
+
+      this._logger.info('Queue routes discovered', {
+        totalRoutes: this.queueRoutes.size,
+        routes: Array.from(this.queueRoutes.values()).map(r => ({
+          handler: r.handler,
+          service: r.service,
+        })),
+      });
+    } catch (error) {
+      this._logger.error('Failed to discover queue routes', { error });
+    }
+  }
+
+  /**
+   * Get or create a Redis connection for a specific DB
+   */
+  private getConnection(db: number): any {
+    if (!this.connections.has(db)) {
+      const redisConfig: RedisConfig = {
+        ...this.getRedisConfig(),
+        db,
+      };
+      const connection = getRedisConnection(redisConfig);
+      this.connections.set(db, connection);
+      this._logger.debug('Created Redis connection', { db });
+    }
+    return this.connections.get(db);
+  }
+
+  /**
+   * Get a queue for the current service (for processing)
+   * Overrides parent to use namespaced queue names and ensure service-specific workers
+   */
+  override getQueue(queueName: string, options = {}): Queue {
+    // Check if this is already a full queue name (service:handler format)
+    const parsed = parseQueueName(queueName);
+
+    let fullQueueName: string;
+    let isOwnQueue: boolean;
+
+    if (parsed) {
+      // Already in service:handler format
+      fullQueueName = queueName;
+      isOwnQueue = parsed.service === this.serviceName;
+    } else {
+      // Just handler name, assume it's for current service
+      fullQueueName = getFullQueueName(this.serviceName, queueName);
+      isOwnQueue = true;
+    }
+
+    // For cross-service queues, create without workers (producer-only)
+    if (!isOwnQueue) {
+      return super.getQueue(fullQueueName, {
+        ...options,
+        workers: 0, // No workers for other services' queues
+      });
+    }
+
+    // For own service queues, use configured workers
+    return super.getQueue(fullQueueName, options);
+  }
+
+  /**
+   * Send a job to any queue (local or remote)
+   * This is the main method for cross-service communication
+   */
+  async send(
+    targetQueue: string,
+    operation: string,
+    payload: unknown,
+    options: JobOptions = {}
+  ): Promise<Job> {
+    // Resolve the target queue
+    const route = this.resolveQueueRoute(targetQueue);
+    if (!route) {
+      throw new Error(`Unknown queue: ${targetQueue}`);
+    }
+
+    // Validate operation if we have metadata
+    if (route.operations && !route.operations.includes(operation)) {
+      this._logger.warn('Operation not found in handler metadata', {
+        queue: targetQueue,
+        operation,
+        available: route.operations,
+      });
+    }
+
+    // Get or create producer queue for the target
+    const producerQueue = this.getProducerQueue(route);
+
+    // Create job data
+    const jobData: JobData = {
+      handler: route.handler,
+      operation,
+      payload,
+    };
+
+    // Send the job
+    const job = await producerQueue.add(operation, jobData, options);
+
+    this._logger.debug('Job sent to queue', {
+      from: this.serviceName,
+      to: route.service,
+      queue: route.handler,
+      operation,
+      jobId: job.id,
+    });
+
+    return job;
+  }
+
+  /**
+   * Alias for send() with a more explicit name
+   */
+  async sendTo(
+    targetService: string,
+    handler: string,
+    operation: string,
+    payload: unknown,
+    options: JobOptions = {}
+  ): Promise<Job> {
+    const fullQueueName = `${targetService}:${handler}`;
+    return this.send(fullQueueName, operation, payload, options);
+  }
+
+  /**
+   * Resolve a queue name to a route
+   */
+  private resolveQueueRoute(queueName: string): QueueRoute | null {
+    // Check if it's a full queue name with service prefix
+    const parsed = parseQueueName(queueName);
+    if (parsed) {
+      // Try to find in discovered routes by handler name
+      const route = this.queueRoutes.get(parsed.handler);
+      if (route && route.service === parsed.service) {
+        return route;
+      }
+      // Create a route on the fly
+      return {
+        fullName: queueName,
+        service: parsed.service,
+        handler: parsed.handler,
+        db: 0, // All queues in DB 0
+      };
+    }
+
+    // Check if it's just a handler name in our routes
+    const route = this.queueRoutes.get(queueName);
+    if (route) {
+      return route;
+    }
+
+    // Try to find in handler registry
+    const ownerService = handlerRegistry.getHandlerService(queueName);
+    if (ownerService) {
+      return {
+        fullName: getFullQueueName(ownerService, queueName),
+        service: ownerService,
+        handler: queueName,
+        db: 0, // All queues in DB 0
+      };
+    }
+
+    return null;
+  }
+
+  /**
+   * Get or create a producer queue for cross-service communication
+   */
+  private getProducerQueue(route: QueueRoute): BullQueue {
+    if (!this.producerQueues.has(route.fullName)) {
+      const connection = this.getConnection(route.db);
+      // Use the same queue name format as workers
+      const queue = new BullQueue(route.fullName, {
+        connection,
+        defaultJobOptions: this.getConfig().defaultQueueOptions?.defaultJobOptions || {},
+      });
+      this.producerQueues.set(route.fullName, queue);
+    }
+    return this.producerQueues.get(route.fullName)!;
+  }
+
+  /**
+   * Get all queues (for monitoring purposes)
+   */
+  getAllQueues(): Record<string, BullQueue> {
+    const allQueues: Record<string, BullQueue> = {};
+
+    // Get all worker queues using public API
+    const workerQueueNames = this.getQueueNames();
+    for (const name of workerQueueNames) {
+      const queue = this.getQueue(name);
+      if (queue && typeof queue.getBullQueue === 'function') {
+        // Extract the underlying BullMQ queue using the public getter
+        // Use the simple handler name without service prefix for display
+        const parsed = parseQueueName(name);
+        const simpleName = parsed ? parsed.handler : name;
+        if (simpleName) {
+          allQueues[simpleName] = queue.getBullQueue();
+        }
+      }
+    }
+
+    // Add producer queues
+    for (const [name, queue] of this.producerQueues) {
+      // Use the simple handler name without service prefix for display
+      const parsed = parseQueueName(name);
+      const simpleName = parsed ? parsed.handler : name;
+      if (simpleName && !allQueues[simpleName]) {
+        allQueues[simpleName] = queue;
+      }
+    }
+
+    // If no queues found, create from discovered routes
+    if (Object.keys(allQueues).length === 0) {
+      for (const [handlerName, route] of this.queueRoutes) {
+        const connection = this.getConnection(0); // Use unified queue DB
+        allQueues[handlerName] = new BullQueue(route.fullName, {
+          connection,
+          defaultJobOptions: this.getConfig().defaultQueueOptions?.defaultJobOptions || {},
+        });
+      }
+    }
+
+    return allQueues;
+  }
+
+  /**
+   * Get statistics for all queues across all services
+   */
+  async getAllStats(): Promise<Record<string, GlobalStats>> {
+    const stats: Record<string, GlobalStats> = {};
+
+    // Get stats for local queues
+    stats[this.serviceName] = await this.getGlobalStats();
+
+    // Get stats for other services if we have access
+    // This would require additional implementation
+
+    return stats;
+  }
+
+  /**
+   * Start workers for all queues belonging to this service
+   * Overrides parent to ensure only own queues get workers
+   */
+  override startAllWorkers(): void {
+    if (!this.getConfig().delayWorkerStart) {
+      this._logger.info(
+        'startAllWorkers() called but workers already started automatically (delayWorkerStart is false)'
+      );
+      return;
+    }
+
+    let workersStarted = 0;
+    const queues = this.getQueues();
+
+    for (const [queueName, queue] of queues) {
+      // Parse queue name to check if it belongs to this service
+      const parsed = parseQueueName(queueName);
+
+      // Skip if not our service's queue
+      if (parsed && parsed.service !== this.serviceName) {
+        this._logger.trace('Skipping workers for cross-service queue', {
+          queueName,
+          ownerService: parsed.service,
+          currentService: this.serviceName,
+        });
+        continue;
+      }
+
+      const workerCount = this.getConfig().defaultQueueOptions?.workers || 1;
+      const concurrency = this.getConfig().defaultQueueOptions?.concurrency || 1;
+
+      if (workerCount > 0) {
+        queue.startWorkersManually(workerCount, concurrency);
+        workersStarted++;
+        this._logger.debug('Started workers for queue', {
+          queueName,
+          workers: workerCount,
+          concurrency,
+        });
+      }
+    }
+
+    this._logger.info('Service workers started', {
+      service: this.serviceName,
+      totalQueues: queues.size,
+      queuesWithWorkers: workersStarted,
+      delayWorkerStart: this.getConfig().delayWorkerStart,
+    });
+  }
+
+  /**
+   * Graceful shutdown
+   */
+  override async shutdown(): Promise<void> {
+    // Close producer queues
+    for (const [name, queue] of this.producerQueues) {
+      await queue.close();
+      this._logger.debug('Closed producer queue', { queue: name });
+    }
+
+    // Close additional connections
+    for (const [db, connection] of this.connections) {
+      if (db !== 0) { // Don't close our main connection (DB 0 for queues)
+        connection.disconnect();
+        this._logger.debug('Closed Redis connection', { db });
+      }
+    }
+
+    // Call parent shutdown
+    await super.shutdown();
+  }
+}
\ No newline at end of file
diff --git a/libs/core/queue/src/types.ts b/libs/core/queue/src/types.ts
new file mode 100644
index 0000000..780b8ff
--- /dev/null
+++ b/libs/core/queue/src/types.ts
@@ -0,0 +1,169 @@
+// Import types we need to extend
+import type { JobOptions, QueueStats } from '@stock-bot/types';
+
+// Re-export handler and queue types from shared types package
+export type {
+  HandlerConfig,
+  HandlerConfigWithSchedule,
+  JobHandler,
+  ScheduledJob,
+  TypedJobHandler,
+  JobData,
+  JobOptions,
+  QueueWorkerConfig,
+  QueueStats
+} from '@stock-bot/types';
+
+export interface ProcessOptions {
+  totalDelayHours: number;
+  batchSize?: number;
+  priority?: number;
+  useBatching?: boolean;
+  retries?: number;
+  ttl?: number;
+  removeOnComplete?: number;
+  removeOnFail?: number;
+  // Job routing information
+  handler?: string;
+  operation?: string;
+}
+
+export interface BatchResult {
+  jobsCreated: number;
+  mode: 'direct' | 'batch';
+  totalItems: number;
+  batchesCreated?: number;
+  duration: number;
+}
+
+// New improved types for the refactored architecture
+export interface RedisConfig {
+  host: string;
+  port: number;
+  password?: string;
+  db?: number;
+}
+
+// Extended job options specific to this queue implementation
+export interface ExtendedJobOptions extends JobOptions {
+  repeat?: {
+    pattern?: string;
+    key?: string;
+    limit?: number;
+    every?: number;
+    immediately?: boolean;
+  };
+}
+
+export interface QueueOptions {
+  defaultJobOptions?: ExtendedJobOptions;
+  workers?: number;
+  concurrency?: number;
+  enableMetrics?: boolean;
+  enableDLQ?: boolean;
+  enableRateLimit?: boolean;
+  rateLimitRules?: RateLimitRule[]; // Queue-specific rate limit rules
+}
+
+export interface QueueManagerConfig {
+  redis: RedisConfig;
+  defaultQueueOptions?: QueueOptions;
+  enableScheduledJobs?: boolean;
+  globalRateLimit?: RateLimitConfig;
+  rateLimitRules?: RateLimitRule[]; // Global rate limit rules
+  delayWorkerStart?: boolean; // If true, workers won't start automatically
+}
+
+// Queue-specific stats that extend the base types
+export interface GlobalStats {
+  queues: Record<string, QueueStats>;
+  totalJobs: number;
+  totalWorkers: number;
+  uptime: number;
+}
+
+// Legacy type for backward compatibility
+export interface QueueConfig extends QueueManagerConfig {
+  queueName?: string;
+  workers?: number;
+  concurrency?: number;
+  handlers?: HandlerInitializer[];
+  dlqConfig?: DLQConfig;
+  enableMetrics?: boolean;
+}
+
+// Extended batch job data for queue implementation
+export interface BatchJobData {
+  payloadKey: string;
+  batchIndex: number;
+  totalBatches: number;
+  itemCount: number;
+  totalDelayHours: number; // Total time to distribute all batches
+}
+
+export interface HandlerInitializer {
+  (): void | Promise<void>;
+}
+
+// Rate limiting types
+export interface RateLimitConfig {
+  points: number;
+  duration: number;
+  blockDuration?: number;
+}
+
+export interface RateLimitRule {
+  level: 'global' | 'queue' | 'handler' | 'operation';
+  queueName?: string; // For queue-level limits
+  handler?: string; // For handler-level limits
+  operation?: string; // For operation-level limits (most specific)
+  config: RateLimitConfig;
+}
+
+// DLQ types
+export interface DLQConfig {
+  maxRetries?: number;
+  retryDelay?: number;
+  alertThreshold?: number;
+  cleanupAge?: number;
+}
+
+export interface DLQJobInfo {
+  id: string;
+  name: string;
+  failedReason: string;
+  attemptsMade: number;
+  timestamp: number;
+  data: unknown;
+}
+
+export interface ScheduleConfig {
+  pattern: string;
+  jobName: string;
+  data?: unknown;
+  options?: ExtendedJobOptions;
+}
+
+// Smart Queue Types
+export interface SmartQueueConfig extends QueueManagerConfig {
+  /** Name of the current service */
+  serviceName: string;
+  /** Whether to auto-discover handlers from registry */
+  autoDiscoverHandlers?: boolean;
+  /** Custom service registry (defaults to built-in) */
+  serviceRegistry?: Record<string, unknown>;
+}
+
+export interface QueueRoute {
+  /** Full queue name (now just the handler name, e.g., 'ceo') */
+  fullName: string;
+  /** Service that owns this queue */
+  service: string;
+  /** Handler name */
+  handler: string;
+  /** Redis DB number */
+  db: number;
+  /** Available operations */
+  operations?: string[];
+}
diff --git a/libs/queue/src/utils.ts b/libs/core/queue/src/utils.ts
similarity index 99%
rename from libs/queue/src/utils.ts
rename to libs/core/queue/src/utils.ts
index 0c5e987..6c1d78b 100644
--- a/libs/queue/src/utils.ts
+++ b/libs/core/queue/src/utils.ts
@@ -5,7 +5,7 @@ import type { RedisConfig } from './types';
  */
 export function getRedisConnection(config: RedisConfig) {
   const isTest = process.env.NODE_ENV === 'test' || process.env['BUNIT'] === '1';
-  
+
   return {
     host: config.host,
     port: config.port,
diff --git a/libs/queue/test/batch-processor.test.ts b/libs/core/queue/test/batch-processor.test.ts
similarity index 86%
rename from libs/queue/test/batch-processor.test.ts
rename to libs/core/queue/test/batch-processor.test.ts
index 4c1f548..d98ad48 100644
--- a/libs/queue/test/batch-processor.test.ts
+++ b/libs/core/queue/test/batch-processor.test.ts
@@ -1,355 +1,364 @@
-import
{ describe, test, expect, beforeEach, afterEach } from 'bun:test'; -import { QueueManager, Queue, handlerRegistry, processItems } from '../src'; - -// Suppress Redis connection errors in tests -process.on('unhandledRejection', (reason, promise) => { - if (reason && typeof reason === 'object' && 'message' in reason) { - const message = (reason as Error).message; - if (message.includes('Connection is closed') || - message.includes('Connection is in monitoring mode')) { - return; - } - } - console.error('Unhandled Rejection at:', promise, 'reason:', reason); -}); - -describe('Batch Processor', () => { - let queueManager: QueueManager; - let queue: Queue; - let queueName: string; - - const redisConfig = { - host: 'localhost', - port: 6379, - password: '', - db: 0, - }; - - - beforeEach(async () => { - // Clear handler registry - handlerRegistry.clear(); - - // Register test handler - handlerRegistry.register('batch-test', { - 'process-item': async (payload) => { - return { processed: true, data: payload }; - }, - 'generic': async (payload) => { - return { processed: true, data: payload }; - }, - 'process-batch-items': async (_batchData) => { - // This is called by the batch processor internally - return { batchProcessed: true }; - }, - }); - - // Use unique queue name per test to avoid conflicts - queueName = `batch-test-queue-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`; - - // Reset and initialize singleton QueueManager for tests - await QueueManager.reset(); - queueManager = QueueManager.initialize({ - redis: redisConfig, - defaultQueueOptions: { - workers: 0, // No workers in tests - concurrency: 5, - }, - }); - - // Get queue using the new getQueue() method (batch cache is now auto-initialized) - queue = queueManager.getQueue(queueName); - // Note: Batch cache is now automatically initialized when getting the queue - - // Ensure completely clean state - wait for queue to be ready first - await queue.getBullQueue().waitUntilReady(); - - // Clear all 
job states - await queue.getBullQueue().drain(true); - await queue.getBullQueue().clean(0, 1000, 'completed'); - await queue.getBullQueue().clean(0, 1000, 'failed'); - await queue.getBullQueue().clean(0, 1000, 'active'); - await queue.getBullQueue().clean(0, 1000, 'waiting'); - await queue.getBullQueue().clean(0, 1000, 'delayed'); - - // Add a small delay to ensure cleanup is complete - await new Promise(resolve => setTimeout(resolve, 50)); - }); - - afterEach(async () => { - try { - // Clean up jobs first - if (queue) { - try { - await queue.getBullQueue().drain(true); - await queue.getBullQueue().clean(0, 1000, 'completed'); - await queue.getBullQueue().clean(0, 1000, 'failed'); - await queue.getBullQueue().clean(0, 1000, 'active'); - await queue.getBullQueue().clean(0, 1000, 'waiting'); - await queue.getBullQueue().clean(0, 1000, 'delayed'); - } catch { - // Ignore cleanup errors - } - await queue.close(); - } - - if (queueManager) { - await Promise.race([ - QueueManager.reset(), - new Promise((_, reject) => - setTimeout(() => reject(new Error('Shutdown timeout')), 3000) - ) - ]); - } - } catch (error) { - console.warn('Cleanup error:', error.message); - } finally { - handlerRegistry.clear(); - await new Promise(resolve => setTimeout(resolve, 100)); - } - }); - - describe('Direct Processing', () => { - test('should process items directly without batching', async () => { - const items = ['item1', 'item2', 'item3', 'item4', 'item5']; - - const result = await processItems(items, queueName, { - totalDelayHours: 0.001, // 3.6 seconds total - useBatching: false, - handler: 'batch-test', - operation: 'process-item', - priority: 1, - }); - - expect(result.mode).toBe('direct'); - expect(result.totalItems).toBe(5); - expect(result.jobsCreated).toBe(5); - - // Verify jobs were created - BullMQ has an issue where job ID "1" doesn't show up in state queries - // but exists when queried directly, so we need to check both ways - const [delayedJobs, waitingJobs, activeJobs, 
completedJobs, failedJobs, job1] = await Promise.all([ - queue.getBullQueue().getJobs(['delayed']), - queue.getBullQueue().getJobs(['waiting']), - queue.getBullQueue().getJobs(['active']), - queue.getBullQueue().getJobs(['completed']), - queue.getBullQueue().getJobs(['failed']), - queue.getBullQueue().getJob('1'), // Job 1 often doesn't show up in state queries - ]); - - const jobs = [...delayedJobs, ...waitingJobs, ...activeJobs, ...completedJobs, ...failedJobs]; - const ourJobs = jobs.filter(j => j.name === 'process-item' && j.data.handler === 'batch-test'); - - // Include job 1 if we found it directly but it wasn't in the state queries - if (job1 && job1.name === 'process-item' && job1.data.handler === 'batch-test' && !ourJobs.find(j => j.id === '1')) { - ourJobs.push(job1); - } - - expect(ourJobs.length).toBe(5); - - // Check delays are distributed - const delays = ourJobs.map(j => j.opts.delay || 0).sort((a, b) => a - b); - expect(delays[0]).toBe(0); - expect(delays[4]).toBeGreaterThan(delays[0]); - }); - - test('should process complex objects directly', async () => { - const items = [ - { id: 1, name: 'Product A', price: 100 }, - { id: 2, name: 'Product B', price: 200 }, - { id: 3, name: 'Product C', price: 300 }, - ]; - - const result = await processItems(items, queueName, { - totalDelayHours: 0.001, - useBatching: false, - handler: 'batch-test', - operation: 'process-item', - }); - - expect(result.jobsCreated).toBe(3); - - // Check job payloads - const jobs = await queue.getBullQueue().getJobs(['waiting', 'delayed']); - const ourJobs = jobs.filter(j => j.name === 'process-item' && j.data.handler === 'batch-test'); - const payloads = ourJobs.map(j => j.data.payload); - - expect(payloads).toContainEqual({ id: 1, name: 'Product A', price: 100 }); - expect(payloads).toContainEqual({ id: 2, name: 'Product B', price: 200 }); - expect(payloads).toContainEqual({ id: 3, name: 'Product C', price: 300 }); - }); - }); - - describe('Batch Processing', () => { - 
test('should process items in batches', async () => { - const items = Array.from({ length: 50 }, (_, i) => ({ id: i, value: `item-${i}` })); - - const result = await processItems(items, queueName, { - totalDelayHours: 0.001, - useBatching: true, - batchSize: 10, - handler: 'batch-test', - operation: 'process-item', - }); - - expect(result.mode).toBe('batch'); - expect(result.totalItems).toBe(50); - expect(result.batchesCreated).toBe(5); // 50 items / 10 per batch - expect(result.jobsCreated).toBe(5); // 5 batch jobs - - // Verify batch jobs were created - const jobs = await queue.getBullQueue().getJobs(['delayed', 'waiting']); - const batchJobs = jobs.filter(j => j.name === 'process-batch'); - expect(batchJobs.length).toBe(5); - }); - - test('should handle different batch sizes', async () => { - const items = Array.from({ length: 23 }, (_, i) => i); - - const result = await processItems(items, queueName, { - totalDelayHours: 0.001, - useBatching: true, - batchSize: 7, - handler: 'batch-test', - operation: 'process-item', - }); - - expect(result.batchesCreated).toBe(4); // 23/7 = 3.28, rounded up to 4 - expect(result.jobsCreated).toBe(4); - }); - - test('should store batch payloads in cache', async () => { - const items = [ - { type: 'A', data: 'test1' }, - { type: 'B', data: 'test2' }, - ]; - - const result = await processItems(items, queueName, { - totalDelayHours: 0.001, - useBatching: true, - batchSize: 2, - handler: 'batch-test', - operation: 'process-item', - ttl: 3600, // 1 hour TTL - }); - - expect(result.jobsCreated).toBe(1); - - // Get the batch job - const jobs = await queue.getBullQueue().getJobs(['waiting', 'delayed']); - expect(jobs.length).toBe(1); - - const batchJob = jobs[0]; - expect(batchJob.data.payload.payloadKey).toBeDefined(); - expect(batchJob.data.payload.itemCount).toBe(2); - }); - }); - - describe('Empty and Edge Cases', () => { - test('should handle empty item list', async () => { - const result = await processItems([], queueName, { - 
totalDelayHours: 1, - handler: 'batch-test', - operation: 'process-item', - }); - - expect(result.totalItems).toBe(0); - expect(result.jobsCreated).toBe(0); - expect(result.duration).toBeDefined(); - }); - - test('should handle single item', async () => { - const result = await processItems(['single-item'], queueName, { - totalDelayHours: 0.001, - handler: 'batch-test', - operation: 'process-item', - }); - - expect(result.totalItems).toBe(1); - expect(result.jobsCreated).toBe(1); - }); - - test('should handle large batch with delays', async () => { - const items = Array.from({ length: 100 }, (_, i) => ({ index: i })); - - const result = await processItems(items, queueName, { - totalDelayHours: 0.01, // 36 seconds total - useBatching: true, - batchSize: 25, - handler: 'batch-test', - operation: 'process-item', - }); - - expect(result.batchesCreated).toBe(4); // 100/25 - expect(result.jobsCreated).toBe(4); - - // Check delays are distributed - const jobs = await queue.getBullQueue().getJobs(['delayed', 'waiting']); - const delays = jobs.map(j => j.opts.delay || 0).sort((a, b) => a - b); - - expect(delays[0]).toBe(0); // First batch has no delay - expect(delays[3]).toBeGreaterThan(0); // Last batch has delay - }); - }); - - describe('Job Options', () => { - test('should respect custom job options', async () => { - const items = ['a', 'b', 'c']; - - await processItems(items, queueName, { - totalDelayHours: 0, - handler: 'batch-test', - operation: 'process-item', - priority: 5, - retries: 10, - removeOnComplete: 100, - removeOnFail: 50, - }); - - // Check all states including job ID "1" specifically (as it often doesn't show up in state queries) - const [waitingJobs, delayedJobs, job1, job2, job3] = await Promise.all([ - queue.getBullQueue().getJobs(['waiting']), - queue.getBullQueue().getJobs(['delayed']), - queue.getBullQueue().getJob('1'), - queue.getBullQueue().getJob('2'), - queue.getBullQueue().getJob('3'), - ]); - - const jobs = [...waitingJobs, ...delayedJobs]; 
- // Add any missing jobs that exist but don't show up in state queries - [job1, job2, job3].forEach(job => { - if (job && !jobs.find(j => j.id === job.id)) { - jobs.push(job); - } - }); - - expect(jobs.length).toBe(3); - - jobs.forEach(job => { - expect(job.opts.priority).toBe(5); - expect(job.opts.attempts).toBe(10); - expect(job.opts.removeOnComplete).toBe(100); - expect(job.opts.removeOnFail).toBe(50); - }); - }); - - test('should set handler and operation correctly', async () => { - // Register custom handler for this test - handlerRegistry.register('custom-handler', { - 'custom-operation': async (payload) => { - return { processed: true, data: payload }; - }, - }); - - await processItems(['test'], queueName, { - totalDelayHours: 0, - handler: 'custom-handler', - operation: 'custom-operation', - }); - - const jobs = await queue.getBullQueue().getJobs(['waiting']); - expect(jobs.length).toBe(1); - expect(jobs[0].data.handler).toBe('custom-handler'); - expect(jobs[0].data.operation).toBe('custom-operation'); - }); - }); -}); \ No newline at end of file +import { afterEach, beforeEach, describe, expect, test } from 'bun:test'; +import { handlerRegistry, processItems, Queue, QueueManager } from '../src'; + +// Suppress Redis connection errors in tests +process.on('unhandledRejection', (reason, promise) => { + if (reason && typeof reason === 'object' && 'message' in reason) { + const message = (reason as Error).message; + if ( + message.includes('Connection is closed') || + message.includes('Connection is in monitoring mode') + ) { + return; + } + } + console.error('Unhandled Rejection at:', promise, 'reason:', reason); +}); + +describe('Batch Processor', () => { + let queueManager: QueueManager; + let queue: Queue; + let queueName: string; + + const redisConfig = { + host: 'localhost', + port: 6379, + password: '', + db: 0, + }; + + beforeEach(async () => { + // Clear handler registry + handlerRegistry.clear(); + + // Register test handler + 
handlerRegistry.register('batch-test', { + 'process-item': async payload => { + return { processed: true, data: payload }; + }, + generic: async payload => { + return { processed: true, data: payload }; + }, + 'process-batch-items': async _batchData => { + // This is called by the batch processor internally + return { batchProcessed: true }; + }, + }); + + // Use unique queue name per test to avoid conflicts + queueName = `batch-test-queue-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`; + + // Reset and initialize singleton QueueManager for tests + await QueueManager.reset(); + queueManager = QueueManager.initialize({ + redis: redisConfig, + defaultQueueOptions: { + workers: 0, // No workers in tests + concurrency: 5, + }, + }); + + // Get queue using the new getQueue() method (batch cache is now auto-initialized) + queue = queueManager.getQueue(queueName); + // Note: Batch cache is now automatically initialized when getting the queue + + // Ensure completely clean state - wait for queue to be ready first + await queue.getBullQueue().waitUntilReady(); + + // Clear all job states + await queue.getBullQueue().drain(true); + await queue.getBullQueue().clean(0, 1000, 'completed'); + await queue.getBullQueue().clean(0, 1000, 'failed'); + await queue.getBullQueue().clean(0, 1000, 'active'); + await queue.getBullQueue().clean(0, 1000, 'waiting'); + await queue.getBullQueue().clean(0, 1000, 'delayed'); + + // Add a small delay to ensure cleanup is complete + await new Promise(resolve => setTimeout(resolve, 50)); + }); + + afterEach(async () => { + try { + // Clean up jobs first + if (queue) { + try { + await queue.getBullQueue().drain(true); + await queue.getBullQueue().clean(0, 1000, 'completed'); + await queue.getBullQueue().clean(0, 1000, 'failed'); + await queue.getBullQueue().clean(0, 1000, 'active'); + await queue.getBullQueue().clean(0, 1000, 'waiting'); + await queue.getBullQueue().clean(0, 1000, 'delayed'); + } catch { + // Ignore cleanup errors + } + 
await queue.close(); + } + + if (queueManager) { + await Promise.race([ + QueueManager.reset(), + new Promise((_, reject) => setTimeout(() => reject(new Error('Shutdown timeout')), 3000)), + ]); + } + } catch (error) { + console.warn('Cleanup error:', error.message); + } finally { + handlerRegistry.clear(); + await new Promise(resolve => setTimeout(resolve, 100)); + } + }); + + describe('Direct Processing', () => { + test('should process items directly without batching', async () => { + const items = ['item1', 'item2', 'item3', 'item4', 'item5']; + + const result = await processItems(items, queueName, { + totalDelayHours: 0.001, // 3.6 seconds total + useBatching: false, + handler: 'batch-test', + operation: 'process-item', + priority: 1, + }); + + expect(result.mode).toBe('direct'); + expect(result.totalItems).toBe(5); + expect(result.jobsCreated).toBe(5); + + // Verify jobs were created - BullMQ has an issue where job ID "1" doesn't show up in state queries + // but exists when queried directly, so we need to check both ways + const [delayedJobs, waitingJobs, activeJobs, completedJobs, failedJobs, job1] = + await Promise.all([ + queue.getBullQueue().getJobs(['delayed']), + queue.getBullQueue().getJobs(['waiting']), + queue.getBullQueue().getJobs(['active']), + queue.getBullQueue().getJobs(['completed']), + queue.getBullQueue().getJobs(['failed']), + queue.getBullQueue().getJob('1'), // Job 1 often doesn't show up in state queries + ]); + + const jobs = [...delayedJobs, ...waitingJobs, ...activeJobs, ...completedJobs, ...failedJobs]; + const ourJobs = jobs.filter( + j => j.name === 'process-item' && j.data.handler === 'batch-test' + ); + + // Include job 1 if we found it directly but it wasn't in the state queries + if ( + job1 && + job1.name === 'process-item' && + job1.data.handler === 'batch-test' && + !ourJobs.find(j => j.id === '1') + ) { + ourJobs.push(job1); + } + + expect(ourJobs.length).toBe(5); + + // Check delays are distributed + const delays = 
ourJobs.map(j => j.opts.delay || 0).sort((a, b) => a - b); + expect(delays[0]).toBe(0); + expect(delays[4]).toBeGreaterThan(delays[0]); + }); + + test('should process complex objects directly', async () => { + const items = [ + { id: 1, name: 'Product A', price: 100 }, + { id: 2, name: 'Product B', price: 200 }, + { id: 3, name: 'Product C', price: 300 }, + ]; + + const result = await processItems(items, queueName, { + totalDelayHours: 0.001, + useBatching: false, + handler: 'batch-test', + operation: 'process-item', + }); + + expect(result.jobsCreated).toBe(3); + + // Check job payloads + const jobs = await queue.getBullQueue().getJobs(['waiting', 'delayed']); + const ourJobs = jobs.filter( + j => j.name === 'process-item' && j.data.handler === 'batch-test' + ); + const payloads = ourJobs.map(j => j.data.payload); + + expect(payloads).toContainEqual({ id: 1, name: 'Product A', price: 100 }); + expect(payloads).toContainEqual({ id: 2, name: 'Product B', price: 200 }); + expect(payloads).toContainEqual({ id: 3, name: 'Product C', price: 300 }); + }); + }); + + describe('Batch Processing', () => { + test('should process items in batches', async () => { + const items = Array.from({ length: 50 }, (_, i) => ({ id: i, value: `item-${i}` })); + + const result = await processItems(items, queueName, { + totalDelayHours: 0.001, + useBatching: true, + batchSize: 10, + handler: 'batch-test', + operation: 'process-item', + }); + + expect(result.mode).toBe('batch'); + expect(result.totalItems).toBe(50); + expect(result.batchesCreated).toBe(5); // 50 items / 10 per batch + expect(result.jobsCreated).toBe(5); // 5 batch jobs + + // Verify batch jobs were created + const jobs = await queue.getBullQueue().getJobs(['delayed', 'waiting']); + const batchJobs = jobs.filter(j => j.name === 'process-batch'); + expect(batchJobs.length).toBe(5); + }); + + test('should handle different batch sizes', async () => { + const items = Array.from({ length: 23 }, (_, i) => i); + + const result = 
await processItems(items, queueName, { + totalDelayHours: 0.001, + useBatching: true, + batchSize: 7, + handler: 'batch-test', + operation: 'process-item', + }); + + expect(result.batchesCreated).toBe(4); // 23/7 = 3.28, rounded up to 4 + expect(result.jobsCreated).toBe(4); + }); + + test('should store batch payloads in cache', async () => { + const items = [ + { type: 'A', data: 'test1' }, + { type: 'B', data: 'test2' }, + ]; + + const result = await processItems(items, queueName, { + totalDelayHours: 0.001, + useBatching: true, + batchSize: 2, + handler: 'batch-test', + operation: 'process-item', + ttl: 3600, // 1 hour TTL + }); + + expect(result.jobsCreated).toBe(1); + + // Get the batch job + const jobs = await queue.getBullQueue().getJobs(['waiting', 'delayed']); + expect(jobs.length).toBe(1); + + const batchJob = jobs[0]; + expect(batchJob.data.payload.payloadKey).toBeDefined(); + expect(batchJob.data.payload.itemCount).toBe(2); + }); + }); + + describe('Empty and Edge Cases', () => { + test('should handle empty item list', async () => { + const result = await processItems([], queueName, { + totalDelayHours: 1, + handler: 'batch-test', + operation: 'process-item', + }); + + expect(result.totalItems).toBe(0); + expect(result.jobsCreated).toBe(0); + expect(result.duration).toBeDefined(); + }); + + test('should handle single item', async () => { + const result = await processItems(['single-item'], queueName, { + totalDelayHours: 0.001, + handler: 'batch-test', + operation: 'process-item', + }); + + expect(result.totalItems).toBe(1); + expect(result.jobsCreated).toBe(1); + }); + + test('should handle large batch with delays', async () => { + const items = Array.from({ length: 100 }, (_, i) => ({ index: i })); + + const result = await processItems(items, queueName, { + totalDelayHours: 0.01, // 36 seconds total + useBatching: true, + batchSize: 25, + handler: 'batch-test', + operation: 'process-item', + }); + + expect(result.batchesCreated).toBe(4); // 100/25 + 
expect(result.jobsCreated).toBe(4); + + // Check delays are distributed + const jobs = await queue.getBullQueue().getJobs(['delayed', 'waiting']); + const delays = jobs.map(j => j.opts.delay || 0).sort((a, b) => a - b); + + expect(delays[0]).toBe(0); // First batch has no delay + expect(delays[3]).toBeGreaterThan(0); // Last batch has delay + }); + }); + + describe('Job Options', () => { + test('should respect custom job options', async () => { + const items = ['a', 'b', 'c']; + + await processItems(items, queueName, { + totalDelayHours: 0, + handler: 'batch-test', + operation: 'process-item', + priority: 5, + retries: 10, + removeOnComplete: 100, + removeOnFail: 50, + }); + + // Check all states including job ID "1" specifically (as it often doesn't show up in state queries) + const [waitingJobs, delayedJobs, job1, job2, job3] = await Promise.all([ + queue.getBullQueue().getJobs(['waiting']), + queue.getBullQueue().getJobs(['delayed']), + queue.getBullQueue().getJob('1'), + queue.getBullQueue().getJob('2'), + queue.getBullQueue().getJob('3'), + ]); + + const jobs = [...waitingJobs, ...delayedJobs]; + // Add any missing jobs that exist but don't show up in state queries + [job1, job2, job3].forEach(job => { + if (job && !jobs.find(j => j.id === job.id)) { + jobs.push(job); + } + }); + + expect(jobs.length).toBe(3); + + jobs.forEach(job => { + expect(job.opts.priority).toBe(5); + expect(job.opts.attempts).toBe(10); + expect(job.opts.removeOnComplete).toBe(100); + expect(job.opts.removeOnFail).toBe(50); + }); + }); + + test('should set handler and operation correctly', async () => { + // Register custom handler for this test + handlerRegistry.register('custom-handler', { + 'custom-operation': async payload => { + return { processed: true, data: payload }; + }, + }); + + await processItems(['test'], queueName, { + totalDelayHours: 0, + handler: 'custom-handler', + operation: 'custom-operation', + }); + + const jobs = await queue.getBullQueue().getJobs(['waiting']); + 
expect(jobs.length).toBe(1); + expect(jobs[0].data.handler).toBe('custom-handler'); + expect(jobs[0].data.operation).toBe('custom-operation'); + }); + }); +}); diff --git a/libs/queue/test/dlq-handler.test.ts b/libs/core/queue/test/dlq-handler.test.ts similarity index 86% rename from libs/queue/test/dlq-handler.test.ts rename to libs/core/queue/test/dlq-handler.test.ts index 7b7a335..657404a 100644 --- a/libs/queue/test/dlq-handler.test.ts +++ b/libs/core/queue/test/dlq-handler.test.ts @@ -1,357 +1,379 @@ -import { describe, test, expect, beforeEach, afterEach } from 'bun:test'; -import { Queue, Worker } from 'bullmq'; -import { DeadLetterQueueHandler } from '../src/dlq-handler'; -import { getRedisConnection } from '../src/utils'; - -// Suppress Redis connection errors in tests -process.on('unhandledRejection', (reason, promise) => { - if (reason && typeof reason === 'object' && 'message' in reason) { - const message = (reason as Error).message; - if (message.includes('Connection is closed') || - message.includes('Connection is in monitoring mode')) { - return; - } - } - console.error('Unhandled Rejection at:', promise, 'reason:', reason); -}); - -describe('DeadLetterQueueHandler', () => { - let mainQueue: Queue; - let dlqHandler: DeadLetterQueueHandler; - let worker: Worker; - let connection: any; - - const redisConfig = { - host: 'localhost', - port: 6379, - password: '', - db: 0, - }; - - beforeEach(async () => { - connection = getRedisConnection(redisConfig); - - // Create main queue - mainQueue = new Queue('test-queue', { connection }); - - // Create DLQ handler - dlqHandler = new DeadLetterQueueHandler(mainQueue, connection, { - maxRetries: 3, - retryDelay: 100, - alertThreshold: 5, - cleanupAge: 24, - }); - }); - - afterEach(async () => { - try { - if (worker) { - await worker.close(); - } - await dlqHandler.shutdown(); - await mainQueue.close(); - } catch { - // Ignore cleanup errors - } - await new Promise(resolve => setTimeout(resolve, 50)); - }); - - 
describe('Failed Job Handling', () => { - test('should move job to DLQ after max retries', async () => { - let attemptCount = 0; - - // Create worker that always fails - worker = new Worker('test-queue', async () => { - attemptCount++; - throw new Error('Job failed'); - }, { - connection, - autorun: false, - }); - - // Add job with limited attempts - const _job = await mainQueue.add('failing-job', { test: true }, { - attempts: 3, - backoff: { type: 'fixed', delay: 50 }, - }); - - // Process job manually - await worker.run(); - - // Wait for retries - await new Promise(resolve => setTimeout(resolve, 300)); - - // Job should have failed 3 times - expect(attemptCount).toBe(3); - - // Check if job was moved to DLQ - const dlqStats = await dlqHandler.getStats(); - expect(dlqStats.total).toBe(1); - expect(dlqStats.byJobName['failing-job']).toBe(1); - }); - - test('should track failure count correctly', async () => { - const job = await mainQueue.add('test-job', { data: 'test' }); - const error = new Error('Test error'); - - // Simulate multiple failures - await dlqHandler.handleFailedJob(job, error); - await dlqHandler.handleFailedJob(job, error); - - // On third failure with max attempts reached, should move to DLQ - job.attemptsMade = 3; - job.opts.attempts = 3; - await dlqHandler.handleFailedJob(job, error); - - const stats = await dlqHandler.getStats(); - expect(stats.total).toBe(1); - }); - }); - - describe('DLQ Statistics', () => { - test('should provide detailed statistics', async () => { - // Add some failed jobs to DLQ - const dlq = new Queue(`test-queue-dlq`, { connection }); - - await dlq.add('failed-job', { - originalJob: { - id: '1', - name: 'job-type-a', - data: { test: true }, - attemptsMade: 3, - }, - error: { message: 'Error 1' }, - movedToDLQAt: new Date().toISOString(), - }); - - await dlq.add('failed-job', { - originalJob: { - id: '2', - name: 'job-type-b', - data: { test: true }, - attemptsMade: 3, - }, - error: { message: 'Error 2' }, - 
movedToDLQAt: new Date().toISOString(), - }); - - const stats = await dlqHandler.getStats(); - expect(stats.total).toBe(2); - expect(stats.recent).toBe(2); // Both are recent - expect(Object.keys(stats.byJobName).length).toBe(2); - expect(stats.oldestJob).toBeDefined(); - - await dlq.close(); - }); - - test('should count recent jobs correctly', async () => { - const dlq = new Queue(`test-queue-dlq`, { connection }); - - // Add old job (25 hours ago) - const oldTimestamp = Date.now() - 25 * 60 * 60 * 1000; - await dlq.add('failed-job', { - originalJob: { id: '1', name: 'old-job' }, - error: { message: 'Old error' }, - movedToDLQAt: new Date(oldTimestamp).toISOString(), - }, { timestamp: oldTimestamp }); - - // Add recent job - await dlq.add('failed-job', { - originalJob: { id: '2', name: 'recent-job' }, - error: { message: 'Recent error' }, - movedToDLQAt: new Date().toISOString(), - }); - - const stats = await dlqHandler.getStats(); - expect(stats.total).toBe(2); - expect(stats.recent).toBe(1); // Only one is recent - - await dlq.close(); - }); - }); - - describe('DLQ Retry', () => { - test('should retry jobs from DLQ', async () => { - const dlq = new Queue(`test-queue-dlq`, { connection }); - - // Add failed jobs to DLQ - await dlq.add('failed-job', { - originalJob: { - id: '1', - name: 'retry-job', - data: { retry: true }, - opts: { priority: 1 }, - }, - error: { message: 'Failed' }, - movedToDLQAt: new Date().toISOString(), - }); - - await dlq.add('failed-job', { - originalJob: { - id: '2', - name: 'retry-job-2', - data: { retry: true }, - opts: {}, - }, - error: { message: 'Failed' }, - movedToDLQAt: new Date().toISOString(), - }); - - // Retry jobs - const retriedCount = await dlqHandler.retryDLQJobs(10); - expect(retriedCount).toBe(2); - - // Check main queue has the retried jobs - const mainQueueJobs = await mainQueue.getWaiting(); - expect(mainQueueJobs.length).toBe(2); - expect(mainQueueJobs[0].name).toBe('retry-job'); - 
expect(mainQueueJobs[0].data).toEqual({ retry: true }); - - // DLQ should be empty - const dlqJobs = await dlq.getCompleted(); - expect(dlqJobs.length).toBe(0); - - await dlq.close(); - }); - - test('should respect retry limit', async () => { - const dlq = new Queue(`test-queue-dlq`, { connection }); - - // Add 5 failed jobs - for (let i = 0; i < 5; i++) { - await dlq.add('failed-job', { - originalJob: { - id: `${i}`, - name: `job-${i}`, - data: { index: i }, - }, - error: { message: 'Failed' }, - movedToDLQAt: new Date().toISOString(), - }); - } - - // Retry only 3 jobs - const retriedCount = await dlqHandler.retryDLQJobs(3); - expect(retriedCount).toBe(3); - - // Check counts - const mainQueueJobs = await mainQueue.getWaiting(); - expect(mainQueueJobs.length).toBe(3); - - const remainingDLQ = await dlq.getCompleted(); - expect(remainingDLQ.length).toBe(2); - - await dlq.close(); - }); - }); - - describe('DLQ Cleanup', () => { - test('should cleanup old DLQ entries', async () => { - const dlq = new Queue(`test-queue-dlq`, { connection }); - - // Add old job (25 hours ago) - const oldTimestamp = Date.now() - 25 * 60 * 60 * 1000; - await dlq.add('failed-job', { - originalJob: { id: '1', name: 'old-job' }, - error: { message: 'Old error' }, - }, { timestamp: oldTimestamp }); - - // Add recent job (1 hour ago) - const recentTimestamp = Date.now() - 1 * 60 * 60 * 1000; - await dlq.add('failed-job', { - originalJob: { id: '2', name: 'recent-job' }, - error: { message: 'Recent error' }, - }, { timestamp: recentTimestamp }); - - // Run cleanup (24 hour threshold) - const removedCount = await dlqHandler.cleanup(); - expect(removedCount).toBe(1); - - // Check remaining jobs - const remaining = await dlq.getCompleted(); - expect(remaining.length).toBe(1); - expect(remaining[0].data.originalJob.name).toBe('recent-job'); - - await dlq.close(); - }); - }); - - describe('Failed Job Inspection', () => { - test('should inspect failed jobs', async () => { - const dlq = new 
Queue(`test-queue-dlq`, { connection }); - - // Add failed jobs with different error types - await dlq.add('failed-job', { - originalJob: { - id: '1', - name: 'network-job', - data: { url: 'https://api.example.com' }, - attemptsMade: 3, - }, - error: { - message: 'Network timeout', - stack: 'Error: Network timeout\n at ...', - name: 'NetworkError', - }, - movedToDLQAt: '2024-01-01T10:00:00Z', - }); - - await dlq.add('failed-job', { - originalJob: { - id: '2', - name: 'parse-job', - data: { input: 'invalid-json' }, - attemptsMade: 2, - }, - error: { - message: 'Invalid JSON', - stack: 'SyntaxError: Invalid JSON\n at ...', - name: 'SyntaxError', - }, - movedToDLQAt: '2024-01-01T11:00:00Z', - }); - - const failedJobs = await dlqHandler.inspectFailedJobs(10); - expect(failedJobs.length).toBe(2); - - expect(failedJobs[0]).toMatchObject({ - id: '1', - name: 'network-job', - data: { url: 'https://api.example.com' }, - error: { - message: 'Network timeout', - name: 'NetworkError', - }, - failedAt: '2024-01-01T10:00:00Z', - attempts: 3, - }); - - await dlq.close(); - }); - }); - - describe('Alert Threshold', () => { - test('should detect when alert threshold is exceeded', async () => { - const dlq = new Queue(`test-queue-dlq`, { connection }); - - // Add jobs to exceed threshold (5) - for (let i = 0; i < 6; i++) { - await dlq.add('failed-job', { - originalJob: { - id: `${i}`, - name: `job-${i}`, - data: { index: i }, - }, - error: { message: 'Failed' }, - movedToDLQAt: new Date().toISOString(), - }); - } - - const stats = await dlqHandler.getStats(); - expect(stats.total).toBe(6); - // In a real implementation, this would trigger alerts - - await dlq.close(); - }); - }); -}); \ No newline at end of file +import { Queue, Worker } from 'bullmq'; +import { afterEach, beforeEach, describe, expect, test } from 'bun:test'; +import { DeadLetterQueueHandler } from '../src/dlq-handler'; +import { getRedisConnection } from '../src/utils'; + +// Suppress Redis connection errors in 
tests +process.on('unhandledRejection', (reason, promise) => { + if (reason && typeof reason === 'object' && 'message' in reason) { + const message = (reason as Error).message; + if ( + message.includes('Connection is closed') || + message.includes('Connection is in monitoring mode') + ) { + return; + } + } + console.error('Unhandled Rejection at:', promise, 'reason:', reason); +}); + +describe('DeadLetterQueueHandler', () => { + let mainQueue: Queue; + let dlqHandler: DeadLetterQueueHandler; + let worker: Worker; + let connection: any; + + const redisConfig = { + host: 'localhost', + port: 6379, + password: '', + db: 0, + }; + + beforeEach(async () => { + connection = getRedisConnection(redisConfig); + + // Create main queue + mainQueue = new Queue('test-queue', { connection }); + + // Create DLQ handler + dlqHandler = new DeadLetterQueueHandler(mainQueue, connection, { + maxRetries: 3, + retryDelay: 100, + alertThreshold: 5, + cleanupAge: 24, + }); + }); + + afterEach(async () => { + try { + if (worker) { + await worker.close(); + } + await dlqHandler.shutdown(); + await mainQueue.close(); + } catch { + // Ignore cleanup errors + } + await new Promise(resolve => setTimeout(resolve, 50)); + }); + + describe('Failed Job Handling', () => { + test('should move job to DLQ after max retries', async () => { + let attemptCount = 0; + + // Create worker that always fails + worker = new Worker( + 'test-queue', + async () => { + attemptCount++; + throw new Error('Job failed'); + }, + { + connection, + autorun: false, + } + ); + + // Add job with limited attempts + const _job = await mainQueue.add( + 'failing-job', + { test: true }, + { + attempts: 3, + backoff: { type: 'fixed', delay: 50 }, + } + ); + + // Process job manually + await worker.run(); + + // Wait for retries + await new Promise(resolve => setTimeout(resolve, 300)); + + // Job should have failed 3 times + expect(attemptCount).toBe(3); + + // Check if job was moved to DLQ + const dlqStats = await 
dlqHandler.getStats(); + expect(dlqStats.total).toBe(1); + expect(dlqStats.byJobName['failing-job']).toBe(1); + }); + + test('should track failure count correctly', async () => { + const job = await mainQueue.add('test-job', { data: 'test' }); + const error = new Error('Test error'); + + // Simulate multiple failures + await dlqHandler.handleFailedJob(job, error); + await dlqHandler.handleFailedJob(job, error); + + // On third failure with max attempts reached, should move to DLQ + job.attemptsMade = 3; + job.opts.attempts = 3; + await dlqHandler.handleFailedJob(job, error); + + const stats = await dlqHandler.getStats(); + expect(stats.total).toBe(1); + }); + }); + + describe('DLQ Statistics', () => { + test('should provide detailed statistics', async () => { + // Add some failed jobs to DLQ + const dlq = new Queue(`test-queue-dlq`, { connection }); + + await dlq.add('failed-job', { + originalJob: { + id: '1', + name: 'job-type-a', + data: { test: true }, + attemptsMade: 3, + }, + error: { message: 'Error 1' }, + movedToDLQAt: new Date().toISOString(), + }); + + await dlq.add('failed-job', { + originalJob: { + id: '2', + name: 'job-type-b', + data: { test: true }, + attemptsMade: 3, + }, + error: { message: 'Error 2' }, + movedToDLQAt: new Date().toISOString(), + }); + + const stats = await dlqHandler.getStats(); + expect(stats.total).toBe(2); + expect(stats.recent).toBe(2); // Both are recent + expect(Object.keys(stats.byJobName).length).toBe(2); + expect(stats.oldestJob).toBeDefined(); + + await dlq.close(); + }); + + test('should count recent jobs correctly', async () => { + const dlq = new Queue(`test-queue-dlq`, { connection }); + + // Add old job (25 hours ago) + const oldTimestamp = Date.now() - 25 * 60 * 60 * 1000; + await dlq.add( + 'failed-job', + { + originalJob: { id: '1', name: 'old-job' }, + error: { message: 'Old error' }, + movedToDLQAt: new Date(oldTimestamp).toISOString(), + }, + { timestamp: oldTimestamp } + ); + + // Add recent job + await 
dlq.add('failed-job', { + originalJob: { id: '2', name: 'recent-job' }, + error: { message: 'Recent error' }, + movedToDLQAt: new Date().toISOString(), + }); + + const stats = await dlqHandler.getStats(); + expect(stats.total).toBe(2); + expect(stats.recent).toBe(1); // Only one is recent + + await dlq.close(); + }); + }); + + describe('DLQ Retry', () => { + test('should retry jobs from DLQ', async () => { + const dlq = new Queue(`test-queue-dlq`, { connection }); + + // Add failed jobs to DLQ + await dlq.add('failed-job', { + originalJob: { + id: '1', + name: 'retry-job', + data: { retry: true }, + opts: { priority: 1 }, + }, + error: { message: 'Failed' }, + movedToDLQAt: new Date().toISOString(), + }); + + await dlq.add('failed-job', { + originalJob: { + id: '2', + name: 'retry-job-2', + data: { retry: true }, + opts: {}, + }, + error: { message: 'Failed' }, + movedToDLQAt: new Date().toISOString(), + }); + + // Retry jobs + const retriedCount = await dlqHandler.retryDLQJobs(10); + expect(retriedCount).toBe(2); + + // Check main queue has the retried jobs + const mainQueueJobs = await mainQueue.getWaiting(); + expect(mainQueueJobs.length).toBe(2); + expect(mainQueueJobs[0].name).toBe('retry-job'); + expect(mainQueueJobs[0].data).toEqual({ retry: true }); + + // DLQ should be empty + const dlqJobs = await dlq.getCompleted(); + expect(dlqJobs.length).toBe(0); + + await dlq.close(); + }); + + test('should respect retry limit', async () => { + const dlq = new Queue(`test-queue-dlq`, { connection }); + + // Add 5 failed jobs + for (let i = 0; i < 5; i++) { + await dlq.add('failed-job', { + originalJob: { + id: `${i}`, + name: `job-${i}`, + data: { index: i }, + }, + error: { message: 'Failed' }, + movedToDLQAt: new Date().toISOString(), + }); + } + + // Retry only 3 jobs + const retriedCount = await dlqHandler.retryDLQJobs(3); + expect(retriedCount).toBe(3); + + // Check counts + const mainQueueJobs = await mainQueue.getWaiting(); + 
expect(mainQueueJobs.length).toBe(3); + + const remainingDLQ = await dlq.getCompleted(); + expect(remainingDLQ.length).toBe(2); + + await dlq.close(); + }); + }); + + describe('DLQ Cleanup', () => { + test('should cleanup old DLQ entries', async () => { + const dlq = new Queue(`test-queue-dlq`, { connection }); + + // Add old job (25 hours ago) + const oldTimestamp = Date.now() - 25 * 60 * 60 * 1000; + await dlq.add( + 'failed-job', + { + originalJob: { id: '1', name: 'old-job' }, + error: { message: 'Old error' }, + }, + { timestamp: oldTimestamp } + ); + + // Add recent job (1 hour ago) + const recentTimestamp = Date.now() - 1 * 60 * 60 * 1000; + await dlq.add( + 'failed-job', + { + originalJob: { id: '2', name: 'recent-job' }, + error: { message: 'Recent error' }, + }, + { timestamp: recentTimestamp } + ); + + // Run cleanup (24 hour threshold) + const removedCount = await dlqHandler.cleanup(); + expect(removedCount).toBe(1); + + // Check remaining jobs + const remaining = await dlq.getCompleted(); + expect(remaining.length).toBe(1); + expect(remaining[0].data.originalJob.name).toBe('recent-job'); + + await dlq.close(); + }); + }); + + describe('Failed Job Inspection', () => { + test('should inspect failed jobs', async () => { + const dlq = new Queue(`test-queue-dlq`, { connection }); + + // Add failed jobs with different error types + await dlq.add('failed-job', { + originalJob: { + id: '1', + name: 'network-job', + data: { url: 'https://api.example.com' }, + attemptsMade: 3, + }, + error: { + message: 'Network timeout', + stack: 'Error: Network timeout\n at ...', + name: 'NetworkError', + }, + movedToDLQAt: '2024-01-01T10:00:00Z', + }); + + await dlq.add('failed-job', { + originalJob: { + id: '2', + name: 'parse-job', + data: { input: 'invalid-json' }, + attemptsMade: 2, + }, + error: { + message: 'Invalid JSON', + stack: 'SyntaxError: Invalid JSON\n at ...', + name: 'SyntaxError', + }, + movedToDLQAt: '2024-01-01T11:00:00Z', + }); + + const failedJobs = await 
dlqHandler.inspectFailedJobs(10); + expect(failedJobs.length).toBe(2); + + expect(failedJobs[0]).toMatchObject({ + id: '1', + name: 'network-job', + data: { url: 'https://api.example.com' }, + error: { + message: 'Network timeout', + name: 'NetworkError', + }, + failedAt: '2024-01-01T10:00:00Z', + attempts: 3, + }); + + await dlq.close(); + }); + }); + + describe('Alert Threshold', () => { + test('should detect when alert threshold is exceeded', async () => { + const dlq = new Queue(`test-queue-dlq`, { connection }); + + // Add jobs to exceed threshold (5) + for (let i = 0; i < 6; i++) { + await dlq.add('failed-job', { + originalJob: { + id: `${i}`, + name: `job-${i}`, + data: { index: i }, + }, + error: { message: 'Failed' }, + movedToDLQAt: new Date().toISOString(), + }); + } + + const stats = await dlqHandler.getStats(); + expect(stats.total).toBe(6); + // In a real implementation, this would trigger alerts + + await dlq.close(); + }); + }); +}); diff --git a/libs/queue/test/queue-integration.test.ts b/libs/core/queue/test/queue-integration.test.ts similarity index 93% rename from libs/queue/test/queue-integration.test.ts rename to libs/core/queue/test/queue-integration.test.ts index 4bf1f63..3f633c8 100644 --- a/libs/queue/test/queue-integration.test.ts +++ b/libs/core/queue/test/queue-integration.test.ts @@ -1,12 +1,14 @@ -import { describe, test, expect, beforeEach, afterEach } from 'bun:test'; -import { QueueManager, handlerRegistry } from '../src'; +import { afterEach, beforeEach, describe, expect, test } from 'bun:test'; +import { handlerRegistry, QueueManager } from '../src'; // Suppress Redis connection errors in tests process.on('unhandledRejection', (reason, promise) => { if (reason && typeof reason === 'object' && 'message' in reason) { const message = (reason as Error).message; - if (message.includes('Connection is closed') || - message.includes('Connection is in monitoring mode')) { + if ( + message.includes('Connection is closed') || + 
message.includes('Connection is in monitoring mode') + ) { // Suppress these specific Redis errors in tests return; } @@ -34,9 +36,7 @@ describe('QueueManager Integration Tests', () => { try { await Promise.race([ queueManager.shutdown(), - new Promise((_, reject) => - setTimeout(() => reject(new Error('Shutdown timeout')), 3000) - ) + new Promise((_, reject) => setTimeout(() => reject(new Error('Shutdown timeout')), 3000)), ]); } catch (error) { // Ignore shutdown errors in tests @@ -45,10 +45,10 @@ describe('QueueManager Integration Tests', () => { queueManager = null as any; } } - + // Clear handler registry to prevent conflicts handlerRegistry.clear(); - + // Add delay to allow connections to close await new Promise(resolve => setTimeout(resolve, 100)); }); diff --git a/libs/queue/test/queue-manager.test.ts b/libs/core/queue/test/queue-manager.test.ts similarity index 88% rename from libs/queue/test/queue-manager.test.ts rename to libs/core/queue/test/queue-manager.test.ts index becfb00..83f12c3 100644 --- a/libs/queue/test/queue-manager.test.ts +++ b/libs/core/queue/test/queue-manager.test.ts @@ -1,371 +1,371 @@ -import { afterEach, beforeEach, describe, expect, test } from 'bun:test'; -import { handlerRegistry, QueueManager } from '../src'; - -// Suppress Redis connection errors in tests -process.on('unhandledRejection', (reason, promise) => { - if (reason && typeof reason === 'object' && 'message' in reason) { - const message = (reason as Error).message; - if (message.includes('Connection is closed') || - message.includes('Connection is in monitoring mode')) { - return; - } - } - console.error('Unhandled Rejection at:', promise, 'reason:', reason); -}); - -describe('QueueManager', () => { - let queueManager: QueueManager; - - // Use local Redis/Dragonfly - const redisConfig = { - host: 'localhost', - port: 6379, - password: '', - db: 0, - }; - - beforeEach(() => { - handlerRegistry.clear(); - }); - - afterEach(async () => { - if (queueManager) { - try { - 
await Promise.race([ - queueManager.shutdown(), - new Promise((_, reject) => - setTimeout(() => reject(new Error('Shutdown timeout')), 3000) - ) - ]); - } catch (error) { - console.warn('Shutdown error:', error.message); - } finally { - queueManager = null as any; - } - } - - handlerRegistry.clear(); - await new Promise(resolve => setTimeout(resolve, 100)); - }); - - describe('Basic Operations', () => { - test('should initialize queue manager', async () => { - queueManager = new QueueManager({ - queueName: 'test-queue', - redis: redisConfig, - workers: 1, - concurrency: 5, - }); - - await queueManager.initialize(); - expect(queueManager.queueName).toBe('test-queue'); - }); - - test('should add and process a job', async () => { - let processedPayload: any; - - // Register handler - handlerRegistry.register('test-handler', { - 'test-operation': async payload => { - processedPayload = payload; - return { success: true, data: payload }; - }, - }); - - queueManager = new QueueManager({ - queueName: 'test-queue', - redis: redisConfig, - workers: 1, - }); - - await queueManager.initialize(); - - // Add job - const job = await queueManager.add('test-job', { - handler: 'test-handler', - operation: 'test-operation', - payload: { message: 'Hello, Queue!' }, - }); - - expect(job.name).toBe('test-job'); - - // Wait for processing - await new Promise(resolve => setTimeout(resolve, 100)); - - expect(processedPayload).toEqual({ message: 'Hello, Queue!' 
}); - }); - - test('should handle missing handler gracefully', async () => { - queueManager = new QueueManager({ - queueName: 'test-queue', - redis: redisConfig, - workers: 1, - }); - - await queueManager.initialize(); - - const job = await queueManager.add('test-job', { - handler: 'non-existent', - operation: 'test-operation', - payload: { test: true }, - }); - - // Wait for job to fail - await new Promise(resolve => setTimeout(resolve, 100)); - - const failed = await job.isFailed(); - expect(failed).toBe(true); - }); - - test('should add multiple jobs in bulk', async () => { - let processedCount = 0; - - handlerRegistry.register('bulk-handler', { - process: async _payload => { - processedCount++; - return { processed: true }; - }, - }); - - queueManager = new QueueManager({ - queueName: 'test-queue', - redis: redisConfig, - workers: 2, - concurrency: 5, - }); - - await queueManager.initialize(); - - const jobs = await queueManager.addBulk([ - { - name: 'job1', - data: { handler: 'bulk-handler', operation: 'process', payload: { id: 1 } }, - }, - { - name: 'job2', - data: { handler: 'bulk-handler', operation: 'process', payload: { id: 2 } }, - }, - { - name: 'job3', - data: { handler: 'bulk-handler', operation: 'process', payload: { id: 3 } }, - }, - ]); - - expect(jobs.length).toBe(3); - - // Wait for processing - await new Promise(resolve => setTimeout(resolve, 200)); - - expect(processedCount).toBe(3); - }); - - test('should get queue statistics', async () => { - queueManager = new QueueManager({ - queueName: 'test-queue', - redis: redisConfig, - workers: 0, // No workers, jobs will stay in waiting - }); - - await queueManager.initialize(); - - // Add some jobs - await queueManager.add('job1', { - handler: 'test', - operation: 'test', - payload: { id: 1 }, - }); - - await queueManager.add('job2', { - handler: 'test', - operation: 'test', - payload: { id: 2 }, - }); - - const stats = await queueManager.getStats(); - - expect(stats.waiting).toBe(2); - 
expect(stats.active).toBe(0); - expect(stats.completed).toBe(0); - expect(stats.failed).toBe(0); - }); - - test('should pause and resume queue', async () => { - let processedCount = 0; - - handlerRegistry.register('pause-test', { - process: async () => { - processedCount++; - return { ok: true }; - }, - }); - - queueManager = new QueueManager({ - queueName: 'test-queue', - redis: redisConfig, - workers: 1, - }); - - await queueManager.initialize(); - - // Pause queue - await queueManager.pause(); - - // Add job while paused - await queueManager.add('job1', { - handler: 'pause-test', - operation: 'process', - payload: {}, - }); - - // Wait a bit - job should not be processed - await new Promise(resolve => setTimeout(resolve, 100)); - expect(processedCount).toBe(0); - - // Resume queue - await queueManager.resume(); - - // Wait for processing - await new Promise(resolve => setTimeout(resolve, 100)); - expect(processedCount).toBe(1); - }); - }); - - describe('Scheduled Jobs', () => { - test('should register and process scheduled jobs', async () => { - let executionCount = 0; - - handlerRegistry.registerWithSchedule({ - name: 'scheduled-handler', - operations: { - 'scheduled-task': async _payload => { - executionCount++; - return { executed: true, timestamp: Date.now() }; - }, - }, - scheduledJobs: [ - { - type: 'test-schedule', - operation: 'scheduled-task', - payload: { test: true }, - cronPattern: '*/1 * * * * *', // Every second - description: 'Test scheduled job', - }, - ], - }); - - queueManager = new QueueManager({ - queueName: 'test-queue', - redis: redisConfig, - workers: 1, - enableScheduledJobs: true, - }); - - await queueManager.initialize(); - - // Wait for scheduled job to execute - await new Promise(resolve => setTimeout(resolve, 2500)); - - expect(executionCount).toBeGreaterThanOrEqual(2); - }); - }); - - describe('Error Handling', () => { - test('should handle job errors with retries', async () => { - let attemptCount = 0; - - 
handlerRegistry.register('retry-handler', { - 'failing-operation': async () => { - attemptCount++; - if (attemptCount < 3) { - throw new Error(`Attempt ${attemptCount} failed`); - } - return { success: true }; - }, - }); - - queueManager = new QueueManager({ - queueName: 'test-queue', - redis: redisConfig, - workers: 1, - defaultJobOptions: { - attempts: 3, - backoff: { - type: 'fixed', - delay: 50, - }, - }, - }); - - await queueManager.initialize(); - - const job = await queueManager.add('retry-job', { - handler: 'retry-handler', - operation: 'failing-operation', - payload: {}, - }); - - // Wait for retries - await new Promise(resolve => setTimeout(resolve, 500)); - - const completed = await job.isCompleted(); - expect(completed).toBe(true); - expect(attemptCount).toBe(3); - }); - }); - - describe('Multiple Handlers', () => { - test('should handle multiple handlers with different operations', async () => { - const results: any[] = []; - - handlerRegistry.register('handler-a', { - 'operation-1': async payload => { - results.push({ handler: 'a', op: '1', payload }); - return { handler: 'a', op: '1' }; - }, - 'operation-2': async payload => { - results.push({ handler: 'a', op: '2', payload }); - return { handler: 'a', op: '2' }; - }, - }); - - handlerRegistry.register('handler-b', { - 'operation-1': async payload => { - results.push({ handler: 'b', op: '1', payload }); - return { handler: 'b', op: '1' }; - }, - }); - - queueManager = new QueueManager({ - queueName: 'test-queue', - redis: redisConfig, - workers: 2, - }); - - await queueManager.initialize(); - - // Add jobs for different handlers - await queueManager.addBulk([ - { - name: 'job1', - data: { handler: 'handler-a', operation: 'operation-1', payload: { id: 1 } }, - }, - { - name: 'job2', - data: { handler: 'handler-a', operation: 'operation-2', payload: { id: 2 } }, - }, - { - name: 'job3', - data: { handler: 'handler-b', operation: 'operation-1', payload: { id: 3 } }, - }, - ]); - - // Wait for processing 
-      await new Promise(resolve => setTimeout(resolve, 200));
-
-      expect(results.length).toBe(3);
-      expect(results).toContainEqual({ handler: 'a', op: '1', payload: { id: 1 } });
-      expect(results).toContainEqual({ handler: 'a', op: '2', payload: { id: 2 } });
-      expect(results).toContainEqual({ handler: 'b', op: '1', payload: { id: 3 } });
-    });
-  });
-});
+import { afterEach, beforeEach, describe, expect, test } from 'bun:test';
+import { handlerRegistry, QueueManager } from '../src';
+
+// Suppress Redis connection errors in tests
+process.on('unhandledRejection', (reason, promise) => {
+  if (reason && typeof reason === 'object' && 'message' in reason) {
+    const message = (reason as Error).message;
+    if (
+      message.includes('Connection is closed') ||
+      message.includes('Connection is in monitoring mode')
+    ) {
+      return;
+    }
+  }
+  console.error('Unhandled Rejection at:', promise, 'reason:', reason);
+});
+
+describe('QueueManager', () => {
+  let queueManager: QueueManager;
+
+  // Use local Redis/Dragonfly
+  const redisConfig = {
+    host: 'localhost',
+    port: 6379,
+    password: '',
+    db: 0,
+  };
+
+  beforeEach(() => {
+    handlerRegistry.clear();
+  });
+
+  afterEach(async () => {
+    if (queueManager) {
+      try {
+        await Promise.race([
+          queueManager.shutdown(),
+          new Promise((_, reject) => setTimeout(() => reject(new Error('Shutdown timeout')), 3000)),
+        ]);
+      } catch (error) {
+        // `error` is `unknown` under strict mode; narrow before reading `.message`
+        console.warn('Shutdown error:', error instanceof Error ? error.message : error);
+      } finally {
+        queueManager = null as any;
+      }
+    }
+
+    handlerRegistry.clear();
+    await new Promise(resolve => setTimeout(resolve, 100));
+  });
+
+  describe('Basic Operations', () => {
+    test('should initialize queue manager', async () => {
+      queueManager = new QueueManager({
+        redis: redisConfig,
+      });
+
+      // No need to initialize anymore - constructor handles everything
+      // QueueManager now manages multiple queues, not just one
+      expect(queueManager).toBeDefined();
+    });
+
+    test('should add and process a job', async () => {
+      let processedPayload:
any; + + // Register handler + handlerRegistry.register('test-handler', { + 'test-operation': async payload => { + processedPayload = payload; + return { success: true, data: payload }; + }, + }); + + queueManager = new QueueManager({ + redis: redisConfig, + }); + + // No need to initialize anymore - constructor handles everything + // Get or create a queue + const queue = queueManager.getQueue('test-queue', { + workers: 1, + }); + + // Add job + const job = await queue.add('test-job', { + handler: 'test-handler', + operation: 'test-operation', + payload: { message: 'Hello, Queue!' }, + }); + + expect(job.name).toBe('test-job'); + + // Wait for processing + await new Promise(resolve => setTimeout(resolve, 100)); + + expect(processedPayload).toEqual({ message: 'Hello, Queue!' }); + }); + + test('should handle missing handler gracefully', async () => { + queueManager = new QueueManager({ + queueName: 'test-queue', + redis: redisConfig, + workers: 1, + }); + + // No need to initialize anymore - constructor handles everything + + const job = await queueManager.add('test-job', { + handler: 'non-existent', + operation: 'test-operation', + payload: { test: true }, + }); + + // Wait for job to fail + await new Promise(resolve => setTimeout(resolve, 100)); + + const failed = await job.isFailed(); + expect(failed).toBe(true); + }); + + test('should add multiple jobs in bulk', async () => { + let processedCount = 0; + + handlerRegistry.register('bulk-handler', { + process: async _payload => { + processedCount++; + return { processed: true }; + }, + }); + + queueManager = new QueueManager({ + queueName: 'test-queue', + redis: redisConfig, + workers: 2, + concurrency: 5, + }); + + // No need to initialize anymore - constructor handles everything + + const jobs = await queueManager.addBulk([ + { + name: 'job1', + data: { handler: 'bulk-handler', operation: 'process', payload: { id: 1 } }, + }, + { + name: 'job2', + data: { handler: 'bulk-handler', operation: 'process', payload: 
{ id: 2 } }, + }, + { + name: 'job3', + data: { handler: 'bulk-handler', operation: 'process', payload: { id: 3 } }, + }, + ]); + + expect(jobs.length).toBe(3); + + // Wait for processing + await new Promise(resolve => setTimeout(resolve, 200)); + + expect(processedCount).toBe(3); + }); + + test('should get queue statistics', async () => { + queueManager = new QueueManager({ + queueName: 'test-queue', + redis: redisConfig, + workers: 0, // No workers, jobs will stay in waiting + }); + + // No need to initialize anymore - constructor handles everything + + // Add some jobs + await queueManager.add('job1', { + handler: 'test', + operation: 'test', + payload: { id: 1 }, + }); + + await queueManager.add('job2', { + handler: 'test', + operation: 'test', + payload: { id: 2 }, + }); + + const stats = await queueManager.getStats(); + + expect(stats.waiting).toBe(2); + expect(stats.active).toBe(0); + expect(stats.completed).toBe(0); + expect(stats.failed).toBe(0); + }); + + test('should pause and resume queue', async () => { + let processedCount = 0; + + handlerRegistry.register('pause-test', { + process: async () => { + processedCount++; + return { ok: true }; + }, + }); + + queueManager = new QueueManager({ + queueName: 'test-queue', + redis: redisConfig, + workers: 1, + }); + + // No need to initialize anymore - constructor handles everything + + // Pause queue + await queueManager.pause(); + + // Add job while paused + await queueManager.add('job1', { + handler: 'pause-test', + operation: 'process', + payload: {}, + }); + + // Wait a bit - job should not be processed + await new Promise(resolve => setTimeout(resolve, 100)); + expect(processedCount).toBe(0); + + // Resume queue + await queueManager.resume(); + + // Wait for processing + await new Promise(resolve => setTimeout(resolve, 100)); + expect(processedCount).toBe(1); + }); + }); + + describe('Scheduled Jobs', () => { + test('should register and process scheduled jobs', async () => { + let executionCount = 0; + + 
handlerRegistry.registerWithSchedule({ + name: 'scheduled-handler', + operations: { + 'scheduled-task': async _payload => { + executionCount++; + return { executed: true, timestamp: Date.now() }; + }, + }, + scheduledJobs: [ + { + type: 'test-schedule', + operation: 'scheduled-task', + payload: { test: true }, + cronPattern: '*/1 * * * * *', // Every second + description: 'Test scheduled job', + }, + ], + }); + + queueManager = new QueueManager({ + queueName: 'test-queue', + redis: redisConfig, + workers: 1, + enableScheduledJobs: true, + }); + + // No need to initialize anymore - constructor handles everything + + // Wait for scheduled job to execute + await new Promise(resolve => setTimeout(resolve, 2500)); + + expect(executionCount).toBeGreaterThanOrEqual(2); + }); + }); + + describe('Error Handling', () => { + test('should handle job errors with retries', async () => { + let attemptCount = 0; + + handlerRegistry.register('retry-handler', { + 'failing-operation': async () => { + attemptCount++; + if (attemptCount < 3) { + throw new Error(`Attempt ${attemptCount} failed`); + } + return { success: true }; + }, + }); + + queueManager = new QueueManager({ + queueName: 'test-queue', + redis: redisConfig, + workers: 1, + defaultJobOptions: { + attempts: 3, + backoff: { + type: 'fixed', + delay: 50, + }, + }, + }); + + // No need to initialize anymore - constructor handles everything + + const job = await queueManager.add('retry-job', { + handler: 'retry-handler', + operation: 'failing-operation', + payload: {}, + }); + + // Wait for retries + await new Promise(resolve => setTimeout(resolve, 500)); + + const completed = await job.isCompleted(); + expect(completed).toBe(true); + expect(attemptCount).toBe(3); + }); + }); + + describe('Multiple Handlers', () => { + test('should handle multiple handlers with different operations', async () => { + const results: any[] = []; + + handlerRegistry.register('handler-a', { + 'operation-1': async payload => { + results.push({ 
handler: 'a', op: '1', payload }); + return { handler: 'a', op: '1' }; + }, + 'operation-2': async payload => { + results.push({ handler: 'a', op: '2', payload }); + return { handler: 'a', op: '2' }; + }, + }); + + handlerRegistry.register('handler-b', { + 'operation-1': async payload => { + results.push({ handler: 'b', op: '1', payload }); + return { handler: 'b', op: '1' }; + }, + }); + + queueManager = new QueueManager({ + queueName: 'test-queue', + redis: redisConfig, + workers: 2, + }); + + // No need to initialize anymore - constructor handles everything + + // Add jobs for different handlers + await queueManager.addBulk([ + { + name: 'job1', + data: { handler: 'handler-a', operation: 'operation-1', payload: { id: 1 } }, + }, + { + name: 'job2', + data: { handler: 'handler-a', operation: 'operation-2', payload: { id: 2 } }, + }, + { + name: 'job3', + data: { handler: 'handler-b', operation: 'operation-1', payload: { id: 3 } }, + }, + ]); + + // Wait for processing + await new Promise(resolve => setTimeout(resolve, 200)); + + expect(results.length).toBe(3); + expect(results).toContainEqual({ handler: 'a', op: '1', payload: { id: 1 } }); + expect(results).toContainEqual({ handler: 'a', op: '2', payload: { id: 2 } }); + expect(results).toContainEqual({ handler: 'b', op: '1', payload: { id: 3 } }); + }); + }); +}); diff --git a/libs/queue/test/queue-metrics.test.ts b/libs/core/queue/test/queue-metrics.test.ts similarity index 81% rename from libs/queue/test/queue-metrics.test.ts rename to libs/core/queue/test/queue-metrics.test.ts index 4c8acb5..d6fd985 100644 --- a/libs/queue/test/queue-metrics.test.ts +++ b/libs/core/queue/test/queue-metrics.test.ts @@ -1,303 +1,327 @@ -import { describe, test, expect, beforeEach, afterEach } from 'bun:test'; -import { Queue, QueueEvents, Worker } from 'bullmq'; -import { QueueMetricsCollector } from '../src/queue-metrics'; -import { getRedisConnection } from '../src/utils'; - -// Suppress Redis connection errors in tests 
-process.on('unhandledRejection', (reason, promise) => { - if (reason && typeof reason === 'object' && 'message' in reason) { - const message = (reason as Error).message; - if (message.includes('Connection is closed') || - message.includes('Connection is in monitoring mode')) { - return; - } - } - console.error('Unhandled Rejection at:', promise, 'reason:', reason); -}); - -describe('QueueMetricsCollector', () => { - let queue: Queue; - let queueEvents: QueueEvents; - let metricsCollector: QueueMetricsCollector; - let worker: Worker; - let connection: any; - - const redisConfig = { - host: 'localhost', - port: 6379, - password: '', - db: 0, - }; - - beforeEach(async () => { - connection = getRedisConnection(redisConfig); - - // Create queue and events - queue = new Queue('metrics-test-queue', { connection }); - queueEvents = new QueueEvents('metrics-test-queue', { connection }); - - // Create metrics collector - metricsCollector = new QueueMetricsCollector(queue, queueEvents); - - // Wait for connections - await queue.waitUntilReady(); - await queueEvents.waitUntilReady(); - }); - - afterEach(async () => { - try { - if (worker) { - await worker.close(); - } - await queueEvents.close(); - await queue.close(); - } catch { - // Ignore cleanup errors - } - await new Promise(resolve => setTimeout(resolve, 50)); - }); - - describe('Job Count Metrics', () => { - test('should collect basic job counts', async () => { - // Add jobs in different states - await queue.add('waiting-job', { test: true }); - await queue.add('delayed-job', { test: true }, { delay: 60000 }); - - const metrics = await metricsCollector.collect(); - - expect(metrics.waiting).toBe(1); - expect(metrics.delayed).toBe(1); - expect(metrics.active).toBe(0); - expect(metrics.completed).toBe(0); - expect(metrics.failed).toBe(0); - }); - - test('should track completed and failed jobs', async () => { - let jobCount = 0; - - // Create worker that alternates between success and failure - worker = new 
Worker('metrics-test-queue', async () => { - jobCount++; - if (jobCount % 2 === 0) { - throw new Error('Test failure'); - } - return { success: true }; - }, { connection }); - - // Add jobs - await queue.add('job1', { test: 1 }); - await queue.add('job2', { test: 2 }); - await queue.add('job3', { test: 3 }); - await queue.add('job4', { test: 4 }); - - // Wait for processing - await new Promise(resolve => setTimeout(resolve, 200)); - - const metrics = await metricsCollector.collect(); - - expect(metrics.completed).toBe(2); - expect(metrics.failed).toBe(2); - }); - }); - - describe('Processing Time Metrics', () => { - test('should track processing times', async () => { - const processingTimes = [50, 100, 150, 200, 250]; - let jobIndex = 0; - - // Create worker with variable processing times - worker = new Worker('metrics-test-queue', async () => { - const delay = processingTimes[jobIndex++] || 100; - await new Promise(resolve => setTimeout(resolve, delay)); - return { processed: true }; - }, { connection }); - - // Add jobs - for (let i = 0; i < processingTimes.length; i++) { - await queue.add(`job${i}`, { index: i }); - } - - // Wait for processing - await new Promise(resolve => setTimeout(resolve, 1500)); - - const metrics = await metricsCollector.collect(); - - expect(metrics.processingTime.avg).toBeGreaterThan(0); - expect(metrics.processingTime.min).toBeGreaterThanOrEqual(50); - expect(metrics.processingTime.max).toBeLessThanOrEqual(300); - expect(metrics.processingTime.p95).toBeGreaterThan(metrics.processingTime.avg); - }); - - test('should handle empty processing times', async () => { - const metrics = await metricsCollector.collect(); - - expect(metrics.processingTime).toEqual({ - avg: 0, - min: 0, - max: 0, - p95: 0, - p99: 0, - }); - }); - }); - - describe('Throughput Metrics', () => { - test('should calculate throughput correctly', async () => { - // Create fast worker - worker = new Worker('metrics-test-queue', async () => { - return { success: true }; - 
}, { connection, concurrency: 5 }); - - // Add multiple jobs - const jobPromises = []; - for (let i = 0; i < 10; i++) { - jobPromises.push(queue.add(`job${i}`, { index: i })); - } - await Promise.all(jobPromises); - - // Wait for processing - await new Promise(resolve => setTimeout(resolve, 500)); - - const metrics = await metricsCollector.collect(); - - expect(metrics.throughput.completedPerMinute).toBeGreaterThan(0); - expect(metrics.throughput.totalPerMinute).toBe( - metrics.throughput.completedPerMinute + metrics.throughput.failedPerMinute - ); - }); - }); - - describe('Queue Health', () => { - test('should report healthy queue', async () => { - const metrics = await metricsCollector.collect(); - - expect(metrics.isHealthy).toBe(true); - expect(metrics.healthIssues).toEqual([]); - }); - - test('should detect high failure rate', async () => { - // Create worker that always fails - worker = new Worker('metrics-test-queue', async () => { - throw new Error('Always fails'); - }, { connection }); - - // Add jobs - for (let i = 0; i < 10; i++) { - await queue.add(`job${i}`, { index: i }); - } - - // Wait for failures - await new Promise(resolve => setTimeout(resolve, 500)); - - const metrics = await metricsCollector.collect(); - - expect(metrics.isHealthy).toBe(false); - expect(metrics.healthIssues).toContain( - expect.stringMatching(/High failure rate/) - ); - }); - - test('should detect large queue backlog', async () => { - // Add many jobs without workers - for (let i = 0; i < 1001; i++) { - await queue.add(`job${i}`, { index: i }); - } - - const metrics = await metricsCollector.collect(); - - expect(metrics.isHealthy).toBe(false); - expect(metrics.healthIssues).toContain( - expect.stringMatching(/Large queue backlog/) - ); - }); - }); - - describe('Oldest Waiting Job', () => { - test('should track oldest waiting job', async () => { - const beforeAdd = Date.now(); - - // Add jobs with delays - await queue.add('old-job', { test: true }); - await new Promise(resolve 
=> setTimeout(resolve, 100)); - await queue.add('new-job', { test: true }); - - const metrics = await metricsCollector.collect(); - - expect(metrics.oldestWaitingJob).toBeDefined(); - expect(metrics.oldestWaitingJob!.getTime()).toBeGreaterThanOrEqual(beforeAdd); - }); - - test('should return null when no waiting jobs', async () => { - // Create worker that processes immediately - worker = new Worker('metrics-test-queue', async () => { - return { success: true }; - }, { connection }); - - const metrics = await metricsCollector.collect(); - expect(metrics.oldestWaitingJob).toBe(null); - }); - }); - - describe('Metrics Report', () => { - test('should generate formatted report', async () => { - // Add some jobs - await queue.add('job1', { test: true }); - await queue.add('job2', { test: true }, { delay: 5000 }); - - const report = await metricsCollector.getReport(); - - expect(report).toContain('Queue Metrics Report'); - expect(report).toContain('Status:'); - expect(report).toContain('Job Counts:'); - expect(report).toContain('Performance:'); - expect(report).toContain('Throughput:'); - expect(report).toContain('Waiting: 1'); - expect(report).toContain('Delayed: 1'); - }); - - test('should include health issues in report', async () => { - // Add many jobs to trigger health issue - for (let i = 0; i < 1001; i++) { - await queue.add(`job${i}`, { index: i }); - } - - const report = await metricsCollector.getReport(); - - expect(report).toContain('Issues Detected'); - expect(report).toContain('Health Issues:'); - expect(report).toContain('Large queue backlog'); - }); - }); - - describe('Prometheus Metrics', () => { - test('should export metrics in Prometheus format', async () => { - // Add some jobs and process them - worker = new Worker('metrics-test-queue', async () => { - await new Promise(resolve => setTimeout(resolve, 50)); - return { success: true }; - }, { connection }); - - await queue.add('job1', { test: true }); - await queue.add('job2', { test: true }); - - // 
Wait for processing - await new Promise(resolve => setTimeout(resolve, 200)); - - const prometheusMetrics = await metricsCollector.getPrometheusMetrics(); - - // Check format - expect(prometheusMetrics).toContain('# HELP queue_jobs_total'); - expect(prometheusMetrics).toContain('# TYPE queue_jobs_total gauge'); - expect(prometheusMetrics).toContain('queue_jobs_total{queue="metrics-test-queue",status="completed"}'); - - expect(prometheusMetrics).toContain('# HELP queue_processing_time_seconds'); - expect(prometheusMetrics).toContain('# TYPE queue_processing_time_seconds summary'); - - expect(prometheusMetrics).toContain('# HELP queue_throughput_per_minute'); - expect(prometheusMetrics).toContain('# TYPE queue_throughput_per_minute gauge'); - - expect(prometheusMetrics).toContain('# HELP queue_health'); - expect(prometheusMetrics).toContain('# TYPE queue_health gauge'); - }); - }); -}); \ No newline at end of file +import { Queue, QueueEvents, Worker } from 'bullmq'; +import { afterEach, beforeEach, describe, expect, test } from 'bun:test'; +import { QueueMetricsCollector } from '../src/queue-metrics'; +import { getRedisConnection } from '../src/utils'; + +// Suppress Redis connection errors in tests +process.on('unhandledRejection', (reason, promise) => { + if (reason && typeof reason === 'object' && 'message' in reason) { + const message = (reason as Error).message; + if ( + message.includes('Connection is closed') || + message.includes('Connection is in monitoring mode') + ) { + return; + } + } + console.error('Unhandled Rejection at:', promise, 'reason:', reason); +}); + +describe('QueueMetricsCollector', () => { + let queue: Queue; + let queueEvents: QueueEvents; + let metricsCollector: QueueMetricsCollector; + let worker: Worker; + let connection: any; + + const redisConfig = { + host: 'localhost', + port: 6379, + password: '', + db: 0, + }; + + beforeEach(async () => { + connection = getRedisConnection(redisConfig); + + // Create queue and events + queue = new 
Queue('metrics-test-queue', { connection }); + queueEvents = new QueueEvents('metrics-test-queue', { connection }); + + // Create metrics collector + metricsCollector = new QueueMetricsCollector(queue, queueEvents); + + // Wait for connections + await queue.waitUntilReady(); + await queueEvents.waitUntilReady(); + }); + + afterEach(async () => { + try { + if (worker) { + await worker.close(); + } + await queueEvents.close(); + await queue.close(); + } catch { + // Ignore cleanup errors + } + await new Promise(resolve => setTimeout(resolve, 50)); + }); + + describe('Job Count Metrics', () => { + test('should collect basic job counts', async () => { + // Add jobs in different states + await queue.add('waiting-job', { test: true }); + await queue.add('delayed-job', { test: true }, { delay: 60000 }); + + const metrics = await metricsCollector.collect(); + + expect(metrics.waiting).toBe(1); + expect(metrics.delayed).toBe(1); + expect(metrics.active).toBe(0); + expect(metrics.completed).toBe(0); + expect(metrics.failed).toBe(0); + }); + + test('should track completed and failed jobs', async () => { + let jobCount = 0; + + // Create worker that alternates between success and failure + worker = new Worker( + 'metrics-test-queue', + async () => { + jobCount++; + if (jobCount % 2 === 0) { + throw new Error('Test failure'); + } + return { success: true }; + }, + { connection } + ); + + // Add jobs + await queue.add('job1', { test: 1 }); + await queue.add('job2', { test: 2 }); + await queue.add('job3', { test: 3 }); + await queue.add('job4', { test: 4 }); + + // Wait for processing + await new Promise(resolve => setTimeout(resolve, 200)); + + const metrics = await metricsCollector.collect(); + + expect(metrics.completed).toBe(2); + expect(metrics.failed).toBe(2); + }); + }); + + describe('Processing Time Metrics', () => { + test('should track processing times', async () => { + const processingTimes = [50, 100, 150, 200, 250]; + let jobIndex = 0; + + // Create worker with 
variable processing times + worker = new Worker( + 'metrics-test-queue', + async () => { + const delay = processingTimes[jobIndex++] || 100; + await new Promise(resolve => setTimeout(resolve, delay)); + return { processed: true }; + }, + { connection } + ); + + // Add jobs + for (let i = 0; i < processingTimes.length; i++) { + await queue.add(`job${i}`, { index: i }); + } + + // Wait for processing + await new Promise(resolve => setTimeout(resolve, 1500)); + + const metrics = await metricsCollector.collect(); + + expect(metrics.processingTime.avg).toBeGreaterThan(0); + expect(metrics.processingTime.min).toBeGreaterThanOrEqual(50); + expect(metrics.processingTime.max).toBeLessThanOrEqual(300); + expect(metrics.processingTime.p95).toBeGreaterThan(metrics.processingTime.avg); + }); + + test('should handle empty processing times', async () => { + const metrics = await metricsCollector.collect(); + + expect(metrics.processingTime).toEqual({ + avg: 0, + min: 0, + max: 0, + p95: 0, + p99: 0, + }); + }); + }); + + describe('Throughput Metrics', () => { + test('should calculate throughput correctly', async () => { + // Create fast worker + worker = new Worker( + 'metrics-test-queue', + async () => { + return { success: true }; + }, + { connection, concurrency: 5 } + ); + + // Add multiple jobs + const jobPromises = []; + for (let i = 0; i < 10; i++) { + jobPromises.push(queue.add(`job${i}`, { index: i })); + } + await Promise.all(jobPromises); + + // Wait for processing + await new Promise(resolve => setTimeout(resolve, 500)); + + const metrics = await metricsCollector.collect(); + + expect(metrics.throughput.completedPerMinute).toBeGreaterThan(0); + expect(metrics.throughput.totalPerMinute).toBe( + metrics.throughput.completedPerMinute + metrics.throughput.failedPerMinute + ); + }); + }); + + describe('Queue Health', () => { + test('should report healthy queue', async () => { + const metrics = await metricsCollector.collect(); + + expect(metrics.isHealthy).toBe(true); + 
expect(metrics.healthIssues).toEqual([]); + }); + + test('should detect high failure rate', async () => { + // Create worker that always fails + worker = new Worker( + 'metrics-test-queue', + async () => { + throw new Error('Always fails'); + }, + { connection } + ); + + // Add jobs + for (let i = 0; i < 10; i++) { + await queue.add(`job${i}`, { index: i }); + } + + // Wait for failures + await new Promise(resolve => setTimeout(resolve, 500)); + + const metrics = await metricsCollector.collect(); + + expect(metrics.isHealthy).toBe(false); + expect(metrics.healthIssues).toEqual(expect.arrayContaining([expect.stringMatching(/High failure rate/)])); + }); + + test('should detect large queue backlog', async () => { + // Add many jobs without workers + for (let i = 0; i < 1001; i++) { + await queue.add(`job${i}`, { index: i }); + } + + const metrics = await metricsCollector.collect(); + + expect(metrics.isHealthy).toBe(false); + expect(metrics.healthIssues).toEqual(expect.arrayContaining([expect.stringMatching(/Large queue backlog/)])); + }); + }); + + describe('Oldest Waiting Job', () => { + test('should track oldest waiting job', async () => { + const beforeAdd = Date.now(); + + // Add jobs with delays + await queue.add('old-job', { test: true }); + await new Promise(resolve => setTimeout(resolve, 100)); + await queue.add('new-job', { test: true }); + + const metrics = await metricsCollector.collect(); + + expect(metrics.oldestWaitingJob).toBeDefined(); + expect(metrics.oldestWaitingJob!.getTime()).toBeGreaterThanOrEqual(beforeAdd); + }); + + test('should return null when no waiting jobs', async () => { + // Create worker that processes immediately + worker = new Worker( + 'metrics-test-queue', + async () => { + return { success: true }; + }, + { connection } + ); + + const metrics = await metricsCollector.collect(); + expect(metrics.oldestWaitingJob).toBe(null); + }); + }); + + describe('Metrics Report', () => { + test('should generate formatted report', async () => { + // Add some jobs + await queue.add('job1', { 
test: true }); + await queue.add('job2', { test: true }, { delay: 5000 }); + + const report = await metricsCollector.getReport(); + + expect(report).toContain('Queue Metrics Report'); + expect(report).toContain('Status:'); + expect(report).toContain('Job Counts:'); + expect(report).toContain('Performance:'); + expect(report).toContain('Throughput:'); + expect(report).toContain('Waiting: 1'); + expect(report).toContain('Delayed: 1'); + }); + + test('should include health issues in report', async () => { + // Add many jobs to trigger health issue + for (let i = 0; i < 1001; i++) { + await queue.add(`job${i}`, { index: i }); + } + + const report = await metricsCollector.getReport(); + + expect(report).toContain('Issues Detected'); + expect(report).toContain('Health Issues:'); + expect(report).toContain('Large queue backlog'); + }); + }); + + describe('Prometheus Metrics', () => { + test('should export metrics in Prometheus format', async () => { + // Add some jobs and process them + worker = new Worker( + 'metrics-test-queue', + async () => { + await new Promise(resolve => setTimeout(resolve, 50)); + return { success: true }; + }, + { connection } + ); + + await queue.add('job1', { test: true }); + await queue.add('job2', { test: true }); + + // Wait for processing + await new Promise(resolve => setTimeout(resolve, 200)); + + const prometheusMetrics = await metricsCollector.getPrometheusMetrics(); + + // Check format + expect(prometheusMetrics).toContain('# HELP queue_jobs_total'); + expect(prometheusMetrics).toContain('# TYPE queue_jobs_total gauge'); + expect(prometheusMetrics).toContain( + 'queue_jobs_total{queue="metrics-test-queue",status="completed"}' + ); + + expect(prometheusMetrics).toContain('# HELP queue_processing_time_seconds'); + expect(prometheusMetrics).toContain('# TYPE queue_processing_time_seconds summary'); + + expect(prometheusMetrics).toContain('# HELP queue_throughput_per_minute'); + expect(prometheusMetrics).toContain('# TYPE 
queue_throughput_per_minute gauge'); + + expect(prometheusMetrics).toContain('# HELP queue_health'); + expect(prometheusMetrics).toContain('# TYPE queue_health gauge'); + }); + }); +}); diff --git a/libs/queue/test/queue-simple.test.ts b/libs/core/queue/test/queue-simple.test.ts similarity index 85% rename from libs/queue/test/queue-simple.test.ts rename to libs/core/queue/test/queue-simple.test.ts index 2820c21..31f14e8 100644 --- a/libs/queue/test/queue-simple.test.ts +++ b/libs/core/queue/test/queue-simple.test.ts @@ -1,81 +1,81 @@ -import { describe, test, expect, beforeEach, afterEach } from 'bun:test'; -import { QueueManager, handlerRegistry } from '../src'; - -describe('QueueManager Simple Tests', () => { - let queueManager: QueueManager; - - // Assumes Redis is running locally on default port - const redisConfig = { - host: 'localhost', - port: 6379, - }; - - beforeEach(() => { - handlerRegistry.clear(); - }); - - afterEach(async () => { - if (queueManager) { - try { - await queueManager.shutdown(); - } catch { - // Ignore errors during cleanup - } - } - }); - - test('should create queue manager instance', () => { - queueManager = new QueueManager({ - queueName: 'test-queue', - redis: redisConfig, - }); - - expect(queueManager.queueName).toBe('test-queue'); - }); - - test('should handle missing Redis gracefully', async () => { - // Use a port that's likely not running Redis - queueManager = new QueueManager({ - queueName: 'test-queue', - redis: { - host: 'localhost', - port: 9999, - }, - }); - - await expect(queueManager.initialize()).rejects.toThrow(); - }); - - test('handler registry should work', () => { - const testHandler = async (payload: any) => { - return { success: true, payload }; - }; - - handlerRegistry.register('test-handler', { - 'test-op': testHandler, - }); - - const handler = handlerRegistry.getHandler('test-handler', 'test-op'); - expect(handler).toBe(testHandler); - }); - - test('handler registry should return null for missing handler', 
() => { - const handler = handlerRegistry.getHandler('missing', 'op'); - expect(handler).toBe(null); - }); - - test('should get handler statistics', () => { - handlerRegistry.register('handler1', { - 'op1': async () => ({}), - 'op2': async () => ({}), - }); - - handlerRegistry.register('handler2', { - 'op1': async () => ({}), - }); - - const stats = handlerRegistry.getStats(); - expect(stats.handlers).toBe(2); - expect(stats.totalOperations).toBe(3); - }); -}); \ No newline at end of file +import { afterEach, beforeEach, describe, expect, test } from 'bun:test'; +import { handlerRegistry, QueueManager } from '../src'; + +describe('QueueManager Simple Tests', () => { + let queueManager: QueueManager; + + // Assumes Redis is running locally on default port + const redisConfig = { + host: 'localhost', + port: 6379, + }; + + beforeEach(() => { + handlerRegistry.clear(); + }); + + afterEach(async () => { + if (queueManager) { + try { + await queueManager.shutdown(); + } catch { + // Ignore errors during cleanup + } + } + }); + + test('should create queue manager instance', () => { + queueManager = new QueueManager({ + queueName: 'test-queue', + redis: redisConfig, + }); + + expect(queueManager.queueName).toBe('test-queue'); + }); + + test('should handle missing Redis gracefully', async () => { + // Use a port that's likely not running Redis + queueManager = new QueueManager({ + queueName: 'test-queue', + redis: { + host: 'localhost', + port: 9999, + }, + }); + + await expect(queueManager.initialize()).rejects.toThrow(); + }); + + test('handler registry should work', () => { + const testHandler = async (payload: any) => { + return { success: true, payload }; + }; + + handlerRegistry.register('test-handler', { + 'test-op': testHandler, + }); + + const handler = handlerRegistry.getHandler('test-handler', 'test-op'); + expect(handler).toBe(testHandler); + }); + + test('handler registry should return null for missing handler', () => { + const handler = 
handlerRegistry.getHandler('missing', 'op'); + expect(handler).toBe(null); + }); + + test('should get handler statistics', () => { + handlerRegistry.register('handler1', { + op1: async () => ({}), + op2: async () => ({}), + }); + + handlerRegistry.register('handler2', { + op1: async () => ({}), + }); + + const stats = handlerRegistry.getStats(); + expect(stats.handlers).toBe(2); + expect(stats.totalOperations).toBe(3); + }); +}); diff --git a/libs/queue/test/rate-limiter.test.ts b/libs/core/queue/test/rate-limiter.test.ts similarity index 94% rename from libs/queue/test/rate-limiter.test.ts rename to libs/core/queue/test/rate-limiter.test.ts index 0007abb..255de6e 100644 --- a/libs/queue/test/rate-limiter.test.ts +++ b/libs/core/queue/test/rate-limiter.test.ts @@ -1,309 +1,311 @@ -import { describe, test, expect, beforeEach, afterEach } from 'bun:test'; -import { QueueRateLimiter } from '../src/rate-limiter'; -import { getRedisConnection } from '../src/utils'; -import Redis from 'ioredis'; - -// Suppress Redis connection errors in tests -process.on('unhandledRejection', (reason, promise) => { - if (reason && typeof reason === 'object' && 'message' in reason) { - const message = (reason as Error).message; - if (message.includes('Connection is closed') || - message.includes('Connection is in monitoring mode')) { - return; - } - } - console.error('Unhandled Rejection at:', promise, 'reason:', reason); -}); - -describe('QueueRateLimiter', () => { - let redisClient: Redis; - let rateLimiter: QueueRateLimiter; - - const redisConfig = { - host: 'localhost', - port: 6379, - password: '', - db: 0, - }; - - beforeEach(async () => { - // Create Redis client - redisClient = new Redis(getRedisConnection(redisConfig)); - - // Clear Redis keys for tests - try { - const keys = await redisClient.keys('rl:*'); - if (keys.length > 0) { - await redisClient.del(...keys); - } - } catch { - // Ignore cleanup errors - } - rateLimiter = new QueueRateLimiter(redisClient); - }); - - 
afterEach(async () => { - if (redisClient) { - try { - await redisClient.quit(); - } catch { - // Ignore cleanup errors - } - } - await new Promise(resolve => setTimeout(resolve, 50)); - }); - - describe('Rate Limit Rules', () => { - test('should add and enforce global rate limit', async () => { - rateLimiter.addRule({ - level: 'global', - config: { - points: 5, - duration: 1, // 1 second - }, - }); - - // Consume 5 points - for (let i = 0; i < 5; i++) { - const result = await rateLimiter.checkLimit('any-handler', 'any-operation'); - expect(result.allowed).toBe(true); - } - - // 6th request should be blocked - const blocked = await rateLimiter.checkLimit('any-handler', 'any-operation'); - expect(blocked.allowed).toBe(false); - expect(blocked.retryAfter).toBeGreaterThan(0); - }); - - test('should add and enforce handler-level rate limit', async () => { - rateLimiter.addRule({ - level: 'handler', - handler: 'api-handler', - config: { - points: 3, - duration: 1, - }, - }); - - // api-handler should be limited - for (let i = 0; i < 3; i++) { - const result = await rateLimiter.checkLimit('api-handler', 'any-operation'); - expect(result.allowed).toBe(true); - } - - const blocked = await rateLimiter.checkLimit('api-handler', 'any-operation'); - expect(blocked.allowed).toBe(false); - - // Other handlers should not be limited - const otherHandler = await rateLimiter.checkLimit('other-handler', 'any-operation'); - expect(otherHandler.allowed).toBe(true); - }); - - test('should add and enforce operation-level rate limit', async () => { - rateLimiter.addRule({ - level: 'operation', - handler: 'data-handler', - operation: 'fetch-prices', - config: { - points: 2, - duration: 1, - }, - }); - - // Specific operation should be limited - for (let i = 0; i < 2; i++) { - const result = await rateLimiter.checkLimit('data-handler', 'fetch-prices'); - expect(result.allowed).toBe(true); - } - - const blocked = await rateLimiter.checkLimit('data-handler', 'fetch-prices'); - 
expect(blocked.allowed).toBe(false); - - // Other operations on same handler should work - const otherOp = await rateLimiter.checkLimit('data-handler', 'fetch-volume'); - expect(otherOp.allowed).toBe(true); - }); - - test('should enforce multiple rate limits (most restrictive wins)', async () => { - // Global: 10/sec - rateLimiter.addRule({ - level: 'global', - config: { points: 10, duration: 1 }, - }); - - // Handler: 5/sec - rateLimiter.addRule({ - level: 'handler', - handler: 'test-handler', - config: { points: 5, duration: 1 }, - }); - - // Operation: 2/sec - rateLimiter.addRule({ - level: 'operation', - handler: 'test-handler', - operation: 'test-op', - config: { points: 2, duration: 1 }, - }); - - // Should be limited by operation level (most restrictive) - for (let i = 0; i < 2; i++) { - const result = await rateLimiter.checkLimit('test-handler', 'test-op'); - expect(result.allowed).toBe(true); - } - - const blocked = await rateLimiter.checkLimit('test-handler', 'test-op'); - expect(blocked.allowed).toBe(false); - }); - }); - - describe('Rate Limit Status', () => { - test('should get rate limit status', async () => { - rateLimiter.addRule({ - level: 'handler', - handler: 'status-test', - config: { points: 10, duration: 60 }, - }); - - // Consume some points - await rateLimiter.checkLimit('status-test', 'operation'); - await rateLimiter.checkLimit('status-test', 'operation'); - - const status = await rateLimiter.getStatus('status-test', 'operation'); - expect(status.handler).toBe('status-test'); - expect(status.operation).toBe('operation'); - expect(status.limits.length).toBe(1); - expect(status.limits[0].points).toBe(10); - expect(status.limits[0].remaining).toBe(8); - }); - - test('should show multiple applicable limits in status', async () => { - rateLimiter.addRule({ - level: 'global', - config: { points: 100, duration: 60 }, - }); - - rateLimiter.addRule({ - level: 'handler', - handler: 'multi-test', - config: { points: 50, duration: 60 }, - }); - - 
const status = await rateLimiter.getStatus('multi-test', 'operation'); - expect(status.limits.length).toBe(2); - - const globalLimit = status.limits.find(l => l.level === 'global'); - const handlerLimit = status.limits.find(l => l.level === 'handler'); - - expect(globalLimit?.points).toBe(100); - expect(handlerLimit?.points).toBe(50); - }); - }); - - describe('Rate Limit Management', () => { - test('should reset rate limits', async () => { - rateLimiter.addRule({ - level: 'handler', - handler: 'reset-test', - config: { points: 1, duration: 60 }, - }); - - // Consume the limit - await rateLimiter.checkLimit('reset-test', 'operation'); - const blocked = await rateLimiter.checkLimit('reset-test', 'operation'); - expect(blocked.allowed).toBe(false); - - // Reset limits - await rateLimiter.reset('reset-test'); - - // Should be allowed again - const afterReset = await rateLimiter.checkLimit('reset-test', 'operation'); - expect(afterReset.allowed).toBe(true); - }); - - test('should get all rules', async () => { - rateLimiter.addRule({ - level: 'global', - config: { points: 100, duration: 60 }, - }); - - rateLimiter.addRule({ - level: 'handler', - handler: 'test', - config: { points: 50, duration: 60 }, - }); - - const rules = rateLimiter.getRules(); - expect(rules.length).toBe(2); - expect(rules[0].level).toBe('global'); - expect(rules[1].level).toBe('handler'); - }); - - test('should remove specific rule', async () => { - rateLimiter.addRule({ - level: 'handler', - handler: 'remove-test', - config: { points: 1, duration: 1 }, - }); - - // Verify rule exists - await rateLimiter.checkLimit('remove-test', 'op'); - const blocked = await rateLimiter.checkLimit('remove-test', 'op'); - expect(blocked.allowed).toBe(false); - - // Remove rule - const removed = rateLimiter.removeRule('handler', 'remove-test'); - expect(removed).toBe(true); - - // Should not be limited anymore - const afterRemove = await rateLimiter.checkLimit('remove-test', 'op'); - 
expect(afterRemove.allowed).toBe(true); - }); - }); - - describe('Block Duration', () => { - test('should block for specified duration after limit exceeded', async () => { - rateLimiter.addRule({ - level: 'handler', - handler: 'block-test', - config: { - points: 1, - duration: 1, - blockDuration: 2, // Block for 2 seconds - }, - }); - - // Consume limit - await rateLimiter.checkLimit('block-test', 'op'); - - // Should be blocked - const blocked = await rateLimiter.checkLimit('block-test', 'op'); - expect(blocked.allowed).toBe(false); - expect(blocked.retryAfter).toBeGreaterThanOrEqual(1000); // At least 1 second - }); - }); - - describe('Error Handling', () => { - test('should allow requests when rate limiter fails', async () => { - // Create a rate limiter with invalid redis client - const badRedis = new Redis({ - host: 'invalid-host', - port: 9999, - retryStrategy: () => null, // Disable retries - }); - - const failingLimiter = new QueueRateLimiter(badRedis); - - failingLimiter.addRule({ - level: 'global', - config: { points: 1, duration: 1 }, - }); - - // Should allow even though Redis is not available - const result = await failingLimiter.checkLimit('test', 'test'); - expect(result.allowed).toBe(true); - - badRedis.disconnect(); - }); - }); -}); \ No newline at end of file +import { afterEach, beforeEach, describe, expect, test } from 'bun:test'; +import Redis from 'ioredis'; +import { QueueRateLimiter } from '../src/rate-limiter'; +import { getRedisConnection } from '../src/utils'; + +// Suppress Redis connection errors in tests +process.on('unhandledRejection', (reason, promise) => { + if (reason && typeof reason === 'object' && 'message' in reason) { + const message = (reason as Error).message; + if ( + message.includes('Connection is closed') || + message.includes('Connection is in monitoring mode') + ) { + return; + } + } + console.error('Unhandled Rejection at:', promise, 'reason:', reason); +}); + +describe('QueueRateLimiter', () => { + let redisClient: 
Redis; + let rateLimiter: QueueRateLimiter; + + const redisConfig = { + host: 'localhost', + port: 6379, + password: '', + db: 0, + }; + + beforeEach(async () => { + // Create Redis client + redisClient = new Redis(getRedisConnection(redisConfig)); + + // Clear Redis keys for tests + try { + const keys = await redisClient.keys('rl:*'); + if (keys.length > 0) { + await redisClient.del(...keys); + } + } catch { + // Ignore cleanup errors + } + rateLimiter = new QueueRateLimiter(redisClient); + }); + + afterEach(async () => { + if (redisClient) { + try { + await redisClient.quit(); + } catch { + // Ignore cleanup errors + } + } + await new Promise(resolve => setTimeout(resolve, 50)); + }); + + describe('Rate Limit Rules', () => { + test('should add and enforce global rate limit', async () => { + rateLimiter.addRule({ + level: 'global', + config: { + points: 5, + duration: 1, // 1 second + }, + }); + + // Consume 5 points + for (let i = 0; i < 5; i++) { + const result = await rateLimiter.checkLimit('any-handler', 'any-operation'); + expect(result.allowed).toBe(true); + } + + // 6th request should be blocked + const blocked = await rateLimiter.checkLimit('any-handler', 'any-operation'); + expect(blocked.allowed).toBe(false); + expect(blocked.retryAfter).toBeGreaterThan(0); + }); + + test('should add and enforce handler-level rate limit', async () => { + rateLimiter.addRule({ + level: 'handler', + handler: 'api-handler', + config: { + points: 3, + duration: 1, + }, + }); + + // api-handler should be limited + for (let i = 0; i < 3; i++) { + const result = await rateLimiter.checkLimit('api-handler', 'any-operation'); + expect(result.allowed).toBe(true); + } + + const blocked = await rateLimiter.checkLimit('api-handler', 'any-operation'); + expect(blocked.allowed).toBe(false); + + // Other handlers should not be limited + const otherHandler = await rateLimiter.checkLimit('other-handler', 'any-operation'); + expect(otherHandler.allowed).toBe(true); + }); + + test('should 
add and enforce operation-level rate limit', async () => { + rateLimiter.addRule({ + level: 'operation', + handler: 'data-handler', + operation: 'fetch-prices', + config: { + points: 2, + duration: 1, + }, + }); + + // Specific operation should be limited + for (let i = 0; i < 2; i++) { + const result = await rateLimiter.checkLimit('data-handler', 'fetch-prices'); + expect(result.allowed).toBe(true); + } + + const blocked = await rateLimiter.checkLimit('data-handler', 'fetch-prices'); + expect(blocked.allowed).toBe(false); + + // Other operations on same handler should work + const otherOp = await rateLimiter.checkLimit('data-handler', 'fetch-volume'); + expect(otherOp.allowed).toBe(true); + }); + + test('should enforce multiple rate limits (most restrictive wins)', async () => { + // Global: 10/sec + rateLimiter.addRule({ + level: 'global', + config: { points: 10, duration: 1 }, + }); + + // Handler: 5/sec + rateLimiter.addRule({ + level: 'handler', + handler: 'test-handler', + config: { points: 5, duration: 1 }, + }); + + // Operation: 2/sec + rateLimiter.addRule({ + level: 'operation', + handler: 'test-handler', + operation: 'test-op', + config: { points: 2, duration: 1 }, + }); + + // Should be limited by operation level (most restrictive) + for (let i = 0; i < 2; i++) { + const result = await rateLimiter.checkLimit('test-handler', 'test-op'); + expect(result.allowed).toBe(true); + } + + const blocked = await rateLimiter.checkLimit('test-handler', 'test-op'); + expect(blocked.allowed).toBe(false); + }); + }); + + describe('Rate Limit Status', () => { + test('should get rate limit status', async () => { + rateLimiter.addRule({ + level: 'handler', + handler: 'status-test', + config: { points: 10, duration: 60 }, + }); + + // Consume some points + await rateLimiter.checkLimit('status-test', 'operation'); + await rateLimiter.checkLimit('status-test', 'operation'); + + const status = await rateLimiter.getStatus('status-test', 'operation'); + 
expect(status.handler).toBe('status-test'); + expect(status.operation).toBe('operation'); + expect(status.limits.length).toBe(1); + expect(status.limits[0].points).toBe(10); + expect(status.limits[0].remaining).toBe(8); + }); + + test('should show multiple applicable limits in status', async () => { + rateLimiter.addRule({ + level: 'global', + config: { points: 100, duration: 60 }, + }); + + rateLimiter.addRule({ + level: 'handler', + handler: 'multi-test', + config: { points: 50, duration: 60 }, + }); + + const status = await rateLimiter.getStatus('multi-test', 'operation'); + expect(status.limits.length).toBe(2); + + const globalLimit = status.limits.find(l => l.level === 'global'); + const handlerLimit = status.limits.find(l => l.level === 'handler'); + + expect(globalLimit?.points).toBe(100); + expect(handlerLimit?.points).toBe(50); + }); + }); + + describe('Rate Limit Management', () => { + test('should reset rate limits', async () => { + rateLimiter.addRule({ + level: 'handler', + handler: 'reset-test', + config: { points: 1, duration: 60 }, + }); + + // Consume the limit + await rateLimiter.checkLimit('reset-test', 'operation'); + const blocked = await rateLimiter.checkLimit('reset-test', 'operation'); + expect(blocked.allowed).toBe(false); + + // Reset limits + await rateLimiter.reset('reset-test'); + + // Should be allowed again + const afterReset = await rateLimiter.checkLimit('reset-test', 'operation'); + expect(afterReset.allowed).toBe(true); + }); + + test('should get all rules', async () => { + rateLimiter.addRule({ + level: 'global', + config: { points: 100, duration: 60 }, + }); + + rateLimiter.addRule({ + level: 'handler', + handler: 'test', + config: { points: 50, duration: 60 }, + }); + + const rules = rateLimiter.getRules(); + expect(rules.length).toBe(2); + expect(rules[0].level).toBe('global'); + expect(rules[1].level).toBe('handler'); + }); + + test('should remove specific rule', async () => { + rateLimiter.addRule({ + level: 'handler', + 
handler: 'remove-test', + config: { points: 1, duration: 1 }, + }); + + // Verify rule exists + await rateLimiter.checkLimit('remove-test', 'op'); + const blocked = await rateLimiter.checkLimit('remove-test', 'op'); + expect(blocked.allowed).toBe(false); + + // Remove rule + const removed = rateLimiter.removeRule('handler', 'remove-test'); + expect(removed).toBe(true); + + // Should not be limited anymore + const afterRemove = await rateLimiter.checkLimit('remove-test', 'op'); + expect(afterRemove.allowed).toBe(true); + }); + }); + + describe('Block Duration', () => { + test('should block for specified duration after limit exceeded', async () => { + rateLimiter.addRule({ + level: 'handler', + handler: 'block-test', + config: { + points: 1, + duration: 1, + blockDuration: 2, // Block for 2 seconds + }, + }); + + // Consume limit + await rateLimiter.checkLimit('block-test', 'op'); + + // Should be blocked + const blocked = await rateLimiter.checkLimit('block-test', 'op'); + expect(blocked.allowed).toBe(false); + expect(blocked.retryAfter).toBeGreaterThanOrEqual(1000); // At least 1 second + }); + }); + + describe('Error Handling', () => { + test('should allow requests when rate limiter fails', async () => { + // Create a rate limiter with invalid redis client + const badRedis = new Redis({ + host: 'invalid-host', + port: 9999, + retryStrategy: () => null, // Disable retries + }); + + const failingLimiter = new QueueRateLimiter(badRedis); + + failingLimiter.addRule({ + level: 'global', + config: { points: 1, duration: 1 }, + }); + + // Should allow even though Redis is not available + const result = await failingLimiter.checkLimit('test', 'test'); + expect(result.allowed).toBe(true); + + badRedis.disconnect(); + }); + }); +}); diff --git a/libs/queue/tsconfig.json b/libs/core/queue/tsconfig.json similarity index 61% rename from libs/queue/tsconfig.json rename to libs/core/queue/tsconfig.json index f6dd3d7..8a95639 100644 --- a/libs/queue/tsconfig.json +++ 
b/libs/core/queue/tsconfig.json @@ -1,12 +1,14 @@ { - "extends": "../../tsconfig.lib.json", + "extends": "../../../tsconfig.json", "compilerOptions": { "outDir": "./dist", - "rootDir": "./src" + "rootDir": "./src", + "composite": true }, "include": ["src/**/*"], "references": [ { "path": "../cache" }, + { "path": "../handlers" }, { "path": "../logger" }, { "path": "../types" } ] diff --git a/libs/queue/turbo.json b/libs/core/queue/turbo.json similarity index 100% rename from libs/queue/turbo.json rename to libs/core/queue/turbo.json diff --git a/libs/shutdown/README.md b/libs/core/shutdown/README.md similarity index 100% rename from libs/shutdown/README.md rename to libs/core/shutdown/README.md diff --git a/libs/shutdown/package.json b/libs/core/shutdown/package.json similarity index 100% rename from libs/shutdown/package.json rename to libs/core/shutdown/package.json diff --git a/libs/shutdown/src/index.ts b/libs/core/shutdown/src/index.ts similarity index 92% rename from libs/shutdown/src/index.ts rename to libs/core/shutdown/src/index.ts index b498a06..14319cd 100644 --- a/libs/shutdown/src/index.ts +++ b/libs/core/shutdown/src/index.ts @@ -9,7 +9,12 @@ import type { ShutdownResult } from './types'; // Core shutdown classes and types export { Shutdown } from './shutdown'; -export type { ShutdownCallback, ShutdownOptions, ShutdownResult, PrioritizedShutdownCallback } from './types'; +export type { + ShutdownCallback, + ShutdownOptions, + ShutdownResult, + PrioritizedShutdownCallback, +} from './types'; // Global singleton instance let globalInstance: Shutdown | null = null; @@ -31,7 +36,11 @@ function getGlobalInstance(): Shutdown { /** * Register a cleanup callback that will be executed during shutdown */ -export function onShutdown(callback: () => Promise<void> | void, priority?: number, name?: string): void { +export function onShutdown( + callback: () => Promise<void> | void, + priority?: number, + name?: string +): void { getGlobalInstance().onShutdown(callback,
priority, name); } diff --git a/libs/shutdown/src/shutdown.ts b/libs/core/shutdown/src/shutdown.ts similarity index 95% rename from libs/shutdown/src/shutdown.ts rename to libs/core/shutdown/src/shutdown.ts index 78a0be8..d31a853 100644 --- a/libs/shutdown/src/shutdown.ts +++ b/libs/core/shutdown/src/shutdown.ts @@ -8,7 +8,13 @@ * - Platform-specific signal support (Windows/Unix) */ -import type { PrioritizedShutdownCallback, ShutdownCallback, ShutdownOptions, ShutdownResult } from './types'; +import type { + PrioritizedShutdownCallback, + ShutdownCallback, + ShutdownOptions, + ShutdownResult, +} from './types'; +import { getLogger } from '@stock-bot/logger'; // Global flag that works across all processes/workers declare global { @@ -17,6 +23,7 @@ declare global { export class Shutdown { private static instance: Shutdown | null = null; + private readonly logger = getLogger('shutdown'); private isShuttingDown = false; private signalReceived = false; // Track if shutdown signal was received private shutdownTimeout = 30000; // 30 seconds default @@ -195,7 +202,7 @@ export class Shutdown { } catch (error) { failed++; if (name) { - console.error(`✗ Shutdown failed: ${name} (priority: ${priority})`, error); + this.logger.error(`Shutdown failed: ${name} (priority: ${priority})`, error); } } } diff --git a/libs/shutdown/src/types.ts b/libs/core/shutdown/src/types.ts similarity index 100% rename from libs/shutdown/src/types.ts rename to libs/core/shutdown/src/types.ts diff --git a/libs/core/shutdown/tsconfig.json b/libs/core/shutdown/tsconfig.json new file mode 100644 index 0000000..9405533 --- /dev/null +++ b/libs/core/shutdown/tsconfig.json @@ -0,0 +1,10 @@ +{ + "extends": "../../../tsconfig.json", + "compilerOptions": { + "outDir": "./dist", + "rootDir": "./src", + "composite": true + }, + "include": ["src/**/*"], + "references": [] +} diff --git a/libs/types/package.json b/libs/core/types/package.json similarity index 100% rename from libs/types/package.json rename to 
libs/core/types/package.json diff --git a/libs/types/src/backtesting.ts b/libs/core/types/src/backtesting.ts similarity index 92% rename from libs/types/src/backtesting.ts rename to libs/core/types/src/backtesting.ts index b7700c6..0aa505d 100644 --- a/libs/types/src/backtesting.ts +++ b/libs/core/types/src/backtesting.ts @@ -3,9 +3,9 @@ * Types for strategy backtesting and analysis */ -import type { TradeExecution, TradePerformance } from './trading'; import type { PortfolioAnalysis } from './portfolio'; -import type { RiskMetrics, DrawdownAnalysis } from './risk-metrics'; +import type { DrawdownAnalysis, RiskMetrics } from './risk-metrics'; +import type { TradeExecution, TradePerformance } from './trading'; /** * Backtesting results @@ -31,4 +31,4 @@ export interface BacktestResults { initialCapital: number; /** Final value */ finalValue: number; -} \ No newline at end of file +} diff --git a/libs/core/types/src/decorators.ts b/libs/core/types/src/decorators.ts new file mode 100644 index 0000000..1b004f3 --- /dev/null +++ b/libs/core/types/src/decorators.ts @@ -0,0 +1,41 @@ +/** + * Decorator Type Definitions + * Type definitions for handler decorators + */ + +/** + * Schedule configuration for operations + */ +export interface ScheduleConfig { + cronPattern: string; + priority?: number; + immediately?: boolean; + description?: string; +} + +/** + * Decorator metadata stored on classes + */ +export interface DecoratorMetadata { + handlerName?: string; + operations?: Array<{ + name: string; + methodName: string; + schedules?: ScheduleConfig[]; + }>; + disabled?: boolean; +} + +/** + * Type for decorator factories + */ +export type DecoratorFactory<T> = (target: T, context?: any) => T | void; + +/** + * Type for method decorators + */ +export type MethodDecoratorFactory = ( + target: any, + propertyKey: string, + descriptor?: PropertyDescriptor +) => any; \ No newline at end of file diff --git a/libs/types/src/financial-statements.ts 
b/libs/core/types/src/financial-statements.ts similarity index 98% rename from libs/types/src/financial-statements.ts rename to libs/core/types/src/financial-statements.ts index 20f5fdb..dd4bda6 100644 --- a/libs/types/src/financial-statements.ts +++ b/libs/core/types/src/financial-statements.ts @@ -13,7 +13,7 @@ export interface BalanceSheet { period: string; /** Currency */ currency: string; - + // Assets /** Total current assets */ totalCurrentAssets: number; @@ -29,7 +29,7 @@ export interface BalanceSheet { prepaidExpenses?: number; /** Other current assets */ otherCurrentAssets?: number; - + /** Total non-current assets */ totalNonCurrentAssets: number; /** Property, plant & equipment (net) */ @@ -42,10 +42,10 @@ export interface BalanceSheet { longTermInvestments?: number; /** Other non-current assets */ otherNonCurrentAssets?: number; - + /** Total assets */ totalAssets: number; - + // Liabilities /** Total current liabilities */ totalCurrentLiabilities: number; @@ -57,7 +57,7 @@ export interface BalanceSheet { accruedLiabilities?: number; /** Other current liabilities */ otherCurrentLiabilities?: number; - + /** Total non-current liabilities */ totalNonCurrentLiabilities: number; /** Long-term debt */ @@ -66,10 +66,10 @@ export interface BalanceSheet { deferredTaxLiabilities?: number; /** Other non-current liabilities */ otherNonCurrentLiabilities?: number; - + /** Total liabilities */ totalLiabilities: number; - + // Equity /** Total stockholders' equity */ totalStockholdersEquity: number; @@ -95,14 +95,14 @@ export interface IncomeStatement { period: string; /** Currency */ currency: string; - + /** Total revenue/net sales */ totalRevenue: number; /** Cost of goods sold */ costOfGoodsSold: number; /** Gross profit */ grossProfit: number; - + /** Operating expenses */ operatingExpenses: number; /** Research and development */ @@ -113,24 +113,24 @@ export interface IncomeStatement { depreciationAmortization?: number; /** Other operating expenses */ 
otherOperatingExpenses?: number; - + /** Operating income */ operatingIncome: number; - + /** Interest income */ interestIncome?: number; /** Interest expense */ interestExpense?: number; /** Other income/expense */ otherIncomeExpense?: number; - + /** Income before taxes */ incomeBeforeTaxes: number; /** Income tax expense */ incomeTaxExpense: number; /** Net income */ netIncome: number; - + /** Earnings per share (basic) */ earningsPerShareBasic: number; /** Earnings per share (diluted) */ @@ -151,7 +151,7 @@ export interface CashFlowStatement { period: string; /** Currency */ currency: string; - + // Operating Activities /** Net income */ netIncome: number; @@ -163,8 +163,8 @@ export interface CashFlowStatement { otherOperatingActivities?: number; /** Net cash from operating activities */ netCashFromOperatingActivities: number; - - // Investing Activities + + // Investing Activities /** Capital expenditures */ capitalExpenditures: number; /** Acquisitions */ @@ -175,7 +175,7 @@ export interface CashFlowStatement { otherInvestingActivities?: number; /** Net cash from investing activities */ netCashFromInvestingActivities: number; - + // Financing Activities /** Debt issuance/repayment */ debtIssuanceRepayment?: number; @@ -187,11 +187,11 @@ export interface CashFlowStatement { otherFinancingActivities?: number; /** Net cash from financing activities */ netCashFromFinancingActivities: number; - + /** Net change in cash */ netChangeInCash: number; /** Cash at beginning of period */ cashAtBeginningOfPeriod: number; /** Cash at end of period */ cashAtEndOfPeriod: number; -} \ No newline at end of file +} diff --git a/libs/core/types/src/handlers.ts b/libs/core/types/src/handlers.ts new file mode 100644 index 0000000..9985efc --- /dev/null +++ b/libs/core/types/src/handlers.ts @@ -0,0 +1,73 @@ +/** + * Handler and Queue Types + * Shared types for handler system and queue operations + */ + +// Generic execution context - decoupled from service implementations +export 
interface ExecutionContext { + type: 'http' | 'queue' | 'scheduled' | 'event'; + metadata: { + source?: string; + jobId?: string; + attempts?: number; + timestamp?: number; + traceId?: string; + [key: string]: unknown; + }; +} + +// Simple handler interface +export interface IHandler { + execute(operation: string, input: unknown, context: ExecutionContext): Promise<unknown>; +} + +// Job handler type for queue operations +export interface JobHandler<TPayload = unknown, TResult = unknown> { + (payload: TPayload): Promise<TResult>; +} + +// Type-safe wrapper for creating job handlers +export type TypedJobHandler<TPayload, TResult = unknown> = (payload: TPayload) => Promise<TResult>; + +// Scheduled job configuration +export interface ScheduledJob<T = unknown> { + type: string; + operation: string; + payload?: T; + cronPattern: string; + priority?: number; + description?: string; + immediately?: boolean; + delay?: number; +} + +// Handler configuration +export interface HandlerConfig { + [operation: string]: JobHandler; +} + +// Handler configuration with schedule +export interface HandlerConfigWithSchedule { + name: string; + operations: Record<string, JobHandler>; + scheduledJobs?: ScheduledJob[]; +} + +// Handler metadata for registry +export interface HandlerMetadata { + name: string; + version?: string; + description?: string; + operations: string[]; + scheduledJobs?: ScheduledJob[]; +} + +// Operation metadata for decorators +export interface OperationMetadata { + name: string; + schedules?: string[]; + operation?: string; + description?: string; + validation?: (input: unknown) => boolean; +} + diff --git a/libs/types/src/helpers.ts b/libs/core/types/src/helpers.ts similarity index 99% rename from libs/types/src/helpers.ts rename to libs/core/types/src/helpers.ts index f4cc2ed..835d73b 100644 --- a/libs/types/src/helpers.ts +++ b/libs/core/types/src/helpers.ts @@ -33,4 +33,4 @@ export interface HasVolume { */ export interface HasTimestamp { timestamp: number; -} \ No newline at end of file +} diff --git a/libs/types/src/index.ts b/libs/core/types/src/index.ts similarity index 68% rename 
from libs/types/src/index.ts rename to libs/core/types/src/index.ts index e66c07e..8c5c2ae 100644 --- a/libs/types/src/index.ts +++ b/libs/core/types/src/index.ts @@ -47,3 +47,36 @@ export type { BacktestResults } from './backtesting'; // Export helper types export type { HasClose, HasOHLC, HasTimestamp, HasVolume } from './helpers'; + +// Export handler types +export type { + ExecutionContext, + HandlerConfig, + HandlerConfigWithSchedule, + HandlerMetadata, + IHandler, + JobHandler, + OperationMetadata, + ScheduledJob, + TypedJobHandler, +} from './handlers'; + +// Export service container interface +export type { IServiceContainer } from './service-container'; + +// Export decorator types +export type { + ScheduleConfig, + DecoratorMetadata, + DecoratorFactory, + MethodDecoratorFactory, +} from './decorators'; + +// Export queue types +export type { + JobData, + JobOptions, + QueueStats, + BatchJobData, + QueueWorkerConfig, +} from './queue'; diff --git a/libs/types/src/market-data.ts b/libs/core/types/src/market-data.ts similarity index 99% rename from libs/types/src/market-data.ts rename to libs/core/types/src/market-data.ts index ec82774..4d51fd7 100644 --- a/libs/types/src/market-data.ts +++ b/libs/core/types/src/market-data.ts @@ -104,4 +104,4 @@ export interface MarketRegime { trendDirection?: 'up' | 'down'; /** Volatility level */ volatilityLevel: 'low' | 'medium' | 'high'; -} \ No newline at end of file +} diff --git a/libs/types/src/options.ts b/libs/core/types/src/options.ts similarity index 99% rename from libs/types/src/options.ts rename to libs/core/types/src/options.ts index ea7cd9b..b56fb5d 100644 --- a/libs/types/src/options.ts +++ b/libs/core/types/src/options.ts @@ -55,4 +55,4 @@ export interface GreeksCalculation { vega: number; /** Rho - interest rate sensitivity */ rho: number; -} \ No newline at end of file +} diff --git a/libs/types/src/portfolio.ts b/libs/core/types/src/portfolio.ts similarity index 99% rename from 
libs/types/src/portfolio.ts rename to libs/core/types/src/portfolio.ts index d034e48..bbf137b 100644 --- a/libs/types/src/portfolio.ts +++ b/libs/core/types/src/portfolio.ts @@ -105,4 +105,4 @@ export interface KellyParams { averageLoss: number; /** Risk-free rate */ riskFreeRate?: number; -} \ No newline at end of file +} diff --git a/libs/core/types/src/queue.ts b/libs/core/types/src/queue.ts new file mode 100644 index 0000000..bf8bfa0 --- /dev/null +++ b/libs/core/types/src/queue.ts @@ -0,0 +1,64 @@ +/** + * Queue Type Definitions + * Types specific to queue operations + */ + +/** + * Job data structure for queue operations + */ +export interface JobData<T = unknown> { + handler: string; + operation: string; + payload: T; + priority?: number; +} + +/** + * Queue job options + */ +export interface JobOptions { + priority?: number; + delay?: number; + attempts?: number; + backoff?: { + type: 'exponential' | 'fixed'; + delay: number; + }; + removeOnComplete?: boolean | number; + removeOnFail?: boolean | number; + timeout?: number; +} + +/** + * Queue statistics + */ +export interface QueueStats { + waiting: number; + active: number; + completed: number; + failed: number; + delayed: number; + paused: boolean; + workers?: number; +} + +/** + * Batch job configuration + */ +export interface BatchJobData { + payloadKey: string; + batchIndex: number; + totalBatches: number; + items: unknown[]; +} + +/** + * Queue worker configuration + */ +export interface QueueWorkerConfig { + concurrency?: number; + maxStalledCount?: number; + stalledInterval?: number; + lockDuration?: number; + lockRenewTime?: number; +} \ No newline at end of file diff --git a/libs/types/src/risk-metrics.ts b/libs/core/types/src/risk-metrics.ts similarity index 99% rename from libs/types/src/risk-metrics.ts rename to libs/core/types/src/risk-metrics.ts index b3d7ca8..7d8ac7b 100644 --- a/libs/types/src/risk-metrics.ts +++ b/libs/core/types/src/risk-metrics.ts @@ -83,4 +83,4 @@ export interface ReturnAnalysis { 
averagePositiveReturn: number; /** Average negative return */ averageNegativeReturn: number; -} \ No newline at end of file +} diff --git a/libs/core/types/src/service-container.ts b/libs/core/types/src/service-container.ts new file mode 100644 index 0000000..6a69bee --- /dev/null +++ b/libs/core/types/src/service-container.ts @@ -0,0 +1,28 @@ +/** + * Service Container Interface + * Pure interface definition with no dependencies + * Used by both DI and Handlers packages + */ + +/** + * Universal service container interface + * Provides access to all common services in a type-safe manner + * Designed to work across different service contexts + */ +export interface IServiceContainer { + // Core infrastructure + readonly logger: any; // Logger instance + readonly cache?: any; // Cache provider (Redis/Dragonfly) - optional + readonly globalCache?: any; // Global cache provider (shared across services) - optional + readonly queue?: any; // Queue manager (BullMQ) - optional + readonly proxy?: any; // Proxy manager service - optional (depends on cache) + readonly browser?: any; // Browser automation (Playwright) + + // Database clients - all optional to support selective enabling + readonly mongodb?: any; // MongoDB client + readonly postgres?: any; // PostgreSQL client + readonly questdb?: any; // QuestDB client (time-series) + + // Optional extensions for future use + readonly custom?: Record<string, unknown>; +} \ No newline at end of file diff --git a/libs/types/src/technical-analysis.ts b/libs/core/types/src/technical-analysis.ts similarity index 90% rename from libs/types/src/technical-analysis.ts rename to libs/core/types/src/technical-analysis.ts index cd3d2d7..cd86638 100644 --- a/libs/types/src/technical-analysis.ts +++ b/libs/core/types/src/technical-analysis.ts @@ -14,23 +14,23 @@ export interface TechnicalIndicators { /** Relative Strength Index */ rsi: number[]; /** MACD indicator */ - macd: { - macd: number[]; - signal: number[]; - histogram: number[]; + macd: { + macd: 
number[]; + signal: number[]; + histogram: number[]; }; /** Bollinger Bands */ - bollinger: { - upper: number[]; - middle: number[]; - lower: number[]; + bollinger: { + upper: number[]; + middle: number[]; + lower: number[]; }; /** Average True Range */ atr: number[]; /** Stochastic Oscillator */ - stochastic: { - k: number[]; - d: number[]; + stochastic: { + k: number[]; + d: number[]; }; /** Williams %R */ williams_r: number[]; @@ -106,4 +106,4 @@ export interface GARCHParameters { aic: number; /** BIC (Bayesian Information Criterion) */ bic: number; -} \ No newline at end of file +} diff --git a/libs/types/src/trading.ts b/libs/core/types/src/trading.ts similarity index 99% rename from libs/types/src/trading.ts rename to libs/core/types/src/trading.ts index e7a3eac..4c8cf6f 100644 --- a/libs/types/src/trading.ts +++ b/libs/core/types/src/trading.ts @@ -59,4 +59,4 @@ export interface TradePerformance { grossLoss: number; /** Net profit */ netProfit: number; -} \ No newline at end of file +} diff --git a/libs/core/types/tsconfig.json b/libs/core/types/tsconfig.json new file mode 100644 index 0000000..9405533 --- /dev/null +++ b/libs/core/types/tsconfig.json @@ -0,0 +1,10 @@ +{ + "extends": "../../../tsconfig.json", + "compilerOptions": { + "outDir": "./dist", + "rootDir": "./src", + "composite": true + }, + "include": ["src/**/*"], + "references": [] +} diff --git a/libs/mongodb-client/README.md b/libs/data/mongodb/README.md similarity index 93% rename from libs/mongodb-client/README.md rename to libs/data/mongodb/README.md index 204df2d..34019ce 100644 --- a/libs/mongodb-client/README.md +++ b/libs/data/mongodb/README.md @@ -15,7 +15,7 @@ A comprehensive MongoDB client library for the Stock Bot trading platform, desig ## Usage ```typescript -import { MongoDBClient } from '@stock-bot/mongodb-client'; +import { MongoDBClient } from '@stock-bot/mongodb'; // Initialize client const mongoClient = new MongoDBClient(); diff --git a/libs/mongodb-client/package.json 
b/libs/data/mongodb/package.json similarity index 96% rename from libs/mongodb-client/package.json rename to libs/data/mongodb/package.json index ee9957a..c39d9f4 100644 --- a/libs/mongodb-client/package.json +++ b/libs/data/mongodb/package.json @@ -1,5 +1,5 @@ { - "name": "@stock-bot/mongodb-client", + "name": "@stock-bot/mongodb", "version": "1.0.0", "description": "MongoDB client library for Stock Bot platform", "main": "dist/index.js", diff --git a/libs/mongodb-client/src/client.ts b/libs/data/mongodb/src/client.ts similarity index 51% rename from libs/mongodb-client/src/client.ts rename to libs/data/mongodb/src/client.ts index 1bed8c7..251b6b7 100644 --- a/libs/mongodb-client/src/client.ts +++ b/libs/data/mongodb/src/client.ts @@ -1,6 +1,13 @@ -import { getLogger } from '@stock-bot/logger'; -import { Collection, Db, MongoClient, OptionalUnlessRequiredId } from 'mongodb'; -import type { DocumentBase, MongoDBClientConfig } from './types'; +import type { OptionalUnlessRequiredId } from 'mongodb'; +import { Collection, Db, MongoClient } from 'mongodb'; +import type { Logger } from '@stock-bot/core/logger'; +import type { + ConnectionEvents, + DocumentBase, + DynamicPoolConfig, + MongoDBClientConfig, + PoolMetrics, +} from './types'; /** * MongoDB Client for Stock Bot Data Service @@ -13,50 +20,109 @@ export class MongoDBClient { private db: Db | null = null; private readonly config: MongoDBClientConfig; private defaultDatabase: string; - private readonly logger = getLogger('mongodb-client'); + private readonly logger: Logger; private isConnected = false; + private readonly metrics: PoolMetrics; + private readonly events?: ConnectionEvents; + private dynamicPoolConfig?: DynamicPoolConfig; + private poolMonitorInterval?: Timer; - constructor(config: MongoDBClientConfig) { - this.config = config; - this.defaultDatabase = config.database || 'stock'; + constructor(mongoConfig: MongoDBClientConfig, logger: Logger, events?: ConnectionEvents) { + this.config = 
mongoConfig; + this.defaultDatabase = mongoConfig.database || 'stock'; + this.logger = logger; + this.events = events; + this.metrics = { + totalConnections: 0, + activeConnections: 0, + idleConnections: 0, + waitingRequests: 0, + errors: 0, + created: new Date(), + }; } /** * Connect to MongoDB with simple configuration */ - async connect(): Promise { + async connect(retryAttempts: number = 3, retryDelay: number = 1000): Promise { if (this.isConnected && this.client) { return; } - try { - const uri = this.buildConnectionUri(); - this.logger.info('Connecting to MongoDB...'); + let lastError: Error | null = null; - this.client = new MongoClient(uri, { - maxPoolSize: this.config.poolSettings?.maxPoolSize || 10, - minPoolSize: this.config.poolSettings?.minPoolSize || 1, - connectTimeoutMS: this.config.timeouts?.connectTimeout || 10000, - socketTimeoutMS: this.config.timeouts?.socketTimeout || 30000, - serverSelectionTimeoutMS: this.config.timeouts?.serverSelectionTimeout || 5000, - }); + for (let attempt = 1; attempt <= retryAttempts; attempt++) { + try { + const uri = this.buildConnectionUri(); + this.logger.info(`Connecting to MongoDB (attempt ${attempt}/${retryAttempts})...`); - await this.client.connect(); - await this.client.db(this.defaultDatabase).admin().ping(); + this.client = new MongoClient(uri, { + maxPoolSize: this.config.poolSettings?.maxPoolSize || 10, + minPoolSize: this.config.poolSettings?.minPoolSize || 1, + connectTimeoutMS: this.config.timeouts?.connectTimeout || 10000, + socketTimeoutMS: this.config.timeouts?.socketTimeout || 30000, + serverSelectionTimeoutMS: this.config.timeouts?.serverSelectionTimeout || 5000, + }); - // Set default database from config - this.db = this.client.db(this.defaultDatabase); - this.isConnected = true; + await this.client.connect(); + await this.client.db(this.defaultDatabase).admin().ping(); - this.logger.info('Successfully connected to MongoDB'); - } catch (error) { - this.logger.error('MongoDB connection failed:', 
error); - if (this.client) { - await this.client.close(); - this.client = null; + // Set default database from config + this.db = this.client.db(this.defaultDatabase); + this.isConnected = true; + + // Update metrics + this.metrics.totalConnections = this.config.poolSettings?.maxPoolSize || 10; + this.metrics.idleConnections = this.metrics.totalConnections; + + // Fire connection event + if (this.events?.onConnect) { + await Promise.resolve(this.events.onConnect()); + } + + // Fire pool created event + if (this.events?.onPoolCreated) { + await Promise.resolve(this.events.onPoolCreated()); + } + + this.logger.info('Successfully connected to MongoDB', { + database: this.defaultDatabase, + poolSize: this.metrics.totalConnections, + }); + + // Start pool monitoring if dynamic sizing is enabled + if (this.dynamicPoolConfig?.enabled) { + this.startPoolMonitoring(); + } + + return; + } catch (error) { + lastError = error as Error; + this.metrics.errors++; + this.metrics.lastError = lastError.message; + + // Fire error event + if (this.events?.onError) { + await Promise.resolve(this.events.onError(lastError)); + } + + this.logger.error(`MongoDB connection attempt ${attempt} failed:`, error); + + if (this.client) { + await this.client.close(); + this.client = null; + } + + if (attempt < retryAttempts) { + await new Promise(resolve => setTimeout(resolve, retryDelay * attempt)); + } } - throw error; } + + throw new Error( + `Failed to connect to MongoDB after ${retryAttempts} attempts: ${lastError?.message}` + ); } /** @@ -68,10 +134,22 @@ export class MongoDBClient { } try { + // Stop pool monitoring + if (this.poolMonitorInterval) { + clearInterval(this.poolMonitorInterval); + this.poolMonitorInterval = undefined; + } + await this.client.close(); this.isConnected = false; this.client = null; this.db = null; + + // Fire disconnect event + if (this.events?.onDisconnect) { + await Promise.resolve(this.events.onDisconnect()); + } + this.logger.info('Disconnected from MongoDB'); 
} catch (error) { this.logger.error('Error disconnecting from MongoDB:', error); @@ -149,13 +227,16 @@ export class MongoDBClient { let totalUpdated = 0; const errors: unknown[] = []; - this.logger.info(`Starting batch upsert operation [${collectionName}-${documents.length}][${operationId}]`, { - database: dbName, - collection: collectionName, - totalDocuments: documents.length, - uniqueKeys: keyFields, - chunkSize, - }); + this.logger.info( + `Starting batch upsert operation [${collectionName}-${documents.length}][${operationId}]`, + { + database: dbName, + collection: collectionName, + totalDocuments: documents.length, + uniqueKeys: keyFields, + chunkSize, + } + ); // Process documents in chunks to avoid memory issues for (let i = 0; i < documents.length; i += chunkSize) { @@ -252,6 +333,14 @@ export class MongoDBClient { return db.collection(name); } + /** + * Get a collection (interface compatibility method) + * This method provides compatibility with the IMongoDBClient interface + */ + collection(name: string, database?: string): Collection { + return this.getCollection(name, database); + } + /** * Simple insert operation */ @@ -350,4 +439,129 @@ export class MongoDBClient { return `mongodb://${auth}${host}:${port}/${database}${authParam}`; } + + /** + * Get current pool metrics + */ + getPoolMetrics(): PoolMetrics { + // Update last used timestamp + this.metrics.lastUsed = new Date(); + + // Note: MongoDB driver doesn't expose detailed pool metrics + // These are estimates based on configuration + return { ...this.metrics }; + } + + /** + * Set dynamic pool configuration + */ + setDynamicPoolConfig(config: DynamicPoolConfig): void { + this.dynamicPoolConfig = config; + + if (config.enabled && this.isConnected && !this.poolMonitorInterval) { + this.startPoolMonitoring(); + } else if (!config.enabled && this.poolMonitorInterval) { + clearInterval(this.poolMonitorInterval); + this.poolMonitorInterval = undefined; + } + } + + /** + * Start monitoring pool and 
adjust size dynamically + */ + private startPoolMonitoring(): void { + if (!this.dynamicPoolConfig || this.poolMonitorInterval) { + return; + } + + this.poolMonitorInterval = setInterval(() => { + this.evaluatePoolSize(); + }, this.dynamicPoolConfig.evaluationInterval); + } + + /** + * Evaluate and adjust pool size based on usage + */ + private async evaluatePoolSize(): Promise<void> { + if (!this.dynamicPoolConfig || !this.client) { + return; + } + + const { minSize, maxSize, scaleUpThreshold, scaleDownThreshold } = this.dynamicPoolConfig; + const currentSize = this.metrics.totalConnections; + const utilization = currentSize > 0 ? (this.metrics.activeConnections / currentSize) * 100 : 0; + + this.logger.debug('Pool utilization', { + utilization: `${utilization.toFixed(1)}%`, + active: this.metrics.activeConnections, + total: currentSize, + }); + + // Scale up if utilization is high + if (utilization > scaleUpThreshold && currentSize < maxSize) { + const newSize = Math.min(currentSize + this.dynamicPoolConfig.scaleUpIncrement, maxSize); + await this.resizePool(newSize); + this.logger.info('Scaling up connection pool', { + from: currentSize, + to: newSize, + utilization, + }); + } + // Scale down if utilization is low + else if (utilization < scaleDownThreshold && currentSize > minSize) { + const newSize = Math.max(currentSize - this.dynamicPoolConfig.scaleDownIncrement, minSize); + await this.resizePool(newSize); + this.logger.info('Scaling down connection pool', { + from: currentSize, + to: newSize, + utilization, + }); + } + } + + /** + * Resize the connection pool + * Note: MongoDB driver doesn't support dynamic resizing, this would require reconnection + */ + private async resizePool(newSize: number): Promise<void> { + // MongoDB doesn't support dynamic pool resizing + // This is a placeholder for future implementation + this.logger.warn('Dynamic pool resizing not yet implemented for MongoDB', { + requestedSize: newSize, + }); + + // Update metrics to reflect desired state + 
this.metrics.totalConnections = newSize; + } + + /** + * Enable pool warmup on connect + */ + async warmupPool(): Promise { + if (!this.client || !this.isConnected) { + throw new Error('Client not connected'); + } + + const minSize = this.config.poolSettings?.minPoolSize || 1; + const promises: Promise[] = []; + + // Create minimum connections by running parallel pings + for (let i = 0; i < minSize; i++) { + promises.push( + this.client + .db(this.defaultDatabase) + .admin() + .ping() + .then(() => { + this.logger.debug(`Warmed up connection ${i + 1}/${minSize}`); + }) + .catch(error => { + this.logger.warn(`Failed to warm up connection ${i + 1}`, { error }); + }) + ); + } + + await Promise.allSettled(promises); + this.logger.info('Connection pool warmup complete', { connections: minSize }); + } } diff --git a/libs/mongodb-client/src/index.ts b/libs/data/mongodb/src/index.ts similarity index 66% rename from libs/mongodb-client/src/index.ts rename to libs/data/mongodb/src/index.ts index ead5669..5e93d61 100644 --- a/libs/mongodb-client/src/index.ts +++ b/libs/data/mongodb/src/index.ts @@ -10,28 +10,20 @@ export { MongoDBClient } from './client'; // Types export type { AnalystReport, + ConnectionEvents, DocumentBase, + DynamicPoolConfig, EarningsTranscript, ExchangeSourceMapping, MasterExchange, MongoDBClientConfig, MongoDBConnectionOptions, NewsArticle, + PoolMetrics, RawDocument, SecFiling, SentimentData, } from './types'; -// Factory functions -export { - createMongoDBClient, - createAndConnectMongoDBClient, -} from './factory'; - -// Singleton instance -export { - getMongoDBClient, - connectMongoDB, - getDatabase, - disconnectMongoDB, -} from './singleton'; +// Note: Factory functions removed - use Awilix DI container instead +// See: libs/core/di/src/awilix-container.ts diff --git a/libs/mongodb-client/src/types.ts b/libs/data/mongodb/src/types.ts similarity index 85% rename from libs/mongodb-client/src/types.ts rename to libs/data/mongodb/src/types.ts index 
17dd445..a66995d 100644 --- a/libs/mongodb-client/src/types.ts +++ b/libs/data/mongodb/src/types.ts @@ -43,6 +43,36 @@ export interface MongoDBConnectionOptions { healthCheckInterval?: number; } +export interface PoolMetrics { + totalConnections: number; + activeConnections: number; + idleConnections: number; + waitingRequests: number; + errors: number; + lastError?: string; + avgResponseTime?: number; + created: Date; + lastUsed?: Date; +} + +export interface ConnectionEvents { + onConnect?: () => void | Promise; + onDisconnect?: () => void | Promise; + onError?: (error: Error) => void | Promise; + onPoolCreated?: () => void | Promise; +} + +export interface DynamicPoolConfig { + enabled: boolean; + minSize: number; + maxSize: number; + scaleUpThreshold: number; // % of pool in use (0-100) + scaleDownThreshold: number; // % of pool idle (0-100) + scaleUpIncrement: number; // connections to add + scaleDownIncrement: number; // connections to remove + evaluationInterval: number; // ms between checks +} + /** * Health Status Types */ diff --git a/libs/data/mongodb/tsconfig.json b/libs/data/mongodb/tsconfig.json new file mode 100644 index 0000000..75d5929 --- /dev/null +++ b/libs/data/mongodb/tsconfig.json @@ -0,0 +1,10 @@ +{ + "extends": "../../../tsconfig.json", + "compilerOptions": { + "outDir": "./dist", + "rootDir": "./src", + "composite": true + }, + "include": ["src/**/*"], + "references": [{ "path": "../../core/logger" }, { "path": "../../core/types" }] +} diff --git a/libs/postgres-client/README.md b/libs/data/postgres/README.md similarity index 93% rename from libs/postgres-client/README.md rename to libs/data/postgres/README.md index ad2abab..b0d9ad0 100644 --- a/libs/postgres-client/README.md +++ b/libs/data/postgres/README.md @@ -15,7 +15,7 @@ A comprehensive PostgreSQL client library for the Stock Bot trading platform, de ## Usage ```typescript -import { PostgreSQLClient } from '@stock-bot/postgres-client'; +import { PostgreSQLClient } from 
'@stock-bot/postgres'; // Initialize client const pgClient = new PostgreSQLClient(); diff --git a/libs/postgres-client/package.json b/libs/data/postgres/package.json similarity index 96% rename from libs/postgres-client/package.json rename to libs/data/postgres/package.json index 781e696..2e6e531 100644 --- a/libs/postgres-client/package.json +++ b/libs/data/postgres/package.json @@ -1,5 +1,5 @@ { - "name": "@stock-bot/postgres-client", + "name": "@stock-bot/postgres", "version": "1.0.0", "description": "PostgreSQL client library for Stock Bot platform", "main": "dist/index.js", diff --git a/libs/postgres-client/src/client.ts b/libs/data/postgres/src/client.ts similarity index 62% rename from libs/postgres-client/src/client.ts rename to libs/data/postgres/src/client.ts index 4974469..c54df1c 100644 --- a/libs/postgres-client/src/client.ts +++ b/libs/data/postgres/src/client.ts @@ -1,9 +1,12 @@ -import { Pool, QueryResultRow } from 'pg'; -import { getLogger } from '@stock-bot/logger'; +import { Pool } from 'pg'; +import type { QueryResultRow } from 'pg'; import { PostgreSQLHealthMonitor } from './health'; import { PostgreSQLQueryBuilder } from './query-builder'; import { PostgreSQLTransactionManager } from './transactions'; import type { + ConnectionEvents, + DynamicPoolConfig, + PoolMetrics, PostgreSQLClientConfig, PostgreSQLConnectionOptions, QueryResult, @@ -20,12 +23,21 @@ export class PostgreSQLClient { private pool: Pool | null = null; private readonly config: PostgreSQLClientConfig; private readonly options: PostgreSQLConnectionOptions; - private readonly logger: ReturnType<typeof getLogger>; + private readonly logger: any; private readonly healthMonitor: PostgreSQLHealthMonitor; private readonly transactionManager: PostgreSQLTransactionManager; private isConnected = false; + private readonly metrics: PoolMetrics; + private readonly events?: ConnectionEvents; + private dynamicPoolConfig?: DynamicPoolConfig; + private poolMonitorInterval?: NodeJS.Timeout; - constructor(config: 
PostgreSQLClientConfig, options?: PostgreSQLConnectionOptions) { + constructor( + config: PostgreSQLClientConfig, + logger?: any, + options?: PostgreSQLConnectionOptions, + events?: ConnectionEvents + ) { this.config = config; this.options = { retryAttempts: 3, @@ -33,10 +45,20 @@ export class PostgreSQLClient { healthCheckInterval: 30000, ...options, }; + this.events = events; - this.logger = getLogger('postgres-client'); + this.logger = logger || console; this.healthMonitor = new PostgreSQLHealthMonitor(this); this.transactionManager = new PostgreSQLTransactionManager(this); + + this.metrics = { + totalConnections: 0, + activeConnections: 0, + idleConnections: 0, + waitingRequests: 0, + errors: 0, + created: new Date(), + }; } /** @@ -63,7 +85,25 @@ export class PostgreSQLClient { client.release(); this.isConnected = true; - this.logger.info('Successfully connected to PostgreSQL'); + + // Update metrics + const poolConfig = this.config.poolSettings; + this.metrics.totalConnections = poolConfig?.max || 10; + this.metrics.idleConnections = poolConfig?.min || 2; + + // Fire connection event + if (this.events?.onConnect) { + await Promise.resolve(this.events.onConnect()); + } + + // Fire pool created event + if (this.events?.onPoolCreated) { + await Promise.resolve(this.events.onPoolCreated()); + } + + this.logger.info('Successfully connected to PostgreSQL', { + poolSize: this.metrics.totalConnections, + }); // Start health monitoring this.healthMonitor.start(); @@ -71,9 +111,25 @@ export class PostgreSQLClient { // Setup error handlers this.setupErrorHandlers(); + // Setup pool event listeners for metrics + this.setupPoolMetrics(); + + // Start dynamic pool monitoring if enabled + if (this.dynamicPoolConfig?.enabled) { + this.startPoolMonitoring(); + } + return; } catch (error) { lastError = error as Error; + this.metrics.errors++; + this.metrics.lastError = lastError.message; + + // Fire error event + if (this.events?.onError) { + await 
Promise.resolve(this.events.onError(lastError)); + } + this.logger.error(`PostgreSQL connection attempt ${attempt} failed:`, error); if (this.pool) { @@ -101,10 +157,22 @@ } try { + // Stop pool monitoring + if (this.poolMonitorInterval) { + clearInterval(this.poolMonitorInterval); + this.poolMonitorInterval = undefined; + } + this.healthMonitor.stop(); await this.pool.end(); this.isConnected = false; this.pool = null; + + // Fire disconnect event + if (this.events?.onDisconnect) { + await Promise.resolve(this.events.onDisconnect()); + } + this.logger.info('Disconnected from PostgreSQL'); } catch (error) { this.logger.error('Error disconnecting from PostgreSQL:', error); @@ -366,14 +434,23 @@ export class PostgreSQLClient { return this.pool; } private buildPoolConfig(): any { - return { + this.logger.debug('Building PostgreSQL pool config:', { host: this.config.host, port: this.config.port, database: this.config.database, user: this.config.username, - password: this.config.password, + passwordLength: this.config.password?.length, + passwordType: typeof this.config.password, + passwordValue: this.config.password ? `${this.config.password.substring(0, 3)}***` : 'NO_PASSWORD', + }); + + const poolConfig = { + host: this.config.host, + port: this.config.port, + database: this.config.database, + user: this.config.username, + password: typeof this.config.password === 'string' ? 
this.config.password : String(this.config.password || ''), min: this.config.poolSettings?.min, max: this.config.poolSettings?.max, idleTimeoutMillis: this.config.poolSettings?.idleTimeoutMillis, @@ -388,6 +465,8 @@ export class PostgreSQLClient { } : false, }; + + return poolConfig; } private setupErrorHandlers(): void { @@ -411,4 +490,141 @@ export class PostgreSQLClient { private delay(ms: number): Promise { return new Promise(resolve => setTimeout(resolve, ms)); } + + /** + * Get current pool metrics + */ + getPoolMetrics(): PoolMetrics { + // Update last used timestamp + this.metrics.lastUsed = new Date(); + + // Update metrics from pool if available + if (this.pool) { + this.metrics.totalConnections = this.pool.totalCount; + this.metrics.idleConnections = this.pool.idleCount; + this.metrics.waitingRequests = this.pool.waitingCount; + this.metrics.activeConnections = this.metrics.totalConnections - this.metrics.idleConnections; + } + + return { ...this.metrics }; + } + + /** + * Set dynamic pool configuration + */ + setDynamicPoolConfig(config: DynamicPoolConfig): void { + this.dynamicPoolConfig = config; + + if (config.enabled && this.isConnected && !this.poolMonitorInterval) { + this.startPoolMonitoring(); + } else if (!config.enabled && this.poolMonitorInterval) { + clearInterval(this.poolMonitorInterval); + this.poolMonitorInterval = undefined; + } + } + + /** + * Start monitoring pool and adjust size dynamically + */ + private startPoolMonitoring(): void { + if (!this.dynamicPoolConfig || this.poolMonitorInterval) { + return; + } + + this.poolMonitorInterval = setInterval(() => { + this.evaluatePoolSize(); + }, this.dynamicPoolConfig.evaluationInterval); + } + + /** + * Setup pool event listeners for metrics + */ + private setupPoolMetrics(): void { + if (!this.pool) { + return; + } + + // Track when connections are acquired + this.pool.on('acquire', () => { + this.metrics.activeConnections++; + this.metrics.idleConnections--; + }); + + // Track when 
connections are released + this.pool.on('release', () => { + this.metrics.activeConnections--; + this.metrics.idleConnections++; + }); + } + + /** + * Evaluate and adjust pool size based on usage + */ + private async evaluatePoolSize(): Promise { + if (!this.dynamicPoolConfig || !this.pool) { + return; + } + + const metrics = this.getPoolMetrics(); + const { minSize, maxSize, scaleUpThreshold, scaleDownThreshold } = this.dynamicPoolConfig; + const currentSize = metrics.totalConnections; + const utilization = currentSize > 0 ? (metrics.activeConnections / currentSize) * 100 : 0; + + this.logger.debug('Pool utilization', { + utilization: `${utilization.toFixed(1)}%`, + active: metrics.activeConnections, + total: currentSize, + waiting: metrics.waitingRequests, + }); + + // Scale up if utilization is high or there are waiting requests + if ((utilization > scaleUpThreshold || metrics.waitingRequests > 0) && currentSize < maxSize) { + const newSize = Math.min(currentSize + this.dynamicPoolConfig.scaleUpIncrement, maxSize); + this.logger.info('Would scale up connection pool', { + from: currentSize, + to: newSize, + utilization, + }); + // Note: pg module doesn't support dynamic resizing, would need reconnection + } + // Scale down if utilization is low + else if (utilization < scaleDownThreshold && currentSize > minSize) { + const newSize = Math.max(currentSize - this.dynamicPoolConfig.scaleDownIncrement, minSize); + this.logger.info('Would scale down connection pool', { + from: currentSize, + to: newSize, + utilization, + }); + // Note: pg module doesn't support dynamic resizing, would need reconnection + } + } + + /** + * Enable pool warmup on connect + */ + async warmupPool(): Promise { + if (!this.pool || !this.isConnected) { + throw new Error('Client not connected'); + } + + const minSize = this.config.poolSettings?.min || 2; + const promises: Promise[] = []; + + // Create minimum connections by running parallel queries + for (let i = 0; i < minSize; i++) { + 
promises.push( + this.pool + .query('SELECT 1') + .then(() => { + this.logger.debug(`Warmed up connection ${i + 1}/${minSize}`); + }) + .catch(error => { + this.logger.warn(`Failed to warm up connection ${i + 1}`, { error }); + }) + ); + } + + await Promise.allSettled(promises); + this.logger.info('Connection pool warmup complete', { connections: minSize }); + } } diff --git a/libs/postgres-client/src/health.ts b/libs/data/postgres/src/health.ts similarity index 100% rename from libs/postgres-client/src/health.ts rename to libs/data/postgres/src/health.ts diff --git a/libs/postgres-client/src/index.ts b/libs/data/postgres/src/index.ts similarity index 76% rename from libs/postgres-client/src/index.ts rename to libs/data/postgres/src/index.ts index 495e20d..6218f41 100644 --- a/libs/postgres-client/src/index.ts +++ b/libs/data/postgres/src/index.ts @@ -28,17 +28,10 @@ export type { Strategy, RiskLimit, AuditLog, + PoolMetrics, + ConnectionEvents, + DynamicPoolConfig, } from './types'; -// Factory functions -export { - createPostgreSQLClient, - createAndConnectPostgreSQLClient, -} from './factory'; - -// Singleton instance -export { - getPostgreSQLClient, - connectPostgreSQL, - disconnectPostgreSQL, -} from './singleton'; +// Note: Factory functions removed - instantiate directly with new PostgreSQLClient() +// or use the Awilix DI container (recommended) diff --git a/libs/postgres-client/src/query-builder.ts b/libs/data/postgres/src/query-builder.ts similarity index 100% rename from libs/postgres-client/src/query-builder.ts rename to libs/data/postgres/src/query-builder.ts diff --git a/libs/postgres-client/src/transactions.ts b/libs/data/postgres/src/transactions.ts similarity index 100% rename from libs/postgres-client/src/transactions.ts rename to libs/data/postgres/src/transactions.ts diff --git a/libs/postgres-client/src/types.ts b/libs/data/postgres/src/types.ts similarity index 83% rename from libs/postgres-client/src/types.ts rename to 
libs/data/postgres/src/types.ts index 0caf612..7a129f2 100644 --- a/libs/postgres-client/src/types.ts +++ b/libs/data/postgres/src/types.ts @@ -36,6 +36,36 @@ export interface PostgreSQLConnectionOptions { healthCheckInterval?: number; } +export interface PoolMetrics { + totalConnections: number; + activeConnections: number; + idleConnections: number; + waitingRequests: number; + errors: number; + lastError?: string; + avgResponseTime?: number; + created: Date; + lastUsed?: Date; +} + +export interface ConnectionEvents { + onConnect?: () => void | Promise; + onDisconnect?: () => void | Promise; + onError?: (error: Error) => void | Promise; + onPoolCreated?: () => void | Promise; +} + +export interface DynamicPoolConfig { + enabled: boolean; + minSize: number; + maxSize: number; + scaleUpThreshold: number; // % of pool in use (0-100) + scaleDownThreshold: number; // % of pool idle (0-100) + scaleUpIncrement: number; // connections to add + scaleDownIncrement: number; // connections to remove + evaluationInterval: number; // ms between checks +} + /** * Health Status Types */ diff --git a/libs/data/postgres/tsconfig.json b/libs/data/postgres/tsconfig.json new file mode 100644 index 0000000..75d5929 --- /dev/null +++ b/libs/data/postgres/tsconfig.json @@ -0,0 +1,10 @@ +{ + "extends": "../../../tsconfig.json", + "compilerOptions": { + "outDir": "./dist", + "rootDir": "./src", + "composite": true + }, + "include": ["src/**/*"], + "references": [{ "path": "../../core/logger" }, { "path": "../../core/types" }] +} diff --git a/libs/questdb-client/README.md b/libs/data/questdb/README.md similarity index 94% rename from libs/questdb-client/README.md rename to libs/data/questdb/README.md index e69884d..469ab8b 100644 --- a/libs/questdb-client/README.md +++ b/libs/data/questdb/README.md @@ -15,7 +15,7 @@ A comprehensive QuestDB client library for the Stock Bot trading platform, optim ## Usage ```typescript -import { QuestDBClient } from '@stock-bot/questdb-client'; +import { 
QuestDBClient } from '@stock-bot/questdb'; // Initialize client const questClient = new QuestDBClient(); diff --git a/libs/questdb-client/bunfig.toml b/libs/data/questdb/bunfig.toml similarity index 100% rename from libs/questdb-client/bunfig.toml rename to libs/data/questdb/bunfig.toml diff --git a/libs/questdb-client/package.json b/libs/data/questdb/package.json similarity index 96% rename from libs/questdb-client/package.json rename to libs/data/questdb/package.json index a1ec85b..d173cdd 100644 --- a/libs/questdb-client/package.json +++ b/libs/data/questdb/package.json @@ -1,5 +1,5 @@ { - "name": "@stock-bot/questdb-client", + "name": "@stock-bot/questdb", "version": "1.0.0", "description": "QuestDB client library for Stock Bot platform", "main": "dist/index.js", diff --git a/libs/questdb-client/src/client.ts b/libs/data/questdb/src/client.ts similarity index 90% rename from libs/questdb-client/src/client.ts rename to libs/data/questdb/src/client.ts index c279d3d..e7ead4b 100644 --- a/libs/questdb-client/src/client.ts +++ b/libs/data/questdb/src/client.ts @@ -1,5 +1,4 @@ import { Pool } from 'pg'; -import { getLogger } from '@stock-bot/logger'; import { QuestDBHealthMonitor } from './health'; import { QuestDBInfluxWriter } from './influx-writer'; import { QuestDBQueryBuilder } from './query-builder'; @@ -21,14 +20,15 @@ export class QuestDBClient { private pgPool: Pool | null = null; private readonly config: QuestDBClientConfig; private readonly options: QuestDBConnectionOptions; - private readonly logger = getLogger('QuestDBClient'); + private readonly logger: any; private readonly healthMonitor: QuestDBHealthMonitor; private readonly influxWriter: QuestDBInfluxWriter; private readonly schemaManager: QuestDBSchemaManager; private isConnected = false; - constructor(config: QuestDBClientConfig, options?: QuestDBConnectionOptions) { + constructor(config: QuestDBClientConfig, logger?: any, options?: QuestDBConnectionOptions) { this.config = config; + this.logger = 
logger || console; this.options = { protocol: 'pg', retryAttempts: 3, @@ -37,6 +37,13 @@ ...options, }; + // Debug: log the received config + this.logger.debug('QuestDB client created with config:', { + ...config, + user: config.user || '[NOT PROVIDED]', + password: config.password ? '[PROVIDED]' : '[NOT PROVIDED]', + }); + this.healthMonitor = new QuestDBHealthMonitor(this); this.influxWriter = new QuestDBInfluxWriter(this); this.schemaManager = new QuestDBSchemaManager(this); @@ -405,14 +412,11 @@ return { ...this.config }; } private buildPgPoolConfig(): any { - return { + const config: any = { host: this.config.host, port: this.config.pgPort, database: this.config.database, - user: this.config.user, - password: this.config.password, connectionTimeoutMillis: this.config.timeouts?.connection, query_timeout: this.config.timeouts?.request, ssl: this.config.tls?.enabled @@ -423,6 +427,32 @@ min: 2, max: 10, }; + + // Only add user/password if they are provided + if (this.config.user) { + this.logger.debug('Adding user to QuestDB pool config:', this.config.user); + config.user = this.config.user; + } else { + this.logger.debug('No user provided for QuestDB connection'); + } + + if (this.config.password) { + this.logger.debug('Adding password to QuestDB pool config'); + config.password = this.config.password; + } else { + this.logger.debug('No password provided for QuestDB connection'); + } + + this.logger.debug('Final QuestDB pool config:', { + ...config, + password: config.password ? 
'[REDACTED]' : undefined, + }); + return config; } private mapDataType(typeId: number): string { diff --git a/libs/questdb-client/src/health.ts b/libs/data/questdb/src/health.ts similarity index 100% rename from libs/questdb-client/src/health.ts rename to libs/data/questdb/src/health.ts diff --git a/libs/questdb-client/src/index.ts b/libs/data/questdb/src/index.ts similarity index 84% rename from libs/questdb-client/src/index.ts rename to libs/data/questdb/src/index.ts index 1add108..46ab368 100644 --- a/libs/questdb-client/src/index.ts +++ b/libs/data/questdb/src/index.ts @@ -28,5 +28,5 @@ export type { InsertResult, } from './types'; -// Utils -export { createQuestDBClient, createAndConnectQuestDBClient } from './factory'; +// Note: Factory functions removed - instantiate directly with new QuestDBClient() +// or use the Awilix DI container (recommended) diff --git a/libs/questdb-client/src/influx-writer.ts b/libs/data/questdb/src/influx-writer.ts similarity index 100% rename from libs/questdb-client/src/influx-writer.ts rename to libs/data/questdb/src/influx-writer.ts diff --git a/libs/questdb-client/src/query-builder.ts b/libs/data/questdb/src/query-builder.ts similarity index 99% rename from libs/questdb-client/src/query-builder.ts rename to libs/data/questdb/src/query-builder.ts index 2dc1a19..7770ac0 100644 --- a/libs/questdb-client/src/query-builder.ts +++ b/libs/data/questdb/src/query-builder.ts @@ -1,9 +1,5 @@ import { getLogger } from '@stock-bot/logger'; -import type { - QueryResult, - TableNames, - TimeRange, -} from './types'; +import type { QueryResult, TableNames, TimeRange } from './types'; // Interface to avoid circular dependency interface QuestDBClientInterface { diff --git a/libs/questdb-client/src/schema.ts b/libs/data/questdb/src/schema.ts similarity index 99% rename from libs/questdb-client/src/schema.ts rename to libs/data/questdb/src/schema.ts index 281f3f5..be91b3e 100644 --- a/libs/questdb-client/src/schema.ts +++ 
b/libs/data/questdb/src/schema.ts @@ -326,7 +326,7 @@ export class QuestDBSchemaManager { // Add designated timestamp const timestampColumn = schema.columns.find(col => col.designated); if (timestampColumn) { - sql += ` timestamp(${timestampColumn.name})`; + sql += ` TIMESTAMP(${timestampColumn.name})`; } // Add partition by @@ -337,7 +337,6 @@ export class QuestDBSchemaManager { return sql; } - /** * Validate schema definition */ diff --git a/libs/questdb-client/src/types.ts b/libs/data/questdb/src/types.ts similarity index 100% rename from libs/questdb-client/src/types.ts rename to libs/data/questdb/src/types.ts diff --git a/libs/questdb-client/test/integration.test.ts b/libs/data/questdb/test/integration.test.ts similarity index 95% rename from libs/questdb-client/test/integration.test.ts rename to libs/data/questdb/test/integration.test.ts index 7960577..49a02da 100644 --- a/libs/questdb-client/test/integration.test.ts +++ b/libs/data/questdb/test/integration.test.ts @@ -7,7 +7,6 @@ import { afterEach, describe, expect, it } from 'bun:test'; import { - createQuestDBClient, QuestDBClient, QuestDBHealthMonitor, QuestDBInfluxWriter, @@ -40,9 +39,17 @@ describe('QuestDB Client Integration', () => { }); describe('Client Initialization', () => { - it('should create client with factory function', () => { - const factoryClient = createQuestDBClient(); - expect(factoryClient).toBeInstanceOf(QuestDBClient); + it('should create client with constructor', () => { + const newClient = new QuestDBClient({ + host: 'localhost', + httpPort: 9000, + pgPort: 8812, + influxPort: 9009, + database: 'questdb', + user: 'admin', + password: 'quest', + }); + expect(newClient).toBeInstanceOf(QuestDBClient); }); it('should initialize all supporting classes', () => { diff --git a/libs/questdb-client/test/setup.ts b/libs/data/questdb/test/setup.ts similarity index 100% rename from libs/questdb-client/test/setup.ts rename to libs/data/questdb/test/setup.ts diff --git 
a/libs/data/questdb/tsconfig.json b/libs/data/questdb/tsconfig.json new file mode 100644 index 0000000..75d5929 --- /dev/null +++ b/libs/data/questdb/tsconfig.json @@ -0,0 +1,10 @@ +{ + "extends": "../../../tsconfig.json", + "compilerOptions": { + "outDir": "./dist", + "rootDir": "./src", + "composite": true + }, + "include": ["src/**/*"], + "references": [{ "path": "../../core/logger" }, { "path": "../../core/types" }] +} diff --git a/libs/event-bus/tsconfig.json b/libs/event-bus/tsconfig.json deleted file mode 100644 index eae3dc0..0000000 --- a/libs/event-bus/tsconfig.json +++ /dev/null @@ -1,11 +0,0 @@ -{ - "extends": "../../tsconfig.lib.json", - "compilerOptions": { - "outDir": "./dist", - "rootDir": "./src" - }, - "include": ["src/**/*"], - "references": [ - { "path": "../logger" } - ] -} diff --git a/libs/http/README.md b/libs/http/README.md deleted file mode 100644 index 678bf15..0000000 --- a/libs/http/README.md +++ /dev/null @@ -1,283 +0,0 @@ -# HTTP Client Library - -A comprehensive HTTP client library for the Stock Bot platform with built-in support for: - -- ✅ **Fetch API** - Modern, promise-based HTTP requests -- ✅ **Proxy Support** - HTTP, HTTPS, SOCKS4, and SOCKS5 proxies -- ✅ **Rate Limiting** - Configurable request rate limiting -- ✅ **Timeout Handling** - Request timeouts with abort controllers -- ✅ **Retry Logic** - Automatic retries with exponential backoff -- ✅ **TypeScript** - Full TypeScript support with type safety -- ✅ **Logging Integration** - Optional logger integration - -## Installation - -```bash -bun add @stock-bot/http -``` - -## Basic Usage - -```typescript -import { HttpClient } from '@stock-bot/http'; - -// Create a client with default configuration -const client = new HttpClient(); - -// Make a GET request -const response = await client.get('https://api.example.com/data'); -console.log(response.data); - -// Make a POST request -const postResponse = await client.post('https://api.example.com/users', { - name: 'John Doe', - email: 
'john@example.com' -}); -``` - -## Advanced Configuration - -```typescript -import { HttpClient } from '@stock-bot/http'; -import { logger } from '@stock-bot/logger'; - -const client = new HttpClient({ - baseURL: 'https://api.example.com', - timeout: 10000, // 10 seconds - retries: 3, - retryDelay: 1000, // 1 second base delay - defaultHeaders: { - 'Authorization': 'Bearer token', - 'User-Agent': 'Stock-Bot/1.0' - }, - validateStatus: (status) => status < 400 -}, logger); -``` - -## Proxy Support - -### HTTP/HTTPS Proxy - -```typescript -const client = new HttpClient({ - proxy: { - type: 'http', - host: 'proxy.example.com', - port: 8080, - username: 'user', // optional - password: 'pass' // optional - } -}); -``` - -### SOCKS Proxy - -```typescript -const client = new HttpClient({ - proxy: { - type: 'socks5', - host: 'socks-proxy.example.com', - port: 1080, - username: 'user', // optional - password: 'pass' // optional - } -}); -``` - -## Rate Limiting - -```typescript -const client = new HttpClient({ - rateLimit: { - maxRequests: 100, // Max 100 requests - windowMs: 60 * 1000, // Per 1 minute - skipSuccessfulRequests: false, - skipFailedRequests: true // Don't count failed requests - } -}); - -// Check rate limit status -const status = client.getRateLimitStatus(); -console.log(`${status.currentCount}/${status.maxRequests} requests used`); -``` - -## Request Methods - -```typescript -// GET request -const getData = await client.get('/api/data'); - -// POST request with body -const postData = await client.post('/api/users', { - name: 'John', - email: 'john@example.com' -}); - -// PUT request -const putData = await client.put('/api/users/1', updatedUser); - -// DELETE request -const deleteData = await client.delete('/api/users/1'); - -// PATCH request -const patchData = await client.patch('/api/users/1', { name: 'Jane' }); - -// Custom request -const customResponse = await client.request({ - method: 'POST', - url: '/api/custom', - headers: { 'X-Custom': 'value' }, - 
body: { data: 'custom' }, - timeout: 5000 -}); -``` - -## Error Handling - -```typescript -import { HttpError, TimeoutError, RateLimitError } from '@stock-bot/http'; - -try { - const response = await client.get('/api/data'); -} catch (error) { - if (error instanceof TimeoutError) { - console.log('Request timed out'); - } else if (error instanceof RateLimitError) { - console.log(`Rate limited: retry after ${error.retryAfter}ms`); - } else if (error instanceof HttpError) { - console.log(`HTTP error ${error.status}: ${error.message}`); - } -} -``` - -## Retry Configuration - -```typescript -const client = new HttpClient({ - retries: 3, // Retry up to 3 times - retryDelay: 1000, // Base delay of 1 second - // Exponential backoff: 1s, 2s, 4s -}); - -// Or per-request retry configuration -const response = await client.get('/api/data', { - retries: 5, - retryDelay: 500 -}); -``` - -## Timeout Handling - -```typescript -// Global timeout -const client = new HttpClient({ - timeout: 30000 // 30 seconds -}); - -// Per-request timeout -const response = await client.get('/api/data', { - timeout: 5000 // 5 seconds for this request -}); -``` - -## Custom Status Validation - -```typescript -const client = new HttpClient({ - validateStatus: (status) => { - // Accept 2xx and 3xx status codes - return status >= 200 && status < 400; - } -}); - -// Or per-request validation -const response = await client.get('/api/data', { - validateStatus: (status) => status === 200 || status === 404 -}); -``` - -## TypeScript Support - -The library is fully typed with TypeScript: - -```typescript -interface User { - id: number; - name: string; - email: string; -} - -// Response data is properly typed -const response = await client.get('/api/users'); -const users: User[] = response.data; - -// Request configuration is validated -const config: RequestConfig = { - method: 'POST', - url: '/api/users', - body: { name: 'John' }, - timeout: 5000 -}; -``` - -## Integration with Logger - -```typescript 
-import { logger } from '@stock-bot/logger'; -import { HttpClient } from '@stock-bot/http'; - -const client = new HttpClient({ - baseURL: 'https://api.example.com' -}, logger); - -// All requests will be logged with debug/warn/error levels -``` - -## Testing - -```bash -# Run tests -bun test - -# Run with coverage -bun test --coverage - -# Watch mode -bun test --watch -``` - -## Features - -### Proxy Support -- HTTP and HTTPS proxies -- SOCKS4 and SOCKS5 proxies -- Authentication support -- Automatic agent creation - -### Rate Limiting -- Token bucket algorithm -- Configurable window and request limits -- Skip successful/failed requests options -- Real-time status monitoring - -### Retry Logic -- Exponential backoff -- Configurable retry attempts -- Smart retry conditions (5xx errors only) -- Per-request retry override - -### Error Handling -- Typed error classes -- Detailed error information -- Request/response context -- Timeout detection - -### Performance -- Built on modern Fetch API -- Minimal dependencies -- Tree-shakeable exports -- TypeScript optimization - -## License - -MIT License - see LICENSE file for details. 
diff --git a/libs/http/bunfig.toml b/libs/http/bunfig.toml deleted file mode 100644 index e69de29..0000000 diff --git a/libs/http/package.json b/libs/http/package.json deleted file mode 100644 index 08dfbd3..0000000 --- a/libs/http/package.json +++ /dev/null @@ -1,46 +0,0 @@ -{ - "name": "@stock-bot/http", - "version": "1.0.0", - "description": "HTTP client library with proxy support, rate limiting, and timeout for Stock Bot platform", - "main": "dist/index.js", - "types": "dist/index.d.ts", - "type": "module", - "scripts": { - "build": "tsc", - "test": "bun test", - "test:watch": "bun test --watch", - "test:coverage": "bun test --coverage", - "lint": "eslint src/**/*.ts", - "type-check": "tsc --noEmit", - "clean": "rimraf dist" - }, - "dependencies": { - "@stock-bot/logger": "*", - "@stock-bot/types": "*", - "axios": "^1.9.0", - "http-proxy-agent": "^7.0.2", - "https-proxy-agent": "^7.0.6", - "socks-proxy-agent": "^8.0.5", - "user-agents": "^1.1.567" - }, - "devDependencies": { - "@types/node": "^20.11.0", - "@types/user-agents": "^1.0.4", - "@typescript-eslint/eslint-plugin": "^6.19.0", - "@typescript-eslint/parser": "^6.19.0", - "bun-types": "^1.2.15", - "eslint": "^8.56.0", - "typescript": "^5.3.0" - }, - "exports": { - ".": { - "import": "./dist/index.js", - "require": "./dist/index.js", - "types": "./dist/index.d.ts" - } - }, - "files": [ - "dist", - "README.md" - ] -} diff --git a/libs/http/src/adapters/axios-adapter.ts b/libs/http/src/adapters/axios-adapter.ts deleted file mode 100644 index 42eb73a..0000000 --- a/libs/http/src/adapters/axios-adapter.ts +++ /dev/null @@ -1,58 +0,0 @@ -import axios, { type AxiosRequestConfig, type AxiosResponse } from 'axios'; -import { ProxyManager } from '../proxy-manager'; -import type { HttpResponse, RequestConfig } from '../types'; -import { HttpError } from '../types'; -import type { RequestAdapter } from './types'; - -/** - * Axios adapter for SOCKS proxies - */ -export class AxiosAdapter implements RequestAdapter { - 
canHandle(config: RequestConfig): boolean { - // Axios handles SOCKS proxies - return Boolean( - config.proxy && - typeof config.proxy !== 'string' && - (config.proxy.protocol === 'socks4' || config.proxy.protocol === 'socks5') - ); - } - - async request(config: RequestConfig, signal: AbortSignal): Promise> { - const { url, method = 'GET', headers, data, proxy } = config; - - if (!proxy || typeof proxy === 'string') { - throw new Error('Axios adapter requires ProxyInfo configuration'); - } - - // Create proxy configuration using ProxyManager - const axiosConfig: AxiosRequestConfig = { - ...ProxyManager.createAxiosConfig(proxy), - url, - method, - headers, - data, - signal, - // Don't throw on non-2xx status codes - let caller handle - validateStatus: () => true, - }; - const response: AxiosResponse = await axios(axiosConfig); - - const httpResponse: HttpResponse = { - data: response.data, - status: response.status, - headers: response.headers as Record, - ok: response.status >= 200 && response.status < 300, - }; - - // Throw HttpError for non-2xx status codes - if (!httpResponse.ok) { - throw new HttpError( - `Request failed with status ${response.status}`, - response.status, - httpResponse - ); - } - - return httpResponse; - } -} diff --git a/libs/http/src/adapters/factory.ts b/libs/http/src/adapters/factory.ts deleted file mode 100644 index c185e5c..0000000 --- a/libs/http/src/adapters/factory.ts +++ /dev/null @@ -1,28 +0,0 @@ -import type { RequestConfig } from '../types'; -import { AxiosAdapter } from './axios-adapter'; -import { FetchAdapter } from './fetch-adapter'; -import type { RequestAdapter } from './types'; - -/** - * Factory for creating the appropriate request adapter - */ -export class AdapterFactory { - private static adapters: RequestAdapter[] = [ - new AxiosAdapter(), // Check SOCKS first - new FetchAdapter(), // Fallback to fetch for everything else - ]; - - /** - * Get the appropriate adapter for the given configuration - */ - static 
getAdapter(config: RequestConfig): RequestAdapter { - for (const adapter of this.adapters) { - if (adapter.canHandle(config)) { - return adapter; - } - } - - // Fallback to fetch adapter - return new FetchAdapter(); - } -} diff --git a/libs/http/src/adapters/fetch-adapter.ts b/libs/http/src/adapters/fetch-adapter.ts deleted file mode 100644 index 2a172c9..0000000 --- a/libs/http/src/adapters/fetch-adapter.ts +++ /dev/null @@ -1,74 +0,0 @@ -import { ProxyManager } from '../proxy-manager'; -import type { HttpResponse, RequestConfig } from '../types'; -import { HttpError } from '../types'; -import type { RequestAdapter } from './types'; - -/** - * Fetch adapter for HTTP/HTTPS proxies and non-proxy requests - */ -export class FetchAdapter implements RequestAdapter { - canHandle(config: RequestConfig): boolean { - // Fetch handles non-proxy requests and HTTP/HTTPS proxies - if (typeof config.proxy === 'string') { - return config.proxy.startsWith('http'); - } - return !config.proxy || config.proxy.protocol === 'http' || config.proxy.protocol === 'https'; - } - - async request(config: RequestConfig, signal: AbortSignal): Promise> { - const { url, method = 'GET', headers, data, proxy } = config; - - // Prepare fetch options - const fetchOptions: RequestInit = { - method, - headers, - signal, - }; - - // Add body for non-GET requests - if (data && method !== 'GET') { - fetchOptions.body = typeof data === 'string' ? 
data : JSON.stringify(data); - if (typeof data === 'object') { - fetchOptions.headers = { 'Content-Type': 'application/json', ...fetchOptions.headers }; - } - } - - // Add proxy if needed (using Bun's built-in proxy support) - if (typeof proxy === 'string') { - // If proxy is a URL string, use it directly - (fetchOptions as any).proxy = proxy; - } else if (proxy) { - // If proxy is a ProxyInfo object, create a proxy URL - (fetchOptions as any).proxy = ProxyManager.createProxyUrl(proxy); - } - const response = await fetch(url, fetchOptions); - - // Parse response based on content type - let responseData: T; - const contentType = response.headers.get('content-type') || ''; - - if (contentType.includes('application/json')) { - responseData = (await response.json()) as T; - } else { - responseData = (await response.text()) as T; - } - - const httpResponse: HttpResponse = { - data: responseData, - status: response.status, - headers: Object.fromEntries(response.headers.entries()), - ok: response.ok, - }; - - // Throw HttpError for non-2xx status codes - if (!response.ok) { - throw new HttpError( - `Request failed with status ${response.status}`, - response.status, - httpResponse - ); - } - - return httpResponse; - } -} diff --git a/libs/http/src/adapters/index.ts b/libs/http/src/adapters/index.ts deleted file mode 100644 index b28aa12..0000000 --- a/libs/http/src/adapters/index.ts +++ /dev/null @@ -1,4 +0,0 @@ -export * from './types'; -export * from './fetch-adapter'; -export * from './axios-adapter'; -export * from './factory'; diff --git a/libs/http/src/adapters/types.ts b/libs/http/src/adapters/types.ts deleted file mode 100644 index f363f7f..0000000 --- a/libs/http/src/adapters/types.ts +++ /dev/null @@ -1,16 +0,0 @@ -import type { HttpResponse, RequestConfig } from '../types'; - -/** - * Request adapter interface for different HTTP implementations - */ -export interface RequestAdapter { - /** - * Execute an HTTP request - */ - request(config: RequestConfig, signal: 
AbortSignal): Promise>; - - /** - * Check if this adapter can handle the given configuration - */ - canHandle(config: RequestConfig): boolean; -} diff --git a/libs/http/src/client.ts b/libs/http/src/client.ts deleted file mode 100644 index c02382a..0000000 --- a/libs/http/src/client.ts +++ /dev/null @@ -1,183 +0,0 @@ -import type { Logger } from '@stock-bot/logger'; -import { AdapterFactory } from './adapters/index'; -import type { HttpClientConfig, HttpResponse, RequestConfig } from './types'; -import { HttpError } from './types'; -import { getRandomUserAgent } from './user-agent'; - -export class HttpClient { - private readonly config: HttpClientConfig; - private readonly logger?: Logger; - - constructor(config: HttpClientConfig = {}, logger?: Logger) { - this.config = config; - this.logger = logger?.child('http-client'); - } - - // Convenience methods - async get( - url: string, - config: Omit = {} - ): Promise> { - return this.request({ ...config, method: 'GET', url }); - } - - async post( - url: string, - data?: unknown, - config: Omit = {} - ): Promise> { - return this.request({ ...config, method: 'POST', url, data }); - } - - async put( - url: string, - data?: unknown, - config: Omit = {} - ): Promise> { - return this.request({ ...config, method: 'PUT', url, data }); - } - - async del( - url: string, - config: Omit = {} - ): Promise> { - return this.request({ ...config, method: 'DELETE', url }); - } - - async patch( - url: string, - data?: unknown, - config: Omit = {} - ): Promise> { - return this.request({ ...config, method: 'PATCH', url, data }); - } - - /** - * Main request method - clean and simple - */ - async request(config: RequestConfig): Promise> { - const finalConfig = this.mergeConfig(config); - const startTime = Date.now(); - - this.logger?.debug('Making HTTP request', { - method: finalConfig.method, - url: finalConfig.url, - hasProxy: !!finalConfig.proxy, - }); - - try { - const response = await this.executeRequest(finalConfig); - 
response.responseTime = Date.now() - startTime; - - this.logger?.debug('HTTP request successful', { - method: finalConfig.method, - url: finalConfig.url, - status: response.status, - responseTime: response.responseTime, - }); - - return response; - } catch (error) { - if (this.logger?.getServiceName() === 'proxy-tasks') { - this.logger?.debug('HTTP request failed', { - method: finalConfig.method, - url: finalConfig.url, - error: (error as Error).message, - }); - } else { - this.logger?.warn('HTTP request failed', { - method: finalConfig.method, - url: finalConfig.url, - error: (error as Error).message, - }); - } - throw error; - } - } - - /** - * Execute request with timeout handling - no race conditions - */ private async executeRequest(config: RequestConfig): Promise> { - const timeout = config.timeout ?? this.config.timeout ?? 30000; - const controller = new AbortController(); - const startTime = Date.now(); - let timeoutId: NodeJS.Timeout | undefined; - - // Set up timeout - // Create a timeout promise that will reject - const timeoutPromise = new Promise((_, reject) => { - timeoutId = setTimeout(() => { - const elapsed = Date.now() - startTime; - this.logger?.warn('Request timeout triggered', { - url: config.url, - method: config.method, - timeout, - elapsed, - }); - - // Attempt to abort (may or may not work with Bun) - controller.abort(); - - // Force rejection regardless of signal behavior - reject(new HttpError(`Request timeout after ${timeout}ms (elapsed: ${elapsed}ms)`)); - }, timeout); - }); - - try { - // Get the appropriate adapter - const adapter = AdapterFactory.getAdapter(config); - - const response = await Promise.race([ - adapter.request(config, controller.signal), - timeoutPromise, - ]); - - this.logger?.debug('Adapter request successful', { - url: config.url, - elapsedMs: Date.now() - startTime, - }); - // Clear timeout on success - clearTimeout(timeoutId); - - return response; - } catch (error) { - const elapsed = Date.now() - startTime; - 
this.logger?.debug('Adapter request failed', { - url: config.url, - elapsedMs: elapsed, - }); - clearTimeout(timeoutId); - - // Handle timeout - if (controller.signal.aborted) { - throw new HttpError(`Request timeout after ${timeout}ms`); - } - - // Re-throw other errors - if (error instanceof HttpError) { - throw error; - } - - throw new HttpError(`Request failed: ${(error as Error).message}`); - } - } - - /** - * Merge configs with defaults - */ - private mergeConfig(config: RequestConfig): RequestConfig { - // Merge headers with automatic User-Agent assignment - const mergedHeaders = { ...this.config.headers, ...config.headers }; - - // Add random User-Agent if not specified - if (!mergedHeaders['User-Agent'] && !mergedHeaders['user-agent']) { - mergedHeaders['User-Agent'] = getRandomUserAgent(); - } - - return { - ...config, - headers: mergedHeaders, - timeout: config.timeout ?? this.config.timeout, - }; - } -} diff --git a/libs/http/src/index.ts b/libs/http/src/index.ts deleted file mode 100644 index ad1daa1..0000000 --- a/libs/http/src/index.ts +++ /dev/null @@ -1,9 +0,0 @@ -// Re-export all types and classes -export * from './adapters/index'; -export * from './client'; -export * from './proxy-manager'; -export * from './types'; -export * from './user-agent'; - -// Default export -export { HttpClient as default } from './client'; diff --git a/libs/http/src/proxy-manager.ts b/libs/http/src/proxy-manager.ts deleted file mode 100644 index 451c52b..0000000 --- a/libs/http/src/proxy-manager.ts +++ /dev/null @@ -1,65 +0,0 @@ -import { AxiosRequestConfig } from 'axios'; -import { HttpProxyAgent } from 'http-proxy-agent'; -import { HttpsProxyAgent } from 'https-proxy-agent'; -import { SocksProxyAgent } from 'socks-proxy-agent'; -import type { ProxyInfo } from './types'; - -export class ProxyManager { - /** - * Determine if we should use Bun fetch (HTTP/HTTPS) or Axios (SOCKS) - */ - static shouldUseBunFetch(proxy: ProxyInfo): boolean { - return proxy.protocol === 
'http' || proxy.protocol === 'https'; - } - /** - * Create proxy URL for both Bun fetch and Axios proxy agents - */ - static createProxyUrl(proxy: ProxyInfo): string { - const { protocol, host, port, username, password } = proxy; - if (username && password) { - return `${protocol}://${encodeURIComponent(username)}:${encodeURIComponent(password)}@${host}:${port}`; - } - return `${protocol}://${host}:${port}`; - } - - /** - * Create appropriate agent for Axios based on proxy type - */ - static createProxyAgent(proxy: ProxyInfo) { - this.validateConfig(proxy); - - const proxyUrl = this.createProxyUrl(proxy); - switch (proxy.protocol) { - case 'socks4': - case 'socks5': - return new SocksProxyAgent(proxyUrl); - case 'http': - return new HttpProxyAgent(proxyUrl); - case 'https': - return new HttpsProxyAgent(proxyUrl); - default: - throw new Error(`Unsupported proxy protocol: ${proxy.protocol}`); - } - } - /** - * Create Axios instance with proxy configuration - */ - static createAxiosConfig(proxy: ProxyInfo): AxiosRequestConfig { - const agent = this.createProxyAgent(proxy); - return { - httpAgent: agent, - httpsAgent: agent, - }; - } - /** - * Simple proxy config validation - */ - static validateConfig(proxy: ProxyInfo): void { - if (!proxy.host || !proxy.port) { - throw new Error('Proxy host and port are required'); - } - if (!['http', 'https', 'socks4', 'socks5'].includes(proxy.protocol)) { - throw new Error(`Unsupported proxy protocol: ${proxy.protocol}`); - } - } -} diff --git a/libs/http/src/types.ts b/libs/http/src/types.ts deleted file mode 100644 index 30f2e09..0000000 --- a/libs/http/src/types.ts +++ /dev/null @@ -1,55 +0,0 @@ -// Minimal types for fast HTTP client -export type HttpMethod = 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'; - -export interface ProxyInfo { - source?: string; - protocol: 'http' | 'https' | 'socks4' | 'socks5'; - host: string; - port: number; - username?: string; - password?: string; - url?: string; // Full proxy URL for adapters - 
isWorking?: boolean; - responseTime?: number; - error?: string; - // Enhanced tracking properties - working?: number; // Number of successful checks - total?: number; // Total number of checks - successRate?: number; // Success rate percentage - averageResponseTime?: number; // Average response time in milliseconds - firstSeen?: Date; // When the proxy was first added to cache - lastChecked?: Date; // When the proxy was last checked -} - -export interface HttpClientConfig { - timeout?: number; - headers?: Record; -} - -export interface RequestConfig { - method?: HttpMethod; - url: string; - headers?: Record; - data?: unknown; // Changed from 'body' to 'data' for consistency - timeout?: number; - proxy?: ProxyInfo | string; // Proxy can be a ProxyInfo object or a URL string -} - -export interface HttpResponse { - data: T; - status: number; - headers: Record; - ok: boolean; - responseTime?: number; -} - -export class HttpError extends Error { - constructor( - message: string, - public status?: number, - public response?: HttpResponse - ) { - super(message); - this.name = 'HttpError'; - } -} diff --git a/libs/http/src/user-agent.ts b/libs/http/src/user-agent.ts deleted file mode 100644 index 1b25dd1..0000000 --- a/libs/http/src/user-agent.ts +++ /dev/null @@ -1,6 +0,0 @@ -import UserAgent from 'user-agents'; - -export function getRandomUserAgent(): string { - const userAgent = new UserAgent(); - return userAgent.toString(); -} diff --git a/libs/http/test/http-integration.test.ts b/libs/http/test/http-integration.test.ts deleted file mode 100644 index aad154e..0000000 --- a/libs/http/test/http-integration.test.ts +++ /dev/null @@ -1,161 +0,0 @@ -import { afterAll, beforeAll, describe, expect, test } from 'bun:test'; -import { HttpClient, HttpError } from '../src/index'; -import { MockServer } from './mock-server'; - -/** - * Integration tests for HTTP client with real network scenarios - * These tests use external services and may be affected by network conditions - */ 
- -let mockServer: MockServer; -let mockServerBaseUrl: string; - -beforeAll(async () => { - mockServer = new MockServer(); - await mockServer.start(); - mockServerBaseUrl = mockServer.getBaseUrl(); -}); - -afterAll(async () => { - await mockServer.stop(); -}); - -describe('HTTP Integration Tests', () => { - let client: HttpClient; - - beforeAll(() => { - client = new HttpClient({ - timeout: 10000, - }); - }); - - describe('Real-world scenarios', () => { - test('should handle JSON API responses', async () => { - try { - const response = await client.get('https://jsonplaceholder.typicode.com/posts/1'); - - expect(response.status).toBe(200); - expect(response.data).toHaveProperty('id'); - expect(response.data).toHaveProperty('title'); - expect(response.data).toHaveProperty('body'); - } catch (error) { - console.warn('External API test skipped due to network issues:', (error as Error).message); - } - }); - - test('should handle large responses', async () => { - try { - const response = await client.get('https://jsonplaceholder.typicode.com/posts'); - - expect(response.status).toBe(200); - expect(Array.isArray(response.data)).toBe(true); - expect(response.data.length).toBeGreaterThan(0); - } catch (error) { - console.warn( - 'Large response test skipped due to network issues:', - (error as Error).message - ); - } - }); - - test('should handle POST with JSON data', async () => { - try { - const postData = { - title: 'Integration Test Post', - body: 'This is a test post from integration tests', - userId: 1, - }; - - const response = await client.post('https://jsonplaceholder.typicode.com/posts', postData); - - expect(response.status).toBe(201); - expect(response.data).toHaveProperty('id'); - expect(response.data.title).toBe(postData.title); - } catch (error) { - console.warn( - 'POST integration test skipped due to network issues:', - (error as Error).message - ); - } - }); - }); - - describe('Error scenarios with mock server', () => { - test('should handle various HTTP 
status codes', async () => { - const successCodes = [200, 201]; - const errorCodes = [400, 401, 403, 404, 500, 503]; - - // Test success codes - for (const statusCode of successCodes) { - const response = await client.get(`${mockServerBaseUrl}/status/${statusCode}`); - expect(response.status).toBe(statusCode); - } - - // Test error codes (should throw HttpError) - for (const statusCode of errorCodes) { - await expect(client.get(`${mockServerBaseUrl}/status/${statusCode}`)).rejects.toThrow( - HttpError - ); - } - }); - - test('should handle malformed responses gracefully', async () => { - // Mock server returns valid JSON, so this test verifies our client handles it properly - const response = await client.get(`${mockServerBaseUrl}/`); - expect(response.status).toBe(200); - expect(typeof response.data).toBe('object'); - }); - - test('should handle concurrent requests', async () => { - const requests = Array.from({ length: 5 }, (_, i) => - client.get(`${mockServerBaseUrl}/`, { - headers: { 'X-Request-ID': `req-${i}` }, - }) - ); - - const responses = await Promise.all(requests); - - responses.forEach((response, index) => { - expect(response.status).toBe(200); - expect(response.data.headers).toHaveProperty('x-request-id', `req-${index}`); - }); - }); - }); - - describe('Performance and reliability', () => { - test('should handle rapid sequential requests', async () => { - const startTime = Date.now(); - const requests = []; - - for (let i = 0; i < 10; i++) { - requests.push(client.get(`${mockServerBaseUrl}/`)); - } - - const responses = await Promise.all(requests); - const endTime = Date.now(); - - expect(responses).toHaveLength(10); - responses.forEach(response => { - expect(response.status).toBe(200); - }); - - console.log(`Completed 10 requests in ${endTime - startTime}ms`); - }); - - test('should maintain connection efficiency', async () => { - const clientWithKeepAlive = new HttpClient({ - timeout: 5000, - }); - - const requests = Array.from({ length: 3 }, () => 
- clientWithKeepAlive.get(`${mockServerBaseUrl}/`) - ); - - const responses = await Promise.all(requests); - - responses.forEach(response => { - expect(response.status).toBe(200); - }); - }); - }); -}); diff --git a/libs/http/test/http.test.ts b/libs/http/test/http.test.ts deleted file mode 100644 index 34543f7..0000000 --- a/libs/http/test/http.test.ts +++ /dev/null @@ -1,155 +0,0 @@ -import { afterAll, beforeAll, beforeEach, describe, expect, test } from 'bun:test'; -import { HttpClient, HttpError, ProxyManager } from '../src/index'; -import type { ProxyInfo } from '../src/types'; -import { MockServer } from './mock-server'; - -// Global mock server instance -let mockServer: MockServer; -let mockServerBaseUrl: string; - -beforeAll(async () => { - // Start mock server for all tests - mockServer = new MockServer(); - await mockServer.start(); - mockServerBaseUrl = mockServer.getBaseUrl(); -}); - -afterAll(async () => { - // Stop mock server - await mockServer.stop(); -}); - -describe('HttpClient', () => { - let client: HttpClient; - - beforeEach(() => { - client = new HttpClient(); - }); - - describe('Basic functionality', () => { - test('should create client with default config', () => { - expect(client).toBeInstanceOf(HttpClient); - }); - - test('should make GET request', async () => { - const response = await client.get(`${mockServerBaseUrl}/`); - - expect(response.status).toBe(200); - expect(response.data).toHaveProperty('url'); - expect(response.data).toHaveProperty('method', 'GET'); - }); - - test('should make POST request with body', async () => { - const testData = { - title: 'Test Post', - body: 'Test body', - userId: 1, - }; - - const response = await client.post(`${mockServerBaseUrl}/post`, testData); - - expect(response.status).toBe(200); - expect(response.data).toHaveProperty('data'); - expect(response.data.data).toEqual(testData); - }); - - test('should handle custom headers', async () => { - const customHeaders = { - 'X-Custom-Header': 'test-value', 
- 'User-Agent': 'StockBot-HTTP-Client/1.0', - }; - - const response = await client.get(`${mockServerBaseUrl}/headers`, { - headers: customHeaders, - }); - - expect(response.status).toBe(200); - expect(response.data.headers).toHaveProperty('x-custom-header', 'test-value'); - expect(response.data.headers).toHaveProperty('user-agent', 'StockBot-HTTP-Client/1.0'); - }); - - test('should handle timeout', async () => { - const clientWithTimeout = new HttpClient({ timeout: 1 }); // 1ms timeout - - await expect(clientWithTimeout.get('https://httpbin.org/delay/1')).rejects.toThrow(); - }); - }); - describe('Error handling', () => { - test('should handle HTTP errors', async () => { - await expect(client.get(`${mockServerBaseUrl}/status/404`)).rejects.toThrow(HttpError); - }); - - test('should handle network errors gracefully', async () => { - await expect( - client.get('https://nonexistent-domain-that-will-fail-12345.test') - ).rejects.toThrow(); - }); - - test('should handle invalid URLs', async () => { - await expect(client.get('not:/a:valid/url')).rejects.toThrow(); - }); - }); - - describe('HTTP methods', () => { - test('should make PUT request', async () => { - const testData = { id: 1, name: 'Updated' }; - const response = await client.put(`${mockServerBaseUrl}/post`, testData); - expect(response.status).toBe(200); - }); - - test('should make DELETE request', async () => { - const response = await client.del(`${mockServerBaseUrl}/`); - expect(response.status).toBe(200); - expect(response.data.method).toBe('DELETE'); - }); - - test('should make PATCH request', async () => { - const testData = { name: 'Patched' }; - const response = await client.patch(`${mockServerBaseUrl}/post`, testData); - expect(response.status).toBe(200); - }); - }); -}); - -describe('ProxyManager', () => { - test('should determine when to use Bun fetch', () => { - const httpProxy: ProxyInfo = { - protocol: 'http', - host: 'proxy.example.com', - port: 8080, - }; - - const socksProxy: ProxyInfo = { - 
protocol: 'socks5', - host: 'proxy.example.com', - port: 1080, - }; - - expect(ProxyManager.shouldUseBunFetch(httpProxy)).toBe(true); - expect(ProxyManager.shouldUseBunFetch(socksProxy)).toBe(false); - }); - - test('should create proxy URL for Bun fetch', () => { - const proxy: ProxyInfo = { - protocol: 'http', - host: 'proxy.example.com', - port: 8080, - username: 'user', - password: 'pass', - }; - - const proxyUrl = ProxyManager.createProxyUrl(proxy); - expect(proxyUrl).toBe('http://user:pass@proxy.example.com:8080'); - }); - - test('should create proxy URL without credentials', () => { - const proxy: ProxyInfo = { - protocol: 'https', - host: 'proxy.example.com', - port: 8080, - }; - - const proxyUrl = ProxyManager.createProxyUrl(proxy); - expect(proxyUrl).toBe('https://proxy.example.com:8080'); - }); -}); diff --git a/libs/http/test/mock-server.test.ts b/libs/http/test/mock-server.test.ts deleted file mode 100644 index c46e7e0..0000000 --- a/libs/http/test/mock-server.test.ts +++ /dev/null @@ -1,132 +0,0 @@ -import { afterAll, beforeAll, describe, expect, test } from 'bun:test'; -import { MockServer } from './mock-server'; - -/** - * Tests for the MockServer utility - * Ensures our test infrastructure works correctly - */ - -describe('MockServer', () => { - let mockServer: MockServer; - let baseUrl: string; - - beforeAll(async () => { - mockServer = new MockServer(); - await mockServer.start(); - baseUrl = mockServer.getBaseUrl(); - }); - - afterAll(async () => { - await mockServer.stop(); - }); - - describe('Server lifecycle', () => { - test('should start and provide base URL', () => { - expect(baseUrl).toMatch(/^http:\/\/localhost:\d+$/); - expect(mockServer.getBaseUrl()).toBe(baseUrl); - }); - - test('should be reachable', async () => { - const response = await fetch(`${baseUrl}/`); - expect(response.ok).toBe(true); - }); - }); - - describe('Status endpoints', () => { - test('should return correct status codes', async () => { - const statusCodes = [200, 201, 
400, 401, 403, 404, 500, 503]; - - for (const status of statusCodes) { - const response = await fetch(`${baseUrl}/status/${status}`); - expect(response.status).toBe(status); - } - }); - }); - - describe('Headers endpoint', () => { - test('should echo request headers', async () => { - const response = await fetch(`${baseUrl}/headers`, { - headers: { - 'X-Test-Header': 'test-value', - 'User-Agent': 'MockServer-Test', - }, - }); - - expect(response.ok).toBe(true); - const data = await response.json(); - expect(data.headers).toHaveProperty('x-test-header', 'test-value'); - expect(data.headers).toHaveProperty('user-agent', 'MockServer-Test'); - }); - }); - - describe('Basic auth endpoint', () => { - test('should authenticate valid credentials', async () => { - const username = 'testuser'; - const password = 'testpass'; - const credentials = btoa(`${username}:${password}`); - - const response = await fetch(`${baseUrl}/basic-auth/${username}/${password}`, { - headers: { - Authorization: `Basic ${credentials}`, - }, - }); - - expect(response.ok).toBe(true); - const data = await response.json(); - expect(data.authenticated).toBe(true); - expect(data.user).toBe(username); - }); - - test('should reject invalid credentials', async () => { - const credentials = btoa('wrong:credentials'); - - const response = await fetch(`${baseUrl}/basic-auth/user/pass`, { - headers: { - Authorization: `Basic ${credentials}`, - }, - }); - - expect(response.status).toBe(401); - }); - - test('should reject missing auth header', async () => { - const response = await fetch(`${baseUrl}/basic-auth/user/pass`); - expect(response.status).toBe(401); - }); - }); - - describe('POST endpoint', () => { - test('should echo POST data', async () => { - const testData = { - message: 'Hello, MockServer!', - timestamp: Date.now(), - }; - - const response = await fetch(`${baseUrl}/post`, { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - }, - body: JSON.stringify(testData), - }); - - 
expect(response.ok).toBe(true); - const data = await response.json(); - expect(data.data).toEqual(testData); - expect(data.method).toBe('POST'); - expect(data.headers).toHaveProperty('content-type', 'application/json'); - }); - }); - - describe('Default endpoint', () => { - test('should return request information', async () => { - const response = await fetch(`${baseUrl}/unknown-endpoint`); - - expect(response.ok).toBe(true); - const data = await response.json(); - expect(data.url).toBe(`${baseUrl}/unknown-endpoint`); - expect(data.method).toBe('GET'); - expect(data.headers).toBeDefined(); - }); - }); -}); diff --git a/libs/http/test/mock-server.ts b/libs/http/test/mock-server.ts deleted file mode 100644 index ea8c443..0000000 --- a/libs/http/test/mock-server.ts +++ /dev/null @@ -1,116 +0,0 @@ -/** - * Mock HTTP server for testing the HTTP client - * Replaces external dependency on httpbin.org with a local server - */ -export class MockServer { - private server: ReturnType<typeof Bun.serve> | null = null; - private port: number = 0; - - /** - * Start the mock server on a random port - */ - async start(): Promise<void> { - this.server = Bun.serve({ - port: 1, // Use any available port - fetch: this.handleRequest.bind(this), - error: this.handleError.bind(this), - }); - - this.port = this.server.port || 1; - console.log(`Mock server started on port ${this.port}`); - } - - /** - * Stop the mock server - */ - async stop(): Promise<void> { - if (this.server) { - this.server.stop(true); - this.server = null; - this.port = 0; - console.log('Mock server stopped'); - } - } - - /** - * Get the base URL of the mock server - */ - getBaseUrl(): string { - if (!this.server) { - throw new Error('Server not started'); - } - return `http://localhost:${this.port}`; - } - - /** - * Handle incoming requests - */ private async handleRequest(req: Request): Promise<Response> { - const url = new URL(req.url); - const path = url.pathname; - - console.log(`Mock server handling request: ${req.method} ${path}`); - - // Status
endpoints - if (path.startsWith('/status/')) { - const status = parseInt(path.replace('/status/', ''), 10); - console.log(`Returning status: ${status}`); - return new Response(null, { status }); - } // Headers endpoint - if (path === '/headers') { - const headers = Object.fromEntries([...req.headers.entries()]); - console.log('Headers endpoint called, received headers:', headers); - return Response.json({ headers }); - } // Basic auth endpoint - if (path.startsWith('/basic-auth/')) { - const parts = path.split('/').filter(Boolean); - const expectedUsername = parts[1]; - const expectedPassword = parts[2]; - console.log( - `Basic auth endpoint called: expected user=${expectedUsername}, pass=${expectedPassword}` - ); - - const authHeader = req.headers.get('authorization'); - if (!authHeader || !authHeader.startsWith('Basic ')) { - console.log('Missing or invalid Authorization header'); - return new Response('Unauthorized', { status: 401 }); - } - - const base64Credentials = authHeader.split(' ')[1]; - const credentials = atob(base64Credentials); - const [username, password] = credentials.split(':'); - - if (username === expectedUsername && password === expectedPassword) { - return Response.json({ - authenticated: true, - user: username, - }); - } - - return new Response('Unauthorized', { status: 401 }); - } - - // Echo request body - if (path === '/post' && req.method === 'POST') { - const data = await req.json(); - return Response.json({ - data, - headers: Object.fromEntries([...req.headers.entries()]), - method: req.method, - }); - } - - // Default response - return Response.json({ - url: req.url, - method: req.method, - headers: Object.fromEntries([...req.headers.entries()]), - }); - } - - /** - * Handle errors - */ - private handleError(_error: Error): Response { - return new Response('Server error', { status: 500 }); - } -} diff --git a/libs/http/tsconfig.json b/libs/http/tsconfig.json deleted file mode 100644 index bdc180d..0000000 --- a/libs/http/tsconfig.json 
+++ /dev/null @@ -1,12 +0,0 @@ -{ - "extends": "../../tsconfig.lib.json", - "compilerOptions": { - "outDir": "./dist", - "rootDir": "./src" - }, - "include": ["src/**/*"], - "references": [ - { "path": "../logger" }, - { "path": "../types" } - ] -} diff --git a/libs/logger/tsconfig.json b/libs/logger/tsconfig.json deleted file mode 100644 index 969ce3b..0000000 --- a/libs/logger/tsconfig.json +++ /dev/null @@ -1,10 +0,0 @@ -{ - "extends": "../../tsconfig.lib.json", - "compilerOptions": { - "outDir": "./dist", - "rootDir": "./src" - }, - "include": ["src/**/*"], - "references": [ - ] -} diff --git a/libs/mongodb-client/src/factory.ts b/libs/mongodb-client/src/factory.ts deleted file mode 100644 index 8134baa..0000000 --- a/libs/mongodb-client/src/factory.ts +++ /dev/null @@ -1,20 +0,0 @@ -import { MongoDBClient } from './client'; -import type { MongoDBClientConfig } from './types'; - -/** - * Factory function to create a MongoDB client instance - */ -export function createMongoDBClient(config: MongoDBClientConfig): MongoDBClient { - return new MongoDBClient(config); -} - -/** - * Create and connect a MongoDB client - */ -export async function createAndConnectMongoDBClient( - config: MongoDBClientConfig -): Promise<MongoDBClient> { - const client = createMongoDBClient(config); - await client.connect(); - return client; -} \ No newline at end of file diff --git a/libs/mongodb-client/src/singleton.ts b/libs/mongodb-client/src/singleton.ts deleted file mode 100644 index 8bd84d5..0000000 --- a/libs/mongodb-client/src/singleton.ts +++ /dev/null @@ -1,82 +0,0 @@ -import { MongoDBClient } from './client'; -import type { MongoDBClientConfig } from './types'; -import type { Db } from 'mongodb'; - -/** - * Singleton MongoDB client instance - * Provides global access to a single MongoDB connection - */ -let instance: MongoDBClient | null = null; -let initPromise: Promise<MongoDBClient> | null = null; - -/** - * Initialize the singleton MongoDB client - */ -export async function connectMongoDB(config?:
MongoDBClientConfig): Promise<MongoDBClient> { - if (instance) { - return instance; - } - - if (initPromise) { - return initPromise; - } - - if (!config) { - throw new Error('MongoDB client not initialized. Call connectMongoDB(config) first.'); - } - - initPromise = (async () => { - const client = new MongoDBClient(config); - await client.connect(); - instance = client; - return client; - })(); - - try { - return await initPromise; - } catch (error) { - // Reset promise on error so next call can retry - initPromise = null; - throw error; - } -} - -/** - * Get the singleton MongoDB client instance - * @throws Error if not initialized - */ -export function getMongoDBClient(): MongoDBClient { - if (!instance) { - throw new Error('MongoDB client not initialized. Call connectMongoDB(config) first.'); - } - return instance; -} - -/** - * Get the MongoDB database instance - * @throws Error if not initialized - */ -export function getDatabase(): Db { - if (!instance) { - throw new Error('MongoDB client not initialized.
Call connectMongoDB(config) first.'); - } - return instance.getDatabase(); -} - -/** - * Check if the MongoDB client is initialized - */ -export function isInitialized(): boolean { - return instance !== null && instance.connected; -} - -/** - * Disconnect and reset the singleton instance - */ -export async function disconnectMongoDB(): Promise<void> { - if (instance) { - await instance.disconnect(); - instance = null; - } - initPromise = null; -} \ No newline at end of file diff --git a/libs/mongodb-client/tsconfig.json b/libs/mongodb-client/tsconfig.json deleted file mode 100644 index bdc180d..0000000 --- a/libs/mongodb-client/tsconfig.json +++ /dev/null @@ -1,12 +0,0 @@ -{ - "extends": "../../tsconfig.lib.json", - "compilerOptions": { - "outDir": "./dist", - "rootDir": "./src" - }, - "include": ["src/**/*"], - "references": [ - { "path": "../logger" }, - { "path": "../types" } - ] -} diff --git a/libs/postgres-client/src/factory.ts b/libs/postgres-client/src/factory.ts deleted file mode 100644 index 81158f6..0000000 --- a/libs/postgres-client/src/factory.ts +++ /dev/null @@ -1,25 +0,0 @@ -import { PostgreSQLClient } from './client'; -import type { PostgreSQLClientConfig, PostgreSQLConnectionOptions } from './types'; - -/** - * Factory function to create a PostgreSQL client instance - */ -export function createPostgreSQLClient( - config: PostgreSQLClientConfig, - options?: PostgreSQLConnectionOptions -): PostgreSQLClient { - return new PostgreSQLClient(config, options); -} - -/** - * Create and connect a PostgreSQL client - */ -export async function createAndConnectPostgreSQLClient( - config: PostgreSQLClientConfig, - options?: PostgreSQLConnectionOptions -): Promise<PostgreSQLClient> { - const client = createPostgreSQLClient(config, options); - await client.connect(); - return client; -} - diff --git a/libs/postgres-client/src/singleton.ts b/libs/postgres-client/src/singleton.ts deleted file mode 100644 index 3a1ded8..0000000 --- a/libs/postgres-client/src/singleton.ts +++ /dev/null @@
-1,50 +0,0 @@ -import { PostgreSQLClient } from './client'; -import type { PostgreSQLClientConfig } from './types'; - -/** - * Singleton PostgreSQL client instance - * Provides global access to a single PostgreSQL connection pool - */ -let instance: PostgreSQLClient | null = null; - -/** - * Initialize the singleton PostgreSQL client - */ -export async function connectPostgreSQL(config?: PostgreSQLClientConfig): Promise<PostgreSQLClient> { - if (!instance) { - if (!config) { - throw new Error('PostgreSQL client not initialized. Call connectPostgreSQL(config) first.'); - } - instance = new PostgreSQLClient(config); - await instance.connect(); - } - return instance; -} - -/** - * Get the singleton PostgreSQL client instance - * @throws Error if not initialized - */ -export function getPostgreSQLClient(): PostgreSQLClient { - if (!instance) { - throw new Error('PostgreSQL client not initialized. Call connectPostgreSQL(config) first.'); - } - return instance; -} - -/** - * Check if the PostgreSQL client is initialized - */ -export function isInitialized(): boolean { - return instance !== null && instance.connected; -} - -/** - * Disconnect and reset the singleton instance - */ -export async function disconnectPostgreSQL(): Promise<void> { - if (instance) { - await instance.disconnect(); - instance = null; - } -} \ No newline at end of file diff --git a/libs/postgres-client/tsconfig.json b/libs/postgres-client/tsconfig.json deleted file mode 100644 index bdc180d..0000000 --- a/libs/postgres-client/tsconfig.json +++ /dev/null @@ -1,12 +0,0 @@ -{ - "extends": "../../tsconfig.lib.json", - "compilerOptions": { - "outDir": "./dist", - "rootDir": "./src" - }, - "include": ["src/**/*"], - "references": [ - { "path": "../logger" }, - { "path": "../types" } - ] -} diff --git a/libs/questdb-client/src/factory.ts b/libs/questdb-client/src/factory.ts deleted file mode 100644 index d349810..0000000 --- a/libs/questdb-client/src/factory.ts +++ /dev/null @@ -1,24 +0,0 @@ -import { QuestDBClient } from
'./client'; -import type { QuestDBClientConfig, QuestDBConnectionOptions } from './types'; - -/** - * Factory function to create a QuestDB client instance - */ -export function createQuestDBClient( - config: QuestDBClientConfig, - options?: QuestDBConnectionOptions -): QuestDBClient { - return new QuestDBClient(config, options); -} - -/** - * Create and connect a QuestDB client - */ -export async function createAndConnectQuestDBClient( - config: QuestDBClientConfig, - options?: QuestDBConnectionOptions -): Promise<QuestDBClient> { - const client = createQuestDBClient(config, options); - await client.connect(); - return client; -} \ No newline at end of file diff --git a/libs/questdb-client/tsconfig.json b/libs/questdb-client/tsconfig.json deleted file mode 100644 index bdc180d..0000000 --- a/libs/questdb-client/tsconfig.json +++ /dev/null @@ -1,12 +0,0 @@ -{ - "extends": "../../tsconfig.lib.json", - "compilerOptions": { - "outDir": "./dist", - "rootDir": "./src" - }, - "include": ["src/**/*"], - "references": [ - { "path": "../logger" }, - { "path": "../types" } - ] -} diff --git a/libs/queue/src/handler-registry.ts b/libs/queue/src/handler-registry.ts deleted file mode 100644 index c8d0808..0000000 --- a/libs/queue/src/handler-registry.ts +++ /dev/null @@ -1,191 +0,0 @@ -import { getLogger } from '@stock-bot/logger'; -import type { JobHandler, HandlerConfig, HandlerConfigWithSchedule, ScheduledJob } from './types'; - -const logger = getLogger('handler-registry'); - -class HandlerRegistry { - private handlers = new Map<string, HandlerConfig>(); - private handlerSchedules = new Map<string, ScheduledJob[]>(); - - /** - * Register a handler with its operations (simple config) - */ - register(handlerName: string, config: HandlerConfig): void { - logger.info(`Registering handler: ${handlerName}`, { - operations: Object.keys(config), - }); - - this.handlers.set(handlerName, config); - } - - /** - * Register a handler with operations and scheduled jobs (full config) - */ - registerWithSchedule(config: HandlerConfigWithSchedule):
void { - logger.info(`Registering handler with schedule: ${config.name}`, { - operations: Object.keys(config.operations), - scheduledJobs: config.scheduledJobs?.length || 0, - }); - - this.handlers.set(config.name, config.operations); - - if (config.scheduledJobs && config.scheduledJobs.length > 0) { - this.handlerSchedules.set(config.name, config.scheduledJobs); - } - } - - /** - * Get a handler for a specific handler and operation - */ - getHandler(handler: string, operation: string): JobHandler | null { - const handlerConfig = this.handlers.get(handler); - if (!handlerConfig) { - logger.warn(`Handler not found: ${handler}`); - return null; - } - - const jobHandler = handlerConfig[operation]; - if (!jobHandler) { - logger.warn(`Operation not found: ${handler}:${operation}`, { - availableOperations: Object.keys(handlerConfig), - }); - return null; - } - - return jobHandler; - } - - /** - * Get all scheduled jobs from all handlers - */ - getAllScheduledJobs(): Array<{ handler: string; job: ScheduledJob }> { - const allJobs: Array<{ handler: string; job: ScheduledJob }> = []; - - for (const [handlerName, jobs] of this.handlerSchedules) { - for (const job of jobs) { - allJobs.push({ - handler: handlerName, - job, - }); - } - } - - return allJobs; - } - - /** - * Get scheduled jobs for a specific handler - */ - getScheduledJobs(handler: string): ScheduledJob[] { - return this.handlerSchedules.get(handler) || []; - } - - /** - * Check if a handler has scheduled jobs - */ - hasScheduledJobs(handler: string): boolean { - return this.handlerSchedules.has(handler); - } - - /** - * Get all registered handlers with their configurations - */ - getHandlerConfigs(): Array<{ name: string; operations: string[]; scheduledJobs: number }> { - return Array.from(this.handlers.keys()).map(name => ({ - name, - operations: Object.keys(this.handlers.get(name) || {}), - scheduledJobs: this.handlerSchedules.get(name)?.length || 0, - })); - } - - /** - * Get all handlers with their full 
configurations for queue manager registration - */ - getAllHandlers(): Map<string, { operations: HandlerConfig; scheduledJobs?: ScheduledJob[] }> { - const result = new Map< - string, - { operations: HandlerConfig; scheduledJobs?: ScheduledJob[] } - >(); - - for (const [name, operations] of this.handlers) { - const scheduledJobs = this.handlerSchedules.get(name); - result.set(name, { - operations, - scheduledJobs, - }); - } - - return result; - } - - /** - * Get all registered handlers - */ - getHandlers(): string[] { - return Array.from(this.handlers.keys()); - } - - /** - * Get operations for a specific handler - */ - getOperations(handler: string): string[] { - const handlerConfig = this.handlers.get(handler); - return handlerConfig ? Object.keys(handlerConfig) : []; - } - - /** - * Check if a handler exists - */ - hasHandler(handler: string): boolean { - return this.handlers.has(handler); - } - - /** - * Check if a handler has a specific operation - */ - hasOperation(handler: string, operation: string): boolean { - const handlerConfig = this.handlers.get(handler); - return handlerConfig ?
operation in handlerConfig : false; - } - - /** - * Remove a handler - */ - unregister(handler: string): boolean { - this.handlerSchedules.delete(handler); - return this.handlers.delete(handler); - } - - /** - * Clear all handlers - */ - clear(): void { - this.handlers.clear(); - this.handlerSchedules.clear(); - } - - /** - * Get registry statistics - */ - getStats(): { handlers: number; totalOperations: number; totalScheduledJobs: number } { - let totalOperations = 0; - let totalScheduledJobs = 0; - - for (const config of this.handlers.values()) { - totalOperations += Object.keys(config).length; - } - - for (const jobs of this.handlerSchedules.values()) { - totalScheduledJobs += jobs.length; - } - - return { - handlers: this.handlers.size, - totalOperations, - totalScheduledJobs, - }; - } -} - -// Export singleton instance -export const handlerRegistry = new HandlerRegistry(); \ No newline at end of file diff --git a/libs/queue/src/types.ts b/libs/queue/src/types.ts deleted file mode 100644 index be562b8..0000000 --- a/libs/queue/src/types.ts +++ /dev/null @@ -1,208 +0,0 @@ -// Types for queue operations -export interface JobData<T = unknown> { - handler: string; - operation: string; - payload: T; - priority?: number; -} - -export interface ProcessOptions { - totalDelayHours: number; - batchSize?: number; - priority?: number; - useBatching?: boolean; - retries?: number; - ttl?: number; - removeOnComplete?: number; - removeOnFail?: number; - // Job routing information - handler?: string; - operation?: string; -} - -export interface BatchResult { - jobsCreated: number; - mode: 'direct' | 'batch'; - totalItems: number; - batchesCreated?: number; - duration: number; -} - -// New improved types for the refactored architecture -export interface RedisConfig { - host: string; - port: number; - password?: string; - db?: number; -} - -export interface JobOptions { - priority?: number; - delay?: number; - attempts?: number; - removeOnComplete?: number; - removeOnFail?: number; - 
backoff?: { - type: 'exponential' | 'fixed'; - delay: number; - }; - repeat?: { - pattern?: string; - key?: string; - limit?: number; - every?: number; - immediately?: boolean; - }; -} - -export interface QueueOptions { - defaultJobOptions?: JobOptions; - workers?: number; - concurrency?: number; - enableMetrics?: boolean; - enableDLQ?: boolean; - enableRateLimit?: boolean; - rateLimitRules?: RateLimitRule[]; // Queue-specific rate limit rules -} - -export interface QueueManagerConfig { - redis: RedisConfig; - defaultQueueOptions?: QueueOptions; - enableScheduledJobs?: boolean; - globalRateLimit?: RateLimitConfig; - rateLimitRules?: RateLimitRule[]; // Global rate limit rules - delayWorkerStart?: boolean; // If true, workers won't start automatically -} - -export interface QueueStats { - waiting: number; - active: number; - completed: number; - failed: number; - delayed: number; - paused: boolean; - workers?: number; -} - -export interface GlobalStats { - queues: Record<string, QueueStats>; - totalJobs: number; - totalWorkers: number; - uptime: number; -} - -// Legacy type for backward compatibility -export interface QueueConfig extends QueueManagerConfig { - queueName?: string; - workers?: number; - concurrency?: number; - handlers?: HandlerInitializer[]; - dlqConfig?: DLQConfig; - enableMetrics?: boolean; -} - -export interface JobHandler<TPayload = unknown, TResult = unknown> { - (payload: TPayload): Promise<TResult>; -} - -// Type-safe wrapper for creating job handlers -export type TypedJobHandler<TPayload, TResult = unknown> = (payload: TPayload) => Promise<TResult>; - -// Helper to create type-safe job handlers -export function createJobHandler<TPayload, TResult = unknown>( - handler: TypedJobHandler<TPayload, TResult> -): JobHandler { - return async (payload: unknown): Promise<TResult> => { - return handler(payload as TPayload); - }; -} - -export interface ScheduledJob<T = unknown> { - type: string; - operation: string; - payload?: T; - cronPattern: string; - priority?: number; - description?: string; - immediately?: boolean; - delay?: number; -} - -export interface HandlerConfig { - [operation: string]: JobHandler; -} - -//
Type-safe handler configuration -export type TypedHandlerConfig<T extends Record<string, JobHandler> = Record<string, JobHandler>> = { - [K in keyof T]: T[K]; -}; - -export interface HandlerConfigWithSchedule { - name: string; - operations: Record<string, JobHandler>; - scheduledJobs?: ScheduledJob[]; - // Rate limiting - rateLimit?: RateLimitConfig; - operationLimits?: Record<string, RateLimitConfig>; -} - -// Type-safe version of HandlerConfigWithSchedule -export interface TypedHandlerConfigWithSchedule<T extends Record<string, JobHandler> = Record<string, JobHandler>> { - name: string; - operations: T; - scheduledJobs?: ScheduledJob[]; - // Rate limiting - rateLimit?: RateLimitConfig; - operationLimits?: Record<string, RateLimitConfig>; -} - -export interface BatchJobData { - payloadKey: string; - batchIndex: number; - totalBatches: number; - itemCount: number; - totalDelayHours: number; // Total time to distribute all batches -} - -export interface HandlerInitializer { - (): void | Promise<void>; -} - -// Rate limiting types -export interface RateLimitConfig { - points: number; - duration: number; - blockDuration?: number; -} - -export interface RateLimitRule { - level: 'global' | 'queue' | 'handler' | 'operation'; - queueName?: string; // For queue-level limits - handler?: string; // For handler-level limits - operation?: string; // For operation-level limits (most specific) - config: RateLimitConfig; -} - -// DLQ types -export interface DLQConfig { - maxRetries?: number; - retryDelay?: number; - alertThreshold?: number; - cleanupAge?: number; -} - -export interface DLQJobInfo { - id: string; - name: string; - failedReason: string; - attemptsMade: number; - timestamp: number; - data: unknown; -} - -export interface ScheduleConfig { - pattern: string; - jobName: string; - data?: unknown; - options?: JobOptions; -} diff --git a/libs/browser/package.json b/libs/services/browser/package.json similarity index 90% rename from libs/browser/package.json rename to libs/services/browser/package.json index affb57b..b961634 100644 --- a/libs/browser/package.json +++ b/libs/services/browser/package.json @@ -28,7 +28,6 @@ "typescript": "^5.0.0" },
"peerDependencies": { - "@stock-bot/logger": "workspace:*", - "@stock-bot/http": "workspace:*" + "@stock-bot/logger": "workspace:*" } } diff --git a/libs/browser/src/browser.ts b/libs/services/browser/src/browser.ts similarity index 95% rename from libs/browser/src/browser.ts rename to libs/services/browser/src/browser.ts index bc7597c..7295048 100644 --- a/libs/browser/src/browser.ts +++ b/libs/services/browser/src/browser.ts @@ -1,20 +1,22 @@ -import { BrowserContext, chromium, Page, Browser as PlaywrightBrowser } from 'playwright'; -import { getLogger } from '@stock-bot/logger'; +import { chromium } from 'playwright'; +import type { BrowserContext, Page, Browser as PlaywrightBrowser } from 'playwright'; import type { BrowserOptions, NetworkEvent, NetworkEventHandler } from './types'; -class BrowserSingleton { +export class Browser { private browser?: PlaywrightBrowser; private contexts: Map = new Map(); - private logger = getLogger('browser'); + private logger: any; private options: BrowserOptions; private initialized = false; - constructor() { + constructor(logger?: any, defaultOptions?: BrowserOptions) { + this.logger = logger || console; this.options = { headless: true, timeout: 30000, blockResources: false, enableNetworkLogging: false, + ...defaultOptions, }; } @@ -172,9 +174,11 @@ class BrowserSingleton { if (proxy) { const [protocol, rest] = proxy.split('://'); if (!rest) { - throw new Error('Invalid proxy format. Expected protocol://host:port or protocol://user:pass@host:port'); + throw new Error( + 'Invalid proxy format. Expected protocol://host:port or protocol://user:pass@host:port' + ); } - + const [auth, hostPort] = rest.includes('@') ? 
rest.split('@') : [null, rest]; const finalHostPort = hostPort || rest; const [host, port] = finalHostPort.split(':'); @@ -359,8 +363,5 @@ class BrowserSingleton { } } -// Export singleton instance -export const Browser = new BrowserSingleton(); - -// Also export the class for typing if needed -export { BrowserSingleton as BrowserClass }; +// Export default for backward compatibility +export default Browser; diff --git a/libs/services/browser/src/index.ts b/libs/services/browser/src/index.ts new file mode 100644 index 0000000..e555b6a --- /dev/null +++ b/libs/services/browser/src/index.ts @@ -0,0 +1,7 @@ +export { Browser } from './browser'; +// TODO: Update BrowserTabManager to work with non-singleton Browser +// export { BrowserTabManager } from './tab-manager'; +export type { BrowserOptions, ScrapingResult } from './types'; + +// Default export for the class +export { default as BrowserClass } from './browser'; diff --git a/libs/browser/src/tab-manager.ts b/libs/services/browser/src/tab-manager.ts.bak similarity index 100% rename from libs/browser/src/tab-manager.ts rename to libs/services/browser/src/tab-manager.ts.bak diff --git a/libs/browser/src/types.ts b/libs/services/browser/src/types.ts similarity index 100% rename from libs/browser/src/types.ts rename to libs/services/browser/src/types.ts diff --git a/libs/browser/src/utils.ts b/libs/services/browser/src/utils.ts similarity index 100% rename from libs/browser/src/utils.ts rename to libs/services/browser/src/utils.ts diff --git a/libs/services/browser/tsconfig.json b/libs/services/browser/tsconfig.json new file mode 100644 index 0000000..55c59a8 --- /dev/null +++ b/libs/services/browser/tsconfig.json @@ -0,0 +1,10 @@ +{ + "extends": "../../../tsconfig.json", + "compilerOptions": { + "outDir": "./dist", + "rootDir": "./src", + "composite": true + }, + "include": ["src/**/*"], + "references": [{ "path": "../../core/logger" }] +} diff --git a/libs/browser/turbo.json b/libs/services/browser/turbo.json 
similarity index 80% rename from libs/browser/turbo.json rename to libs/services/browser/turbo.json index 9cf45c3..76e5043 100644 --- a/libs/browser/turbo.json +++ b/libs/services/browser/turbo.json @@ -2,7 +2,7 @@ "extends": ["//"], "tasks": { "build": { - "dependsOn": ["@stock-bot/logger#build", "@stock-bot/http#build"], + "dependsOn": ["@stock-bot/logger#build"], "outputs": ["dist/**"], "inputs": [ "src/**", diff --git a/libs/services/proxy/package.json b/libs/services/proxy/package.json new file mode 100644 index 0000000..6bb7edc --- /dev/null +++ b/libs/services/proxy/package.json @@ -0,0 +1,25 @@ +{ + "name": "@stock-bot/proxy", + "version": "0.1.0", + "description": "Proxy management and synchronization services", + "main": "dist/index.js", + "types": "dist/index.d.ts", + "scripts": { + "build": "tsc", + "dev": "tsc --watch", + "clean": "rm -rf dist" + }, + "dependencies": { + "@stock-bot/logger": "workspace:*", + "@stock-bot/cache": "workspace:*" + }, + "devDependencies": { + "typescript": "^5.0.0" + }, + "exports": { + ".": { + "types": "./dist/index.d.ts", + "default": "./dist/index.js" + } + } +} diff --git a/libs/services/proxy/src/index.ts b/libs/services/proxy/src/index.ts new file mode 100644 index 0000000..e224cfc --- /dev/null +++ b/libs/services/proxy/src/index.ts @@ -0,0 +1,16 @@ +/** + * Proxy Service Library + * Centralized proxy management and synchronization + */ + +// Main classes +export { ProxyManager } from './proxy-manager'; + +// Types +export type { ProxyInfo, ProxyManagerConfig, ProxyStats, ProxySyncConfig } from './types'; + +// Note: Convenience functions removed as ProxyManager is no longer a singleton +// Create an instance and use its methods directly + +// Default export +export { ProxyManager as default } from './proxy-manager'; diff --git a/libs/services/proxy/src/proxy-manager.ts b/libs/services/proxy/src/proxy-manager.ts new file mode 100644 index 0000000..b53fdb3 --- /dev/null +++ b/libs/services/proxy/src/proxy-manager.ts 
@@ -0,0 +1,397 @@ +/** + * Centralized Proxy Manager - Handles proxy storage, retrieval, and caching + */ +import type { CacheProvider } from '@stock-bot/cache'; +import type { ProxyInfo, ProxyManagerConfig, ProxyStats } from './types'; + +export class ProxyManager { + private cache: CacheProvider; + private proxies: ProxyInfo[] = []; + private proxyIndex: number = 0; + private lastUpdate: Date | null = null; + private lastFetchTime: Date | null = null; + private isInitialized = false; + private logger: any; + private config: ProxyManagerConfig; + + constructor(cache: CacheProvider, config: ProxyManagerConfig = {}, logger?: any) { + this.cache = cache; + this.config = config; + this.logger = logger || console; + } + + /** + * Internal initialization - loads existing proxies from cache + */ + private async initializeInternal(): Promise<void> { + if (this.isInitialized) { + return; + } + + try { + this.logger.info('Initializing proxy manager...'); + + // Wait for cache to be ready + await this.cache.waitForReady(10000); // Wait up to 10 seconds + this.logger.debug('Cache is ready'); + + await this.loadFromCache(); + this.isInitialized = true; + this.logger.info('Proxy manager initialized', { + proxiesLoaded: this.proxies.length, + lastUpdate: this.lastUpdate, + }); + } catch (error) { + this.logger.error('Failed to initialize proxy manager', { error }); + this.isInitialized = true; // Set to true anyway to avoid infinite retries + } + } + + getProxy(): string | null { + if (this.proxies.length === 0) { + this.logger.warn('No proxies available in memory'); + return null; + } + + // Cycle through proxies + if (this.proxyIndex >= this.proxies.length) { + this.proxyIndex = 0; + } + + const proxyInfo = this.proxies[this.proxyIndex++]; + if (!proxyInfo) { + return null; + } + + // Build proxy URL with optional auth + let proxyUrl = `${proxyInfo.protocol}://`; + if (proxyInfo.username && proxyInfo.password) { + proxyUrl += `${proxyInfo.username}:${proxyInfo.password}@`; + } + 
proxyUrl += `${proxyInfo.host}:${proxyInfo.port}`; + + return proxyUrl; + } + /** + * Get a random working proxy from the available pool (synchronous) + */ + getRandomProxy(): ProxyInfo | null { + // Ensure initialized + if (!this.isInitialized) { + throw new Error('ProxyManager not initialized'); + } + + // Return null if no proxies available + if (this.proxies.length === 0) { + this.logger.warn('No proxies available in memory'); + return null; + } + + // Filter for working proxies (not explicitly marked as non-working) + const workingProxies = this.proxies.filter(proxy => proxy.isWorking !== false); + + if (workingProxies.length === 0) { + this.logger.warn('No working proxies available'); + return null; + } + + // Return random proxy with preference for recently successful ones + const sortedProxies = workingProxies.sort((a, b) => { + // Prefer proxies with better success rates + const aRate = a.successRate || 0; + const bRate = b.successRate || 0; + return bRate - aRate; + }); + + // Take from top 50% of best performing proxies + const topProxies = sortedProxies.slice(0, Math.max(1, Math.floor(sortedProxies.length * 0.5))); + const selectedProxy = topProxies[Math.floor(Math.random() * topProxies.length)]; + + if (!selectedProxy) { + this.logger.warn('No proxy selected from available pool'); + return null; + } + + this.logger.debug('Selected proxy', { + host: selectedProxy.host, + port: selectedProxy.port, + successRate: selectedProxy.successRate, + totalAvailable: workingProxies.length, + }); + + return selectedProxy; + } + + /** + * Get all working proxies (synchronous) + */ + getWorkingProxies(): ProxyInfo[] { + if (!this.isInitialized) { + throw new Error('ProxyManager not initialized'); + } + + return this.proxies.filter(proxy => proxy.isWorking !== false); + } + + /** + * Get all proxies (working and non-working) + */ + getAllProxies(): ProxyInfo[] { + if (!this.isInitialized) { + throw new Error('ProxyManager not initialized'); + } + + return 
[...this.proxies]; + } + + /** + * Get proxy statistics + */ + getStats(): ProxyStats { + if (!this.isInitialized) { + throw new Error('ProxyManager not initialized'); + } + + return { + total: this.proxies.length, + working: this.proxies.filter(p => p.isWorking !== false).length, + failed: this.proxies.filter(p => p.isWorking === false).length, + lastUpdate: this.lastUpdate, + }; + } + + /** + * Update the proxy pool with new proxies + */ + async updateProxies(proxies: ProxyInfo[]): Promise<void> { + // Ensure manager is initialized before updating + if (!this.isInitialized) { + await this.initializeInternal(); + } + + try { + this.logger.info('Updating proxy pool', { + newCount: proxies.length, + existingCount: this.proxies.length, + }); + + this.proxies = proxies; + this.lastUpdate = new Date(); + + // Store to cache (keys will be prefixed with cache:proxy: automatically) + await this.cache.set('active', proxies); + await this.cache.set('last-update', this.lastUpdate.toISOString()); + + const workingCount = proxies.filter(p => p.isWorking !== false).length; + this.logger.info('Proxy pool updated successfully', { + totalProxies: proxies.length, + workingProxies: workingCount, + lastUpdate: this.lastUpdate, + }); + } catch (error) { + this.logger.error('Failed to update proxy pool', { error }); + throw error; + } + } + + /** + * Add or update a single proxy in the pool + */ + async updateProxy(proxy: ProxyInfo): Promise<void> { + const existingIndex = this.proxies.findIndex( + p => p.host === proxy.host && p.port === proxy.port && p.protocol === proxy.protocol + ); + + if (existingIndex >= 0) { + this.proxies[existingIndex] = { ...this.proxies[existingIndex], ...proxy }; + this.logger.debug('Updated existing proxy', { host: proxy.host, port: proxy.port }); + } else { + this.proxies.push(proxy); + this.logger.debug('Added new proxy', { host: proxy.host, port: proxy.port }); + } + + // Update cache + await this.updateProxies(this.proxies); + } + + /** + * Remove a proxy from 
the pool + */ + async removeProxy(host: string, port: number, protocol: string): Promise<void> { + const initialLength = this.proxies.length; + this.proxies = this.proxies.filter( + p => !(p.host === host && p.port === port && p.protocol === protocol) + ); + + if (this.proxies.length < initialLength) { + await this.updateProxies(this.proxies); + this.logger.debug('Removed proxy', { host, port, protocol }); + } + } + + /** + * Clear all proxies from memory and cache + */ + async clearProxies(): Promise<void> { + this.proxies = []; + this.lastUpdate = null; + + await this.cache.del('active'); + await this.cache.del('last-update'); + + this.logger.info('Cleared all proxies'); + } + + /** + * Check if proxy manager is ready + */ + isReady(): boolean { + return this.isInitialized; + } + + /** + * Load proxies from cache storage + */ + private async loadFromCache(): Promise<void> { + try { + const cachedProxies = await this.cache.get('active'); + const lastUpdateStr = await this.cache.get('last-update'); + + if (cachedProxies && Array.isArray(cachedProxies)) { + this.proxies = cachedProxies; + this.lastUpdate = lastUpdateStr ? 
new Date(lastUpdateStr) : null; + + this.logger.debug('Loaded proxies from cache', { + count: this.proxies.length, + lastUpdate: this.lastUpdate, + }); + } else { + this.logger.debug('No cached proxies found'); + } + } catch (error) { + this.logger.error('Failed to load proxies from cache', { error }); + } + } + + /** + * Fetch proxies from WebShare API + */ + private async fetchWebShareProxies(): Promise<ProxyInfo[]> { + if (!this.config.webshare) { + throw new Error('WebShare configuration not provided'); + } + + const { apiKey, apiUrl } = this.config.webshare; + + this.logger.info('Fetching proxies from WebShare API', { apiUrl }); + + try { + const response = await fetch( + `${apiUrl}proxy/list/?mode=direct&page=1&page_size=100`, + { + method: 'GET', + headers: { + Authorization: `Token ${apiKey}`, + 'Content-Type': 'application/json', + }, + signal: AbortSignal.timeout(10000), // 10 second timeout + } + ); + + if (!response.ok) { + throw new Error(`WebShare API request failed: ${response.status} ${response.statusText}`); + } + + const data = await response.json(); + + if (!data.results || !Array.isArray(data.results)) { + throw new Error('Invalid response format from WebShare API'); + } + + // Transform proxy data to ProxyInfo format + const proxies: ProxyInfo[] = data.results.map( + (proxy: { username: string; password: string; proxy_address: string; port: number }) => ({ + source: 'webshare', + protocol: 'http' as const, + host: proxy.proxy_address, + port: proxy.port, + username: proxy.username, + password: proxy.password, + isWorking: true, // WebShare provides working proxies + firstSeen: new Date(), + lastChecked: new Date(), + }) + ); + + this.logger.info('Successfully fetched proxies from WebShare', { + count: proxies.length, + total: data.count || proxies.length, + }); + + this.lastFetchTime = new Date(); + return proxies; + } catch (error) { + this.logger.error('Failed to fetch proxies from WebShare', { error }); + throw error; + } + } + + /** + * Refresh proxies 
from WebShare (public method for manual refresh) + */ + async refreshProxies(): Promise<void> { + if (!this.config.enabled || !this.config.webshare) { + this.logger.warn('Proxy refresh called but WebShare is not configured'); + return; + } + + try { + const proxies = await this.fetchWebShareProxies(); + await this.updateProxies(proxies); + } catch (error) { + this.logger.error('Failed to refresh proxies', { error }); + throw error; + } + } + + /** + * Get the last time proxies were fetched from WebShare + */ + getLastFetchTime(): Date | null { + return this.lastFetchTime; + } + + /** + * Initialize the proxy manager + */ + async initialize(): Promise<void> { + await this.initializeInternal(); + + // Fetch proxies on startup if enabled + if (this.config.enabled && this.config.webshare) { + this.logger.info('Proxy fetching is enabled, fetching proxies from WebShare...'); + + try { + const proxies = await this.fetchWebShareProxies(); + if (proxies.length === 0) { + throw new Error('No proxies fetched from WebShare API'); + } + + await this.updateProxies(proxies); + this.logger.info('ProxyManager initialized with fresh proxies', { + count: proxies.length, + lastFetchTime: this.lastFetchTime, + }); + } catch (error) { + // If proxy fetching is enabled but fails, the service should not start + this.logger.error('Failed to fetch proxies during initialization', { error }); + throw new Error(`ProxyManager initialization failed: ${error instanceof Error ? 
error.message : 'Unknown error'}`); + } + } else { + this.logger.info('ProxyManager initialized without fetching proxies (disabled or not configured)'); + } + } +} + +// Export the class as default +export default ProxyManager; diff --git a/libs/services/proxy/src/types.ts b/libs/services/proxy/src/types.ts new file mode 100644 index 0000000..52e3339 --- /dev/null +++ b/libs/services/proxy/src/types.ts @@ -0,0 +1,47 @@ +/** + * Proxy service types and interfaces + */ + +export interface ProxyInfo { + host: string; + port: number; + protocol: 'http' | 'https'; // Simplified to only support HTTP/HTTPS + username?: string; + password?: string; + isWorking?: boolean; + successRate?: number; + lastChecked?: Date; + lastUsed?: Date; + responseTime?: number; + source?: string; + country?: string; + error?: string; + // Tracking properties + working?: number; // Number of successful checks + total?: number; // Total number of checks + averageResponseTime?: number; // Average response time in milliseconds + firstSeen?: Date; // When the proxy was first added +} + +export interface ProxyManagerConfig { + enabled?: boolean; + cachePrefix?: string; + ttl?: number; + enableMetrics?: boolean; + webshare?: { + apiKey: string; + apiUrl: string; + }; +} + +export interface ProxySyncConfig { + intervalMs?: number; + enableAutoSync?: boolean; +} + +export interface ProxyStats { + total: number; + working: number; + failed: number; + lastUpdate: Date | null; +} diff --git a/libs/services/proxy/tsconfig.json b/libs/services/proxy/tsconfig.json new file mode 100644 index 0000000..0c67432 --- /dev/null +++ b/libs/services/proxy/tsconfig.json @@ -0,0 +1,12 @@ +{ + "extends": "../../../tsconfig.json", + "compilerOptions": { + "outDir": "./dist", + "rootDir": "./src", + "declaration": true, + "declarationMap": true, + "sourceMap": true + }, + "include": ["src/**/*"], + "exclude": ["dist", "node_modules"] +} diff --git a/libs/shutdown/tsconfig.json b/libs/shutdown/tsconfig.json deleted file 
mode 100644 index 969ce3b..0000000 --- a/libs/shutdown/tsconfig.json +++ /dev/null @@ -1,10 +0,0 @@ -{ - "extends": "../../tsconfig.lib.json", - "compilerOptions": { - "outDir": "./dist", - "rootDir": "./src" - }, - "include": ["src/**/*"], - "references": [ - ] -} diff --git a/libs/types/tsconfig.json b/libs/types/tsconfig.json deleted file mode 100644 index 969ce3b..0000000 --- a/libs/types/tsconfig.json +++ /dev/null @@ -1,10 +0,0 @@ -{ - "extends": "../../tsconfig.lib.json", - "compilerOptions": { - "outDir": "./dist", - "rootDir": "./src" - }, - "include": ["src/**/*"], - "references": [ - ] -} diff --git a/libs/utils/package.json b/libs/utils/package.json index 2cf648b..e11b506 100644 --- a/libs/utils/package.json +++ b/libs/utils/package.json @@ -2,32 +2,22 @@ "name": "@stock-bot/utils", "version": "1.0.0", "description": "Common utility functions for stock-bot services", - "main": "dist/index.js", - "types": "dist/index.d.ts", - "type": "module", + "main": "./src/index.ts", + "types": "./src/index.ts", "scripts": { "build": "tsc", "clean": "rimraf dist", "test": "bun test" }, "dependencies": { - "@stock-bot/types": "*", - "date-fns": "^2.30.0" + "@stock-bot/config": "workspace:*", + "@stock-bot/logger": "workspace:*", + "@stock-bot/cache": "workspace:*", + "@stock-bot/types": "workspace:*" }, "devDependencies": { "@types/node": "^20.11.0", "typescript": "^5.3.0", "bun-types": "^1.2.15" - }, - "exports": { - ".": { - "import": "./dist/index.js", - "require": "./dist/index.js", - "types": "./dist/index.d.ts" - } - }, - "files": [ - "dist", - "README.md" - ] + } } diff --git a/libs/utils/src/calculations/index.ts b/libs/utils/src/calculations/index.ts index e24dd5f..f45845b 100644 --- a/libs/utils/src/calculations/index.ts +++ b/libs/utils/src/calculations/index.ts @@ -37,25 +37,25 @@ export type { HasClose, HasOHLC, HasVolume, - HasTimestamp + HasTimestamp, } from '@stock-bot/types'; // Export working calculation functions export * from 
'./basic-calculations'; // Export working technical indicators (building one by one) -export { - sma, - ema, - rsi, - macd, - bollingerBands, - atr, - obv, - stochastic, - williamsR, - cci, - mfi, +export { + sma, + ema, + rsi, + macd, + bollingerBands, + atr, + obv, + stochastic, + williamsR, + cci, + mfi, vwma, momentum, roc, @@ -80,36 +80,25 @@ export { balanceOfPower, trix, massIndex, - coppockCurve + coppockCurve, } from './technical-indicators'; export * from './risk-metrics'; -// export * from './portfolio-analytics'; -// export * from './options-pricing'; -// export * from './position-sizing'; export * from './performance-metrics'; -// export * from './market-statistics'; -// export * from './volatility-models'; -// export * from './correlation-analysis'; -// TODO: Re-enable when performance-metrics and risk-metrics are fixed -// // Convenience function for comprehensive portfolio analysis -// export function analyzePortfolio( -// returns: number[], -// equityCurve: Array<{ value: number; date: Date }>, -// benchmarkReturns?: number[], -// riskFreeRate: number = 0.02 -// ): { -// performance: PortfolioAnalysis; -// risk: RiskMetrics; -// trades?: any; -// drawdown?: any; -// } { -// const performance = calculateStrategyMetrics(equityCurve, benchmarkReturns, riskFreeRate); -// const equityValues = equityCurve.map(point => point.value); -// const risk = calculateRiskMetrics(returns, equityValues, benchmarkReturns, riskFreeRate); - -// return { -// performance, -// risk, -// }; -// } +// Convenience function for comprehensive portfolio analysis +export function analyzePortfolio( + _returns: number[], + _equityCurve: Array<{ value: number; date: Date }>, + _benchmarkReturns?: number[], + _riskFreeRate: number = 0.02 +): { + performance: any; + risk: any; +} { + // Note: Implementation depends on performance-metrics and risk-metrics + // This is a placeholder for the full implementation + return { + performance: {}, + risk: {}, + }; +} diff --git 
a/libs/utils/src/calculations/performance-metrics.ts b/libs/utils/src/calculations/performance-metrics.ts index 140a043..f2fe6e3 100644 --- a/libs/utils/src/calculations/performance-metrics.ts +++ b/libs/utils/src/calculations/performance-metrics.ts @@ -1,3 +1,5 @@ +import { ulcerIndex } from './risk-metrics'; + /** * Performance Metrics and Analysis * Comprehensive performance measurement tools for trading strategies and portfolios @@ -18,7 +20,6 @@ export interface PortfolioMetrics { alpha: number; volatility: number; } -import { ulcerIndex } from './risk-metrics'; export interface TradePerformance { totalTrades: number; @@ -156,8 +157,10 @@ export function analyzeDrawdowns( } const first = equityCurve[0]; - if (!first) {return { maxDrawdown: 0, maxDrawdownDuration: 0, averageDrawdown: 0, drawdownPeriods: [] };} - + if (!first) { + return { maxDrawdown: 0, maxDrawdownDuration: 0, averageDrawdown: 0, drawdownPeriods: [] }; + } + let peak = first.value; let peakDate = first.date; let maxDrawdown = 0; @@ -175,18 +178,21 @@ export function analyzeDrawdowns( for (let i = 1; i < equityCurve.length; i++) { const current = equityCurve[i]; - if (!current) {continue;} + if (!current) { + continue; + } if (current.value > peak) { // New peak - end any current drawdown if (currentDrawdownStart) { const prev = equityCurve[i - 1]; - if (!prev) {continue;} - + if (!prev) { + continue; + } + const drawdownMagnitude = (peak - prev.value) / peak; const duration = Math.floor( - (prev.date.getTime() - currentDrawdownStart.getTime()) / - (1000 * 60 * 60 * 24) + (prev.date.getTime() - currentDrawdownStart.getTime()) / (1000 * 60 * 60 * 24) ); drawdownPeriods.push({ @@ -217,8 +223,10 @@ export function analyzeDrawdowns( // Handle ongoing drawdown if (currentDrawdownStart) { const lastPoint = equityCurve[equityCurve.length - 1]; - if (!lastPoint) {return { maxDrawdown, maxDrawdownDuration, averageDrawdown: 0, drawdownPeriods };} - + if (!lastPoint) { + return { maxDrawdown, 
maxDrawdownDuration, averageDrawdown: 0, drawdownPeriods }; + } + const drawdownMagnitude = (peak - lastPoint.value) / peak; const duration = Math.floor( (lastPoint.date.getTime() - currentDrawdownStart.getTime()) / (1000 * 60 * 60 * 24) @@ -378,8 +386,10 @@ export function strategyPerformanceAttribution( for (let i = 0; i < sectorWeights.length; i++) { const portfolioWeight = sectorWeights[i]; const sectorReturn = sectorReturns[i]; - if (portfolioWeight === undefined || sectorReturn === undefined) {continue;} - + if (portfolioWeight === undefined || sectorReturn === undefined) { + continue; + } + const benchmarkWeight = 1 / sectorWeights.length; // Assuming equal benchmark weights // Allocation effect: (portfolio weight - benchmark weight) * (benchmark sector return - benchmark return) @@ -483,16 +493,31 @@ export function calculateStrategyMetrics( for (let i = 1; i < equityCurve.length; i++) { const current = equityCurve[i]; const previous = equityCurve[i - 1]; - if (!current || !previous) {continue;} - + if (!current || !previous) { + continue; + } + const ret = (current.value - previous.value) / previous.value; returns.push(ret); } const lastPoint = equityCurve[equityCurve.length - 1]; const firstPoint = equityCurve[0]; - if (!lastPoint || !firstPoint) {return { totalValue: 0, totalReturn: 0, totalReturnPercent: 0, dailyReturn: 0, dailyReturnPercent: 0, maxDrawdown: 0, sharpeRatio: 0, beta: 0, alpha: 0, volatility: 0 };} - + if (!lastPoint || !firstPoint) { + return { + totalValue: 0, + totalReturn: 0, + totalReturnPercent: 0, + dailyReturn: 0, + dailyReturnPercent: 0, + maxDrawdown: 0, + sharpeRatio: 0, + beta: 0, + alpha: 0, + volatility: 0, + }; + } + const totalValue = lastPoint.value; const totalReturn = totalValue - firstPoint.value; const totalReturnPercent = (totalReturn / firstPoint.value) * 100; @@ -562,12 +587,10 @@ export function informationRatio(portfolioReturns: number[], benchmarkReturns: n throw new Error('Portfolio and benchmark returns must 
have the same length.'); } - const excessReturns = portfolioReturns.map( - (portfolioReturn, index) => { - const benchmark = benchmarkReturns[index]; - return benchmark !== undefined ? portfolioReturn - benchmark : 0; - } - ); + const excessReturns = portfolioReturns.map((portfolioReturn, index) => { + const benchmark = benchmarkReturns[index]; + return benchmark !== undefined ? portfolioReturn - benchmark : 0; + }); const trackingError = calculateVolatility(excessReturns); const avgExcessReturn = excessReturns.reduce((sum, ret) => sum + ret, 0) / excessReturns.length; @@ -602,8 +625,10 @@ export function captureRatio( for (let i = 0; i < portfolioReturns.length; i++) { const benchmarkReturn = benchmarkReturns[i]; const portfolioReturn = portfolioReturns[i]; - if (benchmarkReturn === undefined || portfolioReturn === undefined) {continue;} - + if (benchmarkReturn === undefined || portfolioReturn === undefined) { + continue; + } + if (benchmarkReturn > 0) { upCapture += portfolioReturn; upMarketPeriods++; @@ -733,17 +758,21 @@ export function timeWeightedRateOfReturn( if (cashFlows.length < 2) { return 0; } - + const first = cashFlows[0]; - if (!first) {return 0;} - + if (!first) { + return 0; + } + let totalReturn = 1; let previousValue = first.value; for (let i = 1; i < cashFlows.length; i++) { const current = cashFlows[i]; - if (!current) {continue;} - + if (!current) { + continue; + } + const periodReturn = (current.value - previousValue - current.amount) / (previousValue + current.amount); totalReturn *= 1 + periodReturn; @@ -762,10 +791,12 @@ export function moneyWeightedRateOfReturn( if (cashFlows.length === 0) { return 0; } - + const first = cashFlows[0]; - if (!first) {return 0;} - + if (!first) { + return 0; + } + // Approximate MWRR using Internal Rate of Return (IRR) // This requires a numerical method or library for accurate IRR calculation // This is a simplified example and may not be accurate for all cases @@ -826,8 +857,10 @@ function 
calculateBeta(portfolioReturns: number[], marketReturns: number[]): num for (let i = 0; i < portfolioReturns.length; i++) { const portfolioReturn = portfolioReturns[i]; const marketReturn = marketReturns[i]; - if (portfolioReturn === undefined || marketReturn === undefined) {continue;} - + if (portfolioReturn === undefined || marketReturn === undefined) { + continue; + } + const portfolioDiff = portfolioReturn - portfolioMean; const marketDiff = marketReturn - marketMean; diff --git a/libs/utils/src/calculations/risk-metrics.ts b/libs/utils/src/calculations/risk-metrics.ts index 80cd20d..97daf82 100644 --- a/libs/utils/src/calculations/risk-metrics.ts +++ b/libs/utils/src/calculations/risk-metrics.ts @@ -71,14 +71,18 @@ export function maxDrawdown(equityCurve: number[]): number { let maxDD = 0; const first = equityCurve[0]; - if (first === undefined) {return 0;} - + if (first === undefined) { + return 0; + } + let peak = first; for (let i = 1; i < equityCurve.length; i++) { const current = equityCurve[i]; - if (current === undefined) {continue;} - + if (current === undefined) { + continue; + } + if (current > peak) { peak = current; } else { @@ -150,8 +154,10 @@ export function beta(portfolioReturns: number[], marketReturns: number[]): numbe for (let i = 0; i < n; i++) { const portfolioReturn = portfolioReturns[i]; const marketReturn = marketReturns[i]; - if (portfolioReturn === undefined || marketReturn === undefined) {continue;} - + if (portfolioReturn === undefined || marketReturn === undefined) { + continue; + } + const portfolioDiff = portfolioReturn - portfolioMean; const marketDiff = marketReturn - marketMean; @@ -187,12 +193,13 @@ export function treynorRatio( riskFreeRate: number = 0 ): number { const portfolioBeta = beta(portfolioReturns, marketReturns); - + if (portfolioBeta === 0) { return 0; } - - const portfolioMean = portfolioReturns.reduce((sum, ret) => sum + ret, 0) / portfolioReturns.length; + + const portfolioMean = + 
portfolioReturns.reduce((sum, ret) => sum + ret, 0) / portfolioReturns.length; return (portfolioMean - riskFreeRate) / portfolioBeta; } @@ -412,7 +419,9 @@ export function riskContribution( for (let i = 0; i < n; i++) { let marginalContribution = 0; const row = covarianceMatrix[i]; - if (!row) {continue;} + if (!row) { + continue; + } for (let j = 0; j < n; j++) { const weight = weights[j]; @@ -442,8 +451,10 @@ export function ulcerIndex(equityCurve: Array<{ value: number; date: Date }>): n let sumSquaredDrawdown = 0; const first = equityCurve[0]; - if (!first) {return 0;} - + if (!first) { + return 0; + } + let peak = first.value; for (const point of equityCurve) { diff --git a/libs/utils/src/calculations/technical-indicators.ts b/libs/utils/src/calculations/technical-indicators.ts index d45fec9..7f78776 100644 --- a/libs/utils/src/calculations/technical-indicators.ts +++ b/libs/utils/src/calculations/technical-indicators.ts @@ -540,7 +540,9 @@ export function adx( for (let i = 1; i < ohlcv.length; i++) { const current = ohlcv[i]; const previous = ohlcv[i - 1]; - if (!current || !previous) {continue;} + if (!current || !previous) { + continue; + } // True Range const tr = Math.max( @@ -575,8 +577,10 @@ export function adx( const atr = atrValues[i]; const plusDMSmoothed = smoothedPlusDM[i]; const minusDMSmoothed = smoothedMinusDM[i]; - if (atr === undefined || plusDMSmoothed === undefined || minusDMSmoothed === undefined) {continue;} - + if (atr === undefined || plusDMSmoothed === undefined || minusDMSmoothed === undefined) { + continue; + } + const diPlus = atr > 0 ? (plusDMSmoothed / atr) * 100 : 0; const diMinus = atr > 0 ? 
(minusDMSmoothed / atr) * 100 : 0; @@ -602,17 +606,15 @@ export function adx( /** * Parabolic SAR */ -export function parabolicSAR( - ohlcv: OHLCV[], - step: number = 0.02, - maxStep: number = 0.2 -): number[] { +export function parabolicSAR(ohlcv: OHLCV[], step: number = 0.02, maxStep: number = 0.2): number[] { if (ohlcv.length < 2) { return []; } const first = ohlcv[0]; - if (!first) {return [];} + if (!first) { + return []; + } const result: number[] = []; let trend = 1; // 1 for uptrend, -1 for downtrend @@ -625,7 +627,9 @@ export function parabolicSAR( for (let i = 1; i < ohlcv.length; i++) { const curr = ohlcv[i]; const prev = ohlcv[i - 1]; - if (!curr || !prev) {continue;} + if (!curr || !prev) { + continue; + } // Calculate new SAR sar = sar + acceleration * (extremePoint - sar); @@ -834,32 +838,37 @@ export function ultimateOscillator( // Calculate BP and TR for (let i = 0; i < ohlcv.length; i++) { const current = ohlcv[i]!; - + if (i === 0) { bp.push(current.close - current.low); tr.push(current.high - current.low); } else { const previous = ohlcv[i - 1]!; bp.push(current.close - Math.min(current.low, previous.close)); - tr.push(Math.max( - current.high - current.low, - Math.abs(current.high - previous.close), - Math.abs(current.low - previous.close) - )); + tr.push( + Math.max( + current.high - current.low, + Math.abs(current.high - previous.close), + Math.abs(current.low - previous.close) + ) + ); } } const result: number[] = []; for (let i = Math.max(period1, period2, period3) - 1; i < ohlcv.length; i++) { - const avg1 = bp.slice(i - period1 + 1, i + 1).reduce((a, b) => a + b, 0) / - tr.slice(i - period1 + 1, i + 1).reduce((a, b) => a + b, 0); - const avg2 = bp.slice(i - period2 + 1, i + 1).reduce((a, b) => a + b, 0) / - tr.slice(i - period2 + 1, i + 1).reduce((a, b) => a + b, 0); - const avg3 = bp.slice(i - period3 + 1, i + 1).reduce((a, b) => a + b, 0) / - tr.slice(i - period3 + 1, i + 1).reduce((a, b) => a + b, 0); + const avg1 = + bp.slice(i - 
period1 + 1, i + 1).reduce((a, b) => a + b, 0) / + tr.slice(i - period1 + 1, i + 1).reduce((a, b) => a + b, 0); + const avg2 = + bp.slice(i - period2 + 1, i + 1).reduce((a, b) => a + b, 0) / + tr.slice(i - period2 + 1, i + 1).reduce((a, b) => a + b, 0); + const avg3 = + bp.slice(i - period3 + 1, i + 1).reduce((a, b) => a + b, 0) / + tr.slice(i - period3 + 1, i + 1).reduce((a, b) => a + b, 0); - const uo = 100 * ((4 * avg1) + (2 * avg2) + avg3) / (4 + 2 + 1); + const uo = (100 * (4 * avg1 + 2 * avg2 + avg3)) / (4 + 2 + 1); result.push(uo); } @@ -880,7 +889,7 @@ export function easeOfMovement(ohlcv: OHLCV[], period: number = 14): number[] { const current = ohlcv[i]!; const previous = ohlcv[i - 1]!; - const distance = ((current.high + current.low) / 2) - ((previous.high + previous.low) / 2); + const distance = (current.high + current.low) / 2 - (previous.high + previous.low) / 2; const boxHeight = current.high - current.low; const volume = current.volume; @@ -1028,7 +1037,14 @@ export function klingerVolumeOscillator( const prevTypicalPrice = (previous.high + previous.low + previous.close) / 3; const trend = typicalPrice > prevTypicalPrice ? 
1 : -1; - const vf = current.volume * trend * Math.abs((2 * ((current.close - current.low) - (current.high - current.close))) / (current.high - current.low)) * 100; + const vf = + current.volume * + trend * + Math.abs( + (2 * (current.close - current.low - (current.high - current.close))) / + (current.high - current.low) + ) * + 100; volumeForce.push(vf); } @@ -1137,7 +1153,7 @@ export function stochasticRSI( smoothD: number = 3 ): { k: number[]; d: number[] } { const rsiValues = rsi(prices, rsiPeriod); - + if (rsiValues.length < stochPeriod) { return { k: [], d: [] }; } @@ -1266,17 +1282,17 @@ export function massIndex(ohlcv: OHLCV[], period: number = 25): number[] { // Calculate high-low ranges const ranges = ohlcv.map(candle => candle.high - candle.low); - + // Calculate 9-period EMA of ranges const ema9 = ema(ranges, 9); - + // Calculate 9-period EMA of the EMA (double smoothing) const emaEma9 = ema(ema9, 9); // Calculate ratio const ratios: number[] = []; const minLength = Math.min(ema9.length, emaEma9.length); - + for (let i = 0; i < minLength; i++) { const singleEMA = ema9[i]; const doubleEMA = emaEma9[i]; @@ -1299,9 +1315,9 @@ export function massIndex(ohlcv: OHLCV[], period: number = 25): number[] { * Coppock Curve */ export function coppockCurve( - prices: number[], - shortROC: number = 11, - longROC: number = 14, + prices: number[], + shortROC: number = 11, + longROC: number = 14, wma: number = 10 ): number[] { const roc1 = roc(prices, shortROC); diff --git a/libs/utils/src/fetch.ts b/libs/utils/src/fetch.ts new file mode 100644 index 0000000..f446c58 --- /dev/null +++ b/libs/utils/src/fetch.ts @@ -0,0 +1,94 @@ +/** + * Enhanced fetch wrapper with proxy support and automatic debug logging + * Drop-in replacement for native fetch with additional features + */ + +export interface BunRequestInit extends RequestInit { + proxy?: string; +} + +export interface FetchOptions extends RequestInit { + logger?: any; + proxy?: string | null; + timeout?: number; +} + 
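The `fetch` wrapper defined below normalizes its `string | URL | Request` input to a plain URL string before logging and dispatch. As a stand-alone sketch of that normalization (the helper name `resolveUrl` and the `RequestLike` type are mine, not from the diff):

```typescript
// RequestLike models the `.url` property that both DOM and Bun Request objects expose.
type RequestLike = { url: string };

// Mirrors the wrapper's ternary:
// typeof input === 'string' ? input : input instanceof URL ? input.href : input.url
function resolveUrl(input: string | URL | RequestLike): string {
  if (typeof input === 'string') {
    return input; // already a URL string
  }
  if (input instanceof URL) {
    return input.href; // serialize URL objects
  }
  return input.url; // Request-like: use its resolved URL
}
```

Callers of the wrapper would then pass options such as `{ timeout: 5000, proxy: 'http://host:port' }` via `FetchOptions`; as the code below shows, `timeout` is wired to an `AbortController` and `proxy` is forwarded through Bun's `proxy` fetch option.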
+export async function fetch(input: RequestInfo | URL, options?: FetchOptions): Promise<Response> { + const logger = options?.logger || console; + const url = + typeof input === 'string' ? input : input instanceof URL ? input.href : (input as Request).url; + + // Build request options + const requestOptions: RequestInit = { + method: options?.method || 'GET', + headers: options?.headers || {}, + body: options?.body, + signal: options?.signal, + credentials: options?.credentials, + cache: options?.cache, + redirect: options?.redirect, + referrer: options?.referrer, + referrerPolicy: options?.referrerPolicy, + integrity: options?.integrity, + keepalive: options?.keepalive, + mode: options?.mode, + }; + // Handle proxy for Bun + if (options?.proxy) { + // Bun supports proxy via fetch options + (requestOptions as BunRequestInit).proxy = options.proxy; + } + + // Handle timeout + if (options?.timeout) { + const controller = new AbortController(); + const timeoutId = setTimeout(() => controller.abort(), options.timeout); + requestOptions.signal = controller.signal; + + try { + const response = await performFetch(input, requestOptions, logger, url); + clearTimeout(timeoutId); + return response; + } catch (error) { + clearTimeout(timeoutId); + throw error; + } + } + + return performFetch(input, requestOptions, logger, url); +} + +async function performFetch( + input: RequestInfo | URL, + requestOptions: RequestInit, + logger: any, + url: string +): Promise<Response> { + logger.debug('HTTP request', { + method: requestOptions.method, + url, + headers: requestOptions.headers, + proxy: (requestOptions as BunRequestInit).proxy || null, + }); + + try { + const response = await globalThis.fetch(input, requestOptions); + + logger.debug('HTTP response', { + url, + status: response.status, + statusText: response.statusText, + ok: response.ok, + headers: Object.fromEntries(response.headers.entries()), + }); + + return response; + } catch (error) { + logger.debug('HTTP error', { + url, + error: error 
instanceof Error ? error.message : String(error), + name: error instanceof Error ? error.name : 'Unknown', + }); + throw error; + } +} diff --git a/libs/utils/src/generic-functions.ts b/libs/utils/src/generic-functions.ts index 0fdd76b..3e4a25d 100644 --- a/libs/utils/src/generic-functions.ts +++ b/libs/utils/src/generic-functions.ts @@ -3,7 +3,7 @@ * These functions demonstrate how to use generic types with OHLCV data */ -import type { OHLCV, HasClose, HasOHLC, HasVolume } from '@stock-bot/types'; +import type { HasClose, HasOHLC, HasVolume, OHLCV } from '@stock-bot/types'; /** * Extract close prices from any data structure that has a close field */ @@ -16,7 +16,9 @@ export function extractCloses<T extends HasClose>(data: T[]): number[] { /** * Extract OHLC prices from any data structure that has OHLC fields */ -export function extractOHLC<T extends HasOHLC>(data: T[]): { +export function extractOHLC<T extends HasOHLC>( + data: T[] +): { opens: number[]; highs: number[]; lows: number[]; @@ -43,12 +45,12 @@ export function extractVolumes<T extends HasVolume>(data: T[]): number[] { export function calculateSMA<T extends HasClose>(data: T[], period: number): number[] { const closes = extractCloses(data); const result: number[] = []; - + for (let i = period - 1; i < closes.length; i++) { const sum = closes.slice(i - period + 1, i + 1).reduce((a, b) => a + b, 0); result.push(sum / period); } - + return result; } @@ -64,7 +66,7 @@ export function calculateTypicalPrice<T extends HasOHLC>(data: T[]): number[] { */ export function calculateTrueRange<T extends HasOHLC>(data: T[]): number[] { const result: number[] = []; - + for (let i = 0; i < data.length; i++) { if (i === 0) { result.push(data[i]!.high - data[i]!.low); @@ -79,7 +81,7 @@ export function calculateTrueRange<T extends HasOHLC>(data: T[]): number[] { result.push(tr); } } - + return result; } @@ -89,7 +91,7 @@ export function calculateReturns<T extends HasClose>(data: T[]): number[] { const closes = extractCloses(data); const returns: number[] = []; - + for (let i = 1; i < closes.length; i++) { const current = closes[i]!; const 
previous = closes[i - 1]!; @@ -99,7 +101,7 @@ export function calculateReturns(data: T[]): number[] { returns.push(0); } } - + return returns; } @@ -109,7 +111,7 @@ export function calculateReturns(data: T[]): number[] { export function calculateLogReturns(data: T[]): number[] { const closes = extractCloses(data); const logReturns: number[] = []; - + for (let i = 1; i < closes.length; i++) { const current = closes[i]!; const previous = closes[i - 1]!; @@ -119,7 +121,7 @@ export function calculateLogReturns(data: T[]): number[] { logReturns.push(0); } } - + return logReturns; } @@ -130,19 +132,19 @@ export function calculateVWAP(data: T[]): number[ const result: number[] = []; let cumulativeVolumePrice = 0; let cumulativeVolume = 0; - + for (const item of data) { const typicalPrice = (item.high + item.low + item.close) / 3; cumulativeVolumePrice += typicalPrice * item.volume; cumulativeVolume += item.volume; - + if (cumulativeVolume > 0) { result.push(cumulativeVolumePrice / cumulativeVolume); } else { result.push(typicalPrice); } } - + return result; } @@ -156,11 +158,7 @@ export function filterBySymbol(data: OHLCV[], symbol: string): OHLCV[] { /** * Filter OHLCV data by time range */ -export function filterByTimeRange( - data: OHLCV[], - startTime: number, - endTime: number -): OHLCV[] { +export function filterByTimeRange(data: OHLCV[], startTime: number, endTime: number): OHLCV[] { return data.filter(item => item.timestamp >= startTime && item.timestamp <= endTime); } @@ -169,14 +167,14 @@ export function filterByTimeRange( */ export function groupBySymbol(data: OHLCV[]): Record { const grouped: Record = {}; - + for (const item of data) { if (!grouped[item.symbol]) { grouped[item.symbol] = []; } grouped[item.symbol]!.push(item); } - + return grouped; } @@ -186,6 +184,6 @@ export function groupBySymbol(data: OHLCV[]): Record { export function convertTimestamps(data: OHLCV[]): Array { return data.map(item => ({ ...item, - date: new Date(item.timestamp) + date: new 
Date(item.timestamp), })); -} \ No newline at end of file +} diff --git a/libs/utils/src/index.ts b/libs/utils/src/index.ts index a552313..5329f6d 100644 --- a/libs/utils/src/index.ts +++ b/libs/utils/src/index.ts @@ -2,5 +2,5 @@ export * from './calculations/index'; export * from './common'; export * from './dateUtils'; export * from './generic-functions'; -export * from './operation-context'; -export * from './proxy'; +export * from './fetch'; +export * from './user-agent'; diff --git a/libs/utils/src/operation-context.ts b/libs/utils/src/operation-context.ts deleted file mode 100644 index 38ae757..0000000 --- a/libs/utils/src/operation-context.ts +++ /dev/null @@ -1,172 +0,0 @@ -/** - * OperationContext - Unified context for handler operations - * - * Provides streamlined access to: - * - Child loggers with hierarchical context - * - Database clients (MongoDB, PostgreSQL) - * - Contextual cache with automatic key prefixing - * - Shared resource management - */ - -import { createCache, type CacheProvider } from '@stock-bot/cache'; -import { getLogger, type Logger } from '@stock-bot/logger'; -import { getDatabaseConfig } from '@stock-bot/config'; - -export class OperationContext { - public readonly logger: Logger; - public readonly mongodb: any; // MongoDB client - imported dynamically - public readonly postgres: any; // PostgreSQL client - imported dynamically - - private static sharedCache: CacheProvider | null = null; - private static parentLoggers = new Map(); - private static databaseConfig: any = null; - - constructor( - public readonly handlerName: string, - public readonly operationName: string, - parentLogger?: Logger - ) { - // Create child logger from parent or create handler parent - const parent = parentLogger || this.getOrCreateParentLogger(); - this.logger = parent.child(operationName, { - handler: handlerName, - operation: operationName - }); - - // Set up database access - this.mongodb = this.getDatabaseClient('mongodb'); - this.postgres = 
this.getDatabaseClient('postgres'); - } - - private getDatabaseClient(type: 'mongodb' | 'postgres'): any { - try { - if (type === 'mongodb') { - // Dynamic import to avoid TypeScript issues during build - const { getMongoDBClient } = require('@stock-bot/mongodb-client'); - return getMongoDBClient(); - } else { - // Dynamic import to avoid TypeScript issues during build - const { getPostgreSQLClient } = require('@stock-bot/postgres-client'); - return getPostgreSQLClient(); - } - } catch (error) { - this.logger.warn(`${type} client not initialized, operations may fail`, { error }); - return null; - } - } - - private getOrCreateParentLogger(): Logger { - const parentKey = `${this.handlerName}-handler`; - - if (!OperationContext.parentLoggers.has(parentKey)) { - const parentLogger = getLogger(parentKey); - OperationContext.parentLoggers.set(parentKey, parentLogger); - } - - return OperationContext.parentLoggers.get(parentKey)!; - } - - /** - * Get contextual cache with automatic key prefixing - * Keys are automatically prefixed as: "operations:handlerName:operationName:key" - */ - get cache(): CacheProvider { - if (!OperationContext.sharedCache) { - // Get Redis configuration from database config - if (!OperationContext.databaseConfig) { - OperationContext.databaseConfig = getDatabaseConfig(); - } - - const redisConfig = OperationContext.databaseConfig.dragonfly || { - host: 'localhost', - port: 6379, - db: 1 - }; - - OperationContext.sharedCache = createCache({ - keyPrefix: 'operations:', - shared: true, // Use singleton Redis connection - enableMetrics: true, - ttl: 3600, // Default 1 hour TTL - redisConfig - }); - } - return this.createContextualCache(); - } - - private createContextualCache(): CacheProvider { - const contextPrefix = `${this.handlerName}:${this.operationName}:`; - - // Return a proxy that automatically prefixes keys with context - return { - async get(key: string): Promise { - return OperationContext.sharedCache!.get(`${contextPrefix}${key}`); - }, 
- - async set(key: string, value: T, options?: any): Promise { - return OperationContext.sharedCache!.set(`${contextPrefix}${key}`, value, options); - }, - - async del(key: string): Promise { - return OperationContext.sharedCache!.del(`${contextPrefix}${key}`); - }, - - async exists(key: string): Promise { - return OperationContext.sharedCache!.exists(`${contextPrefix}${key}`); - }, - - async clear(): Promise { - // Not implemented for contextual cache - use del() for specific keys - throw new Error('clear() not implemented for contextual cache - use del() for specific keys'); - }, - - async keys(pattern: string): Promise { - const fullPattern = `${contextPrefix}${pattern}`; - return OperationContext.sharedCache!.keys(fullPattern); - }, - - getStats() { - return OperationContext.sharedCache!.getStats(); - }, - - async health(): Promise { - return OperationContext.sharedCache!.health(); - }, - - async waitForReady(timeout?: number): Promise { - return OperationContext.sharedCache!.waitForReady(timeout); - }, - - isReady(): boolean { - return OperationContext.sharedCache!.isReady(); - } - } as CacheProvider; - } - - /** - * Factory method to create OperationContext - */ - static create(handlerName: string, operationName: string, parentLogger?: Logger): OperationContext { - return new OperationContext(handlerName, operationName, parentLogger); - } - - /** - * Get cache key prefix for this operation context - */ - getCacheKeyPrefix(): string { - return `operations:${this.handlerName}:${this.operationName}:`; - } - - /** - * Create a child context for sub-operations - */ - createChild(subOperationName: string): OperationContext { - return new OperationContext( - this.handlerName, - `${this.operationName}:${subOperationName}`, - this.logger - ); - } -} - -export default OperationContext; \ No newline at end of file diff --git a/libs/utils/src/proxy/index.ts b/libs/utils/src/proxy/index.ts deleted file mode 100644 index fe21f79..0000000 --- a/libs/utils/src/proxy/index.ts 
+++ /dev/null @@ -1,21 +0,0 @@ -/** - * Proxy management utilities - */ -export { - default as ProxyManager, - getProxy, - getRandomProxy, - getAllProxies, - getWorkingProxies, - updateProxies -} from './proxy-manager'; - -export { - ProxySyncService, - getProxySyncService, - startProxySync, - stopProxySync, - syncProxiesOnce -} from './proxy-sync'; - -export type { ProxyInfo } from '@stock-bot/http'; // Re-export for convenience \ No newline at end of file diff --git a/libs/utils/src/proxy/proxy-manager.ts b/libs/utils/src/proxy/proxy-manager.ts deleted file mode 100644 index f7000d2..0000000 --- a/libs/utils/src/proxy/proxy-manager.ts +++ /dev/null @@ -1,291 +0,0 @@ -/** - * Centralized Proxy Manager - Handles proxy storage, retrieval, and caching - */ -import { createCache, type CacheProvider } from '@stock-bot/cache'; -import { getDatabaseConfig } from '@stock-bot/config'; -import { getLogger } from '@stock-bot/logger'; -import type { ProxyInfo } from '@stock-bot/http'; - -const logger = getLogger('proxy-manager'); - -export class ProxyManager { - private static instance: ProxyManager | null = null; - private cache: CacheProvider; - private proxies: ProxyInfo[] = []; - private lastUpdate: Date | null = null; - private isInitialized = false; - - private constructor() { - const databaseConfig = getDatabaseConfig(); - this.cache = createCache({ - redisConfig: databaseConfig.dragonfly, - keyPrefix: 'proxies:', - ttl: 86400, // 24 hours - enableMetrics: true, - }); - } - - /** - * Internal initialization - loads existing proxies from cache - */ - private async initializeInternal(): Promise { - if (this.isInitialized) { - return; - } - - try { - logger.info('Initializing proxy manager...'); - - // Wait for cache to be ready - await this.cache.waitForReady(10000); // Wait up to 10 seconds - logger.debug('Cache is ready'); - - await this.loadFromCache(); - this.isInitialized = true; - logger.info('Proxy manager initialized', { - proxiesLoaded: this.proxies.length, - 
lastUpdate: this.lastUpdate, - }); - } catch (error) { - logger.error('Failed to initialize proxy manager', { error }); - this.isInitialized = true; // Set to true anyway to avoid infinite retries - } - } - - /** - * Get a random working proxy from the available pool (synchronous) - */ - getRandomProxy(): ProxyInfo | null { - // Ensure initialized - if (!this.isInitialized) { - throw new Error('ProxyManager not initialized'); - } - - // Return null if no proxies available - if (this.proxies.length === 0) { - logger.warn('No proxies available in memory'); - return null; - } - - // Filter for working proxies (not explicitly marked as non-working) - const workingProxies = this.proxies.filter(proxy => proxy.isWorking !== false); - - if (workingProxies.length === 0) { - logger.warn('No working proxies available'); - return null; - } - - // Return random proxy with preference for recently successful ones - const sortedProxies = workingProxies.sort((a, b) => { - // Prefer proxies with better success rates - const aRate = a.successRate || 0; - const bRate = b.successRate || 0; - return bRate - aRate; - }); - - // Take from top 50% of best performing proxies - const topProxies = sortedProxies.slice(0, Math.max(1, Math.floor(sortedProxies.length * 0.5))); - const selectedProxy = topProxies[Math.floor(Math.random() * topProxies.length)]; - - if (!selectedProxy) { - logger.warn('No proxy selected from available pool'); - return null; - } - - logger.debug('Selected proxy', { - host: selectedProxy.host, - port: selectedProxy.port, - successRate: selectedProxy.successRate, - totalAvailable: workingProxies.length, - }); - - return selectedProxy; - } - - /** - * Get all working proxies (synchronous) - */ - getWorkingProxies(): ProxyInfo[] { - if (!this.isInitialized) { - throw new Error('ProxyManager not initialized'); - } - - return this.proxies.filter(proxy => proxy.isWorking !== false); - } - - /** - * Get all proxies (working and non-working) - */ - getAllProxies(): ProxyInfo[] 
{ - if (!this.isInitialized) { - throw new Error('ProxyManager not initialized'); - } - - return [...this.proxies]; - } - - /** - * Update the proxy pool with new proxies - */ - async updateProxies(proxies: ProxyInfo[]): Promise { - try { - logger.info('Updating proxy pool', { newCount: proxies.length, existingCount: this.proxies.length }); - - this.proxies = proxies; - this.lastUpdate = new Date(); - - // Store to cache - await this.cache.set('active-proxies', proxies); - await this.cache.set('last-update', this.lastUpdate.toISOString()); - - const workingCount = proxies.filter(p => p.isWorking !== false).length; - logger.info('Proxy pool updated successfully', { - totalProxies: proxies.length, - workingProxies: workingCount, - lastUpdate: this.lastUpdate, - }); - } catch (error) { - logger.error('Failed to update proxy pool', { error }); - throw error; - } - } - - /** - * Add or update a single proxy in the pool - */ - async updateProxy(proxy: ProxyInfo): Promise { - const existingIndex = this.proxies.findIndex( - p => p.host === proxy.host && p.port === proxy.port && p.protocol === proxy.protocol - ); - - if (existingIndex >= 0) { - this.proxies[existingIndex] = { ...this.proxies[existingIndex], ...proxy }; - logger.debug('Updated existing proxy', { host: proxy.host, port: proxy.port }); - } else { - this.proxies.push(proxy); - logger.debug('Added new proxy', { host: proxy.host, port: proxy.port }); - } - - // Update cache - await this.updateProxies(this.proxies); - } - - /** - * Remove a proxy from the pool - */ - async removeProxy(host: string, port: number, protocol: string): Promise { - const initialLength = this.proxies.length; - this.proxies = this.proxies.filter( - p => !(p.host === host && p.port === port && p.protocol === protocol) - ); - - if (this.proxies.length < initialLength) { - await this.updateProxies(this.proxies); - logger.debug('Removed proxy', { host, port, protocol }); - } - } - - /** - * Clear all proxies from memory and cache - */ - async 
clearProxies(): Promise { - this.proxies = []; - this.lastUpdate = null; - - await this.cache.del('active-proxies'); - await this.cache.del('last-update'); - - logger.info('Cleared all proxies'); - } - - /** - * Check if proxy manager is ready - */ - isReady(): boolean { - return this.isInitialized; - } - - /** - * Load proxies from cache storage - */ - private async loadFromCache(): Promise { - try { - const cachedProxies = await this.cache.get('active-proxies'); - const lastUpdateStr = await this.cache.get('last-update'); - - if (cachedProxies && Array.isArray(cachedProxies)) { - this.proxies = cachedProxies; - this.lastUpdate = lastUpdateStr ? new Date(lastUpdateStr) : null; - - logger.debug('Loaded proxies from cache', { - count: this.proxies.length, - lastUpdate: this.lastUpdate, - }); - } else { - logger.debug('No cached proxies found'); - } - } catch (error) { - logger.error('Failed to load proxies from cache', { error }); - } - } - - /** - * Initialize the singleton instance - */ - static async initialize(): Promise { - if (!ProxyManager.instance) { - ProxyManager.instance = new ProxyManager(); - await ProxyManager.instance.initializeInternal(); - - // Perform initial sync with proxy:active:* storage - try { - const { syncProxiesOnce } = await import('./proxy-sync'); - await syncProxiesOnce(); - logger.info('Initial proxy sync completed'); - } catch (error) { - logger.error('Failed to perform initial proxy sync', { error }); - } - } - } - - /** - * Get the singleton instance (must be initialized first) - */ - static getInstance(): ProxyManager { - if (!ProxyManager.instance) { - throw new Error('ProxyManager not initialized. 
Call ProxyManager.initialize() first.'); - } - return ProxyManager.instance; - } - - /** - * Reset the singleton instance (for testing) - */ - static reset(): void { - ProxyManager.instance = null; - } -} - -// Export the class as default -export default ProxyManager; - -// Convenience functions for easier imports -export function getProxy(): ProxyInfo | null { - return ProxyManager.getInstance().getRandomProxy(); -} - -export function getRandomProxy(): ProxyInfo | null { - return ProxyManager.getInstance().getRandomProxy(); -} - -export function getAllProxies(): ProxyInfo[] { - return ProxyManager.getInstance().getAllProxies(); -} - -export function getWorkingProxies(): ProxyInfo[] { - return ProxyManager.getInstance().getWorkingProxies(); -} - -export async function updateProxies(proxies: ProxyInfo[]): Promise { - return ProxyManager.getInstance().updateProxies(proxies); -} \ No newline at end of file diff --git a/libs/utils/src/proxy/proxy-sync.ts b/libs/utils/src/proxy/proxy-sync.ts deleted file mode 100644 index cb99e46..0000000 --- a/libs/utils/src/proxy/proxy-sync.ts +++ /dev/null @@ -1,157 +0,0 @@ -/** - * Proxy Storage Synchronization Service - * - * This service bridges the gap between two proxy storage systems: - * 1. proxy:active:* keys (used by proxy tasks for individual proxy storage) - * 2. 
proxies:active-proxies (used by ProxyManager for centralized storage) - */ - -import { createCache, type CacheProvider } from '@stock-bot/cache'; -import { getDatabaseConfig } from '@stock-bot/config'; -import { getLogger } from '@stock-bot/logger'; -import type { ProxyInfo } from '@stock-bot/http'; -import { ProxyManager } from './proxy-manager'; - -const logger = getLogger('proxy-sync'); - -export class ProxySyncService { - private cache: CacheProvider; - private syncInterval: Timer | null = null; - private isRunning = false; - - constructor() { - const databaseConfig = getDatabaseConfig(); - this.cache = createCache({ - redisConfig: databaseConfig.dragonfly, - keyPrefix: '', // No prefix to access all keys - ttl: 86400, - }); - } - - /** - * Start the synchronization service - * @param intervalMs - Sync interval in milliseconds (default: 5 minutes) - */ - async start(intervalMs: number = 300000): Promise { - if (this.isRunning) { - logger.warn('Proxy sync service is already running'); - return; - } - - this.isRunning = true; - logger.info('Starting proxy sync service', { intervalMs }); - - // Wait for cache to be ready before initial sync - await this.cache.waitForReady(10000); - - // Initial sync - await this.syncProxies(); - - // Set up periodic sync - this.syncInterval = setInterval(async () => { - try { - await this.syncProxies(); - } catch (error) { - logger.error('Error during periodic sync', { error }); - } - }, intervalMs); - } - - /** - * Stop the synchronization service - */ - stop(): void { - if (this.syncInterval) { - clearInterval(this.syncInterval); - this.syncInterval = null; - } - this.isRunning = false; - logger.info('Stopped proxy sync service'); - } - - /** - * Perform a one-time synchronization - */ - async syncProxies(): Promise { - try { - logger.debug('Starting proxy synchronization'); - - // Wait for cache to be ready - await this.cache.waitForReady(5000); - - // Collect all proxies from proxy:active:* storage - const proxyKeys = await 
this.cache.keys('proxy:active:*'); - - if (proxyKeys.length === 0) { - logger.debug('No proxies found in proxy:active:* storage'); - return; - } - - const allProxies: ProxyInfo[] = []; - - // Fetch all proxies in parallel for better performance - const proxyPromises = proxyKeys.map(key => this.cache.get(key)); - const proxyResults = await Promise.all(proxyPromises); - - for (const proxy of proxyResults) { - if (proxy) { - allProxies.push(proxy); - } - } - - const workingCount = allProxies.filter(p => p.isWorking).length; - - logger.info('Collected proxies from storage', { - total: allProxies.length, - working: workingCount, - }); - - // Update ProxyManager with all proxies - const manager = ProxyManager.getInstance(); - await manager.updateProxies(allProxies); - - logger.info('Proxy synchronization completed', { - synchronized: allProxies.length, - working: workingCount, - }); - } catch (error) { - logger.error('Failed to sync proxies', { error }); - throw error; - } - } - - /** - * Get synchronization status - */ - getStatus(): { isRunning: boolean; lastSync?: Date } { - return { - isRunning: this.isRunning, - }; - } -} - -// Export singleton instance -let syncServiceInstance: ProxySyncService | null = null; - -export function getProxySyncService(): ProxySyncService { - if (!syncServiceInstance) { - syncServiceInstance = new ProxySyncService(); - } - return syncServiceInstance; -} - -// Convenience functions -export async function startProxySync(intervalMs?: number): Promise { - const service = getProxySyncService(); - await service.start(intervalMs); -} - -export function stopProxySync(): void { - const service = getProxySyncService(); - service.stop(); -} - -export async function syncProxiesOnce(): Promise { - const service = getProxySyncService(); - await service.syncProxies(); -} \ No newline at end of file diff --git a/libs/utils/src/user-agent.ts b/libs/utils/src/user-agent.ts new file mode 100644 index 0000000..ac76234 --- /dev/null +++ 
b/libs/utils/src/user-agent.ts @@ -0,0 +1,30 @@ +/** + * User Agent utility for generating random user agents + */ + +// Simple list of common user agents to avoid external dependency +const USER_AGENTS = [ + // Chrome on Windows + 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36', + 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36', + // Chrome on Mac + 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36', + 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36', + // Firefox on Windows + 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:120.0) Gecko/20100101 Firefox/120.0', + 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:119.0) Gecko/20100101 Firefox/119.0', + // Firefox on Mac + 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15) Gecko/20100101 Firefox/120.0', + 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15) Gecko/20100101 Firefox/119.0', + // Safari on Mac + 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.1 Safari/605.1.15', + 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15', + // Edge on Windows + 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36 Edg/120.0.0.0', + 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36 Edg/119.0.0.0', +]; + +export function getRandomUserAgent(): string { + const index = Math.floor(Math.random() * USER_AGENTS.length); + return USER_AGENTS[index]!; +} diff --git a/libs/utils/tsconfig.json b/libs/utils/tsconfig.json index 57d004a..7a56502 100644 --- a/libs/utils/tsconfig.json +++ b/libs/utils/tsconfig.json @@ -1,15 +1,18 @@ { - "extends": 
"../../tsconfig.lib.json", + "extends": "../../tsconfig.json", "compilerOptions": { "outDir": "./dist", - "rootDir": "./src" + "rootDir": "./src", + "composite": true, + "skipLibCheck": true, + "types": ["node", "bun-types"] }, "include": ["src/**/*"], + "exclude": ["node_modules", "dist"], "references": [ - { "path": "../types" }, - { "path": "../cache" }, - { "path": "../config" }, - { "path": "../logger" }, - { "path": "../http" } + { "path": "../core/types" }, + { "path": "../core/cache" }, + { "path": "../core/config" }, + { "path": "../core/logger" } ] } diff --git a/package.json b/package.json index 933f9c6..06de352 100644 --- a/package.json +++ b/package.json @@ -52,11 +52,19 @@ "infra:reset": "docker-compose down -v && docker-compose up -d dragonfly postgres questdb mongodb", "dev:full": "npm run infra:up && npm run docker:admin && turbo run dev", "dev:clean": "npm run infra:reset && npm run dev:full", - "proxy": "bun run ./apps/data-service/src/proxy-demo.ts" + "proxy": "bun run ./apps/data-ingestion/src/proxy-demo.ts" }, "workspaces": [ "libs/*", - "apps/*" + "libs/core/*", + "libs/data/*", + "libs/services/*", + "apps/stock", + "apps/stock/config", + "apps/stock/data-ingestion", + "apps/stock/data-pipeline", + "apps/stock/web-api", + "apps/stock/web-app" ], "devDependencies": { "@eslint/js": "^9.28.0", @@ -64,8 +72,7 @@ "@modelcontextprotocol/server-postgres": "^0.6.2", "@testcontainers/mongodb": "^10.7.2", "@testcontainers/postgresql": "^10.7.2", - "@types/bun": "latest", - "@types/node": "^22.15.30", + "@types/bun": "^1.2.17", "@types/supertest": "^6.0.2", "@types/yup": "^0.32.0", "@typescript-eslint/eslint-plugin": "^8.34.0", @@ -80,6 +87,7 @@ "pg-mem": "^2.8.1", "prettier": "^3.5.3", "supertest": "^6.3.4", + "ts-unused-exports": "^11.0.1", "turbo": "^2.5.4", "typescript": "^5.8.3", "yup": "^1.6.1" @@ -93,6 +101,7 @@ "@primeng/themes": "^19.1.3", "@tanstack/table-core": "^8.21.3", "@types/pg": "^8.15.4", + "awilix": "^12.0.5", "bullmq": "^5.53.2", 
"ioredis": "^5.6.1", "pg": "^8.16.0", diff --git a/scripts/build-libs.sh b/scripts/build-libs.sh index f721587..287e4d5 100755 --- a/scripts/build-libs.sh +++ b/scripts/build-libs.sh @@ -31,24 +31,32 @@ trap cleanup EXIT # Build order is important due to dependencies libs=( - # Core Libraries - "types" # Base types - no dependencies - "config" # Configuration - depends on types - "logger" # Logging utilities - depends on types - "utils" # Utilities - depends on types and config - - # Database clients - "postgres-client" # PostgreSQL client - depends on types, config, logger - "mongodb-client" # MongoDB client - depends on types, config, logger - "questdb-client" # QuestDB client - depends on types, config, logger + # Core Libraries - minimal dependencies + "core/types" # Base types - no dependencies + "core/config" # Configuration - depends on types + "core/logger" # Logging utilities - depends on types - # Service libraries - "cache" # Cache - depends on types and logger - "http" # HTTP client - depends on types, config, logger - "event-bus" # Event bus - depends on types, logger - "queue" # Queue - depends on types, logger, cache - "shutdown" # Shutdown - depends on types, logger + # Data access libraries + "data/mongodb" # MongoDB client - depends on core libs + "data/postgres" # PostgreSQL client - depends on core libs + "data/questdb" # QuestDB client - depends on core libs + # Core infrastructure services + "core/shutdown" # Shutdown - no dependencies + "core/cache" # Cache - depends on core libs + "core/event-bus" # Event bus - depends on core libs + "core/handlers" # Handlers - depends on core libs + "core/queue" # Queue - depends on core libs, cache, and handlers + + # Application services + "services/browser" # Browser - depends on core libs + "services/proxy" # Proxy manager - depends on core libs and cache + + # Utils + "utils" # Utilities - depends on many libs + + # DI - dependency injection library + "core/di" # Dependency injection - depends on 
data, service libs, and handlers ) # Build each library in order diff --git a/scripts/setup-mcp.sh b/scripts/setup-mcp.sh deleted file mode 100755 index 25fefb6..0000000 --- a/scripts/setup-mcp.sh +++ /dev/null @@ -1,83 +0,0 @@ -#!/bin/bash - -# Setup MCP Servers for Stock Bot -# This script helps set up Model Context Protocol servers for PostgreSQL and MongoDB - -set -e - -echo "🚀 Setting up MCP servers for Stock Bot..." - -# Colors for output -RED='\033[0;31m' -GREEN='\033[0;32m' -YELLOW='\033[1;33m' -NC='\033[0m' # No Color - -# Check if infrastructure is running -echo -e "\n${YELLOW}📊 Checking infrastructure status...${NC}" - -# Check PostgreSQL -if nc -z localhost 5432; then - echo -e "${GREEN}✅ PostgreSQL is running on port 5432${NC}" - PG_RUNNING=true -else - echo -e "${RED}❌ PostgreSQL is not running on port 5432${NC}" - PG_RUNNING=false -fi - -# Check MongoDB -if nc -z localhost 27017; then - echo -e "${GREEN}✅ MongoDB is running on port 27017${NC}" - MONGO_RUNNING=true -else - echo -e "${RED}❌ MongoDB is not running on port 27017${NC}" - MONGO_RUNNING=false -fi - -# Start infrastructure if needed -if [ "$PG_RUNNING" = false ] || [ "$MONGO_RUNNING" = false ]; then - echo -e "\n${YELLOW}🔧 Starting required infrastructure...${NC}" - bun run infra:up - echo -e "${GREEN}✅ Infrastructure started${NC}" - - # Wait a moment for services to be ready - echo -e "${YELLOW}⏳ Waiting for services to be ready...${NC}" - sleep 5 -fi - -echo -e "\n${YELLOW}🔧 Testing MCP server connections...${NC}" - -# Get project paths -SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -PROJECT_ROOT="$(dirname "$SCRIPT_DIR")" - -# Test PostgreSQL MCP server -echo -e "\n${YELLOW}Testing PostgreSQL MCP server...${NC}" -if npm list @modelcontextprotocol/server-postgres --prefix "$PROJECT_ROOT" >/dev/null 2>&1; then - echo -e "${GREEN}✅ PostgreSQL MCP server package is installed${NC}" - echo -e "${YELLOW} Package: @modelcontextprotocol/server-postgres v0.6.2${NC}" -else - echo -e 
"${RED}❌ PostgreSQL MCP server package not found${NC}" -fi - -# Test MongoDB MCP server -echo -e "\n${YELLOW}Testing MongoDB MCP server...${NC}" -if npm list mongodb-mcp-server --prefix "$PROJECT_ROOT" >/dev/null 2>&1; then - echo -e "${GREEN}✅ MongoDB MCP server package is installed${NC}" - echo -e "${YELLOW} Package: mongodb-mcp-server v0.1.1 (official MongoDB team)${NC}" -else - echo -e "${RED}❌ MongoDB MCP server package not found${NC}" -fi - -echo -e "\n${GREEN}🎉 MCP setup complete!${NC}" -echo -e "\n${YELLOW}📋 Configuration saved to: .vscode/mcp.json${NC}" -echo -e "\n${YELLOW}🔗 Connection details:${NC}" -echo -e " PostgreSQL: postgresql://trading_user:trading_pass_dev@localhost:5432/trading_bot" -echo -e " MongoDB: mongodb://trading_admin:trading_mongo_dev@localhost:27017/stock?authSource=admin" - -echo -e "\n${YELLOW}📖 Usage:${NC}" -echo -e " - The MCP servers are configured in .vscode/mcp.json" -echo -e " - Claude Code will automatically use these servers when they're available" -echo -e " - Make sure your infrastructure is running with: bun run infra:up" - -echo -e "\n${GREEN}✨ Ready to use MCP with PostgreSQL and MongoDB!${NC}" \ No newline at end of file diff --git a/tsconfig.app.json b/tsconfig.app.json index 42cc06a..009565a 100644 --- a/tsconfig.app.json +++ b/tsconfig.app.json @@ -1,12 +1,18 @@ -{ - "$schema": "https://json.schemastore.org/tsconfig", - "extends": "./tsconfig.json", - "compilerOptions": { - // Override root settings for application builds - "composite": true, - "incremental": true, - "types": ["bun-types"] - }, - "include": ["src/**/*"], - "exclude": ["node_modules", "dist", "**/*.test.ts", "**/*.spec.ts"] -} \ No newline at end of file +{ + "$schema": "https://json.schemastore.org/tsconfig", + "extends": "./tsconfig.json", + "compilerOptions": { + // Override root settings for application builds + "composite": true, + "incremental": true, + "types": ["bun-types"], + // Modern TC39 Stage 3 decorators (TypeScript 5+ default) + 
"experimentalDecorators": false, + "emitDecoratorMetadata": true, + // Suppress decorator-related type checking issues due to Bun's hybrid implementation + "skipLibCheck": true, + "suppressImplicitAnyIndexErrors": true + }, + "include": ["src/**/*"], + "exclude": ["node_modules", "dist", "**/*.test.ts", "**/*.spec.ts"] +} diff --git a/tsconfig.json b/tsconfig.json index 28be05b..2f2388d 100644 --- a/tsconfig.json +++ b/tsconfig.json @@ -1,12 +1,31 @@ { "$schema": "https://json.schemastore.org/tsconfig", "compilerOptions": { + "types": ["bun-types"], // JavaScript output target version "target": "ES2022", // Module configuration for different project types "module": "ESNext", "moduleResolution": "bundler", + // "lib": ["ESNext"], + // "target": "ESNext", + // "module": "Preserve", + // "moduleDetection": "force", + "jsx": "react-jsx", + // "allowJs": true, + + // Bundler mode + // "moduleResolution": "bundler", + // "allowImportingTsExtensions": true, + "verbatimModuleSyntax": true, + // "noEmit": true, + + // Some stricter flags (disabled by default) + "noUnusedLocals": false, + "noUnusedParameters": false, + "noPropertyAccessFromIndexSignature": false, + // Type checking "strict": true, "noImplicitAny": true, @@ -16,13 +35,10 @@ "declarationMap": true, // stuff claude put in - "noUnusedLocals": true, - "noUnusedParameters": true, "noImplicitReturns": true, "noFallthroughCasesInSwitch": true, "noUncheckedIndexedAccess": true, "noImplicitOverride": true, - "noPropertyAccessFromIndexSignature": true, // Module interoperability "esModuleInterop": true, @@ -37,10 +53,14 @@ "disableReferencedProjectLoad": true, "disableSourceOfProjectReferenceRedirect": false, + // Decorator support for Bun's hybrid implementation + "experimentalDecorators": true, + "emitDecoratorMetadata": true, + // Paths and output "baseUrl": ".", "paths": { - "@stock-bot/*": ["libs/*/src"] + "@stock-bot/*": ["libs/*/src"], } }, "exclude": ["node_modules", "dist"] diff --git a/tsconfig.lib.json 
b/tsconfig.lib.json index 1d94681..4224ec1 100644 --- a/tsconfig.lib.json +++ b/tsconfig.lib.json @@ -1,17 +1,17 @@ -{ - "$schema": "https://json.schemastore.org/tsconfig", - "extends": "./tsconfig.json", - "compilerOptions": { - // Override root settings for library builds - "composite": true, - "declaration": true, - "declarationMap": true, - "incremental": true, - "noEmit": false, - "outDir": "./dist", - "rootDir": "./src", - "types": ["bun-types"] - }, - "include": ["src/**/*"], - "exclude": ["node_modules", "./dist", "**/*.test.ts", "**/*.spec.ts"] -} +{ + "$schema": "https://json.schemastore.org/tsconfig", + "extends": "./tsconfig.json", + "compilerOptions": { + // Override root settings for library builds + "composite": true, + "declaration": true, + "declarationMap": true, + "incremental": true, + "noEmit": false, + "outDir": "./dist", + "rootDir": "./src", + "types": ["bun-types"] + }, + "include": ["src/**/*"], + "exclude": ["node_modules", "./dist", "**/*.test.ts", "**/*.spec.ts"] +} diff --git a/tsconfig.unused.json b/tsconfig.unused.json new file mode 100644 index 0000000..4781c1a --- /dev/null +++ b/tsconfig.unused.json @@ -0,0 +1,19 @@ +{ + "extends": "./tsconfig.json", + "include": [ + "apps/**/*.ts", + "apps/**/*.tsx", + "libs/**/*.ts", + "libs/**/*.tsx" + ], + "exclude": [ + "node_modules", + "dist", + "**/dist/**", + "**/node_modules/**", + "**/*.test.ts", + "**/*.spec.ts", + "**/test/**", + "**/tests/**" + ] +} \ No newline at end of file
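The deleted `proxy-manager.ts` above picks a proxy by filtering out entries explicitly marked non-working, sorting by success rate, and drawing randomly from the top half of the pool. A minimal standalone sketch of that selection strategy — field names are simplified assumptions, not the real `ProxyInfo` type from `@stock-bot/http`:

```typescript
// Hypothetical, trimmed-down stand-in for ProxyInfo from @stock-bot/http.
interface ProxyCandidate {
  host: string;
  port: number;
  isWorking?: boolean; // undefined is treated as "not known to be broken"
  successRate?: number; // undefined is treated as 0
}

// Mirrors the strategy in ProxyManager.getRandomProxy(): filter, sort by
// success rate descending, then pick randomly from the best-performing half.
function pickProxy(proxies: ProxyCandidate[]): ProxyCandidate | null {
  const working = proxies.filter(p => p.isWorking !== false);
  if (working.length === 0) return null;

  const sorted = [...working].sort(
    (a, b) => (b.successRate ?? 0) - (a.successRate ?? 0)
  );

  // Restrict the random draw to the top 50%, but never fewer than one entry.
  const top = sorted.slice(0, Math.max(1, Math.floor(sorted.length * 0.5)));
  return top[Math.floor(Math.random() * top.length)] ?? null;
}
```

With a two-entry working pool the top-half slice contains only the best proxy, so low-success proxies are never selected until the pool grows — a deliberate bias toward recently successful exits.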