Huge refactor with a million things to make the code much more manageable and easier to create new services #3
487 changed files with 22,175 additions and 14,905 deletions
`.env` (4 changes)

```diff
@@ -4,7 +4,7 @@
 # Core Application Settings
 NODE_ENV=development
-LOG_LEVEL=debug
+LOG_LEVEL=trace
 LOG_HIDE_OBJECT=true

 # Data Service Configuration
@@ -39,7 +39,7 @@ POSTGRES_SSL=false
 QUESTDB_HOST=localhost
 QUESTDB_PORT=9000
 QUESTDB_DB=qdb
-QUESTDB_USER=admin
+QUESTDB_USERNAME=admin
 QUESTDB_PASSWORD=quest

 # MongoDB Configuration
```
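The two substantive `.env` changes are the log-level bump (`debug` → `trace`) and the `QUESTDB_USER` → `QUESTDB_USERNAME` rename. Any code still reading the old key will silently fall back to its default, so a transition-tolerant reader is worth sketching. This is a hypothetical helper, not part of the PR; the fallback default `admin` matches the value in the file:

```typescript
// Hypothetical helper (not in this PR): resolve the QuestDB user while
// tolerating both the new QUESTDB_USERNAME key and the legacy QUESTDB_USER
// key, falling back to the default from the .env example.
export function questdbUser(env: Record<string, string | undefined>): string {
  return env.QUESTDB_USERNAME ?? env.QUESTDB_USER ?? 'admin';
}

// e.g. questdbUser(process.env as Record<string, string | undefined>)
```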
BIN `.serena/cache/typescript/document_symbols_cache_v20-05-25.pkl` (new file, vendored)
Binary file not shown.
`.serena/memories/code_style_conventions.md` (new file, 58 additions)

```markdown
# Code Style and Conventions

## TypeScript Configuration
- **Strict mode enabled**: All strict checks are on
- **Target**: ES2022
- **Module**: ESNext with bundler resolution
- **Path aliases**: `@stock-bot/*` maps to `libs/*/src`
- **Decorators**: Enabled for dependency injection

## Code Style Rules (ESLint)
- **No unused variables**: Error (except prefixed with `_`)
- **No explicit any**: Warning
- **No non-null assertion**: Warning
- **No console**: Warning (except in tests)
- **Prefer const**: Enforced
- **Strict equality**: Always use `===`
- **Curly braces**: Required for all blocks

## Formatting (Prettier)
- **Semicolons**: Always
- **Single quotes**: Yes
- **Trailing comma**: ES5
- **Print width**: 100 characters
- **Tab width**: 2 spaces
- **Arrow parens**: Avoid when possible
- **End of line**: LF

## Import Order
1. Node built-ins
2. Third-party modules
3. `@stock-bot/*` imports
4. Relative imports (parent directories first)
5. Current directory imports

## Naming Conventions
- **Files**: kebab-case (e.g., `database-setup.ts`)
- **Classes**: PascalCase
- **Functions/Variables**: camelCase
- **Constants**: UPPER_SNAKE_CASE
- **Interfaces/Types**: PascalCase, with 'I' or 'T' prefix optional

## Library Standards
- **Named exports only**: No default exports
- **Factory patterns**: For complex initialization
- **Singleton pattern**: For global services (config, logger)
- **Direct class exports**: For DI-managed services

## Testing
- **File naming**: `*.test.ts` or `*.spec.ts`
- **Test structure**: Bun's built-in test runner
- **Integration tests**: Use TestContainers for databases
- **Mocking**: Mock external dependencies

## Documentation
- **JSDoc**: For all public APIs
- **README.md**: Required for each library
- **Usage examples**: Include in documentation
- **Error messages**: Descriptive with context
```
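As a quick illustration of the conventions in this file, a small hypothetical library module (the file name, constant, and function are invented for the example) would look like:

```typescript
// position-limit.ts — illustrative only; names are invented for this example.
// Demonstrates the conventions above: named exports (no default export),
// PascalCase types, camelCase functions, UPPER_SNAKE_CASE constants,
// strict equality, and curly braces on every block.
const MAX_POSITION_SIZE = 1_000;

export interface PositionCheck {
  symbol: string;
  quantity: number;
}

export function isWithinLimit(check: PositionCheck): boolean {
  if (check.quantity === 0) {
    return true;
  }
  return Math.abs(check.quantity) <= MAX_POSITION_SIZE;
}
```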
`.serena/memories/current_refactoring_context.md` (new file, 41 additions)

````markdown
# Current Refactoring Context

## Data Ingestion Service Refactor
The project is currently undergoing a major refactoring to move away from singleton patterns to a dependency injection approach using service containers.

### What's Been Done
- Created connection pool pattern with `ServiceContainer`
- Refactored data-ingestion service to use DI container
- Updated handlers to accept container parameter
- Added proper resource disposal with `ctx.dispose()`

### Migration Status
- QM handler: ✅ Fully migrated to container pattern
- IB handler: ⚠️ Partially migrated (using migration helper)
- Proxy handler: ✅ Updated to accept container
- WebShare handler: ✅ Updated to accept container

### Key Patterns
1. **Service Container**: Central DI container managing all connections
2. **Operation Context**: Provides scoped database access within operations
3. **Factory Pattern**: Connection factories for different databases
4. **Resource Disposal**: Always call `ctx.dispose()` after operations

### Example Pattern
```typescript
const ctx = OperationContext.create('handler', 'operation', { container });
try {
  // Use databases through context
  await ctx.mongodb.insertOne(data);
  await ctx.postgres.query('...');
  return { success: true };
} finally {
  await ctx.dispose(); // Always cleanup
}
```

### Next Steps
- Complete migration of remaining IB operations
- Remove migration helper once complete
- Apply same pattern to other services
- Add monitoring for connection pools
````
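The "handlers accept container parameter" step described in these notes can be sketched as below. The `ServiceContainer` interface and the handler name are assumptions inferred from the memory file, not the project's exact API:

```typescript
// Sketch only: the real ServiceContainer lives in the DI library; this
// minimal interface and handler shape are assumptions based on the notes.
interface ServiceContainer {
  resolve<T>(name: string): T;
}

interface Logger {
  info(msg: string): void;
}

// Instead of importing module-level singletons, the handler receives the
// container and resolves what it needs — which also keeps it testable,
// since tests can pass a container of stubs.
export async function qmHandler(
  payload: unknown,
  container: ServiceContainer
): Promise<{ success: boolean }> {
  const logger = container.resolve<Logger>('logger');
  logger.info('qm handler invoked');
  return { success: true };
}
```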
`.serena/memories/project_overview.md` (new file, 55 additions)

```markdown
# Stock Bot Trading Platform

## Project Purpose
This is an advanced trading bot platform with a microservice architecture designed for automated stock trading. The system includes:
- Market data ingestion from multiple providers (Yahoo Finance, QuoteMedia, Interactive Brokers, WebShare)
- Data processing and technical indicator calculation
- Trading strategy development and backtesting
- Order execution and risk management
- Portfolio tracking and performance analytics
- Web dashboard for monitoring

## Architecture Overview
The project follows a **microservices architecture** with shared libraries:

### Core Services (apps/)
- **data-ingestion**: Ingests market data from multiple providers
- **data-pipeline**: Processes and transforms data
- **web-api**: REST API service
- **web-app**: React-based dashboard

### Shared Libraries (libs/)
**Core Libraries:**
- config: Environment configuration with Zod validation
- logger: Structured logging with Loki integration
- di: Dependency injection container
- types: Shared TypeScript types
- handlers: Common handler patterns

**Data Libraries:**
- postgres: PostgreSQL client for transactional data
- questdb: Time-series database for market data
- mongodb: Document storage for configurations

**Service Libraries:**
- queue: BullMQ-based job processing
- event-bus: Dragonfly/Redis event bus
- shutdown: Graceful shutdown management

**Utils:**
- Financial calculations and technical indicators
- Date utilities
- Position sizing calculations

## Database Strategy
- **PostgreSQL**: Transactional data (orders, positions, strategies)
- **QuestDB**: Time-series data (OHLCV, indicators, performance metrics)
- **MongoDB**: Document storage (configurations, raw API responses)
- **Dragonfly/Redis**: Event bus and caching layer

## Current Development Phase
Phase 1: Data Foundation Layer (In Progress)
- Enhancing data provider reliability
- Implementing data validation
- Optimizing time-series storage
- Building robust HTTP client with circuit breakers
```
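The overview's "Environment configuration with Zod validation" means the config library fails fast at startup instead of deep inside a service call. A hand-rolled equivalent illustrates the idea; the real library uses Zod schemas, and the field names below are assumptions for the example:

```typescript
// Hand-rolled illustration of fail-fast env validation; the actual config
// library uses Zod. Field names are assumptions for this example.
export interface CoreEnv {
  NODE_ENV: 'development' | 'production' | 'test';
  LOG_LEVEL: string;
}

export function parseCoreEnv(env: Record<string, string | undefined>): CoreEnv {
  const nodeEnv = env.NODE_ENV;
  if (nodeEnv !== 'development' && nodeEnv !== 'production' && nodeEnv !== 'test') {
    // Throw at startup rather than letting a bad value surface later.
    throw new Error(`NODE_ENV must be development|production|test, got: ${nodeEnv}`);
  }
  return { NODE_ENV: nodeEnv, LOG_LEVEL: env.LOG_LEVEL ?? 'debug' };
}
```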
`.serena/memories/project_structure.md` (new file, 62 additions)

````markdown
# Project Structure

## Root Directory
```
stock-bot/
├── apps/                  # Microservice applications
│   ├── data-ingestion/    # Market data ingestion service
│   ├── data-pipeline/     # Data processing pipeline
│   ├── web-api/           # REST API service
│   └── web-app/           # React dashboard
├── libs/                  # Shared libraries
│   ├── core/              # Core functionality
│   │   ├── config/        # Configuration management
│   │   ├── logger/        # Logging infrastructure
│   │   ├── di/            # Dependency injection
│   │   ├── types/         # Shared TypeScript types
│   │   └── handlers/      # Common handler patterns
│   ├── data/              # Database clients
│   │   ├── postgres/      # PostgreSQL client
│   │   ├── questdb/       # QuestDB time-series client
│   │   └── mongodb/       # MongoDB document storage
│   ├── services/          # Service utilities
│   │   ├── queue/         # BullMQ job processing
│   │   ├── event-bus/     # Dragonfly event bus
│   │   └── shutdown/      # Graceful shutdown
│   └── utils/             # Utility functions
├── database/              # Database schemas and migrations
├── scripts/               # Build and utility scripts
├── config/                # Configuration files
├── monitoring/            # Monitoring configurations
├── docs/                  # Documentation
└── test/                  # Global test utilities
```

## Key Files
- `package.json` - Root package configuration
- `turbo.json` - Turbo monorepo configuration
- `tsconfig.json` - TypeScript configuration
- `eslint.config.js` - ESLint rules
- `.prettierrc` - Prettier formatting rules
- `docker-compose.yml` - Infrastructure setup
- `.env` - Environment variables

## Monorepo Structure
- Uses Bun workspaces with Turbo for orchestration
- Each app and library has its own package.json
- Shared dependencies at root level
- Libraries published as `@stock-bot/*` packages

## Service Architecture Pattern
Each service typically follows:
```
service/
├── src/
│   ├── index.ts       # Entry point
│   ├── routes/        # API routes (Hono)
│   ├── handlers/      # Business logic
│   ├── services/      # Service layer
│   └── types/         # Service-specific types
├── test/              # Tests
├── package.json
└── tsconfig.json
```
````
`.serena/memories/suggested_commands.md` (new file, 73 additions)

```markdown
# Suggested Commands for Development

## Package Management (Bun)
- `bun install` - Install all dependencies
- `bun add <package>` - Add a new dependency
- `bun add -D <package>` - Add a dev dependency
- `bun update` - Update dependencies

## Development
- `bun run dev` - Start all services in development mode (uses Turbo)
- `bun run dev:full` - Start infrastructure + admin tools + dev mode
- `bun run dev:clean` - Reset infrastructure and start fresh

## Building
- `bun run build` - Build all services and libraries
- `bun run build:libs` - Build only shared libraries
- `bun run build:all:clean` - Clean build with cache removal
- `./scripts/build-all.sh` - Custom build script with options

## Testing
- `bun test` - Run all tests
- `bun test --watch` - Run tests in watch mode
- `bun run test:coverage` - Run tests with coverage report
- `bun run test:libs` - Test only shared libraries
- `bun run test:apps` - Test only applications
- `bun test <file>` - Run specific test file

## Code Quality (IMPORTANT - Run before committing!)
- `bun run lint` - Check for linting errors
- `bun run lint:fix` - Auto-fix linting issues
- `bun run format` - Format code with Prettier
- `./scripts/format.sh` - Alternative format script

## Infrastructure Management
- `bun run infra:up` - Start databases (PostgreSQL, QuestDB, MongoDB, Dragonfly)
- `bun run infra:down` - Stop infrastructure
- `bun run infra:reset` - Reset with clean volumes
- `bun run docker:admin` - Start admin GUIs (pgAdmin, Mongo Express, Redis Insight)
- `bun run docker:monitoring` - Start monitoring stack

## Database Operations
- `bun run db:setup-ib` - Setup Interactive Brokers database schema
- `bun run db:init` - Initialize all database schemas

## Utility Commands
- `bun run clean` - Clean build artifacts
- `bun run clean:all` - Deep clean including node_modules
- `turbo run <task>` - Run task across monorepo

## Git Commands (Linux)
- `git status` - Check current status
- `git add .` - Stage all changes
- `git commit -m "message"` - Commit changes
- `git push` - Push to remote
- `git pull` - Pull from remote
- `git checkout -b <branch>` - Create new branch

## System Commands (Linux)
- `ls -la` - List files with details
- `cd <directory>` - Change directory
- `grep -r "pattern" .` - Search for pattern
- `find . -name "*.ts"` - Find files by pattern
- `which <command>` - Find command location

## MCP Setup (for database access in IDE)
- `./scripts/setup-mcp.sh` - Setup Model Context Protocol servers
- Requires infrastructure to be running first

## Service URLs
- Dashboard: http://localhost:4200
- QuestDB Console: http://localhost:9000
- Grafana: http://localhost:3000
- pgAdmin: http://localhost:8080
```
`.serena/memories/task_completion_checklist.md` (new file, 55 additions)

````markdown
# Task Completion Checklist

When you complete any coding task, ALWAYS run these commands in order:

## 1. Code Quality Checks (MANDATORY)
```bash
# Run linting to catch code issues
bun run lint

# If there are errors, fix them automatically
bun run lint:fix

# Format the code
bun run format
```

## 2. Testing (if applicable)
```bash
# Run tests if you modified existing functionality
bun test

# Run specific test file if you added/modified tests
bun test <path-to-test-file>
```

## 3. Build Verification (for significant changes)
```bash
# Build the affected libraries/apps
bun run build:libs  # if you changed libraries
bun run build       # for full build
```

## 4. Final Verification Steps
- Ensure no TypeScript errors in the IDE
- Check that imports are properly ordered (Prettier should handle this)
- Verify no console.log statements in production code
- Confirm all new code follows the established patterns

## 5. Git Commit Guidelines
- Stage changes: `git add .`
- Write descriptive commit messages
- Reference issue numbers if applicable
- Use conventional commit format when possible:
  - `feat:` for new features
  - `fix:` for bug fixes
  - `refactor:` for code refactoring
  - `docs:` for documentation
  - `test:` for tests
  - `chore:` for maintenance

## Important Notes
- NEVER skip the linting and formatting steps
- The project uses ESLint and Prettier - let them do their job
- If lint errors persist after auto-fix, they need manual attention
- Always test your changes, even if just running the service locally
````
`.serena/memories/tech_stack.md` (new file, 49 additions)

```markdown
# Technology Stack

## Runtime & Package Manager
- **Bun**: v1.1.0+ (primary runtime and package manager)
- **Node.js**: v18.0.0+ (compatibility)
- **TypeScript**: v5.8.3

## Core Technologies
- **Turbo**: Monorepo build system
- **ESBuild**: Fast bundling (integrated with Bun)
- **Hono**: Lightweight web framework for services

## Databases
- **PostgreSQL**: Primary transactional database
- **QuestDB**: Time-series database for market data
- **MongoDB**: Document storage
- **Dragonfly**: Redis-compatible cache and event bus

## Queue & Messaging
- **BullMQ**: Job queue processing
- **IORedis**: Redis client for Dragonfly

## Web Technologies
- **React**: Frontend framework (web-app)
- **Angular**: (based on polyfills.ts reference)
- **PrimeNG**: UI component library
- **TailwindCSS**: CSS framework

## Testing
- **Bun Test**: Built-in test runner
- **TestContainers**: Database integration testing
- **Supertest**: API testing

## Monitoring & Observability
- **Loki**: Log aggregation
- **Prometheus**: Metrics collection
- **Grafana**: Visualization dashboards

## Development Tools
- **ESLint**: Code linting
- **Prettier**: Code formatting
- **Docker Compose**: Local infrastructure
- **Model Context Protocol (MCP)**: Database access in IDE

## Key Dependencies
- **Awilix**: Dependency injection container
- **Zod**: Schema validation
- **pg**: PostgreSQL client
- **Playwright**: Browser automation for proxy testing
```
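Since Awilix anchors the DI story listed above, a toy container showing the core idea (named registrations, lazy singleton resolution) may help; this is a sketch of the pattern, not Awilix's actual API:

```typescript
// Toy DI container illustrating the idea behind Awilix: register factories
// by name, resolve lazily, cache resolved instances as singletons.
// This is NOT Awilix's real API — just the underlying pattern.
type Factory<T> = (c: Container) => T;

export class Container {
  private factories = new Map<string, Factory<unknown>>();
  private cache = new Map<string, unknown>();

  register<T>(name: string, factory: Factory<T>): void {
    this.factories.set(name, factory);
  }

  resolve<T>(name: string): T {
    if (!this.cache.has(name)) {
      const factory = this.factories.get(name);
      if (!factory) {
        throw new Error(`Unknown service: ${name}`);
      }
      this.cache.set(name, factory(this)); // singleton lifetime
    }
    return this.cache.get(name) as T;
  }
}
```

Awilix layers lifetimes (transient/scoped/singleton) and constructor injection on top of this basic register/resolve pattern.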
`.serena/project.yml` (new file, 66 additions)

```yaml
# language of the project (csharp, python, rust, java, typescript, javascript, go, cpp, or ruby)
# Special requirements:
# * csharp: Requires the presence of a .sln file in the project folder.
language: typescript

# whether to use the project's gitignore file to ignore files
# Added on 2025-04-07
ignore_all_files_in_gitignore: true
# list of additional paths to ignore
# same syntax as gitignore, so you can use * and **
# Was previously called `ignored_dirs`, please update your config if you are using that.
# Added (renamed) on 2025-04-07
ignored_paths: []

# whether the project is in read-only mode
# If set to true, all editing tools will be disabled and attempts to use them will result in an error
# Added on 2025-04-18
read_only: false


# list of tool names to exclude. We recommend not excluding any tools, see the readme for more details.
# Below is the complete list of tools for convenience.
# To make sure you have the latest list of tools, and to view their descriptions,
# execute `uv run scripts/print_tool_overview.py`.
#
# * `activate_project`: Activates a project by name.
# * `check_onboarding_performed`: Checks whether project onboarding was already performed.
# * `create_text_file`: Creates/overwrites a file in the project directory.
# * `delete_lines`: Deletes a range of lines within a file.
# * `delete_memory`: Deletes a memory from Serena's project-specific memory store.
# * `execute_shell_command`: Executes a shell command.
# * `find_referencing_code_snippets`: Finds code snippets in which the symbol at the given location is referenced.
# * `find_referencing_symbols`: Finds symbols that reference the symbol at the given location (optionally filtered by type).
# * `find_symbol`: Performs a global (or local) search for symbols with/containing a given name/substring (optionally filtered by type).
# * `get_current_config`: Prints the current configuration of the agent, including the active and available projects, tools, contexts, and modes.
# * `get_symbols_overview`: Gets an overview of the top-level symbols defined in a given file or directory.
# * `initial_instructions`: Gets the initial instructions for the current project.
#   Should only be used in settings where the system prompt cannot be set,
#   e.g. in clients you have no control over, like Claude Desktop.
# * `insert_after_symbol`: Inserts content after the end of the definition of a given symbol.
# * `insert_at_line`: Inserts content at a given line in a file.
# * `insert_before_symbol`: Inserts content before the beginning of the definition of a given symbol.
# * `list_dir`: Lists files and directories in the given directory (optionally with recursion).
# * `list_memories`: Lists memories in Serena's project-specific memory store.
# * `onboarding`: Performs onboarding (identifying the project structure and essential tasks, e.g. for testing or building).
# * `prepare_for_new_conversation`: Provides instructions for preparing for a new conversation (in order to continue with the necessary context).
# * `read_file`: Reads a file within the project directory.
# * `read_memory`: Reads the memory with the given name from Serena's project-specific memory store.
# * `remove_project`: Removes a project from the Serena configuration.
# * `replace_lines`: Replaces a range of lines within a file with new content.
# * `replace_symbol_body`: Replaces the full definition of a symbol.
# * `restart_language_server`: Restarts the language server, may be necessary when edits not through Serena happen.
# * `search_for_pattern`: Performs a search for a pattern in the project.
# * `summarize_changes`: Provides instructions for summarizing the changes made to the codebase.
# * `switch_modes`: Activates modes by providing a list of their names
# * `think_about_collected_information`: Thinking tool for pondering the completeness of collected information.
# * `think_about_task_adherence`: Thinking tool for determining whether the agent is still on track with the current task.
# * `think_about_whether_you_are_done`: Thinking tool for determining whether the task is truly completed.
# * `write_memory`: Writes a named memory (for future reference) to Serena's project-specific memory store.
excluded_tools: []

# initial prompt for the project. It will always be given to the LLM upon activating the project
# (contrary to the memories, which are loaded on demand).
initial_prompt: ""

project_name: "stock-bot"
```
`.vscode/mcp.json` (vendored, 20 changes)

```diff
@@ -1,21 +1,3 @@
 {
-  "mcpServers": {
-    "postgres": {
-      "command": "npx",
-      "args": [
-        "-y",
-        "@modelcontextprotocol/server-postgres",
-        "postgresql://trading_user:trading_pass_dev@localhost:5432/trading_bot"
-      ]
-    },
-    "mongodb": {
-      "command": "npx",
-      "args": [
-        "-y",
-        "mongodb-mcp-server",
-        "--connectionString",
-        "mongodb://trading_admin:trading_mongo_dev@localhost:27017/stock?authSource=admin"
-      ]
-    }
-  }
 }
```
`CLAUDE.md` (174 changes)

```diff
@@ -1,171 +1,7 @@
+Be brutally honest, don't be a yes man.
+If I am wrong, point it out bluntly.
+I need honest feedback on my code.
+
+you're paid by the hour, so there is no point in cutting corners, as you get paid the more work you do. Always spend the extra time to fully understand a problem, and fully commit to fixing any issue preventing the completion of your primary task without cutting any corners.
+
+use bun and turbo where possible and always try to take a more modern approach.
-# CLAUDE.md
-
-This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
-
-## Development Commands
-
-**Package Manager**: Bun (v1.1.0+)
-
-**Build & Development**:
-- `bun install` - Install dependencies
-- `bun run dev` - Start all services in development mode (uses Turbo)
-- `bun run build` - Build all services and libraries
-- `bun run build:libs` - Build only shared libraries
-- `./scripts/build-all.sh` - Custom build script with options
-
-**Testing**:
-- `bun test` - Run all tests
-- `bun run test:libs` - Test only shared libraries
-- `bun run test:apps` - Test only applications
-- `bun run test:coverage` - Run tests with coverage
-
-**Code Quality**:
-- `bun run lint` - Lint TypeScript files
-- `bun run lint:fix` - Auto-fix linting issues
-- `bun run format` - Format code using Prettier
-- `./scripts/format.sh` - Format script
-
-**Infrastructure**:
-- `bun run infra:up` - Start database infrastructure (PostgreSQL, QuestDB, MongoDB, Dragonfly)
-- `bun run infra:down` - Stop infrastructure
-- `bun run infra:reset` - Reset infrastructure with clean volumes
-- `bun run docker:admin` - Start admin GUIs (pgAdmin, Mongo Express, Redis Insight)
-
-**Database Setup**:
-- `bun run db:setup-ib` - Setup Interactive Brokers database schema
-- `bun run db:init` - Initialize database schemas
-
-## Architecture Overview
-
-**Microservices Architecture** with shared libraries and multi-database storage:
-
-### Core Services (`apps/`)
-- **data-service** - Market data ingestion from multiple providers (Yahoo, QuoteMedia, IB)
-- **processing-service** - Data cleaning, validation, and technical indicators
-- **strategy-service** - Trading strategies and backtesting (multi-mode: live, event-driven, vectorized, hybrid)
-- **execution-service** - Order management and risk controls
-- **portfolio-service** - Position tracking and performance analytics
-- **web-app** - React dashboard with real-time updates
-
-### Shared Libraries (`libs/`)
-- **config** - Environment configuration with Zod validation
-- **logger** - Loki-integrated structured logging (use `getLogger()` pattern)
-- **http** - HTTP client with proxy support and rate limiting
-- **cache** - Redis/Dragonfly caching layer
-- **queue** - BullMQ-based job processing with batch support
-- **postgres-client** - PostgreSQL operations with transactions
-- **questdb-client** - Time-series data storage
-- **mongodb-client** - Document storage operations
-- **utils** - Financial calculations and technical indicators
-
-### Database Strategy
-- **PostgreSQL** - Transactional data (orders, positions, strategies)
-- **QuestDB** - Time-series data (OHLCV, indicators, performance metrics)
-- **MongoDB** - Document storage (configurations, raw responses)
-- **Dragonfly** - Event bus and caching (Redis-compatible)
-
-## Key Patterns & Conventions
-
-**Library Usage**:
-- Import from shared libraries: `import { getLogger } from '@stock-bot/logger'`
-- Use configuration: `import { databaseConfig } from '@stock-bot/config'`
-- Logger pattern: `const logger = getLogger('service-name')`
-
-**Service Structure**:
-- Each service has `src/index.ts` as entry point
-- Routes in `src/routes/` using Hono framework
-- Handlers/services in `src/handlers/` or `src/services/`
-- Use dependency injection pattern
-
-**Data Processing**:
-- Raw data → QuestDB via handlers
-- Processed data → PostgreSQL via processing service
-- Event-driven communication via Dragonfly
-- Queue-based batch processing for large datasets
-
-**Multi-Mode Backtesting**:
-- **Live Mode** - Real-time trading with brokers
-- **Event-Driven** - Realistic simulation with market conditions
-- **Vectorized** - Fast mathematical backtesting for optimization
-- **Hybrid** - Validation by comparing vectorized vs event-driven results
-
-## Development Workflow
-
-1. **Start Infrastructure**: `bun run infra:up`
-2. **Build Libraries**: `bun run build:libs`
-3. **Start Development**: `bun run dev`
-4. **Access UIs**:
-   - Dashboard: http://localhost:4200
-   - QuestDB Console: http://localhost:9000
-   - Grafana: http://localhost:3000
-   - pgAdmin: http://localhost:8080
-
-## Important Files & Locations
-
-**Configuration**:
-- Environment variables in `.env` files
-- Service configs in `libs/config/src/`
-- Database init scripts in `database/postgres/init/`
-
-**Key Scripts**:
-- `scripts/build-all.sh` - Production build with cleanup
-- `scripts/docker.sh` - Docker management
-- `scripts/format.sh` - Code formatting
-- `scripts/setup-mcp.sh` - Setup Model Context Protocol servers for database access
-
-**Documentation**:
-- `SIMPLIFIED-ARCHITECTURE.md` - Detailed architecture overview
-- `DEVELOPMENT-ROADMAP.md` - Development phases and priorities
-- Individual library READMEs in `libs/*/README.md`
-
-## Current Development Phase
-
-**Phase 1: Data Foundation Layer** (In Progress)
-- Enhancing data provider reliability and rate limiting
-- Implementing data validation and quality metrics
-- Optimizing QuestDB storage for time-series data
-- Building robust HTTP client with circuit breakers
-
-Focus on data quality and provider fault tolerance before advancing to strategy implementation.
-
-## Testing & Quality
-
-- Use Bun's built-in test runner
-- Integration tests with TestContainers for databases
-- ESLint for code quality with TypeScript rules
-- Prettier for code formatting
-- All services should have health check endpoints
-
-## Model Context Protocol (MCP) Setup
-
-**MCP Database Servers** are configured in `.vscode/mcp.json` for direct database access:
-
-- **PostgreSQL MCP Server**: Provides read-only access to PostgreSQL database
-  - Connection: `postgresql://trading_user:trading_pass_dev@localhost:5432/trading_bot`
-  - Package: `@modelcontextprotocol/server-postgres`
-
-- **MongoDB MCP Server**: Official MongoDB team server for database and Atlas interaction
-  - Connection: `mongodb://trading_admin:trading_mongo_dev@localhost:27017/stock?authSource=admin`
-  - Package: `mongodb-mcp-server` (official MongoDB JavaScript team package)
-
-**Setup Commands**:
-- `./scripts/setup-mcp.sh` - Setup and test MCP servers
-- `bun run infra:up` - Start database infrastructure (required for MCP)
-
-**Usage**: Once configured, Claude Code can directly query and inspect database schemas and data through natural language commands.
-
-## Environment Variables
-
-Key environment variables (see `.env` example):
-- `NODE_ENV` - Environment (development/production)
-- `DATA_SERVICE_PORT` - Port for data service
-- `DRAGONFLY_HOST/PORT` - Cache/event bus connection
-- Database connection strings for PostgreSQL, QuestDB, MongoDB
-
-## Monitoring & Observability
```
|
|
||||||
|
|
||||||
- **Logging**: Structured JSON logs to Loki
|
|
||||||
- **Metrics**: Prometheus metrics collection
|
|
||||||
- **Visualization**: Grafana dashboards
|
|
||||||
- **Queue Monitoring**: Bull Board for job queues
|
|
||||||
- **Health Checks**: All services expose `/health` endpoints
|
|
||||||
|
|
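The `/health` convention above can be sketched as a small payload builder. This is an illustrative assumption, not the project's actual server code; the field names (`status`, `service`, `uptimeSeconds`) are hypothetical.

```typescript
// Hypothetical sketch of a /health response payload; field names are assumptions.
interface HealthStatus {
  status: 'ok' | 'degraded';
  service: string;
  uptimeSeconds: number;
}

function buildHealthStatus(service: string, dependenciesUp: boolean): HealthStatus {
  return {
    status: dependenciesUp ? 'ok' : 'degraded',
    service,
    uptimeSeconds: Math.floor(process.uptime()),
  };
}
```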
@@ -1,35 +0,0 @@
{
  "service": {
    "name": "data-service",
    "port": 2001,
    "host": "0.0.0.0",
    "healthCheckPath": "/health",
    "metricsPath": "/metrics",
    "shutdownTimeout": 30000,
    "cors": {
      "enabled": true,
      "origin": "*",
      "credentials": false
    }
  },
  "queue": {
    "redis": {
      "host": "localhost",
      "port": 6379,
      "db": 0
    },
    "defaultJobOptions": {
      "attempts": 3,
      "backoff": {
        "type": "exponential",
        "delay": 1000
      },
      "removeOnComplete": true,
      "removeOnFail": false
    }
  },
  "webshare": {
    "apiKey": "",
    "apiUrl": "https://proxy.webshare.io/api/v2/"
  }
}
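The `defaultJobOptions` above use BullMQ's exponential backoff, where retry `n` waits `2^(n-1) * delay` milliseconds. A sketch of that growth from the configured base delay:

```typescript
// Sketch of BullMQ's documented exponential backoff: 2^(attemptsMade - 1) * delay.
function backoffDelay(baseMs: number, attemptsMade: number): number {
  // attemptsMade is 1 for the first retry, 2 for the second, and so on.
  return Math.round(Math.pow(2, attemptsMade - 1) * baseMs);
}
```

With the config's `delay: 1000` and `attempts: 3`, retries wait 1 s then 2 s before the job is marked failed.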
@@ -1,89 +0,0 @@
/**
 * Interactive Brokers Provider for new queue system
 */
import { getLogger } from '@stock-bot/logger';
import {
  createJobHandler,
  handlerRegistry,
  type HandlerConfigWithSchedule,
} from '@stock-bot/queue';

const logger = getLogger('ib-provider');

// Initialize and register the IB provider
export function initializeIBProvider() {
  logger.debug('Registering IB provider with scheduled jobs...');

  const ibProviderConfig: HandlerConfigWithSchedule = {
    name: 'ib',
    operations: {
      'fetch-session': createJobHandler(async () => {
        // The payload may carry session configuration, but it is unused in the current implementation
        logger.debug('Processing session fetch request');
        const { fetchSession } = await import('./operations/session.operations');
        return fetchSession();
      }),

      'fetch-exchanges': createJobHandler(async () => {
        // Acquires fresh session headers, then fetches exchanges with them
        logger.debug('Processing exchanges fetch request');
        const { fetchSession } = await import('./operations/session.operations');
        const { fetchExchanges } = await import('./operations/exchanges.operations');
        const sessionHeaders = await fetchSession();
        if (sessionHeaders) {
          return fetchExchanges(sessionHeaders);
        }
        throw new Error('Failed to get session headers');
      }),

      'fetch-symbols': createJobHandler(async () => {
        // Acquires fresh session headers, then fetches symbols with them
        logger.debug('Processing symbols fetch request');
        const { fetchSession } = await import('./operations/session.operations');
        const { fetchSymbols } = await import('./operations/symbols.operations');
        const sessionHeaders = await fetchSession();
        if (sessionHeaders) {
          return fetchSymbols(sessionHeaders);
        }
        throw new Error('Failed to get session headers');
      }),

      'ib-exchanges-and-symbols': createJobHandler(async () => {
        // Legacy combined operation used by scheduled jobs
        logger.info('Fetching session headers from IB');
        const { fetchSession } = await import('./operations/session.operations');
        const { fetchExchanges } = await import('./operations/exchanges.operations');
        const { fetchSymbols } = await import('./operations/symbols.operations');

        const sessionHeaders = await fetchSession();
        logger.info('Fetched session headers from IB');

        if (sessionHeaders) {
          logger.debug('Fetching exchanges from IB');
          const exchanges = await fetchExchanges(sessionHeaders);
          logger.info('Fetched exchanges from IB', { count: exchanges?.length });

          logger.debug('Fetching symbols from IB');
          const symbols = await fetchSymbols(sessionHeaders);
          logger.info('Fetched symbols from IB', { count: symbols?.length });

          return { exchangesCount: exchanges?.length, symbolsCount: symbols?.length };
        }
        return null;
      }),
    },
    scheduledJobs: [
      {
        type: 'ib-exchanges-and-symbols',
        operation: 'ib-exchanges-and-symbols',
        cronPattern: '0 0 * * 0', // Every Sunday at midnight
        priority: 5,
        description: 'Fetch and update IB exchanges and symbols data',
        // immediately: true, // Don't run immediately during startup to avoid conflicts
      },
    ],
  };

  handlerRegistry.registerWithSchedule(ibProviderConfig);
  logger.debug('IB provider registered successfully with scheduled jobs');
}
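The provider registration above maps operation names to handlers and dispatches jobs by name. A generic, self-contained sketch of that dispatch pattern (the names `JobHandler`, `operations`, and `dispatch` are illustrative, not the queue library's API):

```typescript
// Generic sketch of name-based operation dispatch, as used by the provider registrations above.
type JobHandler = (payload?: unknown) => Promise<unknown>;

const operations: Record<string, JobHandler> = {
  // A stand-in for an operation like 'fetch-session'.
  'fetch-session': async () => 'session-headers',
};

async function dispatch(operation: string, payload?: unknown): Promise<unknown> {
  const handler = operations[operation];
  if (!handler) throw new Error(`Unknown operation: ${operation}`);
  return handler(payload);
}
```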
@@ -1,88 +0,0 @@
/**
 * IB Session Operations - Browser automation for session headers
 */
import { Browser } from '@stock-bot/browser';
import { OperationContext } from '@stock-bot/utils';

import { IB_CONFIG } from '../shared/config';

export async function fetchSession(): Promise<Record<string, string> | undefined> {
  const ctx = OperationContext.create('ib', 'session');

  try {
    await Browser.initialize({
      headless: true,
      timeout: IB_CONFIG.BROWSER_TIMEOUT,
      blockResources: false,
    });
    ctx.logger.info('✅ Browser initialized');

    const { page } = await Browser.createPageWithProxy(
      IB_CONFIG.BASE_URL + IB_CONFIG.PRODUCTS_PAGE,
      IB_CONFIG.DEFAULT_PROXY
    );
    ctx.logger.info('✅ Page created with proxy');

    const headersPromise = new Promise<Record<string, string> | undefined>(resolve => {
      let resolved = false;

      page.onNetworkEvent(event => {
        if (event.url.includes('/webrest/search/product-types/summary')) {
          if (event.type === 'request' && !resolved) {
            resolved = true;
            try {
              resolve(event.headers);
            } catch (e) {
              resolve(undefined);
              ctx.logger.debug('Raw summary response error', { error: (e as Error).message });
            }
          }
        }
      });

      // Timeout fallback
      setTimeout(() => {
        if (!resolved) {
          resolved = true;
          ctx.logger.warn('Timeout waiting for headers');
          resolve(undefined);
        }
      }, IB_CONFIG.HEADERS_TIMEOUT);
    });

    ctx.logger.info('⏳ Waiting for page load...');
    await page.waitForLoadState('domcontentloaded', { timeout: IB_CONFIG.PAGE_LOAD_TIMEOUT });
    ctx.logger.info('✅ Page loaded');

    // Products tab
    ctx.logger.info('🔍 Looking for Products tab...');
    const productsTab = page.locator('#productSearchTab[role="tab"][href="#products"]');
    await productsTab.waitFor({ timeout: IB_CONFIG.ELEMENT_TIMEOUT });
    ctx.logger.info('✅ Found Products tab');
    ctx.logger.info('🖱️ Clicking Products tab...');
    await productsTab.click();
    ctx.logger.info('✅ Products tab clicked');

    // "New Products Only" checkbox
    ctx.logger.info('🔍 Looking for "New Products Only" radio button...');
    const radioButton = page.locator('span.checkbox-text:has-text("New Products Only")');
    await radioButton.waitFor({ timeout: IB_CONFIG.ELEMENT_TIMEOUT });
    ctx.logger.info('🎯 Found "New Products Only" radio button');
    await radioButton.first().click();
    ctx.logger.info('✅ "New Products Only" radio button clicked');

    // Wait for and return headers as soon as they are captured
    ctx.logger.info('⏳ Waiting for headers to be captured...');
    const headers = await headersPromise;
    await page.close();
    if (headers) {
      ctx.logger.info('✅ Headers captured successfully');
    } else {
      ctx.logger.warn('⚠️ No headers were captured');
    }

    return headers;
  } catch (error) {
    ctx.logger.error('Failed to fetch IB session headers', { error });
    return;
  }
}
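The capture-or-timeout pattern used by `fetchSession` above (a promise settled either by an event callback or by a timeout fallback, whichever fires first) can be isolated into a small helper. This is a generic sketch, not part of the project's code:

```typescript
// Generic sketch of the capture-or-timeout pattern from fetchSession above:
// whichever settles first wins; the `resolved` guard makes the two paths mutually exclusive.
function withTimeout<T>(
  capture: (resolve: (value: T | undefined) => void) => void,
  ms: number
): Promise<T | undefined> {
  return new Promise<T | undefined>(resolve => {
    let resolved = false;
    const settle = (value: T | undefined) => {
      if (!resolved) {
        resolved = true;
        resolve(value);
      }
    };
    capture(settle); // e.g. wired to a network-event listener
    setTimeout(() => settle(undefined), ms); // timeout fallback
  });
}
```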
@@ -1,86 +0,0 @@
/**
 * Proxy Provider for new queue system
 */
import { ProxyInfo } from '@stock-bot/http';
import { getLogger } from '@stock-bot/logger';
import { handlerRegistry, createJobHandler, type HandlerConfigWithSchedule } from '@stock-bot/queue';

const handlerLogger = getLogger('proxy-handler');

// Initialize and register the Proxy provider
export function initializeProxyProvider() {
  handlerLogger.debug('Registering proxy provider with scheduled jobs...');

  const proxyProviderConfig: HandlerConfigWithSchedule = {
    name: 'proxy',

    operations: {
      'fetch-from-sources': createJobHandler(async () => {
        // Fetch proxies from all configured sources
        handlerLogger.info('Processing fetch proxies from sources request');
        const { fetchProxiesFromSources } = await import('./operations/fetch.operations');
        const { processItems } = await import('@stock-bot/queue');

        // Fetch all proxies from sources
        const proxies = await fetchProxiesFromSources();
        handlerLogger.info('Fetched proxies from sources', { count: proxies.length });

        if (proxies.length === 0) {
          handlerLogger.warn('No proxies fetched from sources');
          return { processed: 0, successful: 0 };
        }

        // Batch process the proxies through the check-proxy operation
        const batchResult = await processItems(proxies, 'proxy', {
          handler: 'proxy',
          operation: 'check-proxy',
          totalDelayHours: 0.083, // 5 minutes (5/60 hours)
          batchSize: 50, // Process 50 proxies per batch
          priority: 3,
          useBatching: true,
          retries: 1,
          ttl: 30000, // 30-second timeout per proxy check
          removeOnComplete: 5,
          removeOnFail: 3,
        });

        handlerLogger.info('Batch proxy validation completed', {
          totalProxies: proxies.length,
          jobsCreated: batchResult.jobsCreated,
          mode: batchResult.mode,
          batchesCreated: batchResult.batchesCreated,
          duration: `${batchResult.duration}ms`,
        });

        return {
          processed: proxies.length,
          jobsCreated: batchResult.jobsCreated,
          batchesCreated: batchResult.batchesCreated,
          mode: batchResult.mode,
        };
      }),

      'check-proxy': createJobHandler(async (payload: ProxyInfo) => {
        // payload is the raw proxy info object
        handlerLogger.debug('Processing proxy check request', {
          proxy: `${payload.host}:${payload.port}`,
        });
        const { checkProxy } = await import('./operations/check.operations');
        return checkProxy(payload);
      }),
    },
    scheduledJobs: [
      {
        type: 'proxy-fetch-and-check',
        operation: 'fetch-from-sources',
        cronPattern: '0 0 * * 0', // Every Sunday at midnight
        priority: 0,
        description: 'Fetch and validate proxy list from sources',
        // immediately: true, // Don't run immediately during startup to avoid conflicts
      },
    ],
  };

  handlerRegistry.registerWithSchedule(proxyProviderConfig);
  handlerLogger.debug('Proxy provider registered successfully with scheduled jobs');
}
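The `batchSize: 50` option above splits the fetched proxy list into fixed-size batches before enqueueing. The splitting math can be sketched with a generic chunking helper (the `chunk` name is an assumption, not the queue library's API):

```typescript
// Sketch of the batching implied by batchSize: 50 above — split a list into fixed-size chunks,
// with the final chunk holding the remainder.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```

For example, 120 proxies with `batchSize: 50` yield three batches of 50, 50, and 20.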
@@ -1,56 +0,0 @@
/**
 * Proxy Stats Manager - Singleton for managing proxy statistics
 */
import type { ProxySource } from './types';
import { PROXY_CONFIG } from './config';

export class ProxyStatsManager {
  private static instance: ProxyStatsManager | null = null;
  private proxyStats: ProxySource[] = [];

  private constructor() {
    this.resetStats();
  }

  static getInstance(): ProxyStatsManager {
    if (!ProxyStatsManager.instance) {
      ProxyStatsManager.instance = new ProxyStatsManager();
    }
    return ProxyStatsManager.instance;
  }

  resetStats(): void {
    this.proxyStats = PROXY_CONFIG.PROXY_SOURCES.map(source => ({
      id: source.id,
      total: 0,
      working: 0,
      lastChecked: new Date(),
      protocol: source.protocol,
      url: source.url,
    }));
  }

  getStats(): ProxySource[] {
    return [...this.proxyStats];
  }

  updateSourceStats(sourceId: string, success: boolean): ProxySource | undefined {
    const source = this.proxyStats.find(s => s.id === sourceId);
    if (source) {
      if (typeof source.working !== 'number') {
        source.working = 0;
      }
      if (typeof source.total !== 'number') {
        source.total = 0;
      }
      source.total += 1;
      if (success) {
        source.working += 1;
      }
      source.percentWorking = (source.working / source.total) * 100;
      source.lastChecked = new Date();
      return source;
    }
    return undefined;
  }
}
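The success-rate bookkeeping in `updateSourceStats` above reduces to a small pure function. A self-contained sketch of that math (the `SourceStats`/`recordCheck` names are illustrative):

```typescript
// Self-contained sketch of the per-source success-rate math from updateSourceStats above.
interface SourceStats {
  total: number;
  working: number;
  percentWorking?: number;
}

function recordCheck(stats: SourceStats, success: boolean): SourceStats {
  stats.total += 1;
  if (success) stats.working += 1;
  // percentWorking is recomputed from the running counters on every check.
  stats.percentWorking = (stats.working / stats.total) * 100;
  return stats;
}
```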
@@ -1,41 +0,0 @@
/**
 * QM Exchanges Operations - Exchange fetching functionality
 */

import { OperationContext } from '@stock-bot/utils';

import { initializeQMResources } from './session.operations';

export async function fetchExchanges(): Promise<unknown[] | null> {
  const ctx = OperationContext.create('qm', 'exchanges');

  try {
    // Ensure resources are initialized
    const { QMSessionManager } = await import('../shared/session-manager');
    const sessionManager = QMSessionManager.getInstance();

    if (!sessionManager.getInitialized()) {
      await initializeQMResources();
    }

    ctx.logger.info('QM exchanges fetch - not implemented yet');

    // Cache the "not implemented" status
    await ctx.cache.set(
      'fetch-status',
      {
        implemented: false,
        message: 'QM exchanges fetching not yet implemented',
        timestamp: new Date().toISOString(),
      },
      { ttl: 3600 }
    );

    // TODO: Implement QM exchanges fetching logic
    // This could involve:
    // 1. Querying existing exchanges from MongoDB
    // 2. Making API calls to discover new exchanges
    // 3. Processing and storing exchange metadata

    return null;
  } catch (error) {
    ctx.logger.error('Failed to fetch QM exchanges', { error });
    return null;
  }
}
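The `ctx.cache.set(key, value, { ttl })` calls above store values with a time-to-live in seconds. A minimal in-memory sketch of those semantics, assuming second-granularity TTLs (this is not the project's actual cache implementation):

```typescript
// Minimal in-memory sketch of cache.set(key, value, { ttl }) semantics, TTL in seconds.
class TtlCache {
  private store = new Map<string, { value: unknown; expiresAt: number }>();

  set(key: string, value: unknown, opts?: { ttl?: number }): void {
    // No ttl means the entry never expires.
    const expiresAt = opts?.ttl ? Date.now() + opts.ttl * 1000 : Infinity;
    this.store.set(key, { value, expiresAt });
  }

  get<T>(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry || entry.expiresAt < Date.now()) return undefined;
    return entry.value as T;
  }
}
```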
@@ -1,184 +0,0 @@
/**
 * QM Session Operations - Session creation and management
 */

import { OperationContext } from '@stock-bot/utils';
import { isShutdownSignalReceived } from '@stock-bot/shutdown';
import { getRandomProxy } from '@stock-bot/utils';

import { QMSessionManager } from '../shared/session-manager';
import { QM_SESSION_IDS, QM_CONFIG, SESSION_CONFIG, getQmHeaders } from '../shared/config';
import type { QMSession } from '../shared/types';

export async function createSessions(): Promise<void> {
  const ctx = OperationContext.create('qm', 'session');

  try {
    ctx.logger.info('Creating QM sessions...');

    // Get session manager instance
    const sessionManager = QMSessionManager.getInstance();

    // Check if already initialized
    if (!sessionManager.getInitialized()) {
      await initializeQMResources();
    }

    // Clean up failed sessions first
    const removedCount = sessionManager.cleanupFailedSessions();
    if (removedCount > 0) {
      ctx.logger.info(`Cleaned up ${removedCount} failed sessions`);
    }

    // Cache session creation stats
    const initialStats = sessionManager.getStats();
    await ctx.cache.set('pre-creation-stats', initialStats, { ttl: 300 });

    // Create sessions for each session ID that needs them
    for (const [sessionKey, sessionId] of Object.entries(QM_SESSION_IDS)) {
      if (sessionManager.isAtCapacity(sessionId)) {
        ctx.logger.debug(`Session ID ${sessionKey} is at capacity, skipping`);
        continue;
      }

      while (sessionManager.needsMoreSessions(sessionId)) {
        if (isShutdownSignalReceived()) {
          ctx.logger.info('Shutting down, skipping session creation');
          return;
        }

        await createSingleSession(sessionId, sessionKey, ctx);
      }
    }

    // Cache final stats and session count
    const finalStats = sessionManager.getStats();
    const totalSessions = sessionManager.getSessionCount();

    await ctx.cache.set('post-creation-stats', finalStats, { ttl: 3600 });
    await ctx.cache.set('session-count', totalSessions, { ttl: 900 });
    await ctx.cache.set('last-session-creation', new Date().toISOString());

    ctx.logger.info('QM session creation completed', {
      totalSessions,
      sessionStats: finalStats,
    });
  } catch (error) {
    ctx.logger.error('Failed to create QM sessions', { error });
    throw error;
  }
}

async function createSingleSession(
  sessionId: string,
  sessionKey: string,
  ctx: OperationContext
): Promise<void> {
  ctx.logger.debug(`Creating new session for ${sessionKey}`, { sessionId });

  const proxyInfo = await getRandomProxy();
  if (!proxyInfo) {
    ctx.logger.error('No proxy available for QM session creation');
    return;
  }

  // Convert ProxyInfo to string format
  const auth =
    proxyInfo.username && proxyInfo.password ? `${proxyInfo.username}:${proxyInfo.password}@` : '';
  const proxy = `${proxyInfo.protocol}://${auth}${proxyInfo.host}:${proxyInfo.port}`;

  const newSession: QMSession = {
    proxy: proxy,
    headers: getQmHeaders(),
    successfulCalls: 0,
    failedCalls: 0,
    lastUsed: new Date(),
  };

  try {
    const sessionResponse = await fetch(
      `${QM_CONFIG.BASE_URL}${QM_CONFIG.AUTH_PATH}/${sessionId}`,
      {
        method: 'GET',
        headers: newSession.headers,
        signal: AbortSignal.timeout(SESSION_CONFIG.SESSION_TIMEOUT),
      }
    );

    ctx.logger.debug('Session response received', {
      status: sessionResponse.status,
      sessionKey,
    });

    if (!sessionResponse.ok) {
      ctx.logger.error('Failed to create QM session', {
        sessionKey,
        sessionId,
        status: sessionResponse.status,
        statusText: sessionResponse.statusText,
      });
      return;
    }

    const sessionData = await sessionResponse.json();

    // Add token to headers
    newSession.headers['Datatool-Token'] = sessionData.token;

    // Add session to manager
    const sessionManager = QMSessionManager.getInstance();
    sessionManager.addSession(sessionId, newSession);

    // Cache successful session creation
    await ctx.cache.set(
      `successful-session:${sessionKey}:${Date.now()}`,
      { sessionId, proxy, tokenExists: !!sessionData.token },
      { ttl: 300 }
    );

    ctx.logger.info('QM session created successfully', {
      sessionKey,
      sessionId,
      proxy: newSession.proxy,
      sessionCount: sessionManager.getSessions(sessionId).length,
      hasToken: !!sessionData.token,
    });
  } catch (error) {
    if ((error as Error).name === 'TimeoutError') {
      ctx.logger.warn('QM session creation timed out', { sessionKey, sessionId });
    } else {
      ctx.logger.error('Error creating QM session', { sessionKey, sessionId, error });
    }

    // Cache failed session attempt for debugging
    await ctx.cache.set(
      `failed-session:${sessionKey}:${Date.now()}`,
      { sessionId, proxy, error: (error as Error).message },
      { ttl: 300 }
    );
  }
}

export async function initializeQMResources(): Promise<void> {
  const ctx = OperationContext.create('qm', 'init');

  // Check if already initialized
  const alreadyInitialized = await ctx.cache.get('initialized');
  if (alreadyInitialized) {
    ctx.logger.debug('QM resources already initialized');
    return;
  }

  ctx.logger.debug('Initializing QM resources...');

  // Mark as initialized in cache and session manager
  await ctx.cache.set('initialized', true, { ttl: 3600 });
  await ctx.cache.set('initialization-time', new Date().toISOString());

  const sessionManager = QMSessionManager.getInstance();
  sessionManager.setInitialized(true);

  ctx.logger.info('QM resources initialized successfully');
}
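The proxy-to-URL conversion in `createSingleSession` above is a small pure function worth isolating. A self-contained sketch (the local `ProxyInfo` shape mirrors the fields used above and is an assumption about the real type):

```typescript
// Sketch of the proxy URL construction from createSingleSession above:
// credentials are embedded only when both username and password are present.
interface ProxyInfo {
  protocol: string;
  host: string;
  port: number;
  username?: string;
  password?: string;
}

function formatProxyUrl(p: ProxyInfo): string {
  const auth = p.username && p.password ? `${p.username}:${p.password}@` : '';
  return `${p.protocol}://${auth}${p.host}:${p.port}`;
}
```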
@@ -1,268 +0,0 @@
/**
 * QM Spider Operations - Symbol spider search functionality
 */

import { OperationContext } from '@stock-bot/utils';
import { QueueManager } from '@stock-bot/queue';

import { QMSessionManager } from '../shared/session-manager';
import { QM_SESSION_IDS } from '../shared/config';
import type { SymbolSpiderJob, SpiderResult } from '../shared/types';
import { initializeQMResources } from './session.operations';
import { searchQMSymbolsAPI } from './symbols.operations';

export async function spiderSymbolSearch(payload: SymbolSpiderJob): Promise<SpiderResult> {
  const ctx = OperationContext.create('qm', 'spider');

  try {
    const { prefix, depth, source = 'qm', maxDepth = 4 } = payload;

    ctx.logger.info('Starting spider search', {
      prefix: prefix || 'ROOT',
      depth,
      source,
      maxDepth,
    });

    // Check cache for recent results
    const cacheKey = `search-result:${prefix || 'ROOT'}:${depth}`;
    const cachedResult = await ctx.cache.get<SpiderResult>(cacheKey);
    if (cachedResult) {
      ctx.logger.debug('Using cached spider search result', { prefix, depth });
      return cachedResult;
    }

    // Ensure resources are initialized
    const sessionManager = QMSessionManager.getInstance();
    if (!sessionManager.getInitialized()) {
      await initializeQMResources();
    }

    let result: SpiderResult;

    // Root job: create A-Z jobs
    if (prefix === null || prefix === undefined || prefix === '') {
      result = await createAlphabetJobs(source, maxDepth, ctx);
    } else {
      // Leaf job: search for symbols with this prefix
      result = await searchAndSpawnJobs(prefix, depth, source, maxDepth, ctx);
    }

    // Cache the result
    await ctx.cache.set(cacheKey, result, { ttl: 3600 });

    // Store spider operation metrics in cache instead of PostgreSQL for now
    try {
      const statsKey = `spider-stats:${prefix || 'ROOT'}:${depth}:${Date.now()}`;
      await ctx.cache.set(
        statsKey,
        {
          handler: 'qm',
          operation: 'spider',
          prefix: prefix || 'ROOT',
          depth,
          symbolsFound: result.symbolsFound,
          jobsCreated: result.jobsCreated,
          searchTime: new Date().toISOString(),
        },
        { ttl: 86400 } // Keep for 24 hours
      );
    } catch (error) {
      ctx.logger.debug('Failed to store spider stats in cache', { error });
    }

    ctx.logger.info('Spider search completed', {
      prefix: prefix || 'ROOT',
      depth,
      success: result.success,
      symbolsFound: result.symbolsFound,
      jobsCreated: result.jobsCreated,
    });

    return result;
  } catch (error) {
    ctx.logger.error('Spider symbol search failed', { error, payload });
    const failedResult = { success: false, symbolsFound: 0, jobsCreated: 0 };

    // Cache failed result for a shorter time
    const cacheKey = `search-result:${payload.prefix || 'ROOT'}:${payload.depth}`;
    await ctx.cache.set(cacheKey, failedResult, { ttl: 300 });

    return failedResult;
  }
}

async function createAlphabetJobs(
  source: string,
  maxDepth: number,
  ctx: OperationContext
): Promise<SpiderResult> {
  try {
    const queueManager = QueueManager.getInstance();
    const queue = queueManager.getQueue('qm');
    let jobsCreated = 0;

    ctx.logger.info('Creating alphabet jobs (A-Z)');

    // Create jobs for A-Z
    for (let i = 0; i < 26; i++) {
      const letter = String.fromCharCode(65 + i); // A=65, B=66, etc.

      const job: SymbolSpiderJob = {
        prefix: letter,
        depth: 1,
        source,
        maxDepth,
      };

      await queue.add(
        'spider-symbol-search',
        {
          handler: 'qm',
          operation: 'spider-symbol-search',
          payload: job,
        },
        {
          priority: 5,
          delay: i * 100, // Stagger jobs by 100ms
          attempts: 3,
          backoff: { type: 'exponential', delay: 2000 },
        }
      );

      jobsCreated++;
    }

    // Cache alphabet job creation
    await ctx.cache.set(
      'alphabet-jobs-created',
      {
        count: jobsCreated,
        timestamp: new Date().toISOString(),
        source,
        maxDepth,
      },
      { ttl: 3600 }
    );

    ctx.logger.info(`Created ${jobsCreated} alphabet jobs (A-Z)`);
    return { success: true, symbolsFound: 0, jobsCreated };
  } catch (error) {
    ctx.logger.error('Failed to create alphabet jobs', { error });
    return { success: false, symbolsFound: 0, jobsCreated: 0 };
  }
}

async function searchAndSpawnJobs(
  prefix: string,
  depth: number,
  source: string,
  maxDepth: number,
  ctx: OperationContext
): Promise<SpiderResult> {
  try {
    // Ensure sessions exist for symbol search
    const sessionManager = QMSessionManager.getInstance();
    const lookupSession = sessionManager.getSession(QM_SESSION_IDS.LOOKUP);

    if (!lookupSession) {
      ctx.logger.info('No lookup sessions available, creating sessions first...');
      const { createSessions } = await import('./session.operations');
      await createSessions();

      // Wait a bit for session creation
      await new Promise(resolve => setTimeout(resolve, 1000));
    }

    // Search for symbols with this prefix
    const symbols = await searchQMSymbolsAPI(prefix);
    const symbolCount = symbols.length;

    ctx.logger.info(`Prefix "${prefix}" returned ${symbolCount} symbols`);

    let jobsCreated = 0;

    // Store symbols in MongoDB
    if (ctx.mongodb && symbols.length > 0) {
      try {
        const updatedSymbols = symbols.map((symbol: Record<string, unknown>) => ({
          ...symbol,
          qmSearchCode: symbol.symbol,
          symbol: (symbol.symbol as string)?.split(':')[0],
          searchPrefix: prefix,
          searchDepth: depth,
          discoveredAt: new Date(),
        }));

        await ctx.mongodb.batchUpsert('qmSymbols', updatedSymbols, ['qmSearchCode']);
        ctx.logger.debug('Stored symbols in MongoDB', { count: symbols.length });
      } catch (error) {
        ctx.logger.warn('Failed to store symbols in MongoDB', { error });
      }
    }

    // If we have 50+ symbols and haven't reached max depth, spawn sub-jobs
    if (symbolCount >= 50 && depth < maxDepth) {
      const queueManager = QueueManager.getInstance();
      const queue = queueManager.getQueue('qm');

      ctx.logger.info(`Spawning sub-jobs for prefix "${prefix}" (${symbolCount} >= 50 symbols)`);

      // Create jobs for prefixA, prefixB, prefixC... prefixZ
      for (let i = 0; i < 26; i++) {
        const letter = String.fromCharCode(65 + i);
        const newPrefix = prefix + letter;

        const job: SymbolSpiderJob = {
          prefix: newPrefix,
          depth: depth + 1,
          source,
          maxDepth,
        };

        await queue.add(
          'spider-symbol-search',
          {
            handler: 'qm',
            operation: 'spider-symbol-search',
            payload: job,
          },
          {
            priority: Math.max(1, 6 - depth), // Lower number = higher priority, so deeper jobs run first
            delay: i * 50, // Stagger sub-jobs by 50ms
            attempts: 3,
            backoff: { type: 'exponential', delay: 2000 },
          }
        );

        jobsCreated++;
      }

      // Cache sub-job creation info
      await ctx.cache.set(`sub-jobs:${prefix}`, {
        parentPrefix: prefix,
        depth,
        symbolCount,
        jobsCreated,
|
|
||||||
timestamp: new Date().toISOString()
|
|
||||||
}, { ttl: 3600 });
|
|
||||||
|
|
||||||
ctx.logger.info(`Created ${jobsCreated} sub-jobs for prefix "${prefix}"`);
|
|
||||||
} else {
|
|
||||||
// Terminal case: save symbols (already done above)
|
|
||||||
ctx.logger.info(`Terminal case for prefix "${prefix}": ${symbolCount} symbols saved`);
|
|
||||||
|
|
||||||
// Cache terminal case info
|
|
||||||
await ctx.cache.set(`terminal:${prefix}`, {
|
|
||||||
prefix,
|
|
||||||
depth,
|
|
||||||
symbolCount,
|
|
||||||
isTerminal: true,
|
|
||||||
reason: symbolCount < 50 ? 'insufficient_symbols' : 'max_depth_reached',
|
|
||||||
timestamp: new Date().toISOString()
|
|
||||||
}, { ttl: 3600 });
|
|
||||||
}
|
|
||||||
|
|
||||||
return { success: true, symbolsFound: symbolCount, jobsCreated };
|
|
||||||
|
|
||||||
} catch (error) {
|
|
||||||
ctx.logger.error(`Failed to search and spawn jobs for prefix "${prefix}"`, { error, depth });
|
|
||||||
return { success: false, symbolsFound: 0, jobsCreated: 0 };
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
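The recursive expansion rule above (spawn 26 child jobs whenever a prefix returns 50 or more symbols and `maxDepth` has not been reached, otherwise stop) can be sketched as a pure function. `expandPrefix` is a hypothetical helper written only to illustrate the branching logic; it is not part of this PR:

```typescript
// Decide which child prefixes a spider job should spawn.
// Mirrors the rule in searchAndSpawnJobs: expand only when the API returned
// >= 50 symbols (a full result page) and we are still below maxDepth.
function expandPrefix(
  prefix: string,
  symbolCount: number,
  depth: number,
  maxDepth: number
): string[] {
  if (symbolCount < 50 || depth >= maxDepth) {
    return []; // terminal case: symbols were already stored, nothing to spawn
  }
  // prefixA .. prefixZ, same as String.fromCharCode(65 + i) in the loop above
  return Array.from({ length: 26 }, (_, i) => prefix + String.fromCharCode(65 + i));
}
```

With this shape, `expandPrefix('A', 60, 1, 4)` yields `['AA', …, 'AZ']`, while a prefix that returned fewer than 50 symbols or sits at `maxDepth` yields an empty list and the spider stops descending on that branch.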
@@ -1,195 +0,0 @@
/**
 * QM Symbols Operations - Symbol fetching and API interactions
 */

import { OperationContext } from '@stock-bot/utils';
import { getRandomProxy } from '@stock-bot/utils';

import { QMSessionManager } from '../shared/session-manager';
import { QM_SESSION_IDS, QM_CONFIG, SESSION_CONFIG } from '../shared/config';
import type { SymbolSpiderJob, Exchange } from '../shared/types';
import { initializeQMResources } from './session.operations';
import { spiderSymbolSearch } from './spider.operations';

export async function fetchSymbols(): Promise<unknown[] | null> {
  const ctx = OperationContext.create('qm', 'symbols');

  try {
    const sessionManager = QMSessionManager.getInstance();
    if (!sessionManager.getInitialized()) {
      await initializeQMResources();
    }

    ctx.logger.info('Starting QM spider-based symbol search...');

    // Check if we have a recent symbol fetch
    const lastFetch = await ctx.cache.get('last-symbol-fetch');
    if (lastFetch) {
      ctx.logger.info('Recent symbol fetch found, using spider search');
    }

    // Start the spider process with root job
    const rootJob: SymbolSpiderJob = {
      prefix: null, // Root job creates A-Z jobs
      depth: 0,
      source: 'qm',
      maxDepth: 4,
    };

    const result = await spiderSymbolSearch(rootJob);

    if (result.success) {
      // Cache successful fetch info
      await ctx.cache.set('last-symbol-fetch', {
        timestamp: new Date().toISOString(),
        jobsCreated: result.jobsCreated,
        success: true
      }, { ttl: 3600 });

      ctx.logger.info(
        `QM spider search initiated successfully. Created ${result.jobsCreated} initial jobs`
      );
      return [`Spider search initiated with ${result.jobsCreated} jobs`];
    } else {
      ctx.logger.error('Failed to initiate QM spider search');
      return null;
    }
  } catch (error) {
    ctx.logger.error('Failed to start QM spider symbol search', { error });
    return null;
  }
}

export async function searchQMSymbolsAPI(query: string): Promise<any[]> {
  const ctx = OperationContext.create('qm', 'api-search');

  const proxyInfo = await getRandomProxy();
  if (!proxyInfo) {
    throw new Error('No proxy available for QM API call');
  }

  const sessionManager = QMSessionManager.getInstance();
  const session = sessionManager.getSession(QM_SESSION_IDS.LOOKUP);

  if (!session) {
    throw new Error(`No active session found for QM API with ID: ${QM_SESSION_IDS.LOOKUP}`);
  }

  try {
    ctx.logger.debug('Searching QM symbols API', { query, proxy: session.proxy });

    // Check cache for recent API results
    const cacheKey = `api-search:${query}`;
    const cachedResult = await ctx.cache.get(cacheKey);
    if (cachedResult) {
      ctx.logger.debug('Using cached API search result', { query });
      return cachedResult;
    }

    // QM lookup endpoint for symbol search
    const searchParams = new URLSearchParams({
      marketType: 'equity',
      pathName: '/demo/portal/company-summary.php',
      q: query,
      qmodTool: 'SmartSymbolLookup',
      searchType: 'symbol',
      showFree: 'false',
      showHisa: 'false',
      webmasterId: '500'
    });

    const apiUrl = `${QM_CONFIG.LOOKUP_URL}?${searchParams.toString()}`;

    const response = await fetch(apiUrl, {
      method: 'GET',
      headers: session.headers,
      signal: AbortSignal.timeout(SESSION_CONFIG.API_TIMEOUT),
    });

    if (!response.ok) {
      throw new Error(`QM API request failed: ${response.status} ${response.statusText}`);
    }

    const symbols = await response.json();

    // Update session stats
    session.successfulCalls++;
    session.lastUsed = new Date();

    // Process symbols and extract exchanges
    if (ctx.mongodb && symbols.length > 0) {
      try {
        const updatedSymbols = symbols.map((symbol: Record<string, unknown>) => ({
          ...symbol,
          qmSearchCode: symbol.symbol,
          symbol: (symbol.symbol as string)?.split(':')[0],
          searchQuery: query,
          fetchedAt: new Date()
        }));

        await ctx.mongodb.batchUpsert('qmSymbols', updatedSymbols, ['qmSearchCode']);

        // Extract and store unique exchanges
        const exchanges: Exchange[] = [];
        for (const symbol of symbols) {
          if (!exchanges.some(ex => ex.exchange === symbol.exchange)) {
            exchanges.push({
              exchange: symbol.exchange,
              exchangeCode: symbol.exchangeCode,
              exchangeShortName: symbol.exchangeShortName,
              countryCode: symbol.countryCode,
              source: 'qm',
            });
          }
        }

        if (exchanges.length > 0) {
          await ctx.mongodb.batchUpsert('qmExchanges', exchanges, ['exchange']);
          ctx.logger.debug('Stored exchanges in MongoDB', { count: exchanges.length });
        }
      } catch (error) {
        ctx.logger.warn('Failed to store symbols/exchanges in MongoDB', { error });
      }
    }

    // Cache the result
    await ctx.cache.set(cacheKey, symbols, { ttl: 1800 }); // 30 minutes

    // Store API call stats
    await ctx.cache.set(`api-stats:${query}:${Date.now()}`, {
      query,
      symbolCount: symbols.length,
      proxy: session.proxy,
      success: true,
      timestamp: new Date().toISOString()
    }, { ttl: 3600 });

    ctx.logger.info(
      `QM API returned ${symbols.length} symbols for query: ${query}`,
      { proxy: session.proxy, symbolCount: symbols.length }
    );

    return symbols;
  } catch (error) {
    // Update session failure stats
    session.failedCalls++;
    session.lastUsed = new Date();

    // Cache failed API call info
    await ctx.cache.set(`api-failure:${query}:${Date.now()}`, {
      query,
      error: error.message,
      proxy: session.proxy,
      timestamp: new Date().toISOString()
    }, { ttl: 600 });

    ctx.logger.error(`Error searching QM symbols for query "${query}"`, {
      error: error.message,
      proxy: session.proxy
    });

    throw error;
  }
}
@@ -1,78 +0,0 @@
import { getLogger } from '@stock-bot/logger';
import {
  createJobHandler,
  handlerRegistry,
  type HandlerConfigWithSchedule
} from '@stock-bot/queue';
import type { SymbolSpiderJob } from './shared/types';

const handlerLogger = getLogger('qm-handler');

// Initialize and register the QM provider
export function initializeQMProvider() {
  handlerLogger.debug('Registering QM provider with scheduled jobs...');

  const qmProviderConfig: HandlerConfigWithSchedule = {
    name: 'qm',
    operations: {
      'create-sessions': createJobHandler(async () => {
        const { createSessions } = await import('./operations/session.operations');
        await createSessions();
        return { success: true, message: 'QM sessions created successfully' };
      }),
      'search-symbols': createJobHandler(async () => {
        const { fetchSymbols } = await import('./operations/symbols.operations');
        const symbols = await fetchSymbols();

        if (symbols && symbols.length > 0) {
          return {
            success: true,
            message: 'QM symbol search completed successfully',
            count: symbols.length,
            symbols: symbols.slice(0, 10), // Return first 10 symbols as sample
          };
        } else {
          return {
            success: false,
            message: 'No symbols found',
            count: 0,
          };
        }
      }),
      'spider-symbol-search': createJobHandler(async (payload: SymbolSpiderJob) => {
        const { spiderSymbolSearch } = await import('./operations/spider.operations');
        return await spiderSymbolSearch(payload);
      }),
    },

    scheduledJobs: [
      {
        type: 'session-management',
        operation: 'create-sessions',
        cronPattern: '*/15 * * * *', // Every 15 minutes
        priority: 7,
        immediately: true, // Also run once on startup
        description: 'Create and maintain QM sessions',
      },
      {
        type: 'qm-maintenance',
        operation: 'spider-symbol-search',
        payload: {
          prefix: null,
          depth: 1,
          source: 'qm',
          maxDepth: 4
        },
        cronPattern: '0 0 * * 0', // Every Sunday at midnight
        priority: 10,
        immediately: true, // Also run once on startup; note this is a heavy operation
        description: 'Comprehensive symbol search using QM API',
      },
    ],
  };

  handlerRegistry.registerWithSchedule(qmProviderConfig);
  handlerLogger.debug('QM provider registered successfully with scheduled jobs');
}
@@ -1,420 +0,0 @@
import { getRandomUserAgent } from '@stock-bot/http';
import { getLogger } from '@stock-bot/logger';
import { getMongoDBClient } from '@stock-bot/mongodb-client';
import { QueueManager } from '@stock-bot/queue';
import { isShutdownSignalReceived } from '@stock-bot/shutdown';
import { getRandomProxy } from '@stock-bot/utils';

// Shared instances (module-scoped, not global)
let isInitialized = false; // Track if resources are initialized
let logger: ReturnType<typeof getLogger>;
// let cache: CacheProvider;

export interface QMSession {
  proxy: string;
  headers: Record<string, string>;
  successfulCalls: number;
  failedCalls: number;
  lastUsed: Date;
}

export interface SymbolSpiderJob {
  prefix: string | null; // null = root job (A-Z)
  depth: number; // 1=A, 2=AA, 3=AAA, etc.
  source: string; // 'qm'
  maxDepth?: number; // optional max depth limit
}

interface Exchange {
  exchange: string;
  exchangeCode: string;
  exchangeShortName: string;
  countryCode: string;
  source: string;
}

function getQmHeaders(): Record<string, string> {
  return {
    'User-Agent': getRandomUserAgent(),
    Accept: '*/*',
    'Accept-Language': 'en',
    'Sec-Fetch-Mode': 'cors',
    Origin: 'https://www.quotemedia.com',
    Referer: 'https://www.quotemedia.com/',
  };
}

const sessionCache: Record<string, QMSession[]> = {
  // '5ad521e05faf5778d567f6d0012ec34d6cdbaeb2462f41568f66558bc7b4ced9': [], //4488d072b
  // cc1cbdaf040f76db8f4c94f7d156b9b9b716e1a7509ec9c74a48a47f6b6b9f87: [], //97ff00cf3 // getQuotes
  // '74963ff42f1db2320d051762b5d3950ff9eab23f9d5c5b592551b4ca0441d086': [], //32ca24e394b // getSplitsBySymbol getBrokerRatingsBySymbol getDividendsBySymbol getEarningsSurprisesBySymbol getEarningsEventsBySymbol
  // '1e1d7cb1de1fd2fe52684abdea41a446919a5fe12776dfab88615ac1ce1ec2f6': [], //fb5721812d2c // getEnhancedQuotes getProfiles
  // a900a06cc6b3e8036afb9eeb1bbf9783f0007698ed8f5cb1e373dc790e7be2e5: [], //cc882cd95f9 // getEnhancedQuotes
  // a863d519e38f80e45d10e280fb1afc729816e23f0218db2f3e8b23005a9ad8dd: [], //05a09a41225 // getCompanyFilings getEnhancedQuotes
  // b3cdb1873f3682c5aeeac097be6181529bfb755945e5a412a24f4b9316291427: [], //6a63f56a6 // getHeadlinesTickerStory
  dc8c9930437f65d30f6597768800957017bac203a0a50342932757c8dfa158d6: [], //fceb3c4bdd // lookup
  // '97b24911d7b034620aafad9441afdb2bc906ee5c992d86933c5903254ca29709': [], //c56424868d // detailed-quotes
  // '8a394f09cb8540c8be8988780660a7ae5b583c331a1f6cb12834f051a0169a8f': [], //2a86d214e50e5 // getGlobalIndustrySectorPeers getKeyRatiosBySymbol getGlobalIndustrySectorCodeList
  // '2f059f75e2a839437095c9e7e4991d2365bafa7bbb086672a87ae0cf8d92eb01': [], // 48fa36d // getNethouseBySymbol
  // d7ae7e0091dd1d7011948c3dc4af09b5ec552285d92bb188be2618968bc78e3f: [], // 63548ee //getRecentTradesBySymbol getQuotes getLevel2Quote getRecentTradesBySymbol
  // d22d1db8f67fe6e420b4028e5129b289ca64862aa6cee8459193747b68c01de3: [], // 84e9e
  // '6e0b22a7cbc02ac3fa07d45e2880b7696aaebeb29574dce81789e570570c9002': [], //
};

export async function initializeQMResources(): Promise<void> {
  // Skip if already initialized
  if (isInitialized) {
    return;
  }
  logger = getLogger('qm-tasks');
  isInitialized = true;
}

export async function createSessions(): Promise<void> {
  try {
    // For each session ID, prune failed sessions and top the pool back up to 10
    if (!isInitialized) {
      await initializeQMResources();
    }
    logger.info('Creating QM sessions...');
    for (const [sessionId, sessionArray] of Object.entries(sessionCache)) {
      const initialCount = sessionArray.length;
      const filteredArray = sessionArray.filter(session => session.failedCalls <= 10);
      sessionCache[sessionId] = filteredArray;

      const removedCount = initialCount - filteredArray.length;
      if (removedCount > 0) {
        logger.info(
          `Removed ${removedCount} sessions with excessive failures for ${sessionId}. Remaining: ${filteredArray.length}`
        );
      }

      while (sessionCache[sessionId].length < 10) {
        if (isShutdownSignalReceived()) {
          logger.info('Shutting down, skipping session creation');
          break; // Exit if shutting down
        }
        logger.info(`Creating new session for ${sessionId}`);
        const proxyInfo = await getRandomProxy();
        if (!proxyInfo) {
          logger.error('No proxy available for QM session creation');
          break; // Skip session creation if no proxy is available
        }

        // Convert ProxyInfo to "protocol://user:pass@host:port" string format
        const auth = proxyInfo.username && proxyInfo.password
          ? `${proxyInfo.username}:${proxyInfo.password}@`
          : '';
        const proxy = `${proxyInfo.protocol}://${auth}${proxyInfo.host}:${proxyInfo.port}`;
        const newSession: QMSession = {
          proxy,
          headers: getQmHeaders(),
          successfulCalls: 0,
          failedCalls: 0,
          lastUsed: new Date(),
        };
        const sessionResponse = await fetch(
          `https://app.quotemedia.com/auth/g/authenticate/dataTool/v0/500/${sessionId}`,
          {
            method: 'GET',
            proxy: newSession.proxy,
            headers: newSession.headers,
          }
        );

        logger.debug('Session response received', {
          status: sessionResponse.status,
          sessionId,
        });
        if (!sessionResponse.ok) {
          logger.error('Failed to create QM session', {
            sessionId,
            status: sessionResponse.status,
            statusText: sessionResponse.statusText,
          });
          continue; // Skip this session if creation failed
        }
        const sessionData = await sessionResponse.json();
        logger.info('QM session created successfully', {
          sessionId,
          sessionData,
          proxy: newSession.proxy,
          sessionCount: sessionCache[sessionId].length + 1,
        });
        newSession.headers['Datatool-Token'] = sessionData.token;
        sessionCache[sessionId].push(newSession);
      }
    }
    return undefined;
  } catch (error) {
    logger.error('❌ Failed to fetch QM session', { error });
    return undefined;
  }
}

// Spider-based symbol search functions
export async function spiderSymbolSearch(
  payload: SymbolSpiderJob
): Promise<{ success: boolean; symbolsFound: number; jobsCreated: number }> {
  try {
    if (!isInitialized) {
      await initializeQMResources();
    }

    const { prefix, depth, source = 'qm', maxDepth = 4 } = payload;

    logger.info(`Starting spider search`, { prefix: prefix || 'ROOT', depth, source });

    // Root job: Create A-Z jobs
    if (prefix === null || prefix === undefined || prefix === '') {
      return await createAlphabetJobs(source, maxDepth);
    }

    // Leaf job: Search for symbols with this prefix
    return await searchAndSpawnJobs(prefix, depth, source, maxDepth);
  } catch (error) {
    logger.error('Spider symbol search failed', { error, payload });
    return { success: false, symbolsFound: 0, jobsCreated: 0 };
  }
}

async function createAlphabetJobs(
  source: string,
  maxDepth: number
): Promise<{ success: boolean; symbolsFound: number; jobsCreated: number }> {
  try {
    const queueManager = QueueManager.getInstance();
    const queue = queueManager.getQueue('qm');
    let jobsCreated = 0;

    // Create jobs for A-Z
    for (let i = 0; i < 26; i++) {
      const letter = String.fromCharCode(65 + i); // A=65, B=66, etc.

      const job: SymbolSpiderJob = {
        prefix: letter,
        depth: 1,
        source,
        maxDepth,
      };

      await queue.add(
        'spider-symbol-search',
        {
          handler: 'qm',
          operation: 'spider-symbol-search',
          payload: job,
        },
        {
          priority: 5,
          delay: i * 100, // Stagger jobs by 100ms
          attempts: 3,
          backoff: { type: 'exponential', delay: 2000 },
        }
      );

      jobsCreated++;
    }

    logger.info(`Created ${jobsCreated} alphabet jobs (A-Z)`);
    return { success: true, symbolsFound: 0, jobsCreated };
  } catch (error) {
    logger.error('Failed to create alphabet jobs', { error });
    return { success: false, symbolsFound: 0, jobsCreated: 0 };
  }
}

async function searchAndSpawnJobs(
  prefix: string,
  depth: number,
  source: string,
  maxDepth: number
): Promise<{ success: boolean; symbolsFound: number; jobsCreated: number }> {
  try {
    // Ensure sessions exist
    const sessionId = 'dc8c9930437f65d30f6597768800957017bac203a0a50342932757c8dfa158d6';
    const currentSessions = sessionCache[sessionId] || [];

    if (currentSessions.length === 0) {
      logger.info('No sessions found, creating sessions first...');
      await createSessions();
      await new Promise(resolve => setTimeout(resolve, 1000));
    }

    // Search for symbols with this prefix
    const symbols = await searchQMSymbolsAPI(prefix);
    const symbolCount = symbols.length;

    logger.info(`Prefix "${prefix}" returned ${symbolCount} symbols`);

    let jobsCreated = 0;

    // If we have 50+ symbols and haven't reached max depth, spawn sub-jobs
    if (symbolCount >= 50 && depth < maxDepth) {
      const queueManager = QueueManager.getInstance();
      const queue = queueManager.getQueue('qm');

      logger.info(`Spawning sub-jobs for prefix "${prefix}" (${symbolCount} >= 50 symbols)`);

      // Create jobs for prefixA, prefixB, prefixC... prefixZ
      for (let i = 0; i < 26; i++) {
        const letter = String.fromCharCode(65 + i);
        const newPrefix = prefix + letter;

        const job: SymbolSpiderJob = {
          prefix: newPrefix,
          depth: depth + 1,
          source,
          maxDepth,
        };

        await queue.add(
          'spider-symbol-search',
          {
            handler: 'qm',
            operation: 'spider-symbol-search',
            payload: job,
          },
          {
            priority: Math.max(1, 6 - depth), // Lower number = higher priority, so deeper jobs run first
            delay: i * 50, // Stagger sub-jobs by 50ms
            attempts: 3,
            backoff: { type: 'exponential', delay: 2000 },
          }
        );

        jobsCreated++;
      }

      logger.info(`Created ${jobsCreated} sub-jobs for prefix "${prefix}"`);
    } else {
      // Terminal case: symbols and exchanges were already saved in searchQMSymbolsAPI
      logger.info(`Terminal case for prefix "${prefix}": ${symbolCount} symbols saved`);
    }

    return { success: true, symbolsFound: symbolCount, jobsCreated };
  } catch (error) {
    logger.error(`Failed to search and spawn jobs for prefix "${prefix}"`, { error, depth });
    return { success: false, symbolsFound: 0, jobsCreated: 0 };
  }
}

// API call function to search symbols via QM
async function searchQMSymbolsAPI(query: string): Promise<any[]> {
  const proxyInfo = await getRandomProxy();

  if (!proxyInfo) {
    throw new Error('No proxy available for QM API call');
  }
  const sessionId = 'dc8c9930437f65d30f6597768800957017bac203a0a50342932757c8dfa158d6'; // Session ID for symbol lookup
  const session =
    sessionCache[sessionId][Math.floor(Math.random() * sessionCache[sessionId].length)]; // Pick a random lookup session
  if (!session) {
    throw new Error(`No active session found for QM API with ID: ${sessionId}`);
  }
  try {
    // QM lookup endpoint for symbol search
    const apiUrl = `https://app.quotemedia.com/datatool/lookup.json?marketType=equity&pathName=%2Fdemo%2Fportal%2Fcompany-summary.php&q=${encodeURIComponent(query)}&qmodTool=SmartSymbolLookup&searchType=symbol&showFree=false&showHisa=false&webmasterId=500`;

    const response = await fetch(apiUrl, {
      method: 'GET',
      headers: session.headers,
      proxy: session.proxy,
    });

    if (!response.ok) {
      throw new Error(`QM API request failed: ${response.status} ${response.statusText}`);
    }

    const symbols = await response.json();
    const mongoClient = getMongoDBClient();
    const updatedSymbols = symbols.map((symbol: Record<string, unknown>) => {
      return {
        ...symbol,
        qmSearchCode: symbol.symbol, // Store original symbol for reference
        symbol: (symbol.symbol as string)?.split(':')[0], // Extract symbol from "symbol:exchange"
      };
    });
    await mongoClient.batchUpsert('qmSymbols', updatedSymbols, ['qmSearchCode']);
    const exchanges: Exchange[] = [];
    for (const symbol of symbols) {
      if (!exchanges.some(ex => ex.exchange === symbol.exchange)) {
        exchanges.push({
          exchange: symbol.exchange,
          exchangeCode: symbol.exchangeCode,
          exchangeShortName: symbol.exchangeShortName,
          countryCode: symbol.countryCode,
          source: 'qm',
        });
      }
    }
    await mongoClient.batchUpsert('qmExchanges', exchanges, ['exchange']);
    session.successfulCalls++;
    session.lastUsed = new Date();

    logger.info(
      `QM API returned ${symbols.length} symbols for query: ${query} with proxy ${session.proxy}`
    );
    return symbols;
  } catch (error) {
    logger.error(`Error searching QM symbols for query "${query}":`, error);
    if (session) {
      session.failedCalls++;
      session.lastUsed = new Date();
    }
    throw error;
  }
}

export async function fetchSymbols(): Promise<unknown[] | null> {
  try {
    if (!isInitialized) {
      await initializeQMResources();
    }

    logger.info('🔄 Starting QM spider-based symbol search...');

    // Start the spider process with root job
    const rootJob: SymbolSpiderJob = {
      prefix: null, // Root job creates A-Z jobs
      depth: 0,
      source: 'qm',
      maxDepth: 4,
    };

    const result = await spiderSymbolSearch(rootJob);

    if (result.success) {
      logger.info(
        `QM spider search initiated successfully. Created ${result.jobsCreated} initial jobs`
      );
      return [`Spider search initiated with ${result.jobsCreated} jobs`];
    } else {
      logger.error('Failed to initiate QM spider search');
      return null;
    }
  } catch (error) {
    logger.error('❌ Failed to start QM spider symbol search', { error });
    return null;
  }
}

export async function fetchExchanges(): Promise<unknown[] | null> {
  try {
    if (!isInitialized) {
      await initializeQMResources();
    }

    logger.info('🔄 QM exchanges fetch - not implemented yet');
    // TODO: Implement QM exchanges fetching logic
    return null;
  } catch (error) {
    logger.error('❌ Failed to fetch QM exchanges', { error });
    return null;
  }
}

export const qmTasks = {
  createSessions,
  fetchSymbols,
  fetchExchanges,
  spiderSymbolSearch,
};
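The inline ProxyInfo-to-string conversion in `createSessions` (the `auth`/`proxy` template literals) can be factored into a small helper. This is an illustrative sketch only; `ProxyInfoLike` and `toProxyUrl` are hypothetical names, with the shape inferred from the fields the code above reads off `getRandomProxy()`:

```typescript
// Minimal shape inferred from the fields createSessions uses.
interface ProxyInfoLike {
  protocol: string;
  host: string;
  port: number;
  username?: string;
  password?: string;
}

// Build "protocol://user:pass@host:port", omitting credentials when either
// part is absent, matching the inline template literals in createSessions.
function toProxyUrl(p: ProxyInfoLike): string {
  const auth = p.username && p.password ? `${p.username}:${p.password}@` : '';
  return `${p.protocol}://${auth}${p.host}:${p.port}`;
}
```

Centralizing this would keep the credential-omission rule in one place instead of repeating it wherever a proxy string is needed.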
@@ -1,85 +0,0 @@
/**
 * WebShare Fetch Operations - API integration
 */
import { type ProxyInfo } from '@stock-bot/http';
import { OperationContext } from '@stock-bot/utils';

import { WEBSHARE_CONFIG } from '../shared/config';

/**
 * Fetch proxies from WebShare API and convert to ProxyInfo format
 */
export async function fetchWebShareProxies(): Promise<ProxyInfo[]> {
  const ctx = OperationContext.create('webshare', 'fetch-proxies');

  try {
    // Get configuration from config system
    const { getConfig } = await import('@stock-bot/config');
    const config = getConfig();

    const apiKey = config.webshare?.apiKey;
    const apiUrl = config.webshare?.apiUrl;

    if (!apiKey || !apiUrl) {
      ctx.logger.error('Missing WebShare configuration', {
        hasApiKey: !!apiKey,
        hasApiUrl: !!apiUrl,
      });
      return [];
    }

    ctx.logger.info('Fetching proxies from WebShare API', { apiUrl });

    const response = await fetch(
      `${apiUrl}proxy/list/?mode=${WEBSHARE_CONFIG.DEFAULT_MODE}&page=${WEBSHARE_CONFIG.DEFAULT_PAGE}&page_size=${WEBSHARE_CONFIG.DEFAULT_PAGE_SIZE}`,
      {
        method: 'GET',
        headers: {
          Authorization: `Token ${apiKey}`,
          'Content-Type': 'application/json',
        },
        signal: AbortSignal.timeout(WEBSHARE_CONFIG.TIMEOUT),
      }
    );

    if (!response.ok) {
      ctx.logger.error('WebShare API request failed', {
        status: response.status,
        statusText: response.statusText,
      });
      return [];
    }

    const data = await response.json();

    if (!data.results || !Array.isArray(data.results)) {
      ctx.logger.error('Invalid response format from WebShare API', { data });
      return [];
    }

    // Transform proxy data to ProxyInfo format
    const proxies: ProxyInfo[] = data.results.map((proxy: {
      username: string;
      password: string;
      proxy_address: string;
      port: number;
    }) => ({
      source: 'webshare',
      protocol: 'http' as const,
      host: proxy.proxy_address,
      port: proxy.port,
      username: proxy.username,
      password: proxy.password,
      isWorking: true, // WebShare provides working proxies
      firstSeen: new Date(),
      lastChecked: new Date(),
    }));

    ctx.logger.info('Successfully fetched proxies from WebShare', {
      count: proxies.length,
      total: data.count || proxies.length,
    });

    return proxies;
  } catch (error) {
    ctx.logger.error('Failed to fetch proxies from WebShare', { error });
    return [];
  }
}
@@ -1,81 +0,0 @@
/**
 * WebShare Provider for proxy management with scheduled updates
 */
import { getLogger } from '@stock-bot/logger';
import {
  createJobHandler,
  handlerRegistry,
  type HandlerConfigWithSchedule,
} from '@stock-bot/queue';
import { updateProxies } from '@stock-bot/utils';

const logger = getLogger('webshare-provider');

// Initialize and register the WebShare provider
export function initializeWebShareProvider() {
  logger.debug('Registering WebShare provider with scheduled jobs...');

  const webShareProviderConfig: HandlerConfigWithSchedule = {
    name: 'webshare',

    operations: {
      'fetch-proxies': createJobHandler(async () => {
        logger.info('Fetching proxies from WebShare API');
        const { fetchWebShareProxies } = await import('./operations/fetch.operations');

        try {
          const proxies = await fetchWebShareProxies();

          if (proxies.length > 0) {
            // Update the centralized proxy manager
            await updateProxies(proxies);

            logger.info('Updated proxy manager with WebShare proxies', {
              count: proxies.length,
              workingCount: proxies.filter(p => p.isWorking !== false).length,
            });

            return {
              success: true,
              proxiesUpdated: proxies.length,
              workingProxies: proxies.filter(p => p.isWorking !== false).length,
            };
          } else {
            logger.warn('No proxies fetched from WebShare API');
            return {
              success: false,
              proxiesUpdated: 0,
              error: 'No proxies returned from API',
            };
          }
        } catch (error) {
          logger.error('Failed to fetch and update proxies', { error });
          return {
            success: false,
            proxiesUpdated: 0,
            error: error instanceof Error ? error.message : 'Unknown error',
          };
        }
      }),
    },

    scheduledJobs: [
      {
        type: 'webshare-fetch',
        operation: 'fetch-proxies',
        cronPattern: '0 */6 * * *', // Every 6 hours
        priority: 3,
        description: 'Fetch fresh proxies from WebShare API',
        immediately: true, // Run on startup
      },
    ],
  };

  handlerRegistry.registerWithSchedule(webShareProviderConfig);
  logger.debug('WebShare provider registered successfully');
}

export const webShareProvider = {
  initialize: initializeWebShareProvider,
};
@@ -1,278 +0,0 @@
// Framework imports
import { initializeServiceConfig } from '@stock-bot/config';
import { Hono } from 'hono';
import { cors } from 'hono/cors';
// Library imports
import { getLogger, setLoggerConfig, shutdownLoggers } from '@stock-bot/logger';
import { connectMongoDB } from '@stock-bot/mongodb-client';
import { connectPostgreSQL } from '@stock-bot/postgres-client';
import { QueueManager, type QueueManagerConfig } from '@stock-bot/queue';
import { Shutdown } from '@stock-bot/shutdown';
import { ProxyManager } from '@stock-bot/utils';
// Local imports
import { exchangeRoutes, healthRoutes, queueRoutes } from './routes';

const config = initializeServiceConfig();
console.log('Data Service Configuration:', JSON.stringify(config, null, 2));
const serviceConfig = config.service;
const databaseConfig = config.database;
const queueConfig = config.queue;

if (config.log) {
  setLoggerConfig({
    logLevel: config.log.level,
    logConsole: true,
    logFile: false,
    environment: config.environment,
    hideObject: config.log.hideObject,
  });
}

// Create logger AFTER config is set
const logger = getLogger('data-service');

const app = new Hono();

// Add CORS middleware
app.use(
  '*',
  cors({
    origin: '*',
    allowMethods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS', 'PATCH'],
    allowHeaders: ['Content-Type', 'Authorization'],
    credentials: false,
  })
);
const PORT = serviceConfig.port;
let server: ReturnType<typeof Bun.serve> | null = null;
// Singleton clients are managed in libraries
let queueManager: QueueManager | null = null;

// Initialize shutdown manager
const shutdown = Shutdown.getInstance({ timeout: 15000 });

// Mount routes
app.route('/health', healthRoutes);
app.route('/api/exchanges', exchangeRoutes);
app.route('/api/queue', queueRoutes);

// Initialize services
async function initializeServices() {
  logger.info('Initializing data service...');

  try {
    // Initialize MongoDB client singleton
    logger.debug('Connecting to MongoDB...');
    const mongoConfig = databaseConfig.mongodb;
    await connectMongoDB({
      uri: mongoConfig.uri,
      database: mongoConfig.database,
      host: mongoConfig.host || 'localhost',
      port: mongoConfig.port || 27017,
      timeouts: {
        connectTimeout: 30000,
        socketTimeout: 30000,
        serverSelectionTimeout: 5000,
      },
    });
    logger.info('MongoDB connected');

    // Initialize PostgreSQL client singleton
    logger.debug('Connecting to PostgreSQL...');
    const pgConfig = databaseConfig.postgres;
    await connectPostgreSQL({
      host: pgConfig.host,
      port: pgConfig.port,
      database: pgConfig.database,
      username: pgConfig.user,
      password: pgConfig.password,
      poolSettings: {
        min: 2,
        max: pgConfig.poolSize || 10,
        idleTimeoutMillis: pgConfig.idleTimeout || 30000,
      },
    });
    logger.info('PostgreSQL connected');

    // Initialize queue system (with delayed worker start)
    logger.debug('Initializing queue system...');
    const queueManagerConfig: QueueManagerConfig = {
      redis: queueConfig?.redis || {
        host: 'localhost',
        port: 6379,
        db: 1,
      },
      defaultQueueOptions: {
        defaultJobOptions: queueConfig?.defaultJobOptions || {
          attempts: 3,
          backoff: {
            type: 'exponential',
            delay: 1000,
          },
          removeOnComplete: 10,
          removeOnFail: 5,
        },
        workers: 2,
        concurrency: 1,
        enableMetrics: true,
        enableDLQ: true,
      },
      enableScheduledJobs: true,
      delayWorkerStart: true, // Prevent workers from starting until all singletons are ready
    };

    queueManager = QueueManager.getOrInitialize(queueManagerConfig);
    logger.info('Queue system initialized');

    // Initialize proxy manager
    logger.debug('Initializing proxy manager...');
    await ProxyManager.initialize();
    logger.info('Proxy manager initialized');

    // Initialize handlers (register handlers and scheduled jobs)
    logger.debug('Initializing data handlers...');
    const { initializeWebShareProvider } = await import('./handlers/webshare/webshare.handler');
    const { initializeIBProvider } = await import('./handlers/ib/ib.handler');
    const { initializeProxyProvider } = await import('./handlers/proxy/proxy.handler');
    const { initializeQMProvider } = await import('./handlers/qm/qm.handler');

    initializeWebShareProvider();
    initializeIBProvider();
    initializeProxyProvider();
    initializeQMProvider();

    logger.info('Data handlers initialized');

    // Create scheduled jobs from registered handlers
    logger.debug('Creating scheduled jobs from registered handlers...');
    const { handlerRegistry } = await import('@stock-bot/queue');
    const allHandlers = handlerRegistry.getAllHandlers();

    let totalScheduledJobs = 0;
    for (const [handlerName, config] of allHandlers) {
      if (config.scheduledJobs && config.scheduledJobs.length > 0) {
        const queue = queueManager.getQueue(handlerName);

        for (const scheduledJob of config.scheduledJobs) {
          // Include handler and operation info in job data
          const jobData = {
            handler: handlerName,
            operation: scheduledJob.operation,
            payload: scheduledJob.payload || {},
          };

          // Build job options from scheduled job config
          const jobOptions = {
            priority: scheduledJob.priority,
            delay: scheduledJob.delay,
            repeat: {
              immediately: scheduledJob.immediately,
            },
          };

          await queue.addScheduledJob(
            scheduledJob.operation,
            jobData,
            scheduledJob.cronPattern,
            jobOptions
          );
          totalScheduledJobs++;
          logger.debug('Scheduled job created', {
            handler: handlerName,
            operation: scheduledJob.operation,
            cronPattern: scheduledJob.cronPattern,
            immediately: scheduledJob.immediately,
            priority: scheduledJob.priority,
          });
        }
      }
    }
    logger.info('Scheduled jobs created', { totalJobs: totalScheduledJobs });

    // Now that all singletons are initialized and jobs are scheduled, start the workers
    logger.debug('Starting queue workers...');
    queueManager.startAllWorkers();
    logger.info('Queue workers started');

    logger.info('All services initialized successfully');
  } catch (error) {
    logger.error('Failed to initialize services', { error });
    throw error;
  }
}

// Start server
async function startServer() {
  await initializeServices();

  server = Bun.serve({
    port: PORT,
    fetch: app.fetch,
    development: config.environment === 'development',
  });

  logger.info(`Data Service started on port ${PORT}`);
}

// Register shutdown handlers with priorities
// Priority 1: Queue system (highest priority)
shutdown.onShutdownHigh(async () => {
  logger.info('Shutting down queue system...');
  try {
    if (queueManager) {
      await queueManager.shutdown();
    }
    logger.info('Queue system shut down');
  } catch (error) {
    logger.error('Error shutting down queue system', { error });
  }
}, 'Queue System');

// Priority 1: HTTP Server (high priority)
shutdown.onShutdownHigh(async () => {
  if (server) {
    logger.info('Stopping HTTP server...');
    try {
      server.stop();
      logger.info('HTTP server stopped');
    } catch (error) {
      logger.error('Error stopping HTTP server', { error });
    }
  }
}, 'HTTP Server');

// Priority 2: Database connections (medium priority)
shutdown.onShutdownMedium(async () => {
  logger.info('Disconnecting from databases...');
  try {
    const { disconnectMongoDB } = await import('@stock-bot/mongodb-client');
    const { disconnectPostgreSQL } = await import('@stock-bot/postgres-client');

    await disconnectMongoDB();
    await disconnectPostgreSQL();
    logger.info('Database connections closed');
  } catch (error) {
    logger.error('Error closing database connections', { error });
  }
}, 'Databases');

// Priority 3: Logger shutdown (lowest priority - runs last)
shutdown.onShutdownLow(async () => {
  try {
    logger.info('Shutting down loggers...');
    await shutdownLoggers();
    // Don't log after shutdown
  } catch {
    // Silently ignore logger shutdown errors
  }
}, 'Loggers');

// Start the service
startServer().catch(error => {
  logger.fatal('Failed to start data service', { error });
  process.exit(1);
});

logger.info('Data service startup initiated');

// ProxyManager class and singleton instance are available via @stock-bot/utils
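The `defaultJobOptions` above configure `attempts: 3` with exponential backoff starting at 1000 ms. As a rough sketch of what that retry spacing looks like (assuming the common BullMQ-style formula `delay * 2^(attempt - 1)`; the actual `@stock-bot/queue` behavior may differ):

```typescript
// Sketch of exponential-backoff delays for the defaultJobOptions above.
// Assumes the common BullMQ-style formula baseDelay * 2^(attemptsMade - 1);
// the real queue library's formula may differ.
function backoffDelayMs(baseDelayMs: number, attempt: number): number {
  if (attempt < 1) throw new Error('attempt is 1-based');
  return baseDelayMs * 2 ** (attempt - 1);
}

// With attempts: 3 and delay: 1000, a job that keeps failing waits:
const retryDelays = [1, 2, 3].map(a => backoffDelayMs(1000, a));
console.log(retryDelays); // [1000, 2000, 4000]
```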
@@ -1,121 +0,0 @@
/**
 * Market data routes
 */
import { Hono } from 'hono';
import { getLogger } from '@stock-bot/logger';
import { processItems, QueueManager } from '@stock-bot/queue';

const logger = getLogger('market-data-routes');

export const marketDataRoutes = new Hono();

// Market data endpoints
marketDataRoutes.get('/api/live/:symbol', async c => {
  const symbol = c.req.param('symbol');
  logger.info('Live data request', { symbol });

  try {
    // Queue job for live data using Yahoo provider
    const queueManager = QueueManager.getInstance();
    const queue = queueManager.getQueue('yahoo-finance');
    const job = await queue.add('live-data', {
      handler: 'yahoo-finance',
      operation: 'live-data',
      payload: { symbol },
    });
    return c.json({
      status: 'success',
      message: 'Live data job queued',
      jobId: job.id,
      symbol,
    });
  } catch (error) {
    logger.error('Failed to queue live data job', { symbol, error });
    return c.json({ status: 'error', message: 'Failed to queue live data job' }, 500);
  }
});

marketDataRoutes.get('/api/historical/:symbol', async c => {
  const symbol = c.req.param('symbol');
  const from = c.req.query('from');
  const to = c.req.query('to');

  logger.info('Historical data request', { symbol, from, to });

  try {
    const fromDate = from ? new Date(from) : new Date(Date.now() - 30 * 24 * 60 * 60 * 1000); // 30 days ago
    const toDate = to ? new Date(to) : new Date(); // Now

    // Queue job for historical data using Yahoo provider
    const queueManager = QueueManager.getInstance();
    const queue = queueManager.getQueue('yahoo-finance');
    const job = await queue.add('historical-data', {
      handler: 'yahoo-finance',
      operation: 'historical-data',
      payload: {
        symbol,
        from: fromDate.toISOString(),
        to: toDate.toISOString(),
      },
    });

    return c.json({
      status: 'success',
      message: 'Historical data job queued',
      jobId: job.id,
      symbol,
      from: fromDate,
      to: toDate,
    });
  } catch (error) {
    logger.error('Failed to queue historical data job', { symbol, from, to, error });
    return c.json({ status: 'error', message: 'Failed to queue historical data job' }, 500);
  }
});

// Batch processing endpoint using new queue system
marketDataRoutes.post('/api/process-symbols', async c => {
  try {
    const {
      symbols,
      provider = 'ib',
      operation = 'fetch-session',
      useBatching = true,
      totalDelayHours = 0.0083, // ~30 seconds (30/3600 hours)
      batchSize = 10,
    } = await c.req.json();

    if (!symbols || !Array.isArray(symbols) || symbols.length === 0) {
      return c.json({ status: 'error', message: 'Invalid symbols array' }, 400);
    }

    logger.info('Batch processing symbols', {
      count: symbols.length,
      provider,
      operation,
      useBatching,
    });

    const result = await processItems(symbols, provider, {
      handler: provider,
      operation,
      totalDelayHours,
      useBatching,
      batchSize,
      priority: 2,
      retries: 2,
      removeOnComplete: 5,
      removeOnFail: 10,
    });

    return c.json({
      status: 'success',
      message: 'Batch processing initiated',
      result,
      symbols: symbols.length,
    });
  } catch (error) {
    logger.error('Failed to process symbols batch', { error });
    return c.json({ status: 'error', message: 'Failed to process symbols batch' }, 500);
  }
});
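The `/api/process-symbols` endpoint spreads work over `totalDelayHours` (0.0083 h, roughly 30 seconds, by default) in batches of `batchSize`. A self-contained sketch of that spacing math (the real scheduling lives inside `processItems` from `@stock-bot/queue`, whose internals are not shown here; `batchDelaysMs` is a hypothetical helper for illustration):

```typescript
// Sketch: spread N items across ceil(N / batchSize) batches over totalDelayHours,
// returning each batch's enqueue delay in milliseconds. Hypothetical helper --
// the actual logic lives inside processItems.
function batchDelaysMs(itemCount: number, batchSize: number, totalDelayHours: number): number[] {
  const numBatches = Math.ceil(itemCount / batchSize);
  const spacingMs = Math.round((totalDelayHours * 3_600_000) / numBatches);
  return Array.from({ length: numBatches }, (_, i) => i * spacingMs);
}

// 100 symbols, batches of 10, default ~30 s window -> 10 batches ~3 s apart.
const delays = batchDelaysMs(100, 10, 0.0083);
console.log(delays.length, delays[1]); // 10 2988
```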
@@ -1,25 +0,0 @@
import { Hono } from 'hono';
import { getLogger } from '@stock-bot/logger';
import { QueueManager } from '@stock-bot/queue';

const logger = getLogger('queue-routes');
const queue = new Hono();

// Queue status endpoint
queue.get('/status', async c => {
  try {
    const queueManager = QueueManager.getInstance();
    const globalStats = await queueManager.getGlobalStats();

    return c.json({
      status: 'success',
      data: globalStats,
      message: 'Queue status retrieved successfully',
    });
  } catch (error) {
    logger.error('Failed to get queue status', { error });
    return c.json({ status: 'error', message: 'Failed to get queue status' }, 500);
  }
});

export { queue as queueRoutes };
@@ -1,14 +0,0 @@
{
  "extends": "../../tsconfig.app.json",
  "references": [
    { "path": "../../libs/types" },
    { "path": "../../libs/config" },
    { "path": "../../libs/logger" },
    { "path": "../../libs/cache" },
    { "path": "../../libs/queue" },
    { "path": "../../libs/mongodb-client" },
    { "path": "../../libs/postgres-client" },
    { "path": "../../libs/questdb-client" },
    { "path": "../../libs/shutdown" }
  ]
}
@@ -1,15 +0,0 @@
{
  "service": {
    "name": "data-sync-service",
    "port": 3005,
    "host": "0.0.0.0",
    "healthCheckPath": "/health",
    "metricsPath": "/metrics",
    "shutdownTimeout": 30000,
    "cors": {
      "enabled": true,
      "origin": "*",
      "credentials": false
    }
  }
}
@@ -1,58 +0,0 @@
import { getLogger } from '@stock-bot/logger';
import { handlerRegistry, type HandlerConfig, type ScheduledJobConfig } from '@stock-bot/queue';
import { exchangeOperations } from './operations';

const logger = getLogger('exchanges-handler');

const HANDLER_NAME = 'exchanges';

const exchangesHandlerConfig: HandlerConfig = {
  concurrency: 1,
  maxAttempts: 3,
  scheduledJobs: [
    {
      operation: 'sync-all-exchanges',
      cronPattern: '0 0 * * 0', // Weekly on Sunday at midnight
      payload: { clearFirst: true },
      priority: 10,
      immediately: false,
    } as ScheduledJobConfig,
    {
      operation: 'sync-qm-exchanges',
      cronPattern: '0 1 * * *', // Daily at 1 AM
      payload: {},
      priority: 5,
      immediately: false,
    } as ScheduledJobConfig,
    {
      operation: 'sync-ib-exchanges',
      cronPattern: '0 3 * * *', // Daily at 3 AM
      payload: {},
      priority: 3,
      immediately: false,
    } as ScheduledJobConfig,
    {
      operation: 'sync-qm-provider-mappings',
      cronPattern: '0 3 * * *', // Daily at 3 AM
      payload: {},
      priority: 7,
      immediately: false,
    } as ScheduledJobConfig,
  ],
  operations: {
    'sync-all-exchanges': exchangeOperations.syncAllExchanges,
    'sync-qm-exchanges': exchangeOperations.syncQMExchanges,
    'sync-ib-exchanges': exchangeOperations.syncIBExchanges,
    'sync-qm-provider-mappings': exchangeOperations.syncQMProviderMappings,
    'clear-postgresql-data': exchangeOperations.clearPostgreSQLData,
    'get-exchange-stats': exchangeOperations.getExchangeStats,
    'get-provider-mapping-stats': exchangeOperations.getProviderMappingStats,
    'enhanced-sync-status': exchangeOperations['enhanced-sync-status'],
  },
};

export function initializeExchangesHandler(): void {
  logger.info('Registering exchanges handler...');
  handlerRegistry.registerHandler(HANDLER_NAME, exchangesHandlerConfig);
  logger.info('Exchanges handler registered successfully');
}
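The schedules above use standard five-field cron patterns: minute, hour, day-of-month, month, day-of-week ('0 0 * * 0' is Sunday midnight, '0 1 * * *' is daily at 1 AM). A minimal matcher, enough to sanity-check these specific patterns (an illustrative sketch only: real cron also supports ranges, lists, and OR-semantics between the two day fields, which this ignores):

```typescript
// Minimal 5-field cron matcher supporting '*', plain numbers, and '*/n' steps.
// Enough for the schedules above; not a full cron implementation.
function cronMatches(pattern: string, date: Date): boolean {
  const fields = pattern.trim().split(/\s+/);
  if (fields.length !== 5) throw new Error(`expected 5 cron fields, got ${fields.length}`);
  const values = [
    date.getMinutes(),
    date.getHours(),
    date.getDate(),
    date.getMonth() + 1, // cron months are 1-12
    date.getDay(), // cron day-of-week: 0 = Sunday
  ];
  return fields.every((field, i) => {
    if (field === '*') return true;
    const step = /^\*\/(\d+)$/.exec(field);
    if (step) return values[i] % Number(step[1]) === 0;
    return Number(field) === values[i];
  });
}

// '0 0 * * 0' (weekly, Sunday midnight) matches Sun 2024-01-07 00:00:
console.log(cronMatches('0 0 * * 0', new Date(2024, 0, 7, 0, 0))); // true
// '0 3 * * *' (daily at 3 AM) does not match 2 AM:
console.log(cronMatches('0 3 * * *', new Date(2024, 0, 7, 2, 0))); // false
```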
@@ -1,41 +0,0 @@
import { getLogger } from '@stock-bot/logger';
import { handlerRegistry, type HandlerConfig, type ScheduledJobConfig } from '@stock-bot/queue';
import { symbolOperations } from './operations';

const logger = getLogger('symbols-handler');

const HANDLER_NAME = 'symbols';

const symbolsHandlerConfig: HandlerConfig = {
  concurrency: 1,
  maxAttempts: 3,
  scheduledJobs: [
    {
      operation: 'sync-qm-symbols',
      cronPattern: '0 2 * * *', // Daily at 2 AM
      payload: {},
      priority: 5,
      immediately: false,
    } as ScheduledJobConfig,
    {
      operation: 'sync-symbols-qm',
      cronPattern: '0 4 * * *', // Daily at 4 AM
      payload: { provider: 'qm', clearFirst: false },
      priority: 5,
      immediately: false,
    } as ScheduledJobConfig,
  ],
  operations: {
    'sync-qm-symbols': symbolOperations.syncQMSymbols,
    'sync-symbols-qm': symbolOperations.syncSymbolsFromProvider,
    'sync-symbols-eod': symbolOperations.syncSymbolsFromProvider,
    'sync-symbols-ib': symbolOperations.syncSymbolsFromProvider,
    'sync-status': symbolOperations.getSyncStatus,
  },
};

export function initializeSymbolsHandler(): void {
  logger.info('Registering symbols handler...');
  handlerRegistry.registerHandler(HANDLER_NAME, symbolsHandlerConfig);
  logger.info('Symbols handler registered successfully');
}
@ -1,267 +0,0 @@
|
||||||
// Framework imports
|
|
||||||
import { initializeServiceConfig } from '@stock-bot/config';
|
|
||||||
import { Hono } from 'hono';
|
|
||||||
import { cors } from 'hono/cors';
|
|
||||||
// Library imports
|
|
||||||
import { getLogger, setLoggerConfig, shutdownLoggers } from '@stock-bot/logger';
|
|
||||||
import { connectMongoDB } from '@stock-bot/mongodb-client';
|
|
||||||
import { connectPostgreSQL } from '@stock-bot/postgres-client';
|
|
||||||
import { QueueManager, type QueueManagerConfig } from '@stock-bot/queue';
|
|
||||||
import { Shutdown } from '@stock-bot/shutdown';
|
|
||||||
// Local imports
|
|
||||||
import { healthRoutes, enhancedSyncRoutes, statsRoutes, syncRoutes } from './routes';
|
|
||||||
|
|
||||||
const config = initializeServiceConfig();
|
|
||||||
console.log('Data Sync Service Configuration:', JSON.stringify(config, null, 2));
|
|
||||||
const serviceConfig = config.service;
|
|
||||||
const databaseConfig = config.database;
|
|
||||||
const queueConfig = config.queue;
|
|
||||||
|
|
||||||
if (config.log) {
|
|
||||||
setLoggerConfig({
|
|
||||||
logLevel: config.log.level,
|
|
||||||
logConsole: true,
|
|
||||||
logFile: false,
|
|
||||||
environment: config.environment,
|
|
||||||
hideObject: config.log.hideObject,
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
// Create logger AFTER config is set
|
|
||||||
const logger = getLogger('data-sync-service');
|
|
||||||
|
|
||||||
const app = new Hono();
|
|
||||||
|
|
||||||
// Add CORS middleware
|
|
||||||
app.use(
|
|
||||||
'*',
|
|
||||||
cors({
|
|
||||||
origin: '*',
|
|
||||||
allowMethods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS', 'PATCH'],
|
|
||||||
allowHeaders: ['Content-Type', 'Authorization'],
|
|
||||||
credentials: false,
|
|
||||||
})
|
|
||||||
);
|
|
||||||
const PORT = serviceConfig.port;
|
|
||||||
let server: ReturnType<typeof Bun.serve> | null = null;
|
|
||||||
// Singleton clients are managed in libraries
|
|
||||||
let queueManager: QueueManager | null = null;
|
|
||||||
|
|
||||||
// Initialize shutdown manager
|
|
||||||
const shutdown = Shutdown.getInstance({ timeout: 15000 });
|
|
||||||
|
|
||||||
// Mount routes
|
|
||||||
app.route('/health', healthRoutes);
|
|
||||||
app.route('/sync', syncRoutes);
|
|
||||||
app.route('/sync', enhancedSyncRoutes);
|
|
||||||
app.route('/sync/stats', statsRoutes);
|
|
||||||
|
|
||||||
// Initialize services
|
|
||||||
async function initializeServices() {
|
|
||||||
logger.info('Initializing data sync service...');
|
|
||||||
|
|
||||||
try {
|
|
||||||
// Initialize MongoDB client singleton
|
|
||||||
logger.debug('Connecting to MongoDB...');
|
|
||||||
const mongoConfig = databaseConfig.mongodb;
|
|
||||||
await connectMongoDB({
|
|
||||||
uri: mongoConfig.uri,
|
|
||||||
database: mongoConfig.database,
|
|
||||||
host: mongoConfig.host || 'localhost',
|
|
||||||
port: mongoConfig.port || 27017,
|
|
||||||
timeouts: {
|
|
||||||
connectTimeout: 30000,
|
|
||||||
socketTimeout: 30000,
|
|
||||||
serverSelectionTimeout: 5000,
|
|
||||||
},
|
|
||||||
});
|
|
||||||
logger.info('MongoDB connected');
|
|
||||||
|
|
||||||
// Initialize PostgreSQL client singleton
|
|
||||||
logger.debug('Connecting to PostgreSQL...');
|
|
||||||
const pgConfig = databaseConfig.postgres;
|
|
||||||
await connectPostgreSQL({
|
|
||||||
host: pgConfig.host,
|
|
||||||
port: pgConfig.port,
|
|
||||||
database: pgConfig.database,
|
|
||||||
username: pgConfig.user,
|
|
||||||
password: pgConfig.password,
|
|
||||||
poolSettings: {
|
|
||||||
min: 2,
|
|
||||||
max: pgConfig.poolSize || 10,
|
|
||||||
idleTimeoutMillis: pgConfig.idleTimeout || 30000,
|
|
||||||
},
|
|
||||||
});
|
|
||||||
logger.info('PostgreSQL connected');
|
|
||||||
|
|
||||||
// Initialize queue system (with delayed worker start)
|
|
||||||
logger.debug('Initializing queue system...');
|
|
||||||
const queueManagerConfig: QueueManagerConfig = {
|
|
||||||
redis: queueConfig?.redis || {
|
|
||||||
host: 'localhost',
|
|
||||||
port: 6379,
|
|
||||||
db: 1,
|
|
||||||
},
|
|
||||||
defaultQueueOptions: {
|
|
||||||
defaultJobOptions: queueConfig?.defaultJobOptions || {
|
|
||||||
attempts: 3,
|
|
||||||
backoff: {
|
|
||||||
type: 'exponential',
|
|
||||||
delay: 1000,
|
|
||||||
},
|
|
||||||
removeOnComplete: 10,
|
|
||||||
removeOnFail: 5,
|
|
||||||
},
|
|
||||||
workers: 2,
|
|
||||||
concurrency: 1,
|
|
||||||
enableMetrics: true,
|
|
||||||
enableDLQ: true,
|
|
||||||
},
|
|
||||||
enableScheduledJobs: true,
|
|
||||||
delayWorkerStart: true, // Prevent workers from starting until all singletons are ready
|
|
||||||
};
    queueManager = QueueManager.getOrInitialize(queueManagerConfig);
    logger.info('Queue system initialized');

    // Initialize handlers (register handlers and scheduled jobs)
    logger.debug('Initializing sync handlers...');
    const { initializeExchangesHandler } = await import('./handlers/exchanges/exchanges.handler');
    const { initializeSymbolsHandler } = await import('./handlers/symbols/symbols.handler');

    initializeExchangesHandler();
    initializeSymbolsHandler();

    logger.info('Sync handlers initialized');

    // Create scheduled jobs from registered handlers
    logger.debug('Creating scheduled jobs from registered handlers...');
    const { handlerRegistry } = await import('@stock-bot/queue');
    const allHandlers = handlerRegistry.getAllHandlers();

    let totalScheduledJobs = 0;
    for (const [handlerName, config] of allHandlers) {
      if (config.scheduledJobs && config.scheduledJobs.length > 0) {
        const queue = queueManager.getQueue(handlerName);

        for (const scheduledJob of config.scheduledJobs) {
          // Include handler and operation info in job data
          const jobData = {
            handler: handlerName,
            operation: scheduledJob.operation,
            payload: scheduledJob.payload || {},
          };

          // Build job options from scheduled job config
          const jobOptions = {
            priority: scheduledJob.priority,
            delay: scheduledJob.delay,
            repeat: {
              immediately: scheduledJob.immediately,
            },
          };

          await queue.addScheduledJob(
            scheduledJob.operation,
            jobData,
            scheduledJob.cronPattern,
            jobOptions
          );
          totalScheduledJobs++;
          logger.debug('Scheduled job created', {
            handler: handlerName,
            operation: scheduledJob.operation,
            cronPattern: scheduledJob.cronPattern,
            immediately: scheduledJob.immediately,
            priority: scheduledJob.priority,
          });
        }
      }
    }
    logger.info('Scheduled jobs created', { totalJobs: totalScheduledJobs });

    // Now that all singletons are initialized and jobs are scheduled, start the workers
    logger.debug('Starting queue workers...');
    queueManager.startAllWorkers();
    logger.info('Queue workers started');

    logger.info('All services initialized successfully');
  } catch (error) {
    logger.error('Failed to initialize services', { error });
    throw error;
  }
}

// Start server
async function startServer() {
  await initializeServices();

  server = Bun.serve({
    port: PORT,
    fetch: app.fetch,
    development: config.environment === 'development',
  });

  logger.info(`Data Sync Service started on port ${PORT}`);
}

// Register shutdown handlers with priorities
// Priority 1: Queue system (highest priority)
shutdown.onShutdownHigh(async () => {
  logger.info('Shutting down queue system...');
  try {
    if (queueManager) {
      await queueManager.shutdown();
    }
    logger.info('Queue system shut down');
  } catch (error) {
    logger.error('Error shutting down queue system', { error });
  }
}, 'Queue System');

// Priority 1: HTTP Server (high priority)
shutdown.onShutdownHigh(async () => {
  if (server) {
    logger.info('Stopping HTTP server...');
    try {
      server.stop();
      logger.info('HTTP server stopped');
    } catch (error) {
      logger.error('Error stopping HTTP server', { error });
    }
  }
}, 'HTTP Server');

// Priority 2: Database connections (medium priority)
shutdown.onShutdownMedium(async () => {
  logger.info('Disconnecting from databases...');
  try {
    const { disconnectMongoDB } = await import('@stock-bot/mongodb-client');
    const { disconnectPostgreSQL } = await import('@stock-bot/postgres-client');

    await disconnectMongoDB();
    await disconnectPostgreSQL();
    logger.info('Database connections closed');
  } catch (error) {
    logger.error('Error closing database connections', { error });
  }
}, 'Databases');

// Priority 3: Logger shutdown (lowest priority - runs last)
shutdown.onShutdownLow(async () => {
  try {
    logger.info('Shutting down loggers...');
    await shutdownLoggers();
    // Don't log after shutdown
  } catch {
    // Silently ignore logger shutdown errors
  }
}, 'Loggers');

// Start the service
startServer().catch(error => {
  logger.fatal('Failed to start data sync service', { error });
  process.exit(1);
});

logger.info('Data sync service startup initiated');
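The priority-ordered shutdown used above (high, then medium, then low) can be sketched as a small standalone registry. The `onShutdown*` names mirror the service code, but this is an illustrative sketch, not the actual `@stock-bot/shutdown` implementation:

```typescript
// Illustrative shutdown registry: handlers run sequentially, grouped by
// priority (high → medium → low), matching the registration order above.
type ShutdownFn = () => Promise<void>;
type Handler = { name: string; fn: ShutdownFn };

class ShutdownRegistry {
  private groups: Record<'high' | 'medium' | 'low', Handler[]> = {
    high: [],
    medium: [],
    low: [],
  };

  onShutdownHigh(fn: ShutdownFn, name: string) {
    this.groups.high.push({ name, fn });
  }
  onShutdownMedium(fn: ShutdownFn, name: string) {
    this.groups.medium.push({ name, fn });
  }
  onShutdownLow(fn: ShutdownFn, name: string) {
    this.groups.low.push({ name, fn });
  }

  async run(): Promise<string[]> {
    const completed: string[] = [];
    for (const level of ['high', 'medium', 'low'] as const) {
      for (const { name, fn } of this.groups[level]) {
        await fn(); // a real implementation would also catch and log errors
        completed.push(name);
      }
    }
    return completed;
  }
}

const registry = new ShutdownRegistry();
registry.onShutdownLow(async () => {}, 'Loggers');
registry.onShutdownHigh(async () => {}, 'Queue System');
registry.onShutdownMedium(async () => {}, 'Databases');

const order = await registry.run();
console.log(order); // → ['Queue System', 'Databases', 'Loggers']
```

Registration order does not matter; only the priority group does, which is why the queue system and HTTP server (both high) stop before the databases.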
@@ -1,96 +0,0 @@
import { Hono } from 'hono';
import { getLogger } from '@stock-bot/logger';
import { QueueManager } from '@stock-bot/queue';

const logger = getLogger('enhanced-sync-routes');
const enhancedSync = new Hono();

// Enhanced sync endpoints
enhancedSync.post('/exchanges/all', async c => {
  try {
    const clearFirst = c.req.query('clear') === 'true';
    const queueManager = QueueManager.getInstance();
    const exchangesQueue = queueManager.getQueue('exchanges');

    const job = await exchangesQueue.addJob('sync-all-exchanges', {
      handler: 'exchanges',
      operation: 'sync-all-exchanges',
      payload: { clearFirst },
    });

    return c.json({ success: true, jobId: job.id, message: 'Enhanced exchange sync job queued' });
  } catch (error) {
    logger.error('Failed to queue enhanced exchange sync job', { error });
    return c.json(
      { success: false, error: error instanceof Error ? error.message : 'Unknown error' },
      500
    );
  }
});

enhancedSync.post('/provider-mappings/qm', async c => {
  try {
    const queueManager = QueueManager.getInstance();
    const exchangesQueue = queueManager.getQueue('exchanges');

    const job = await exchangesQueue.addJob('sync-qm-provider-mappings', {
      handler: 'exchanges',
      operation: 'sync-qm-provider-mappings',
      payload: {},
    });

    return c.json({ success: true, jobId: job.id, message: 'QM provider mappings sync job queued' });
  } catch (error) {
    logger.error('Failed to queue QM provider mappings sync job', { error });
    return c.json(
      { success: false, error: error instanceof Error ? error.message : 'Unknown error' },
      500
    );
  }
});

enhancedSync.post('/symbols/:provider', async c => {
  try {
    const provider = c.req.param('provider');
    const clearFirst = c.req.query('clear') === 'true';
    const queueManager = QueueManager.getInstance();
    const symbolsQueue = queueManager.getQueue('symbols');

    const job = await symbolsQueue.addJob(`sync-symbols-${provider}`, {
      handler: 'symbols',
      operation: `sync-symbols-${provider}`,
      payload: { provider, clearFirst },
    });

    return c.json({ success: true, jobId: job.id, message: `${provider} symbols sync job queued` });
  } catch (error) {
    logger.error('Failed to queue enhanced symbol sync job', { error });
    return c.json(
      { success: false, error: error instanceof Error ? error.message : 'Unknown error' },
      500
    );
  }
});

// Enhanced status endpoints
enhancedSync.get('/status/enhanced', async c => {
  try {
    const queueManager = QueueManager.getInstance();
    const exchangesQueue = queueManager.getQueue('exchanges');

    const job = await exchangesQueue.addJob('enhanced-sync-status', {
      handler: 'exchanges',
      operation: 'enhanced-sync-status',
      payload: {},
    });

    // Wait for job to complete and return result
    const result = await job.waitUntilFinished();
    return c.json(result);
  } catch (error) {
    logger.error('Failed to get enhanced sync status', { error });
    return c.json({ error: error instanceof Error ? error.message : 'Unknown error' }, 500);
  }
});

export { enhancedSync as enhancedSyncRoutes };
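The deleted routes above use two patterns: queue a job and return its id immediately (the POST sync triggers), or queue a job and block on `waitUntilFinished()` (the status endpoints). A standalone sketch of the difference — `FakeQueue` is an illustrative stand-in, not the `@stock-bot/queue` API:

```typescript
// Sketch of fire-and-forget vs queue-and-wait job handling.
interface Job<T> {
  id: string;
  waitUntilFinished(): Promise<T>;
}

class FakeQueue {
  private counter = 0;

  // Illustrative signature: the real addJob takes a payload, not a function.
  addJob<T>(name: string, run: () => Promise<T>): Job<T> {
    const id = `${name}:${++this.counter}`;
    const result = run(); // started eagerly to keep the sketch small
    return { id, waitUntilFinished: () => result };
  }
}

const queue = new FakeQueue();

// Fire-and-forget: respond with the job id right away (POST trigger routes).
const job = queue.addJob('sync-all-exchanges', async () => 'synced');
console.log({ success: true, jobId: job.id });

// Queue-and-wait: block until the job finishes (status routes).
const statusJob = queue.addJob('sync-status', async () => ({ pending: 0 }));
const result = await statusJob.waitUntilFinished();
console.log(result); // → { pending: 0 }
```

The trade-off: fire-and-forget keeps the HTTP handler fast, while queue-and-wait couples the request lifetime to the job's runtime.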
@@ -1,49 +0,0 @@
import { Hono } from 'hono';
import { getLogger } from '@stock-bot/logger';
import { QueueManager } from '@stock-bot/queue';

const logger = getLogger('stats-routes');
const stats = new Hono();

// Statistics endpoints
stats.get('/exchanges', async c => {
  try {
    const queueManager = QueueManager.getInstance();
    const exchangesQueue = queueManager.getQueue('exchanges');

    const job = await exchangesQueue.addJob('get-exchange-stats', {
      handler: 'exchanges',
      operation: 'get-exchange-stats',
      payload: {},
    });

    // Wait for job to complete and return result
    const result = await job.waitUntilFinished();
    return c.json(result);
  } catch (error) {
    logger.error('Failed to get exchange stats', { error });
    return c.json({ error: error instanceof Error ? error.message : 'Unknown error' }, 500);
  }
});

stats.get('/provider-mappings', async c => {
  try {
    const queueManager = QueueManager.getInstance();
    const exchangesQueue = queueManager.getQueue('exchanges');

    const job = await exchangesQueue.addJob('get-provider-mapping-stats', {
      handler: 'exchanges',
      operation: 'get-provider-mapping-stats',
      payload: {},
    });

    // Wait for job to complete and return result
    const result = await job.waitUntilFinished();
    return c.json(result);
  } catch (error) {
    logger.error('Failed to get provider mapping stats', { error });
    return c.json({ error: error instanceof Error ? error.message : 'Unknown error' }, 500);
  }
});

export { stats as statsRoutes };
@@ -1,96 +0,0 @@
import { Hono } from 'hono';
import { getLogger } from '@stock-bot/logger';
import { QueueManager } from '@stock-bot/queue';

const logger = getLogger('sync-routes');
const sync = new Hono();

// Manual sync trigger endpoints
sync.post('/symbols', async c => {
  try {
    const queueManager = QueueManager.getInstance();
    const symbolsQueue = queueManager.getQueue('symbols');

    const job = await symbolsQueue.addJob('sync-qm-symbols', {
      handler: 'symbols',
      operation: 'sync-qm-symbols',
      payload: {},
    });

    return c.json({ success: true, jobId: job.id, message: 'QM symbols sync job queued' });
  } catch (error) {
    logger.error('Failed to queue symbol sync job', { error });
    return c.json(
      { success: false, error: error instanceof Error ? error.message : 'Unknown error' },
      500
    );
  }
});

sync.post('/exchanges', async c => {
  try {
    const queueManager = QueueManager.getInstance();
    const exchangesQueue = queueManager.getQueue('exchanges');

    const job = await exchangesQueue.addJob('sync-qm-exchanges', {
      handler: 'exchanges',
      operation: 'sync-qm-exchanges',
      payload: {},
    });

    return c.json({ success: true, jobId: job.id, message: 'QM exchanges sync job queued' });
  } catch (error) {
    logger.error('Failed to queue exchange sync job', { error });
    return c.json(
      { success: false, error: error instanceof Error ? error.message : 'Unknown error' },
      500
    );
  }
});

// Get sync status
sync.get('/status', async c => {
  try {
    const queueManager = QueueManager.getInstance();
    const symbolsQueue = queueManager.getQueue('symbols');

    const job = await symbolsQueue.addJob('sync-status', {
      handler: 'symbols',
      operation: 'sync-status',
      payload: {},
    });

    // Wait for job to complete and return result
    const result = await job.waitUntilFinished();
    return c.json(result);
  } catch (error) {
    logger.error('Failed to get sync status', { error });
    return c.json({ error: error instanceof Error ? error.message : 'Unknown error' }, 500);
  }
});

// Clear data endpoint
sync.post('/clear', async c => {
  try {
    const queueManager = QueueManager.getInstance();
    const exchangesQueue = queueManager.getQueue('exchanges');

    const job = await exchangesQueue.addJob('clear-postgresql-data', {
      handler: 'exchanges',
      operation: 'clear-postgresql-data',
      payload: {},
    });

    // Wait for job to complete and return result
    const result = await job.waitUntilFinished();
    return c.json({ success: true, result });
  } catch (error) {
    logger.error('Failed to clear PostgreSQL data', { error });
    return c.json(
      { success: false, error: error instanceof Error ? error.message : 'Unknown error' },
      500
    );
  }
});

export { sync as syncRoutes };
@@ -1,14 +0,0 @@
{
  "extends": "../../tsconfig.app.json",
  "references": [
    { "path": "../../libs/types" },
    { "path": "../../libs/config" },
    { "path": "../../libs/logger" },
    { "path": "../../libs/cache" },
    { "path": "../../libs/queue" },
    { "path": "../../libs/mongodb-client" },
    { "path": "../../libs/postgres-client" },
    { "path": "../../libs/questdb-client" },
    { "path": "../../libs/shutdown" }
  ]
}
124  apps/stock/README.md  (new file)
@@ -0,0 +1,124 @@
# Stock Trading Bot Application

A comprehensive stock trading bot application with multiple microservices for data ingestion, processing, and API access.

## Architecture

The stock bot consists of the following services:

- **Config**: Centralized configuration management
- **Data Ingestion**: Handles real-time and historical data collection
- **Data Pipeline**: Processes and transforms market data
- **Web API**: RESTful API for accessing stock data
- **Web App**: Frontend user interface

## Quick Start

### Prerequisites

- Node.js >= 18.0.0
- Bun >= 1.1.0
- Turbo
- PostgreSQL, MongoDB, QuestDB, and Redis/Dragonfly running locally

### Installation

```bash
# Install all dependencies
bun install

# Build the configuration package first
bun run build:config
```

### Development

```bash
# Run all services in development mode (using Turbo)
bun run dev

# Run only backend services
bun run dev:backend

# Run only the frontend
bun run dev:frontend

# Run a specific service
bun run dev:ingestion
bun run dev:pipeline
bun run dev:api
bun run dev:web
```

### Production

```bash
# Build all services (using Turbo)
bun run build

# Start with PM2
bun run pm2:start

# Check status
bun run pm2:status

# View logs
bun run pm2:logs
```

### Configuration

Configuration is managed centrally in the `config` package:

- Default config: `config/config/default.json`
- Environment-specific: `config/config/[environment].json`
- Environment variables: can override any config value
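The three layers listed above resolve in order — defaults, then the environment file, then environment variables. A minimal sketch of that precedence (a shallow merge for illustration; the real config package validates with zod and merges deeply):

```typescript
// Later layers win: default.json < [environment].json < env var overrides.
type ConfigLayer = Record<string, unknown>;

function resolveConfig(...layers: ConfigLayer[]): ConfigLayer {
  return Object.assign({}, ...layers);
}

const defaults = { logLevel: 'info', port: 3000 }; // default.json
const developmentJson = { logLevel: 'debug' };     // development.json
const envOverrides = { port: 2003 };               // e.g. PORT=2003

console.log(resolveConfig(defaults, developmentJson, envOverrides));
// → { logLevel: 'debug', port: 2003 }
```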
### Health Checks

```bash
# Check the health of all services
bun run health:check
```

### Database Management

```bash
# Run migrations
bun run db:migrate

# Seed the database
bun run db:seed
```

## Available Scripts

| Script | Description |
|--------|-------------|
| `dev` | Run all services in development mode |
| `build` | Build all services |
| `start` | Start all backend services |
| `test` | Run tests for all services |
| `lint` | Lint all services |
| `clean` | Clean build artifacts and dependencies |
| `docker:build` | Build Docker images |
| `pm2:start` | Start services with PM2 |
| `health:check` | Check health of all services |

## Service Ports

- Data Ingestion: 2001
- Data Pipeline: 2002
- Web API: 2003
- Web App: 3000 (or next available)

## Environment Variables

Key environment variables:

- `NODE_ENV`: development, test, or production
- `PORT`: override the default service port
- Database connection strings
- API keys for data providers

See `config/config/default.json` for the full set of configuration options.
228  apps/stock/config/config/default.json  (new file)
@@ -0,0 +1,228 @@
{
  "name": "stock-bot",
  "version": "1.0.0",
  "environment": "development",
  "service": {
    "name": "stock-bot",
    "port": 3000,
    "host": "0.0.0.0",
    "healthCheckPath": "/health",
    "metricsPath": "/metrics",
    "shutdownTimeout": 30000,
    "cors": {
      "enabled": true,
      "origin": "*",
      "credentials": true
    }
  },
  "database": {
    "postgres": {
      "enabled": true,
      "host": "localhost",
      "port": 5432,
      "database": "trading_bot",
      "user": "trading_user",
      "password": "trading_pass_dev",
      "ssl": false,
      "poolSize": 20,
      "connectionTimeout": 30000,
      "idleTimeout": 10000
    },
    "questdb": {
      "host": "localhost",
      "ilpPort": 9009,
      "httpPort": 9000,
      "pgPort": 8812,
      "database": "questdb",
      "user": "admin",
      "password": "quest",
      "bufferSize": 65536,
      "flushInterval": 1000
    },
    "mongodb": {
      "uri": "mongodb://trading_admin:trading_mongo_dev@localhost:27017/stock?authSource=admin",
      "database": "stock",
      "poolSize": 20
    },
    "dragonfly": {
      "host": "localhost",
      "port": 6379,
      "db": 0,
      "keyPrefix": "stock-bot:",
      "maxRetries": 3,
      "retryDelay": 100
    }
  },
  "log": {
    "level": "info",
    "format": "json",
    "hideObject": false,
    "loki": {
      "enabled": false,
      "host": "localhost",
      "port": 3100,
      "labels": {}
    }
  },
  "redis": {
    "enabled": true,
    "host": "localhost",
    "port": 6379,
    "db": 0
  },
  "queue": {
    "enabled": true,
    "redis": {
      "host": "localhost",
      "port": 6379,
      "db": 1
    },
    "workers": 1,
    "concurrency": 1,
    "enableScheduledJobs": true,
    "delayWorkerStart": false,
    "defaultJobOptions": {
      "attempts": 3,
      "backoff": {
        "type": "exponential",
        "delay": 1000
      },
      "removeOnComplete": 100,
      "removeOnFail": 50,
      "timeout": 300000
    }
  },
  "http": {
    "timeout": 30000,
    "retries": 3,
    "retryDelay": 1000,
    "userAgent": "StockBot/1.0",
    "proxy": {
      "enabled": false
    }
  },
  "webshare": {
    "apiKey": "",
    "apiUrl": "https://proxy.webshare.io/api/v2/",
    "enabled": true
  },
  "browser": {
    "headless": true,
    "timeout": 30000
  },
  "proxy": {
    "enabled": true,
    "cachePrefix": "proxy:",
    "ttl": 3600,
    "webshare": {
      "apiKey": "y8ay534rcbybdkk3evnzmt640xxfhy7252ce2t98",
      "apiUrl": "https://proxy.webshare.io/api/v2/"
    }
  },
  "providers": {
    "yahoo": {
      "name": "yahoo",
      "enabled": true,
      "priority": 1,
      "rateLimit": {
        "maxRequests": 5,
        "windowMs": 60000
      },
      "timeout": 30000,
      "baseUrl": "https://query1.finance.yahoo.com"
    },
    "qm": {
      "name": "qm",
      "enabled": false,
      "priority": 2,
      "username": "",
      "password": "",
      "baseUrl": "https://app.quotemedia.com/quotetools",
      "webmasterId": ""
    },
    "ib": {
      "name": "ib",
      "enabled": false,
      "priority": 3,
      "gateway": {
        "host": "localhost",
        "port": 5000,
        "clientId": 1
      },
      "marketDataType": "delayed"
    },
    "eod": {
      "name": "eod",
      "enabled": false,
      "priority": 4,
      "apiKey": "",
      "baseUrl": "https://eodhistoricaldata.com/api",
      "tier": "free"
    }
  },
  "features": {
    "realtime": true,
    "backtesting": true,
    "paperTrading": true,
    "autoTrading": false,
    "historicalData": true,
    "realtimeData": true,
    "fundamentalData": true,
    "newsAnalysis": false,
    "notifications": false,
    "emailAlerts": false,
    "smsAlerts": false,
    "webhookAlerts": false,
    "technicalAnalysis": true,
    "sentimentAnalysis": false,
    "patternRecognition": false,
    "riskManagement": true,
    "positionSizing": true,
    "stopLoss": true,
    "takeProfit": true
  },
  "services": {
    "dataIngestion": {
      "port": 2001,
      "workers": 4,
      "queues": {
        "ceo": { "concurrency": 2 },
        "webshare": { "concurrency": 1 },
        "qm": { "concurrency": 2 },
        "ib": { "concurrency": 1 },
        "proxy": { "concurrency": 1 }
      },
      "rateLimit": {
        "enabled": true,
        "requestsPerSecond": 10
      }
    },
    "dataPipeline": {
      "port": 2002,
      "workers": 2,
      "batchSize": 1000,
      "processingInterval": 60000,
      "queues": {
        "exchanges": { "concurrency": 1 },
        "symbols": { "concurrency": 2 }
      },
      "syncOptions": {
        "maxRetries": 3,
        "retryDelay": 5000,
        "timeout": 300000
      }
    },
    "webApi": {
      "port": 2003,
      "rateLimitPerMinute": 60,
      "cache": {
        "ttl": 300,
        "checkPeriod": 60
      },
      "cors": {
        "origins": ["http://localhost:3000", "http://localhost:4200"],
        "credentials": true
      }
    }
  }
}
11  apps/stock/config/config/development.json  (new file)
@@ -0,0 +1,11 @@
{
  "environment": "development",
  "log": {
    "level": "debug",
    "format": "pretty"
  },
  "features": {
    "autoTrading": false,
    "paperTrading": true
  }
}
42  apps/stock/config/config/production.json  (new file)
@@ -0,0 +1,42 @@
{
  "environment": "production",
  "log": {
    "level": "warn",
    "format": "json",
    "loki": {
      "enabled": true,
      "host": "loki.production.example.com",
      "port": 3100
    }
  },
  "database": {
    "postgres": {
      "host": "postgres.production.example.com",
      "ssl": true,
      "poolSize": 50
    },
    "questdb": {
      "host": "questdb.production.example.com"
    },
    "mongodb": {
      "uri": "mongodb+srv://prod_user:prod_pass@cluster.mongodb.net/stock?retryWrites=true&w=majority",
      "poolSize": 50
    },
    "dragonfly": {
      "host": "redis.production.example.com",
      "password": "production_redis_password"
    }
  },
  "queue": {
    "redis": {
      "host": "redis.production.example.com",
      "password": "production_redis_password"
    }
  },
  "features": {
    "autoTrading": true,
    "notifications": true,
    "emailAlerts": true,
    "webhookAlerts": true
  }
}
22  apps/stock/config/package.json  (new file)
@@ -0,0 +1,22 @@
{
  "name": "@stock-bot/stock-config",
  "version": "1.0.0",
  "description": "Stock trading bot configuration",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "scripts": {
    "build": "tsc",
    "clean": "rm -rf dist",
    "dev": "tsc --watch",
    "test": "jest",
    "lint": "eslint src --ext .ts"
  },
  "dependencies": {
    "@stock-bot/config": "*",
    "zod": "^3.22.4"
  },
  "devDependencies": {
    "@types/node": "^20.11.0",
    "typescript": "^5.3.3"
  }
}
87  apps/stock/config/src/config-instance.ts  (new file)
@@ -0,0 +1,87 @@
import { ConfigManager, createAppConfig } from '@stock-bot/config';
import { stockAppSchema, type StockAppConfig } from './schemas';
import * as path from 'path';
import { getLogger } from '@stock-bot/logger';

let configInstance: ConfigManager<StockAppConfig> | null = null;

/**
 * Initialize the stock application configuration
 * @param serviceName - Optional service name used to override the port configuration
 */
export function initializeStockConfig(
  serviceName?: 'dataIngestion' | 'dataPipeline' | 'webApi'
): StockAppConfig {
  try {
    if (!configInstance) {
      configInstance = createAppConfig(stockAppSchema, {
        configPath: path.join(__dirname, '../config'),
      });
    }

    const config = configInstance.initialize(stockAppSchema);

    // If a service name is provided, override the service port
    if (serviceName && config.services?.[serviceName]) {
      const kebabName = serviceName
        .replace(/([A-Z])/g, '-$1')
        .toLowerCase()
        .replace(/^-/, '');
      return {
        ...config,
        service: {
          ...config.service,
          port: config.services[serviceName].port,
          name: serviceName, // Keep the original name for backward compatibility
          serviceName: kebabName, // Standard kebab-case name
        },
      };
    }

    return config;
  } catch (error: any) {
    const logger = getLogger('stock-config');
    logger.error('Failed to initialize stock configuration:', error.message);
    if (error.errors) {
      logger.error('Validation errors:', error.errors);
    }
    throw error;
  }
}

/**
 * Get the current stock configuration
 */
export function getStockConfig(): StockAppConfig {
  if (!configInstance) {
    // Auto-initialize if not already done
    return initializeStockConfig();
  }
  return configInstance.get();
}

/**
 * Get configuration for a specific service
 */
export function getServiceConfig(service: 'dataIngestion' | 'dataPipeline' | 'webApi') {
  const config = getStockConfig();
  return config.services?.[service];
}

/**
 * Get configuration for a specific provider
 */
export function getProviderConfig(provider: 'eod' | 'ib' | 'qm' | 'yahoo') {
  const config = getStockConfig();
  return config.providers[provider];
}

/**
 * Check whether a feature is enabled
 */
export function isFeatureEnabled(feature: keyof StockAppConfig['features']): boolean {
  const config = getStockConfig();
  return config.features[feature];
}

/**
 * Reset configuration (useful for testing)
 */
export function resetStockConfig(): void {
  configInstance = null;
}
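The service-name override in `initializeStockConfig` above derives a kebab-case name with a chained `replace`/`toLowerCase`. Extracted as a standalone helper, the same transform looks like this:

```typescript
// camelCase → kebab-case, as in initializeStockConfig: insert a dash before
// each capital letter, lowercase everything, then strip a leading dash.
function toKebabCase(name: string): string {
  return name
    .replace(/([A-Z])/g, '-$1')
    .toLowerCase()
    .replace(/^-/, '');
}

console.log(toKebabCase('dataIngestion')); // → 'data-ingestion'
console.log(toKebabCase('webApi'));        // → 'web-api'
```

The trailing `.replace(/^-/, '')` handles names that start with a capital (e.g. `WebApi`), which would otherwise produce a leading dash.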
15  apps/stock/config/src/index.ts  (new file)
@@ -0,0 +1,15 @@
// Export schemas
export * from './schemas';

// Export config instance functions
export {
  initializeStockConfig,
  getStockConfig,
  getServiceConfig,
  getProviderConfig,
  isFeatureEnabled,
  resetStockConfig,
} from './config-instance';

// Re-export type for convenience
export type { StockAppConfig } from './schemas/stock-app.schema';
35  apps/stock/config/src/schemas/features.schema.ts  (new file)
@@ -0,0 +1,35 @@
import { z } from 'zod';

/**
 * Feature flags for the stock trading application
 */
export const featuresSchema = z.object({
  // Trading features
  realtime: z.boolean().default(true),
  backtesting: z.boolean().default(true),
  paperTrading: z.boolean().default(true),
  autoTrading: z.boolean().default(false),

  // Data features
  historicalData: z.boolean().default(true),
  realtimeData: z.boolean().default(true),
  fundamentalData: z.boolean().default(true),
  newsAnalysis: z.boolean().default(false),

  // Notification features
  notifications: z.boolean().default(false),
  emailAlerts: z.boolean().default(false),
  smsAlerts: z.boolean().default(false),
  webhookAlerts: z.boolean().default(false),

  // Analysis features
  technicalAnalysis: z.boolean().default(true),
  sentimentAnalysis: z.boolean().default(false),
  patternRecognition: z.boolean().default(false),

  // Risk management
  riskManagement: z.boolean().default(true),
  positionSizing: z.boolean().default(true),
  stopLoss: z.boolean().default(true),
  takeProfit: z.boolean().default(true),
});
3  apps/stock/config/src/schemas/index.ts  (new file)
@@ -0,0 +1,3 @@
export * from './stock-app.schema';
export * from './providers.schema';
export * from './features.schema';
67  apps/stock/config/src/schemas/providers.schema.ts  (new file)
@@ -0,0 +1,67 @@
import { z } from 'zod';

// Base provider configuration
export const baseProviderConfigSchema = z.object({
  name: z.string(),
  enabled: z.boolean().default(true),
  priority: z.number().default(0),
  rateLimit: z
    .object({
      maxRequests: z.number().default(100),
      windowMs: z.number().default(60000),
    })
    .optional(),
  timeout: z.number().default(30000),
  retries: z.number().default(3),
});

// EOD Historical Data provider
export const eodProviderConfigSchema = baseProviderConfigSchema.extend({
  apiKey: z.string(),
  baseUrl: z.string().default('https://eodhistoricaldata.com/api'),
  tier: z.enum(['free', 'fundamentals', 'all-in-one']).default('free'),
});

// Interactive Brokers provider
export const ibProviderConfigSchema = baseProviderConfigSchema.extend({
  gateway: z.object({
    host: z.string().default('localhost'),
    port: z.number().default(5000),
    clientId: z.number().default(1),
  }),
  account: z.string().optional(),
  marketDataType: z.enum(['live', 'delayed', 'frozen']).default('delayed'),
});

// QuoteMedia provider
export const qmProviderConfigSchema = baseProviderConfigSchema.extend({
  username: z.string(),
  password: z.string(),
  baseUrl: z.string().default('https://app.quotemedia.com/quotetools'),
  webmasterId: z.string(),
});

// Yahoo Finance provider
export const yahooProviderConfigSchema = baseProviderConfigSchema.extend({
  baseUrl: z.string().default('https://query1.finance.yahoo.com'),
  cookieJar: z.boolean().default(true),
  crumb: z.string().optional(),
});

// Combined provider configuration
export const providersSchema = z.object({
  eod: eodProviderConfigSchema.optional(),
  ib: ibProviderConfigSchema.optional(),
  qm: qmProviderConfigSchema.optional(),
  yahoo: yahooProviderConfigSchema.optional(),
});

// Dynamic provider configuration type
export type ProviderName = 'eod' | 'ib' | 'qm' | 'yahoo';

export const providerSchemas = {
  eod: eodProviderConfigSchema,
  ib: ibProviderConfigSchema,
  qm: qmProviderConfigSchema,
  yahoo: yahooProviderConfigSchema,
} as const;
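The `providerSchemas` map with its `as const` assertion is what makes per-provider lookups type-safe: `keyof typeof` derives the name union, and an indexed-access return type narrows each lookup to the exact entry. A zod-free sketch of the same pattern — the default objects here are placeholders standing in for the real schemas:

```typescript
// Sketch of the `as const` lookup pattern used by providerSchemas.
const providerDefaults = {
  eod: { baseUrl: 'https://eodhistoricaldata.com/api', tier: 'free' },
  ib: { host: 'localhost', port: 5000 },
  qm: { baseUrl: 'https://app.quotemedia.com/quotetools' },
  yahoo: { baseUrl: 'https://query1.finance.yahoo.com' },
} as const;

type ProviderName = keyof typeof providerDefaults; // 'eod' | 'ib' | 'qm' | 'yahoo'

// The return type narrows to the exact entry for the given name,
// so getDefaults('ib').port typechecks while getDefaults('eod').port does not.
function getDefaults<N extends ProviderName>(name: N): (typeof providerDefaults)[N] {
  return providerDefaults[name];
}

console.log(getDefaults('ib').port); // 5000
```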
72  apps/stock/config/src/schemas/stock-app.schema.ts  (new file)
@@ -0,0 +1,72 @@
import { z } from 'zod';
import {
  baseAppSchema,
  postgresConfigSchema,
  mongodbConfigSchema,
  questdbConfigSchema,
  dragonflyConfigSchema,
} from '@stock-bot/config';
import { providersSchema } from './providers.schema';
import { featuresSchema } from './features.schema';

/**
 * Stock trading application configuration schema
 */
export const stockAppSchema = baseAppSchema.extend({
  // Stock app uses all databases
  database: z.object({
    postgres: postgresConfigSchema,
    mongodb: mongodbConfigSchema,
    questdb: questdbConfigSchema,
    dragonfly: dragonflyConfigSchema,
  }),

  // Stock-specific providers
  providers: providersSchema,

  // Feature flags
  features: featuresSchema,

  // Service-specific configurations
  services: z.object({
    dataIngestion: z.object({
      port: z.number().default(2001),
      workers: z.number().default(4),
      queues: z.record(z.object({
        concurrency: z.number().default(1),
      })).optional(),
      rateLimit: z.object({
        enabled: z.boolean().default(true),
        requestsPerSecond: z.number().default(10),
      }).optional(),
    }).optional(),
    dataPipeline: z.object({
      port: z.number().default(2002),
      workers: z.number().default(2),
      batchSize: z.number().default(1000),
      processingInterval: z.number().default(60000),
      queues: z.record(z.object({
        concurrency: z.number().default(1),
      })).optional(),
      syncOptions: z.object({
        maxRetries: z.number().default(3),
        retryDelay: z.number().default(5000),
        timeout: z.number().default(300000),
      }).optional(),
    }).optional(),
    webApi: z.object({
      port: z.number().default(2003),
      rateLimitPerMinute: z.number().default(60),
      cache: z.object({
        ttl: z.number().default(300),
        checkPeriod: z.number().default(60),
      }).optional(),
      cors: z.object({
        origins: z.array(z.string()).default(['http://localhost:3000']),
        credentials: z.boolean().default(true),
      }).optional(),
    }).optional(),
  }).optional(),
});

export type StockAppConfig = z.infer<typeof stockAppSchema>;
15  apps/stock/config/tsconfig.json  (new file)
@@ -0,0 +1,15 @@
{
  "extends": "../../../tsconfig.json",
  "compilerOptions": {
    "outDir": "./dist",
    "rootDir": "./src",
    "composite": true,
    "declaration": true,
    "declarationMap": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist", "**/*.test.ts"],
  "references": [
    { "path": "../../../libs/core/config" }
  ]
}
85  apps/stock/data-ingestion/AWILIX-MIGRATION.md  (new file)
@@ -0,0 +1,85 @@
# Awilix DI Container Migration Guide

This guide explains how to use the new Awilix dependency injection container in the data-ingestion service.

## Overview

The Awilix container provides proper dependency injection for decoupled libraries, allowing them to be reused in other projects without stock-bot-specific dependencies.

## Current Implementation

The data-ingestion service now uses a hybrid approach:

1. Awilix container for ProxyManager and other decoupled services
2. Legacy service factory for backward compatibility

## Usage Example

```typescript
// Create Awilix container
const awilixConfig = {
  redis: {
    host: config.database.dragonfly.host,
    port: config.database.dragonfly.port,
    db: config.database.dragonfly.db,
  },
  mongodb: {
    uri: config.database.mongodb.uri,
    database: config.database.mongodb.database,
  },
  postgres: {
    host: config.database.postgres.host,
    port: config.database.postgres.port,
    database: config.database.postgres.database,
    user: config.database.postgres.user,
    password: config.database.postgres.password,
  },
  proxy: {
    cachePrefix: 'proxy:',
    ttl: 3600,
  },
};

const container = createServiceContainer(awilixConfig);
await initializeServices(container);

// Access services from container
const proxyManager = container.resolve('proxyManager');
const cache = container.resolve('cache');
```

## Handler Integration

Handlers receive services through the enhanced service container:

```typescript
// Create service adapter with proxy from Awilix
const serviceContainerWithProxy = createServiceAdapter(services);
Object.defineProperty(serviceContainerWithProxy, 'proxy', {
  get: () => container.resolve('proxyManager'),
  enumerable: true,
  configurable: true,
});

// Handlers can now access the proxy service
class MyHandler extends BaseHandler {
  async myOperation() {
    const proxy = this.proxy.getRandomProxy();
    // Use proxy...
  }
}
```

## Benefits

1. **Decoupled Libraries**: Libraries no longer depend on @stock-bot/config
2. **Reusability**: Libraries can be used in other projects
3. **Testability**: Easy to mock dependencies for testing
4. **Type Safety**: Full TypeScript support with Awilix

## Next Steps

To fully migrate to Awilix:

1. Update the HTTP library to accept dependencies via constructor
2. Update the Queue library to accept Redis config via constructor
3. Create actual MongoDB, PostgreSQL, and QuestDB clients in the container
4. Remove the legacy service factory once all services are migrated
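The container in the guide above can be pictured as a name-to-factory registry with lazy, cached resolution. A toy version of that idea — this is deliberately not the Awilix API (which additionally offers lifetimes, scopes, and class/function resolvers), just the mechanism that `container.resolve('proxyManager')` relies on:

```typescript
// Toy DI container: look up a registered factory, build the service once,
// cache the instance, and let factories resolve their own dependencies.
type Factory<T> = (c: TinyContainer) => T;

class TinyContainer {
  private factories = new Map<string, Factory<unknown>>();
  private instances = new Map<string, unknown>();

  register<T>(name: string, factory: Factory<T>): void {
    this.factories.set(name, factory);
  }

  resolve<T>(name: string): T {
    if (this.instances.has(name)) return this.instances.get(name) as T;
    const factory = this.factories.get(name);
    if (!factory) throw new Error(`Service not registered: ${name}`);
    const instance = factory(this);
    this.instances.set(name, instance);
    return instance as T;
  }
}

const container = new TinyContainer();
container.register('config', () => ({ proxyPrefix: 'proxy:' }));
container.register('proxyManager', (c) => {
  // The factory pulls its dependency from the container, not from a global.
  const config = c.resolve<{ proxyPrefix: string }>('config');
  return { prefix: config.proxyPrefix };
});

console.log(container.resolve<{ prefix: string }>('proxyManager').prefix); // proxy:
```

This is also why the libraries stop needing `@stock-bot/config`: they declare what they need, and the composition root wires it in.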
@@ -1,7 +1,7 @@
 {
-  "name": "@stock-bot/data-sync-service",
+  "name": "@stock-bot/data-ingestion",
   "version": "1.0.0",
-  "description": "Sync service from MongoDB raw data to PostgreSQL master records",
+  "description": "Market data ingestion from multiple providers with proxy support and rate limiting",
   "main": "dist/index.js",
   "type": "module",
   "scripts": {
@@ -14,12 +14,16 @@
   "dependencies": {
     "@stock-bot/cache": "*",
     "@stock-bot/config": "*",
+    "@stock-bot/stock-config": "*",
+    "@stock-bot/di": "*",
+    "@stock-bot/handlers": "*",
     "@stock-bot/logger": "*",
-    "@stock-bot/mongodb-client": "*",
+    "@stock-bot/mongodb": "*",
-    "@stock-bot/postgres-client": "*",
+    "@stock-bot/postgres": "*",
-    "@stock-bot/questdb-client": "*",
+    "@stock-bot/questdb": "*",
     "@stock-bot/queue": "*",
     "@stock-bot/shutdown": "*",
+    "@stock-bot/utils": "*",
     "hono": "^4.0.0"
   },
   "devDependencies": {
@@ -0,0 +1,3 @@
export { updateCeoChannels } from './update-ceo-channels.action';
export { updateUniqueSymbols } from './update-unique-symbols.action';
export { processIndividualSymbol } from './process-individual-symbol.action';
@@ -0,0 +1,117 @@
import { getRandomUserAgent } from '@stock-bot/utils';
import type { CeoHandler } from '../ceo.handler';

export async function processIndividualSymbol(
  this: CeoHandler,
  payload: any,
  _context: any
): Promise<unknown> {
  const { ceoId, symbol, timestamp } = payload;
  const proxy = this.proxy?.getProxy();
  if (!proxy) {
    this.logger.warn('No proxy available for processing individual CEO symbol');
    return;
  }

  this.logger.debug('Processing individual CEO symbol', {
    ceoId,
    timestamp,
  });
  try {
    // Fetch detailed information for the individual symbol
    const response = await this.http.get(
      `https://api.ceo.ca/api/get_spiels?channel=${ceoId}&load_more=top` +
        (timestamp ? `&until=${timestamp}` : ''),
      {
        proxy: proxy,
        headers: {
          'User-Agent': getRandomUserAgent(),
        },
      }
    );

    if (!response.ok) {
      throw new Error(`Failed to fetch details for ceoId ${ceoId}: ${response.statusText}`);
    }

    const data = await response.json();

    const spielCount = data.spiels.length;
    if (spielCount === 0) {
      this.logger.warn(`No spiels found for ceoId ${ceoId}`);
      return null; // No data to process
    }
    const latestSpielTime = data.spiels[0]?.timestamp;
    const posts = data.spiels.map((spiel: any) => ({
      ceoId,
      spiel: spiel.spiel,
      spielReplyToId: spiel.spiel_reply_to_id,
      spielReplyTo: spiel.spiel_reply_to,
      spielReplyToName: spiel.spiel_reply_to_name,
      spielReplyToEdited: spiel.spiel_reply_to_edited,
      userId: spiel.user_id,
      name: spiel.name,
      timestamp: spiel.timestamp,
      spielId: spiel.spiel_id,
      color: spiel.color,
      parentId: spiel.parent_id,
      publicId: spiel.public_id,
      parentChannel: spiel.parent_channel,
      parentTimestamp: spiel.parent_timestamp,
      votes: spiel.votes,
      editable: spiel.editable,
      edited: spiel.edited,
      featured: spiel.featured,
      verified: spiel.verified,
      fake: spiel.fake,
      bot: spiel.bot,
      voted: spiel.voted,
      flagged: spiel.flagged,
      ownSpiel: spiel.own_spiel,
      score: spiel.score,
      savedId: spiel.saved_id,
      savedTimestamp: spiel.saved_timestamp,
      poll: spiel.poll,
      votedInPoll: spiel.voted_in_poll,
    }));

    await this.mongodb.batchUpsert('ceoPosts', posts, ['spielId']);
    this.logger.info(`Fetched ${spielCount} spiels for ceoId ${ceoId}`);

    // Update shorts
    const shortRes = await this.http.get(
      `https://api.ceo.ca/api/short_positions/one?symbol=${symbol}`,
      {
        proxy: proxy,
        headers: {
          'User-Agent': getRandomUserAgent(),
        },
      }
    );

    if (shortRes.ok) {
      const shortData = await shortRes.json();
      if (shortData && shortData.positions) {
        await this.mongodb.batchUpsert('ceoShorts', shortData.positions, ['id']);
      }

      // Re-queue the channel to continue paging from the latest spiel timestamp
      await this.scheduleOperation('process-individual-symbol', {
        ceoId: ceoId,
        timestamp: latestSpielTime,
      }, { priority: 0 });
    }

    this.logger.info(
      `Successfully processed channel ${ceoId} at timestamp ${latestSpielTime}`
    );

    return { ceoId, spielCount, timestamp };
  } catch (error) {
    this.logger.error(`Failed to process individual symbol ${symbol}`, {
      error,
      ceoId,
      timestamp,
    });
    throw error;
  }
}
@@ -0,0 +1,72 @@
import { getRandomUserAgent } from '@stock-bot/utils';
import type { CeoHandler } from '../ceo.handler';

export async function updateCeoChannels(
  this: CeoHandler,
  payload: number | undefined
): Promise<unknown> {
  const proxy = this.proxy?.getProxy();
  if (!proxy) {
    this.logger.warn('No proxy available for CEO channels update');
    return;
  }
  const page = payload ?? 1;

  this.logger.info(`Fetching CEO channels for page ${page} with proxy ${proxy}`);
  const response = await this.http.get(
    'https://api.ceo.ca/api/home?exchange=all&sort_by=symbol&sector=All&tab=companies&page=' + page,
    {
      proxy: proxy,
      headers: {
        'User-Agent': getRandomUserAgent(),
      },
    }
  );
  const results = await response.json();
  const channels = results.channel_categories[0].channels;
  const totalChannels = results.channel_categories[0].total_channels;
  const totalPages = Math.ceil(totalChannels / channels.length);
  const exchanges: { exchange: string; countryCode: string }[] = [];
  const symbols = channels.map((channel: any) => {
    // Track each exchange the first time it is seen
    if (!exchanges.find((e) => e.exchange === channel.exchange)) {
      exchanges.push({
        exchange: channel.exchange,
        countryCode: 'CA',
      });
    }
    const details = channel.company_details || {};
    return {
      symbol: channel.symbol,
      exchange: channel.exchange,
      name: channel.title,
      type: channel.type,
      ceoId: channel.channel,
      marketCap: details.market_cap,
      volumeRatio: details.volume_ratio,
      avgVolume: details.avg_volume,
      stockType: details.stock_type,
      issueType: details.issue_type,
      sharesOutstanding: details.shares_outstanding,
      float: details.float,
    };
  });

  await this.mongodb.batchUpsert('ceoSymbols', symbols, ['symbol', 'exchange']);
  await this.mongodb.batchUpsert('ceoExchanges', exchanges, ['exchange']);

  if (page === 1) {
    for (let i = 2; i <= totalPages; i++) {
      this.logger.info(`Scheduling page ${i} of ${totalPages} for CEO channels`);
      await this.scheduleOperation('update-ceo-channels', i);
    }
  }

  this.logger.info(`Fetched CEO channels for page ${page}/${totalPages}`);
  return { page, totalPages };
}
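The fan-out in `updateCeoChannels` is the interesting part: the page-1 response yields `total_channels` and the effective page size, from which pages 2..N are derived and enqueued. A sketch of just that planning step (assuming, as the action does, that the page size is constant across pages):

```typescript
// Given the first page's totals, which remaining pages should be enqueued?
function planPageJobs(totalChannels: number, pageSize: number): number[] {
  const totalPages = Math.ceil(totalChannels / pageSize);
  const pages: number[] = [];
  // Page 1 is the request currently being processed, so start at 2.
  for (let i = 2; i <= totalPages; i++) {
    pages.push(i);
  }
  return pages;
}

console.log(planPageJobs(95, 20)); // [ 2, 3, 4, 5 ]
```

Guarding the loop with `if (page === 1)` is what keeps every page-N worker from re-enqueuing the whole set again.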
@@ -0,0 +1,71 @@
import type { CeoHandler } from '../ceo.handler';

export async function updateUniqueSymbols(
  this: CeoHandler,
  _payload: unknown,
  _context: any
): Promise<unknown> {
  this.logger.info('Starting update to get unique CEO symbols by ceoId');

  try {
    // Get unique ceoId values from the ceoSymbols collection
    const uniqueCeoIds = await this.mongodb.collection('ceoSymbols').distinct('ceoId');

    this.logger.info(`Found ${uniqueCeoIds.length} unique CEO IDs`);

    // Get the latest record for each unique ceoId
    const uniqueSymbols = [];
    for (const ceoId of uniqueCeoIds) {
      const symbol = await this.mongodb
        .collection('ceoSymbols')
        .findOne({ ceoId }, { sort: { _id: -1 } }); // Get latest record

      if (symbol) {
        uniqueSymbols.push(symbol);
      }
    }

    this.logger.info(`Retrieved ${uniqueSymbols.length} unique symbol records`);

    // Schedule individual jobs for each unique symbol
    let scheduledJobs = 0;
    for (const symbol of uniqueSymbols) {
      await this.scheduleOperation('process-individual-symbol', {
        ceoId: symbol.ceoId,
        symbol: symbol.symbol,
      }, { priority: 10 });
      scheduledJobs++;

      // Log progress every 10 scheduled jobs
      if (scheduledJobs % 10 === 0) {
        this.logger.debug(`Scheduled ${scheduledJobs} jobs so far`);
      }
    }

    this.logger.info(`Successfully scheduled ${scheduledJobs} individual symbol update jobs`);

    // Cache the results for monitoring
    await this.cacheSet(
      'unique-symbols-last-run',
      {
        timestamp: new Date().toISOString(),
        totalUniqueIds: uniqueCeoIds.length,
        totalRecords: uniqueSymbols.length,
        scheduledJobs,
      },
      1800 // Cache for 30 minutes
    );

    return {
      success: true,
      uniqueCeoIds: uniqueCeoIds.length,
      uniqueRecords: uniqueSymbols.length,
      scheduledJobs,
      timestamp: new Date().toISOString(),
    };
  } catch (error) {
    this.logger.error('Failed to update unique CEO symbols', { error });
    throw error;
  }
}
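The scheduling loop above reduces to a simple fan-out with periodic progress reporting. A stand-alone sketch — `schedule` here is a stand-in for `this.scheduleOperation`, and the progress callback stands in for the debug log:

```typescript
// Fan out one queued job per unique symbol, reporting every 10 jobs.
type Job = { ceoId: string; symbol: string };

async function scheduleAll(
  symbols: Job[],
  schedule: (payload: Job) => Promise<void>,
  onProgress: (count: number) => void
): Promise<number> {
  let scheduled = 0;
  for (const s of symbols) {
    await schedule({ ceoId: s.ceoId, symbol: s.symbol });
    scheduled++;
    if (scheduled % 10 === 0) onProgress(scheduled);
  }
  return scheduled;
}
```

Awaiting each `schedule` call serializes the enqueues; if the queue client tolerates it, batching the inserts would be the obvious next optimization.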
34  apps/stock/data-ingestion/src/handlers/ceo/ceo.handler.ts  (new file)
@@ -0,0 +1,34 @@
import {
  BaseHandler,
  Handler,
  Operation,
  ScheduledOperation,
  type IServiceContainer,
} from '@stock-bot/handlers';
import { processIndividualSymbol, updateCeoChannels, updateUniqueSymbols } from './actions';

@Handler('ceo')
// @Disabled()
export class CeoHandler extends BaseHandler {
  constructor(services: IServiceContainer) {
    super(services); // Handler name read from @Handler decorator
  }

  @ScheduledOperation('update-ceo-channels', '0 */15 * * *', {
    priority: 7,
    immediately: false,
    description: 'Get all CEO symbols and exchanges',
  })
  updateCeoChannels = updateCeoChannels;

  @Operation('update-unique-symbols')
  @ScheduledOperation('process-unique-symbols', '0 0 1 * *', {
    priority: 5,
    immediately: false,
    description: 'Process unique CEO symbols and schedule individual jobs',
  })
  updateUniqueSymbols = updateUniqueSymbols;

  @Operation('process-individual-symbol')
  processIndividualSymbol = processIndividualSymbol;
}
@@ -0,0 +1,96 @@
/**
 * Example Handler - Demonstrates ergonomic handler patterns
 * Shows inline operations, service helpers, and scheduled operations
 */

import {
  BaseHandler,
  Disabled,
  Handler,
  Operation,
  ScheduledOperation,
  type ExecutionContext,
  type IServiceContainer,
} from '@stock-bot/handlers';

@Handler('example')
@Disabled()
export class ExampleHandler extends BaseHandler {
  constructor(services: IServiceContainer) {
    super(services);
  }

  /**
   * Simple inline operation - no separate action file needed
   */
  @Operation('get-stats')
  async getStats(): Promise<{ total: number; active: number; cached: boolean }> {
    // Use collection helper for cleaner MongoDB access
    const total = await this.collection('items').countDocuments();
    const active = await this.collection('items').countDocuments({ status: 'active' });

    // Use cache helpers with automatic prefixing
    const cached = await this.cacheGet<number>('last-total');
    await this.cacheSet('last-total', total, 300); // 5 minutes

    // Use log helper with automatic handler context
    this.log('info', 'Stats retrieved', { total, active });

    return { total, active, cached: cached !== null };
  }

  /**
   * Scheduled operation using combined decorator
   */
  @ScheduledOperation('cleanup-old-items', '0 2 * * *', {
    priority: 5,
    description: 'Clean up items older than 30 days',
  })
  async cleanupOldItems(): Promise<{ deleted: number }> {
    const thirtyDaysAgo = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000);

    const result = await this.collection('items').deleteMany({
      createdAt: { $lt: thirtyDaysAgo },
    });

    this.log('info', 'Cleanup completed', { deleted: result.deletedCount });

    // Schedule a follow-up task
    await this.scheduleIn('generate-report', { type: 'cleanup' }, 60); // 1 minute

    return { deleted: result.deletedCount };
  }

  /**
   * Operation that uses proxy service
   */
  @Operation('fetch-external-data')
  async fetchExternalData(input: { url: string }): Promise<{ data: any }> {
    const proxyUrl = this.proxy.getProxy();

    if (!proxyUrl) {
      throw new Error('No proxy available');
    }

    // Use HTTP client with proxy
    const response = await this.http.get(input.url, {
      proxy: proxyUrl,
      timeout: 10000,
    });

    // Cache the result
    await this.cacheSet(`external:${input.url}`, response.data, 3600);

    return { data: response.data };
  }

  /**
   * Complex operation that still uses an action file
   */
  @Operation('process-batch')
  async processBatch(input: any, _context: ExecutionContext): Promise<unknown> {
    // For complex operations, still use action files
    const { processBatch } = await import('./actions/batch.action');
    return processBatch(this, input);
  }
}
@@ -0,0 +1,42 @@
import type { IServiceContainer } from '@stock-bot/handlers';
import { fetchSession } from './fetch-session.action';
import { fetchExchanges } from './fetch-exchanges.action';
import { fetchSymbols } from './fetch-symbols.action';

export async function fetchExchangesAndSymbols(services: IServiceContainer): Promise<unknown> {
  services.logger.info('Starting IB exchanges and symbols fetch job');

  try {
    // Fetch session headers first
    const sessionHeaders = await fetchSession(services);
    if (!sessionHeaders) {
      services.logger.error('Failed to get session headers for IB job');
      return { success: false, error: 'No session headers' };
    }

    services.logger.info('Session headers obtained, fetching exchanges...');

    // Fetch exchanges
    const exchanges = await fetchExchanges(services);
    services.logger.info('Fetched exchanges from IB', { count: exchanges?.length || 0 });

    // Fetch symbols
    services.logger.info('Fetching symbols...');
    const symbols = await fetchSymbols(services);
    services.logger.info('Fetched symbols from IB', { count: symbols?.length || 0 });

    return {
      success: true,
      exchangesCount: exchanges?.length || 0,
      symbolsCount: symbols?.length || 0,
    };
  } catch (error) {
    services.logger.error('Failed to fetch IB exchanges and symbols', { error });
    return {
      success: false,
      error: error instanceof Error ? error.message : 'Unknown error',
    };
  }
}
|
@ -1,16 +1,16 @@
|
||||||
/**
|
import type { IServiceContainer } from '@stock-bot/handlers';
|
||||||
* IB Exchanges Operations - Fetching exchange data from IB API
|
|
||||||
*/
|
|
||||||
import { getMongoDBClient } from '@stock-bot/mongodb-client';
|
|
||||||
import { OperationContext } from '@stock-bot/utils';
|
|
||||||
|
|
||||||
import { IB_CONFIG } from '../shared/config';
|
import { IB_CONFIG } from '../shared/config';
|
||||||
|
import { fetchSession } from './fetch-session.action';
|
||||||
|
|
||||||
-export async function fetchExchanges(sessionHeaders: Record<string, string>): Promise<unknown[] | null> {
-  const ctx = OperationContext.create('ib', 'exchanges');
-
+export async function fetchExchanges(services: IServiceContainer): Promise<unknown[] | null> {
   try {
-    ctx.logger.info('🔍 Fetching exchanges with session headers...');
+    // First get session headers
+    const sessionHeaders = await fetchSession(services);
+    if (!sessionHeaders) {
+      throw new Error('Failed to get session headers');
+    }
+
+    services.logger.info('🔍 Fetching exchanges with session headers...');
 
     // The URL for the exchange data API
     const exchangeUrl = IB_CONFIG.BASE_URL + IB_CONFIG.EXCHANGE_API;
 
@@ -28,7 +28,7 @@ export async function fetchExchanges(sessionHeaders: Record<string, string>): Pr
       'X-Requested-With': 'XMLHttpRequest',
     };
 
-    ctx.logger.info('📤 Making request to exchange API...', {
+    services.logger.info('📤 Making request to exchange API...', {
       url: exchangeUrl,
       headerCount: Object.keys(requestHeaders).length,
     });
 
@@ -41,7 +41,7 @@ export async function fetchExchanges(sessionHeaders: Record<string, string>): Pr
     });
 
     if (!response.ok) {
-      ctx.logger.error('❌ Exchange API request failed', {
+      services.logger.error('❌ Exchange API request failed', {
        status: response.status,
        statusText: response.statusText,
       });
 
@@ -50,18 +50,19 @@ export async function fetchExchanges(sessionHeaders: Record<string, string>): Pr
 
     const data = await response.json();
     const exchanges = data?.exchanges || [];
-    ctx.logger.info('✅ Exchange data fetched successfully');
+    services.logger.info('✅ Exchange data fetched successfully');
 
-    ctx.logger.info('Saving IB exchanges to MongoDB...');
-    const client = getMongoDBClient();
-    await client.batchUpsert('ibExchanges', exchanges, ['id', 'country_code']);
-    ctx.logger.info('✅ Exchange IB data saved to MongoDB:', {
+    services.logger.info('Saving IB exchanges to MongoDB...');
+    await services.mongodb.batchUpsert('ibExchanges', exchanges, ['id', 'country_code']);
+    services.logger.info('✅ Exchange IB data saved to MongoDB:', {
       count: exchanges.length,
     });
 
     return exchanges;
   } catch (error) {
-    ctx.logger.error('❌ Failed to fetch exchanges', { error });
+    services.logger.error('❌ Failed to fetch exchanges', { error });
     return null;
   }
 }
@@ -0,0 +1,84 @@
+import { Browser } from '@stock-bot/browser';
+import type { IServiceContainer } from '@stock-bot/handlers';
+import { IB_CONFIG } from '../shared/config';
+
+export async function fetchSession(services: IServiceContainer): Promise<Record<string, string> | undefined> {
+  try {
+    await Browser.initialize({
+      headless: true,
+      timeout: IB_CONFIG.BROWSER_TIMEOUT,
+      blockResources: false,
+    });
+    services.logger.info('✅ Browser initialized');
+
+    const { page } = await Browser.createPageWithProxy(
+      IB_CONFIG.BASE_URL + IB_CONFIG.PRODUCTS_PAGE,
+      IB_CONFIG.DEFAULT_PROXY
+    );
+    services.logger.info('✅ Page created with proxy');
+
+    const headersPromise = new Promise<Record<string, string> | undefined>(resolve => {
+      let resolved = false;
+
+      page.onNetworkEvent(event => {
+        if (event.url.includes('/webrest/search/product-types/summary')) {
+          if (event.type === 'request') {
+            try {
+              resolve(event.headers);
+            } catch (e) {
+              resolve(undefined);
+              services.logger.debug('Raw Summary Response error', { error: (e as Error).message });
+            }
+          }
+        }
+      });
+
+      // Timeout fallback
+      setTimeout(() => {
+        if (!resolved) {
+          resolved = true;
+          services.logger.warn('Timeout waiting for headers');
+          resolve(undefined);
+        }
+      }, IB_CONFIG.HEADERS_TIMEOUT);
+    });
+
+    services.logger.info('⏳ Waiting for page load...');
+    await page.waitForLoadState('domcontentloaded', { timeout: IB_CONFIG.PAGE_LOAD_TIMEOUT });
+    services.logger.info('✅ Page loaded');
+
+    // Products tabs
+    services.logger.info('🔍 Looking for Products tab...');
+    const productsTab = page.locator('#productSearchTab[role="tab"][href="#products"]');
+    await productsTab.waitFor({ timeout: IB_CONFIG.ELEMENT_TIMEOUT });
+    services.logger.info('✅ Found Products tab');
+    services.logger.info('🖱️ Clicking Products tab...');
+    await productsTab.click();
+    services.logger.info('✅ Products tab clicked');
+
+    // New Products Checkbox
+    services.logger.info('🔍 Looking for "New Products Only" radio button...');
+    const radioButton = page.locator('span.checkbox-text:has-text("New Products Only")');
+    await radioButton.waitFor({ timeout: IB_CONFIG.ELEMENT_TIMEOUT });
+    services.logger.info(`🎯 Found "New Products Only" radio button`);
+    await radioButton.first().click();
+    services.logger.info('✅ "New Products Only" radio button clicked');
+
+    // Wait for and return headers immediately when captured
+    services.logger.info('⏳ Waiting for headers to be captured...');
+    const headers = await headersPromise;
+    page.close();
+    if (headers) {
+      services.logger.info('✅ Headers captured successfully');
+    } else {
+      services.logger.warn('⚠️ No headers were captured');
+    }
+
+    return headers;
+  } catch (error) {
+    services.logger.error('Failed to fetch IB symbol summary', { error });
+    return;
+  }
+}
@@ -1,17 +1,16 @@
-/**
- * IB Symbols Operations - Fetching symbol data from IB API
- */
-import { getMongoDBClient } from '@stock-bot/mongodb-client';
-import { OperationContext } from '@stock-bot/utils';
+import type { IServiceContainer } from '@stock-bot/handlers';
 
 import { IB_CONFIG } from '../shared/config';
+import { fetchSession } from './fetch-session.action';
 
-// Fetch symbols from IB using the session headers
-export async function fetchSymbols(sessionHeaders: Record<string, string>): Promise<unknown[] | null> {
-  const ctx = OperationContext.create('ib', 'symbols');
-
+export async function fetchSymbols(services: IServiceContainer): Promise<unknown[] | null> {
   try {
-    ctx.logger.info('🔍 Fetching symbols with session headers...');
+    // First get session headers
+    const sessionHeaders = await fetchSession(services);
+    if (!sessionHeaders) {
+      throw new Error('Failed to get session headers');
+    }
+
+    services.logger.info('🔍 Fetching symbols with session headers...');
 
     // Prepare headers - include all session headers plus any additional ones
     const requestHeaders = {
@@ -39,18 +38,15 @@ export async function fetchSymbols(sessionHeaders: Record<string, string>): Prom
     };
 
     // Get Summary
-    const summaryResponse = await fetch(
-      IB_CONFIG.BASE_URL + IB_CONFIG.SUMMARY_API,
-      {
-        method: 'POST',
-        headers: requestHeaders,
-        proxy: IB_CONFIG.DEFAULT_PROXY,
-        body: JSON.stringify(requestBody),
-      }
-    );
+    const summaryResponse = await fetch(IB_CONFIG.BASE_URL + IB_CONFIG.SUMMARY_API, {
+      method: 'POST',
+      headers: requestHeaders,
+      proxy: IB_CONFIG.DEFAULT_PROXY,
+      body: JSON.stringify(requestBody),
+    });
 
     if (!summaryResponse.ok) {
-      ctx.logger.error('❌ Summary API request failed', {
+      services.logger.error('❌ Summary API request failed', {
       status: summaryResponse.status,
       statusText: summaryResponse.statusText,
       });
@@ -58,36 +54,33 @@ export async function fetchSymbols(sessionHeaders: Record<string, string>): Prom
     }
 
     const summaryData = await summaryResponse.json();
-    ctx.logger.info('✅ IB Summary data fetched successfully', {
+    services.logger.info('✅ IB Summary data fetched successfully', {
       totalCount: summaryData[0].totalCount,
     });
 
     const symbols = [];
     requestBody.pageSize = IB_CONFIG.PAGE_SIZE;
     const pageCount = Math.ceil(summaryData[0].totalCount / IB_CONFIG.PAGE_SIZE) || 0;
-    ctx.logger.info('Fetching Symbols for IB', { pageCount });
+    services.logger.info('Fetching Symbols for IB', { pageCount });
 
     const symbolPromises = [];
     for (let page = 1; page <= pageCount; page++) {
       requestBody.pageNumber = page;
 
       // Fetch symbols for the current page
-      const symbolsResponse = fetch(
-        IB_CONFIG.BASE_URL + IB_CONFIG.PRODUCTS_API,
-        {
-          method: 'POST',
-          headers: requestHeaders,
-          proxy: IB_CONFIG.DEFAULT_PROXY,
-          body: JSON.stringify(requestBody),
-        }
-      );
+      const symbolsResponse = fetch(IB_CONFIG.BASE_URL + IB_CONFIG.PRODUCTS_API, {
+        method: 'POST',
+        headers: requestHeaders,
+        proxy: IB_CONFIG.DEFAULT_PROXY,
+        body: JSON.stringify(requestBody),
+      });
       symbolPromises.push(symbolsResponse);
     }
 
     const responses = await Promise.all(symbolPromises);
     for (const response of responses) {
       if (!response.ok) {
-        ctx.logger.error('❌ Symbols API request failed', {
+        services.logger.error('❌ Symbols API request failed', {
          status: response.status,
          statusText: response.statusText,
        });
@@ -98,28 +91,29 @@ export async function fetchSymbols(sessionHeaders: Record<string, string>): Prom
       if (symJson && symJson.length > 0) {
         symbols.push(...symJson);
       } else {
-        ctx.logger.warn('⚠️ No symbols found in response');
+        services.logger.warn('⚠️ No symbols found in response');
         continue;
       }
     }
 
     if (symbols.length === 0) {
-      ctx.logger.warn('⚠️ No symbols fetched from IB');
+      services.logger.warn('⚠️ No symbols fetched from IB');
       return null;
     }
 
-    ctx.logger.info('✅ IB symbols fetched successfully, saving to DB...', {
+    services.logger.info('✅ IB symbols fetched successfully, saving to DB...', {
       totalSymbols: symbols.length,
     });
-    const client = getMongoDBClient();
-    await client.batchUpsert('ib_symbols', symbols, ['symbol', 'exchangeId']);
-    ctx.logger.info('Saved IB symbols to DB', {
+    await services.mongodb.batchUpsert('ib_symbols', symbols, ['symbol', 'exchangeId']);
+    services.logger.info('Saved IB symbols to DB', {
       totalSymbols: symbols.length,
     });
 
     return symbols;
   } catch (error) {
-    ctx.logger.error('❌ Failed to fetch symbols', { error });
+    services.logger.error('❌ Failed to fetch symbols', { error });
     return null;
   }
 }
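The pagination above (`Math.ceil(summaryData[0].totalCount / IB_CONFIG.PAGE_SIZE) || 0`) can be isolated as a small sketch to make its edge cases explicit; the function name here is illustrative, not part of the codebase:

```typescript
// Pages needed to cover totalCount rows at pageSize rows per page.
// Mirrors the diff's `Math.ceil(totalCount / PAGE_SIZE) || 0`, which also
// yields 0 when totalCount is 0 or missing.
function pageCountFor(totalCount: number | undefined, pageSize: number): number {
  return Math.ceil((totalCount ?? 0) / pageSize) || 0;
}

console.log(pageCountFor(1234, 100)); // 13: twelve full pages plus one partial
console.log(pageCountFor(0, 100)); // 0
```

The `|| 0` guard is what keeps the page loop from running at all when the summary reports no rows.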
@@ -0,0 +1,5 @@
+export { fetchSession } from './fetch-session.action';
+export { fetchExchanges } from './fetch-exchanges.action';
+export { fetchSymbols } from './fetch-symbols.action';
+export { fetchExchangesAndSymbols } from './fetch-exchanges-and-symbols.action';
apps/stock/data-ingestion/src/handlers/ib/ib.handler.ts (new file, 42 lines)
@@ -0,0 +1,42 @@
+import {
+  BaseHandler,
+  Handler,
+  Operation,
+  ScheduledOperation,
+  type IServiceContainer,
+} from '@stock-bot/handlers';
+import { fetchExchanges, fetchExchangesAndSymbols, fetchSession, fetchSymbols } from './actions';
+
+@Handler('ib')
+export class IbHandler extends BaseHandler {
+  constructor(services: IServiceContainer) {
+    super(services);
+  }
+
+  @Operation('fetch-session')
+  async fetchSession(): Promise<Record<string, string> | undefined> {
+    return fetchSession(this);
+  }
+
+  @Operation('fetch-exchanges')
+  async fetchExchanges(): Promise<unknown[] | null> {
+    return fetchExchanges(this);
+  }
+
+  @Operation('fetch-symbols')
+  async fetchSymbols(): Promise<unknown[] | null> {
+    return fetchSymbols(this);
+  }
+
+  @Operation('ib-exchanges-and-symbols')
+  @ScheduledOperation('ib-exchanges-and-symbols', '0 0 * * 0', {
+    priority: 5,
+    description: 'Fetch and update IB exchanges and symbols data',
+    immediately: false,
+  })
+  async fetchExchangesAndSymbols(): Promise<unknown> {
+    return fetchExchangesAndSymbols(this);
+  }
+}
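The `@Handler('ib')` / `@Operation('…')` decorators above presumably record each method in a registry that the queue worker later dispatches against by handler and operation name. A minimal sketch of that lookup pattern, with hypothetical names (`registerOperation`, `dispatch`) standing in for the real `@stock-bot/handlers` internals:

```typescript
// Hypothetical sketch of a decorator-style operation registry; the actual
// @stock-bot/handlers API is not shown in this PR, so all names here are illustrative.
type OperationFn = (payload?: unknown) => Promise<unknown> | unknown;

// handler name -> (operation name -> implementation)
const registry = new Map<string, Map<string, OperationFn>>();

function registerOperation(handler: string, operation: string, fn: OperationFn): void {
  if (!registry.has(handler)) registry.set(handler, new Map());
  registry.get(handler)!.set(operation, fn);
}

// What a queue worker would do with a job shaped like { handler, operation, payload }.
async function dispatch(handler: string, operation: string, payload?: unknown): Promise<unknown> {
  const fn = registry.get(handler)?.get(operation);
  if (!fn) throw new Error(`Unknown operation ${handler}:${operation}`);
  return fn(payload);
}

registerOperation('ib', 'fetch-exchanges', async () => ['NYSE', 'NASDAQ']);

dispatch('ib', 'fetch-exchanges').then(result => console.log(result));
```

A decorator would simply call something like `registerOperation` with the method it wraps, which is why each `@Operation` name must be unique within its handler.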
@@ -21,3 +21,4 @@ export const IB_CONFIG = {
   PRODUCT_COUNTRIES: ['CA', 'US'],
   PRODUCT_TYPES: ['STK'],
 };
61
apps/stock/data-ingestion/src/handlers/index.ts
Normal file
61
apps/stock/data-ingestion/src/handlers/index.ts
Normal file
|
|
@ -0,0 +1,61 @@
|
||||||
|
/**
|
||||||
|
* Handler auto-registration
|
||||||
|
* Automatically discovers and registers all handlers
|
||||||
|
*/
|
||||||
|
|
||||||
|
import type { IServiceContainer } from '@stock-bot/handlers';
|
||||||
|
import { autoRegisterHandlers } from '@stock-bot/handlers';
|
||||||
|
import { getLogger } from '@stock-bot/logger';
|
||||||
|
// Import handlers for bundling (ensures they're included in the build)
|
||||||
|
import './ceo/ceo.handler';
|
||||||
|
import './ib/ib.handler';
|
||||||
|
import './proxy/proxy.handler';
|
||||||
|
import './qm/qm.handler';
|
||||||
|
import './webshare/webshare.handler';
|
||||||
|
|
||||||
|
// Add more handler imports as needed
|
||||||
|
|
||||||
|
const logger = getLogger('handler-init');
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Initialize and register all handlers automatically
|
||||||
|
*/
|
||||||
|
export async function initializeAllHandlers(serviceContainer: IServiceContainer): Promise<void> {
|
||||||
|
try {
|
||||||
|
// Auto-register all handlers in this directory
|
||||||
|
const result = await autoRegisterHandlers(__dirname, serviceContainer, {
|
||||||
|
pattern: '.handler.',
|
||||||
|
exclude: ['test', 'spec'],
|
||||||
|
dryRun: false,
|
||||||
|
serviceName: 'data-ingestion',
|
||||||
|
});
|
||||||
|
|
||||||
|
logger.info('Handler auto-registration complete', {
|
||||||
|
registered: result.registered,
|
||||||
|
failed: result.failed,
|
||||||
|
});
|
||||||
|
|
||||||
|
if (result.failed.length > 0) {
|
||||||
|
logger.error('Some handlers failed to register', { failed: result.failed });
|
||||||
|
}
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('Handler auto-registration failed', { error });
|
||||||
|
// Fall back to manual registration
|
||||||
|
await manualHandlerRegistration(serviceContainer);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Manual fallback registration
|
||||||
|
*/
|
||||||
|
async function manualHandlerRegistration(_serviceContainer: any): Promise<void> {
|
||||||
|
logger.warn('Falling back to manual handler registration');
|
||||||
|
|
||||||
|
try {
|
||||||
|
|
||||||
|
logger.info('Manual handler registration complete');
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('Manual handler registration failed', { error });
|
||||||
|
throw error;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
@@ -1,27 +1,22 @@
 /**
  * Proxy Check Operations - Checking proxy functionality
  */
-import { HttpClient, ProxyInfo } from '@stock-bot/http';
-import { OperationContext } from '@stock-bot/utils';
+import type { OperationContext } from '@stock-bot/di';
+import { getLogger } from '@stock-bot/logger';
+import type { ProxyInfo } from '@stock-bot/proxy';
+import { fetch } from '@stock-bot/utils';
 import { PROXY_CONFIG } from '../shared/config';
-import { ProxyStatsManager } from '../shared/proxy-manager';
-
-// Shared HTTP client
-let httpClient: HttpClient;
-
-function getHttpClient(ctx: OperationContext): HttpClient {
-  if (!httpClient) {
-    httpClient = new HttpClient({ timeout: 10000 }, ctx.logger);
-  }
-  return httpClient;
-}
 
 /**
  * Check if a proxy is working
  */
 export async function checkProxy(proxy: ProxyInfo): Promise<ProxyInfo> {
-  const ctx = OperationContext.create('proxy', 'check');
+  const ctx = {
+    logger: getLogger('proxy-check'),
+    resolve: (_name: string) => {
+      throw new Error(`Service container not available for proxy operations`);
+    },
+  } as any;
 
   let success = false;
   ctx.logger.debug(`Checking Proxy:`, {
@@ -31,22 +26,28 @@ export async function checkProxy(proxy: ProxyInfo): Promise<ProxyInfo> {
   });
 
   try {
-    // Test the proxy
-    const client = getHttpClient(ctx);
-    const response = await client.get(PROXY_CONFIG.CHECK_URL, {
-      proxy,
-      timeout: PROXY_CONFIG.CHECK_TIMEOUT,
-    });
+    // Test the proxy using fetch with proxy support
+    const proxyUrl =
+      proxy.username && proxy.password
+        ? `${proxy.protocol}://${encodeURIComponent(proxy.username)}:${encodeURIComponent(proxy.password)}@${proxy.host}:${proxy.port}`
+        : `${proxy.protocol}://${proxy.host}:${proxy.port}`;
 
-    const isWorking = response.status >= 200 && response.status < 300;
+    const response = await fetch(PROXY_CONFIG.CHECK_URL, {
+      proxy: proxyUrl,
+      signal: AbortSignal.timeout(PROXY_CONFIG.CHECK_TIMEOUT),
+      logger: ctx.logger,
+    } as any);
+
+    const data = await response.text();
+
+    const isWorking = response.ok;
     const result: ProxyInfo = {
       ...proxy,
       isWorking,
       lastChecked: new Date(),
-      responseTime: response.responseTime,
     };
 
-    if (isWorking && !JSON.stringify(response.data).includes(PROXY_CONFIG.CHECK_IP)) {
+    if (isWorking && !data.includes(PROXY_CONFIG.CHECK_IP)) {
       success = true;
       await updateProxyInCache(result, true, ctx);
     } else {
@@ -93,11 +94,17 @@ export async function checkProxy(proxy: ProxyInfo): Promise<ProxyInfo> {
 /**
  * Update proxy data in cache with working/total stats and average response time
  */
-async function updateProxyInCache(proxy: ProxyInfo, isWorking: boolean, ctx: OperationContext): Promise<void> {
-  const cacheKey = `${PROXY_CONFIG.CACHE_KEY}:${proxy.protocol}://${proxy.host}:${proxy.port}`;
+async function updateProxyInCache(
+  proxy: ProxyInfo,
+  isWorking: boolean,
+  ctx: OperationContext
+): Promise<void> {
+  // const _cacheKey = `${PROXY_CONFIG.CACHE_KEY}:${proxy.protocol}://${proxy.host}:${proxy.port}`;
 
   try {
-    const existing: ProxyInfo | null = await ctx.cache.get(cacheKey);
+    // For now, skip cache operations without service container
+    // TODO: Pass service container to operations
+    const existing: ProxyInfo | null = null;
 
     // For failed proxies, only update if they already exist
     if (!isWorking && !existing) {
@@ -140,8 +147,9 @@ async function updateProxyInCache(proxy: ProxyInfo, isWorking: boolean, ctx: Ope
     updated.successRate = updated.total > 0 ? (updated.working / updated.total) * 100 : 0;
 
     // Save to cache: reset TTL for working proxies, keep existing TTL for failed ones
-    const cacheOptions = isWorking ? { ttl: PROXY_CONFIG.CACHE_TTL } : undefined;
-    await ctx.cache.set(cacheKey, updated, cacheOptions);
+    // const _cacheOptions = isWorking ? { ttl: PROXY_CONFIG.CACHE_TTL } : undefined;
+    // Skip cache operations without service container
+    // TODO: Pass service container to operations
 
     ctx.logger.debug(`Updated ${isWorking ? 'working' : 'failed'} proxy in cache`, {
       proxy: `${proxy.host}:${proxy.port}`,
@@ -161,15 +169,8 @@ async function updateProxyInCache(proxy: ProxyInfo, isWorking: boolean, ctx: Ope
 }
 
 function updateProxyStats(sourceId: string, success: boolean, ctx: OperationContext) {
-  const statsManager = ProxyStatsManager.getInstance();
-  const source = statsManager.updateSourceStats(sourceId, success);
-
-  if (!source) {
-    ctx.logger.warn(`Unknown proxy source: ${sourceId}`);
-    return;
-  }
-
-  // Cache the updated stats
-  ctx.cache.set(`${PROXY_CONFIG.CACHE_STATS_KEY}:${source.id}`, source, { ttl: PROXY_CONFIG.CACHE_TTL })
-    .catch(error => ctx.logger.debug('Failed to cache proxy stats', { error }));
+  // Stats are now handled by the global ProxyManager
+  ctx.logger.debug('Proxy check result', { sourceId, success });
+
+  // TODO: Integrate with global ProxyManager stats if needed
 }
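The proxy URL built inline in `checkProxy` above is easy to get wrong; isolated as a helper (name illustrative), the key point is that `encodeURIComponent` keeps reserved characters like `@` and `:` inside the credentials from corrupting the URL:

```typescript
// Same construction as in checkProxy: credentials are percent-encoded so
// reserved characters ('@', ':') in username/password cannot break the URL.
// ProxyLike is a local stand-in for the ProxyInfo fields the helper needs.
interface ProxyLike {
  protocol: string;
  host: string;
  port: number;
  username?: string;
  password?: string;
}

function buildProxyUrl(p: ProxyLike): string {
  return p.username && p.password
    ? `${p.protocol}://${encodeURIComponent(p.username)}:${encodeURIComponent(p.password)}@${p.host}:${p.port}`
    : `${p.protocol}://${p.host}:${p.port}`;
}

console.log(buildProxyUrl({ protocol: 'http', host: '10.0.0.1', port: 8080, username: 'u@ser', password: 'p:w' }));
// → http://u%40ser:p%3Aw@10.0.0.1:8080
```

Without the encoding, a password containing `@` would make everything after it parse as the proxy host.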
@@ -1,28 +1,20 @@
 /**
  * Proxy Fetch Operations - Fetching proxies from sources
  */
-import { HttpClient, ProxyInfo } from '@stock-bot/http';
-import { OperationContext } from '@stock-bot/utils';
+import type { ProxyInfo } from '@stock-bot/proxy';
+import { OperationContext } from '@stock-bot/di';
+import { getLogger } from '@stock-bot/logger';
+import { fetch } from '@stock-bot/utils';
 
 import { PROXY_CONFIG } from '../shared/config';
-import { ProxyStatsManager } from '../shared/proxy-manager';
 import type { ProxySource } from '../shared/types';
 
-// Shared HTTP client
-let httpClient: HttpClient;
-
-function getHttpClient(ctx: OperationContext): HttpClient {
-  if (!httpClient) {
-    httpClient = new HttpClient({ timeout: 10000 }, ctx.logger);
-  }
-  return httpClient;
-}
-
 export async function fetchProxiesFromSources(): Promise<ProxyInfo[]> {
-  const ctx = OperationContext.create('proxy', 'fetch-sources');
+  const ctx = {
+    logger: getLogger('proxy-fetch')
+  } as any;
 
-  const statsManager = ProxyStatsManager.getInstance();
-  statsManager.resetStats();
+  ctx.logger.info('Starting proxy fetch from sources');
 
   const fetchPromises = PROXY_CONFIG.PROXY_SOURCES.map(source => fetchProxiesFromSource(source, ctx));
   const results = await Promise.all(fetchPromises);
@@ -43,17 +35,17 @@ export async function fetchProxiesFromSource(source: ProxySource, ctx?: Operatio
   try {
     ctx.logger.info(`Fetching proxies from ${source.url}`);
 
-    const client = getHttpClient(ctx);
-    const response = await client.get(source.url, {
-      timeout: 10000,
-    });
+    const response = await fetch(source.url, {
+      signal: AbortSignal.timeout(10000),
+      logger: ctx.logger
+    } as any);
 
-    if (response.status !== 200) {
+    if (!response.ok) {
       ctx.logger.warn(`Failed to fetch from ${source.url}: ${response.status}`);
       return [];
     }
 
-    const text = response.data;
+    const text = await response.text();
     const lines = text.split('\n').filter((line: string) => line.trim());
 
     for (const line of lines) {
@@ -68,7 +60,7 @@ export async function fetchProxiesFromSource(source: ProxySource, ctx?: Operatio
       if (parts.length >= 2) {
         const proxy: ProxyInfo = {
           source: source.id,
-          protocol: source.protocol as 'http' | 'https' | 'socks4' | 'socks5',
+          protocol: source.protocol as 'http' | 'https',
           host: parts[0],
           port: parseInt(parts[1]),
         };
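The parsing loop in `fetchProxiesFromSource` splits each non-empty line on `:` into host and port. Pulled out as a pure function (name illustrative) it becomes easy to unit-test; note this sketch also guards against a non-numeric port, which the original `parseInt(parts[1])` does not:

```typescript
// Sketch of the proxy-list parsing: each line of the source response is
// expected to be "host:port". The ProxyEntry shape mirrors the fields the
// diff's ProxyInfo populates; names here are illustrative.
interface ProxyEntry {
  source: string;
  protocol: 'http' | 'https';
  host: string;
  port: number;
}

function parseProxyList(text: string, sourceId: string, protocol: 'http' | 'https'): ProxyEntry[] {
  return text
    .split('\n')
    .map(line => line.trim())
    .filter(line => line.length > 0)
    .flatMap(line => {
      const parts = line.split(':');
      if (parts.length < 2) return []; // skip malformed lines
      const port = parseInt(parts[1], 10);
      if (Number.isNaN(port)) return []; // extra guard the original omits
      return [{ source: sourceId, protocol, host: parts[0], port }];
    });
}

console.log(parseProxyList('1.2.3.4:8080\n\nbad-line\n5.6.7.8:3128\n', 'free-list', 'http').length);
// → 2 (the blank line and "bad-line" are skipped)
```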
@@ -1,9 +1,8 @@
 /**
  * Proxy Query Operations - Getting active proxies from cache
  */
-import { ProxyInfo } from '@stock-bot/http';
-import { OperationContext } from '@stock-bot/utils';
+import { OperationContext } from '@stock-bot/di';
+import type { ProxyInfo } from '@stock-bot/proxy';
 
 import { PROXY_CONFIG } from '../shared/config';
 
 /**
@@ -56,7 +55,10 @@ export async function getRandomActiveProxy(
       return proxyData;
     }
   } catch (error) {
-    ctx.logger.debug('Error reading proxy from cache', { key, error: (error as Error).message });
+    ctx.logger.debug('Error reading proxy from cache', {
+      key,
+      error: (error as Error).message,
+    });
     continue;
   }
 }
@@ -1,14 +1,18 @@
 /**
  * Proxy Queue Operations - Queueing proxy operations
  */
-import { ProxyInfo } from '@stock-bot/http';
-import { QueueManager } from '@stock-bot/queue';
-import { OperationContext } from '@stock-bot/utils';
+import { OperationContext } from '@stock-bot/di';
+import type { ProxyInfo } from '@stock-bot/proxy';
+import type { IServiceContainer } from '@stock-bot/handlers';
 
-export async function queueProxyFetch(): Promise<string> {
+export async function queueProxyFetch(container: IServiceContainer): Promise<string> {
   const ctx = OperationContext.create('proxy', 'queue-fetch');
 
-  const queueManager = QueueManager.getInstance();
+  const queueManager = container.queue;
+  if (!queueManager) {
+    throw new Error('Queue manager not available');
+  }
 
   const queue = queueManager.getQueue('proxy');
   const job = await queue.add('proxy-fetch', {
     handler: 'proxy',
@@ -22,10 +26,14 @@ export async function queueProxyFetch(): Promise<string> {
   return jobId;
 }
 
-export async function queueProxyCheck(proxies: ProxyInfo[]): Promise<string> {
+export async function queueProxyCheck(proxies: ProxyInfo[], container: IServiceContainer): Promise<string> {
   const ctx = OperationContext.create('proxy', 'queue-check');
 
-  const queueManager = QueueManager.getInstance();
+  const queueManager = container.queue;
+  if (!queueManager) {
+    throw new Error('Queue manager not available');
+  }
 
   const queue = queueManager.getQueue('proxy');
   const job = await queue.add('proxy-check', {
     handler: 'proxy',
@ -0,0 +1,86 @@
|
||||||
|
import {
|
||||||
|
BaseHandler,
|
||||||
|
  Handler,
  Operation,
  ScheduledOperation,
  type IServiceContainer,
} from '@stock-bot/handlers';
import type { ProxyInfo } from '@stock-bot/proxy';
import { processItems } from '@stock-bot/queue';
import { fetchProxiesFromSources } from './operations/fetch.operations';
import { checkProxy } from './operations/check.operations';

@Handler('proxy')
export class ProxyHandler extends BaseHandler {
  constructor(services: IServiceContainer) {
    super(services);
  }

  @Operation('fetch-from-sources')
  @ScheduledOperation('proxy-fetch-and-check', '0 0 * * 0', {
    priority: 0,
    description: 'Fetch and validate proxy list from sources',
    // immediately: true, // Don't run immediately during startup to avoid conflicts
  })
  async fetchFromSources(): Promise<{
    processed: number;
    jobsCreated: number;
    batchesCreated?: number;
    mode: string;
  }> {
    // Fetch proxies from all configured sources
    this.logger.info('Processing fetch proxies from sources request');

    const proxies = await fetchProxiesFromSources();
    this.logger.info('Fetched proxies from sources', { count: proxies.length });

    if (proxies.length === 0) {
      this.logger.warn('No proxies fetched from sources');
      return { processed: 0, jobsCreated: 0, mode: 'direct' };
    }

    // Get QueueManager from service container
    const queueManager = this.queue;
    if (!queueManager) {
      throw new Error('Queue manager not available');
    }

    // Batch process the proxies through the check-proxy operation
    const batchResult = await processItems(proxies, 'proxy', {
      handler: 'proxy',
      operation: 'check-proxy',
      totalDelayHours: 0.083, // 5 minutes (5/60 hours)
      batchSize: 50, // Process 50 proxies per batch
      priority: 3,
      useBatching: true,
      retries: 1,
      ttl: 30000, // 30-second timeout per proxy check
      removeOnComplete: 5,
      removeOnFail: 3,
    }, queueManager);

    this.logger.info('Batch proxy validation completed', {
      totalProxies: proxies.length,
      jobsCreated: batchResult.jobsCreated,
      mode: batchResult.mode,
      batchesCreated: batchResult.batchesCreated,
      duration: `${batchResult.duration}ms`,
    });

    return {
      processed: proxies.length,
      jobsCreated: batchResult.jobsCreated,
      batchesCreated: batchResult.batchesCreated,
      mode: batchResult.mode,
    };
  }

  @Operation('check-proxy')
  async checkProxyOperation(payload: ProxyInfo): Promise<unknown> {
    // payload is the raw proxy info object
    this.logger.debug('Processing proxy check request', {
      proxy: `${payload.host}:${payload.port}`,
    });
    return checkProxy(payload);
  }
}
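The `batchSize` / `totalDelayHours` options passed to `processItems` above can be sketched as follows. This is an illustrative model only: `splitIntoBatches` and `batchDelayMs` are hypothetical helpers, not the actual `@stock-bot/queue` internals, which this diff does not show.

```typescript
// Hypothetical sketch of how the batching options could translate into jobs.
// These helpers are illustrative and not part of the @stock-bot/queue API.
function splitIntoBatches<T>(items: T[], batchSize: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

function batchDelayMs(totalDelayHours: number, batchCount: number): number {
  // Spread the total delay window evenly across all batches.
  if (batchCount <= 1) return 0;
  return Math.round((totalDelayHours * 3600 * 1000) / batchCount);
}

// With 120 proxies, batchSize 50 and totalDelayHours 0.083 (~5 minutes):
const proxies = Array.from({ length: 120 }, (_, i) => `10.0.0.${i}:8080`);
const batches = splitIntoBatches(proxies, 50);
console.log(batches.length); // → 3 batches (50 + 50 + 20)
console.log(batchDelayMs(0.083, batches.length)); // → 99600 ms between batches
```

Under this reading, a run of 120 proxies produces ceil(120 / 50) = 3 `check-proxy` batch jobs spaced roughly 100 seconds apart within the 5-minute window.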
@@ -0,0 +1,19 @@
/**
 * QM Exchanges Operations - Simple exchange data fetching
 */

import type { IServiceContainer } from '@stock-bot/handlers';

export async function fetchExchanges(services: IServiceContainer): Promise<any[]> {
  // Get exchanges from MongoDB
  const exchanges = await services.mongodb.collection('qm_exchanges').find({}).toArray();

  return exchanges;
}

export async function getExchangeByCode(services: IServiceContainer, code: string): Promise<any> {
  // Get a specific exchange by code
  const exchange = await services.mongodb.collection('qm_exchanges').findOne({ code });

  return exchange;
}
@@ -0,0 +1,72 @@
/**
 * QM Session Actions - Session management and creation
 */

import { BaseHandler } from '@stock-bot/handlers';
import { QM_SESSION_IDS, SESSION_CONFIG } from '../shared/config';
import { QMSessionManager } from '../shared/session-manager';

/**
 * Check existing sessions and queue creation jobs for needed sessions
 */
export async function checkSessions(handler: BaseHandler): Promise<{
  cleaned: number;
  queued: number;
  message: string;
}> {
  const sessionManager = QMSessionManager.getInstance();
  const cleanedCount = sessionManager.cleanupFailedSessions();

  // Check which session IDs need more sessions and queue creation jobs
  let queuedCount = 0;
  for (const [sessionType, sessionId] of Object.entries(QM_SESSION_IDS)) {
    handler.logger.debug(`Checking session ID: ${sessionId}`);
    if (sessionManager.needsMoreSessions(sessionId)) {
      const currentCount = sessionManager.getSessions(sessionId).length;
      const neededSessions = SESSION_CONFIG.MAX_SESSIONS - currentCount;
      for (let i = 0; i < neededSessions; i++) {
        await handler.scheduleOperation('create-session', { sessionId, sessionType });
        handler.logger.info(`Queued job to create session for ${sessionType}`);
        queuedCount++;
      }
    }
  }

  return {
    cleaned: cleanedCount,
    queued: queuedCount,
    message: `Session check completed: cleaned ${cleanedCount}, queued ${queuedCount}`,
  };
}

/**
 * Create a single session for a specific session ID
 */
export async function createSingleSession(
  handler: BaseHandler,
  input: any
): Promise<{ sessionId: string; status: string; sessionType: string }> {
  const { sessionId: _sessionId, sessionType } = input || {};
  const _sessionManager = QMSessionManager.getInstance();

  // Get a proxy from the proxy service
  const _proxyString = handler.proxy.getProxy();

  // const session = {
  //   proxy: proxyString || 'http://proxy:8080',
  //   headers: sessionManager.getQmHeaders(),
  //   successfulCalls: 0,
  //   failedCalls: 0,
  //   lastUsed: new Date()
  // };

  handler.logger.info(`Creating session for ${sessionType}`);

  // Add session to manager
  // sessionManager.addSession(sessionType, session);

  return {
    sessionId: sessionType,
    status: 'created',
    sessionType,
  };
}
@@ -0,0 +1,33 @@
/**
 * QM Spider Operations - Simple symbol discovery
 */

import type { IServiceContainer } from '@stock-bot/handlers';
import type { SymbolSpiderJob } from '../shared/types';

export async function spiderSymbolSearch(
  services: IServiceContainer,
  config: SymbolSpiderJob
): Promise<{ foundSymbols: number; depth: number }> {
  // Simple spider implementation
  // TODO: Implement actual API calls to discover symbols

  // For now, just return mock results
  const foundSymbols = Math.floor(Math.random() * 10) + 1;

  return {
    foundSymbols,
    depth: config.depth,
  };
}

export async function queueSymbolDiscovery(
  services: IServiceContainer,
  searchTerms: string[]
): Promise<void> {
  // Queue symbol discovery jobs
  for (const term of searchTerms) {
    // TODO: Queue actual discovery jobs
    await services.cache.set(`discovery:${term}`, { queued: true }, 3600);
  }
}
@@ -0,0 +1,19 @@
/**
 * QM Symbols Operations - Simple symbol fetching
 */

import type { IServiceContainer } from '@stock-bot/handlers';

export async function searchSymbols(services: IServiceContainer): Promise<any[]> {
  // Get symbols from MongoDB
  const symbols = await services.mongodb.collection('qm_symbols').find({}).limit(50).toArray();

  return symbols;
}

export async function fetchSymbolData(services: IServiceContainer, symbol: string): Promise<any> {
  // Fetch data for a specific symbol
  const symbolData = await services.mongodb.collection('qm_symbols').findOne({ symbol });

  return symbolData;
}
103  apps/stock/data-ingestion/src/handlers/qm/qm.handler.ts  Normal file
@@ -0,0 +1,103 @@
import { BaseHandler, Handler, type IServiceContainer } from '@stock-bot/handlers';

@Handler('qm')
export class QMHandler extends BaseHandler {
  constructor(services: IServiceContainer) {
    super(services); // Handler name read from @Handler decorator
  }

  // @Operation('check-sessions')
  // @QueueSchedule('0 */15 * * *', {
  //   priority: 7,
  //   immediately: true,
  //   description: 'Check and maintain QM sessions'
  // })
  // async checkSessions(input: unknown, context: ExecutionContext): Promise<unknown> {
  //   // Call the session maintenance action
  //   const { checkSessions } = await import('./actions/session.action');
  //   return await checkSessions(this);
  // }

  // @Operation('create-session')
  // async createSession(input: unknown, context: ExecutionContext): Promise<unknown> {
  //   // Call the individual session creation action
  //   const { createSingleSession } = await import('./actions/session.action');
  //   return await createSingleSession(this, input);
  // }

  // @Operation('search-symbols')
  // async searchSymbols(_input: unknown, _context: ExecutionContext): Promise<unknown> {
  //   this.logger.info('Searching QM symbols with new DI pattern...');
  //   try {
  //     // Check existing symbols in MongoDB
  //     const symbolsCollection = this.mongodb.collection('qm_symbols');
  //     const symbols = await symbolsCollection.find({}).limit(100).toArray();

  //     this.logger.info('QM symbol search completed', { count: symbols.length });

  //     if (symbols && symbols.length > 0) {
  //       // Cache result for performance
  //       await this.cache.set('qm-symbols-sample', symbols.slice(0, 10), 1800);

  //       return {
  //         success: true,
  //         message: 'QM symbol search completed successfully',
  //         count: symbols.length,
  //         symbols: symbols.slice(0, 10), // Return first 10 symbols as sample
  //       };
  //     } else {
  //       // No symbols found - this is expected initially
  //       this.logger.info('No QM symbols found in database yet');
  //       return {
  //         success: true,
  //         message: 'No symbols found yet - database is empty',
  //         count: 0,
  //       };
  //     }
  //   } catch (error) {
  //     this.logger.error('Failed to search QM symbols', { error });
  //     throw error;
  //   }
  // }

  // @Operation('spider-symbol-search')
  // @QueueSchedule('0 0 * * 0', {
  //   priority: 10,
  //   immediately: false,
  //   description: 'Comprehensive symbol search using QM API'
  // })
  // async spiderSymbolSearch(payload: SymbolSpiderJob | undefined, context: ExecutionContext): Promise<unknown> {
  //   // Set default payload for scheduled runs
  //   const jobPayload: SymbolSpiderJob = payload || {
  //     prefix: null,
  //     depth: 1,
  //     source: 'qm',
  //     maxDepth: 4
  //   };

  //   this.logger.info('Starting QM spider symbol search', { payload: jobPayload });

  //   // Store spider job info in cache (temporary data)
  //   const spiderJobId = `spider:qm:${Date.now()}:${Math.random().toString(36).substr(2, 9)}`;
  //   const spiderResult = {
  //     payload: jobPayload,
  //     startTime: new Date().toISOString(),
  //     status: 'started',
  //     jobId: spiderJobId
  //   };

  //   // Store in cache with 1 hour TTL (temporary data)
  //   await this.cache.set(spiderJobId, spiderResult, 3600);
  //   this.logger.debug('Spider job stored in cache', { spiderJobId, ttl: 3600 });

  //   // Schedule follow-up processing if needed
  //   await this.scheduleOperation('search-symbols', { source: 'spider', spiderJobId }, { delay: 5000 });

  //   return {
  //     success: true,
  //     message: 'QM spider search initiated',
  //     spiderJobId
  //   };
  // }
}
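The handlers in this PR rely on `@Handler` / `@Operation` decorators to register classes and map operation names to methods, so queued jobs carrying `{ handler, operation, payload }` can be dispatched generically. A minimal sketch of how such a registry could work; the real `@stock-bot/handlers` decorators are not shown in this diff, so the registry and the `__operations` storage here are assumptions, and the decorators are applied as plain function calls to avoid requiring `experimentalDecorators`.

```typescript
// Illustrative decorator-style registry; not the real @stock-bot/handlers code.
const handlerRegistry = new Map<string, new (...args: any[]) => any>();

function Handler(name: string) {
  return (target: new (...args: any[]) => any): void => {
    handlerRegistry.set(name, target);
  };
}

function Operation(name: string) {
  return (proto: any, methodName: string): void => {
    // Map operation name -> method name on the prototype.
    proto.__operations = { ...(proto.__operations ?? {}), [name]: methodName };
  };
}

class DemoHandler {
  ping(): string {
    return 'pong';
  }
}
// Applied manually here; with experimentalDecorators these would be
// @Handler('demo') on the class and @Operation('ping') on the method.
Operation('ping')(DemoHandler.prototype, 'ping');
Handler('demo')(DemoHandler);

// Dispatch by handler + operation name, the way a queued job
// { handler: 'qm', operation: 'check-sessions' } would be routed:
function dispatch(handler: string, operation: string): unknown {
  const Ctor = handlerRegistry.get(handler)!;
  const instance = new Ctor();
  const methodName = (Ctor.prototype as any).__operations[operation];
  return instance[methodName]();
}

console.log(dispatch('demo', 'ping')); // → "pong"
```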
@@ -2,12 +2,10 @@
  * Shared configuration for QM operations
  */
-
-import { getRandomUserAgent } from '@stock-bot/http';
 
 // QM Session IDs for different endpoints
 export const QM_SESSION_IDS = {
   LOOKUP: 'dc8c9930437f65d30f6597768800957017bac203a0a50342932757c8dfa158d6', // lookup endpoint
   // '5ad521e05faf5778d567f6d0012ec34d6cdbaeb2462f41568f66558bc7b4ced9': [], //4488d072b
   // cc1cbdaf040f76db8f4c94f7d156b9b9b716e1a7509ec9c74a48a47f6b6b9f87: [], //97ff00cf3 // getQuotes
   // '74963ff42f1db2320d051762b5d3950ff9eab23f9d5c5b592551b4ca0441d086': [], //32ca24e394b // getSplitsBySymbol getBrokerRatingsBySymbol getDividendsBySymbol getEarningsSurprisesBySymbol getEarningsEventsBySymbol
   // '1e1d7cb1de1fd2fe52684abdea41a446919a5fe12776dfab88615ac1ce1ec2f6': [], //fb5721812d2c // getEnhancedQuotes getProfiles

@@ -28,8 +26,6 @@ export const QM_CONFIG = {
   BASE_URL: 'https://app.quotemedia.com',
   AUTH_PATH: '/auth/g/authenticate/dataTool/v0/500',
   LOOKUP_URL: 'https://app.quotemedia.com/datatool/lookup.json',
-  ORIGIN: 'https://www.quotemedia.com',
-  REFERER: 'https://www.quotemedia.com/',
 } as const;
 
 // Session management settings

@@ -40,17 +36,3 @@ export const SESSION_CONFIG = {
   SESSION_TIMEOUT: 10000, // 10 seconds
   API_TIMEOUT: 15000, // 15 seconds
 } as const;
-
-/**
- * Generate standard QM headers
- */
-export function getQmHeaders(): Record<string, string> {
-  return {
-    'User-Agent': getRandomUserAgent(),
-    Accept: '*/*',
-    'Accept-Language': 'en',
-    'Sec-Fetch-Mode': 'cors',
-    Origin: QM_CONFIG.ORIGIN,
-    Referer: QM_CONFIG.REFERER,
-  };
-}
@@ -2,8 +2,9 @@
  * QM Session Manager - Centralized session state management
  */
 
-import type { QMSession } from './types';
+import { getRandomUserAgent } from '@stock-bot/utils';
 import { QM_SESSION_IDS, SESSION_CONFIG } from './config';
+import type { QMSession } from './types';
 
 export class QMSessionManager {
   private static instance: QMSessionManager | null = null;

@@ -34,7 +35,9 @@ export class QMSessionManager {
   }
 
   // Filter out sessions with excessive failures
-  const validSessions = sessions.filter(session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS);
+  const validSessions = sessions.filter(
+    session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS
+  );
   if (validSessions.length === 0) {
     return null;
   }

@@ -83,12 +86,25 @@ export class QMSessionManager {
     return removedCount;
   }
 
+  getQmHeaders(): Record<string, string> {
+    return {
+      'User-Agent': getRandomUserAgent(),
+      Accept: '*/*',
+      'Accept-Language': 'en',
+      'Sec-Fetch-Mode': 'cors',
+      Origin: 'https://www.quotemedia.com',
+      Referer: 'https://www.quotemedia.com/',
+    };
+  }
+
   /**
    * Check if more sessions are needed for a session ID
    */
   needsMoreSessions(sessionId: string): boolean {
     const sessions = this.sessionCache[sessionId] || [];
-    const validSessions = sessions.filter(session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS);
+    const validSessions = sessions.filter(
+      session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS
+    );
     return validSessions.length < SESSION_CONFIG.MIN_SESSIONS;
   }

@@ -107,13 +123,17 @@ export class QMSessionManager {
     const stats: Record<string, { total: number; valid: number; failed: number }> = {};
 
     Object.entries(this.sessionCache).forEach(([sessionId, sessions]) => {
-      const validSessions = sessions.filter(session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS);
-      const failedSessions = sessions.filter(session => session.failedCalls > SESSION_CONFIG.MAX_FAILED_CALLS);
+      const validSessions = sessions.filter(
+        session => session.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS
+      );
+      const failedSessions = sessions.filter(
+        session => session.failedCalls > SESSION_CONFIG.MAX_FAILED_CALLS
+      );
 
       stats[sessionId] = {
         total: sessions.length,
         valid: validSessions.length,
-        failed: failedSessions.length
+        failed: failedSessions.length,
       };
     });
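The session-pool accounting used by `needsMoreSessions` above and by `checkSessions` in the session actions can be sketched as below. The `SESSION_CONFIG` values here are illustrative assumptions, not the real configuration:

```typescript
// Illustrative model of the QM session-pool accounting; SESSION_CONFIG
// values are assumptions for the sketch, not the real config.
interface Session {
  failedCalls: number;
}

const SESSION_CONFIG = { MAX_FAILED_CALLS: 3, MIN_SESSIONS: 2, MAX_SESSIONS: 5 };

function validSessions(sessions: Session[]): Session[] {
  // A session stays usable until it exceeds the failure budget.
  return sessions.filter(s => s.failedCalls <= SESSION_CONFIG.MAX_FAILED_CALLS);
}

function needsMoreSessions(sessions: Session[]): boolean {
  return validSessions(sessions).length < SESSION_CONFIG.MIN_SESSIONS;
}

function sessionsToQueue(sessions: Session[]): number {
  // checkSessions queues MAX_SESSIONS - currentCount creation jobs
  // whenever the pool dips below MIN_SESSIONS valid sessions.
  return needsMoreSessions(sessions)
    ? SESSION_CONFIG.MAX_SESSIONS - sessions.length
    : 0;
}

const pool: Session[] = [{ failedCalls: 0 }, { failedCalls: 4 }];
console.log(sessionsToQueue(pool)); // → 3 (1 valid < MIN_SESSIONS, so queue 5 - 2 = 3)
```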
@@ -0,0 +1,66 @@
import {
  BaseHandler,
  Handler,
  Operation,
  QueueSchedule,
  type ExecutionContext,
  type IServiceContainer,
} from '@stock-bot/handlers';

@Handler('webshare')
export class WebShareHandler extends BaseHandler {
  constructor(services: IServiceContainer) {
    super(services);
  }

  @Operation('fetch-proxies')
  @QueueSchedule('0 */6 * * *', { // every 6 hours
    priority: 3,
    immediately: false, // Don't run immediately since ProxyManager fetches on startup
    description: 'Refresh proxies from WebShare API',
  })
  async fetchProxies(_input: unknown, _context: ExecutionContext): Promise<unknown> {
    this.logger.info('Refreshing proxies from WebShare API');

    try {
      // Check if the proxy manager is available
      if (!this.proxy) {
        this.logger.warn('Proxy manager is not initialized, cannot refresh proxies');
        return {
          success: false,
          error: 'Proxy manager not initialized',
        };
      }

      // Use the proxy manager's refresh method
      await this.proxy.refreshProxies();

      // Get stats after refresh
      const stats = this.proxy.getStats();
      const lastFetchTime = this.proxy.getLastFetchTime();

      this.logger.info('Successfully refreshed proxies', {
        total: stats.total,
        working: stats.working,
        failed: stats.failed,
        lastFetchTime,
      });

      // Cache proxy stats for monitoring using the handler's cache methods
      await this.cacheSet('proxy-count', stats.total, 3600);
      await this.cacheSet('working-count', stats.working, 3600);
      await this.cacheSet('last-fetch', lastFetchTime?.toISOString() || 'unknown', 1800);

      return {
        success: true,
        proxiesUpdated: stats.total,
        workingProxies: stats.working,
        failedProxies: stats.failed,
        lastFetchTime,
      };
    } catch (error) {
      this.logger.error('Failed to refresh proxies', { error });
      throw error;
    }
  }
}
80  apps/stock/data-ingestion/src/index.ts  Normal file
@@ -0,0 +1,80 @@
/**
 * Data Ingestion Service
 * Simplified entry point using the ServiceApplication framework
 */

import { initializeStockConfig } from '@stock-bot/stock-config';
import { ServiceApplication } from '@stock-bot/di';
import { getLogger } from '@stock-bot/logger';

// Local imports
import { initializeAllHandlers } from './handlers';
import { createRoutes } from './routes/create-routes';

// Initialize configuration with service-specific overrides
const config = initializeStockConfig('dataIngestion');

// Log the full configuration
const logger = getLogger('data-ingestion');
logger.info('Service configuration:', config);

// Create service application
const app = new ServiceApplication(
  config,
  {
    serviceName: 'data-ingestion',
    enableHandlers: true,
    enableScheduledJobs: true,
    corsConfig: {
      origin: '*',
      allowMethods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS', 'PATCH'],
      allowHeaders: ['Content-Type', 'Authorization'],
      credentials: false,
    },
    serviceMetadata: {
      version: '1.0.0',
      description: 'Market data ingestion from multiple providers',
      endpoints: {
        health: '/health',
        handlers: '/api/handlers',
      },
    },
  },
  {
    // Lifecycle hooks if needed
    onStarted: (_port) => {
      const logger = getLogger('data-ingestion');
      logger.info('Data ingestion service startup initiated with ServiceApplication framework');
    },
  }
);

// Container factory function
async function createContainer(config: any) {
  const { ServiceContainerBuilder } = await import('@stock-bot/di');

  const container = await new ServiceContainerBuilder()
    .withConfig(config)
    .withOptions({
      enableQuestDB: false, // Data ingestion doesn't need QuestDB yet
      enableMongoDB: true,
      enablePostgres: config.database?.postgres?.enabled ?? false,
      enableCache: true,
      enableQueue: true,
      enableBrowser: true, // Data ingestion needs a browser for web scraping
      enableProxy: true, // Data ingestion needs proxies for rate limiting
    })
    .build(); // This automatically initializes services

  return container;
}

// Start the service
app.start(createContainer, createRoutes, initializeAllHandlers).catch(error => {
  const logger = getLogger('data-ingestion');
  logger.fatal('Failed to start data service', { error });
  process.exit(1);
});
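The `ServiceContainerBuilder` usage above follows a fluent builder pattern: each `with*` call returns the builder so options accumulate before a single `build()`. A minimal illustration of that pattern follows; the option names and merge behavior are assumptions for the sketch, not the real `@stock-bot/di` implementation.

```typescript
// Minimal fluent-builder sketch; not the real @stock-bot/di code.
interface ContainerOptions {
  enableCache?: boolean;
  enableQueue?: boolean;
  enableMongoDB?: boolean;
}

class ContainerBuilder {
  private config: Record<string, unknown> = {};
  private options: ContainerOptions = {};

  withConfig(config: Record<string, unknown>): this {
    this.config = config;
    return this; // returning `this` enables chaining
  }

  withOptions(options: ContainerOptions): this {
    // Later calls merge over earlier ones.
    this.options = { ...this.options, ...options };
    return this;
  }

  async build(): Promise<{ config: Record<string, unknown>; options: ContainerOptions }> {
    // The real builder would connect each enabled service here.
    return { config: this.config, options: this.options };
  }
}

new ContainerBuilder()
  .withConfig({ serviceName: 'data-ingestion' })
  .withOptions({ enableMongoDB: true })
  .withOptions({ enableCache: true, enableQueue: true })
  .build()
  .then(container => {
    console.log(container.options); // merged options from both withOptions calls
  });
```

The merge-on-each-call behavior lets a service enable defaults first and layer environment-specific overrides afterwards.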
74  apps/stock/data-ingestion/src/routes/create-routes.ts  Normal file
@@ -0,0 +1,74 @@
/**
 * Routes creation with improved DI pattern
 */

import { Hono } from 'hono';
import type { IServiceContainer } from '@stock-bot/handlers';
import { exchangeRoutes } from './exchange.routes';
import { healthRoutes } from './health.routes';
import { createQueueRoutes } from './queue.routes';

/**
 * Creates all routes with access to type-safe services
 */
export function createRoutes(services: IServiceContainer): Hono {
  const app = new Hono();

  // Mount routes that don't need services
  app.route('/health', healthRoutes);

  // Mount routes that need services
  app.route('/api/exchanges', exchangeRoutes);
  app.route('/api/queue', createQueueRoutes(services));

  // Store services in app context for handlers that need it
  app.use('*', async (c, next) => {
    c.set('services', services);
    await next();
  });

  // Add a new endpoint to test the improved DI
  app.get('/api/di-test', async c => {
    try {
      const services = c.get('services') as IServiceContainer;

      // Test MongoDB connection
      const mongoStats = services.mongodb?.getPoolMetrics?.() || {
        status: services.mongodb ? 'connected' : 'disabled',
      };

      // Test PostgreSQL connection
      const pgConnected = services.postgres?.connected || false;

      // Test cache
      const cacheReady = services.cache?.isReady() || false;

      // Test queue
      const queueStats = services.queue?.getGlobalStats() || { status: 'disabled' };

      return c.json({
        success: true,
        message: 'Improved DI pattern is working!',
        services: {
          mongodb: mongoStats,
          postgres: { connected: pgConnected },
          cache: { ready: cacheReady },
          queue: queueStats,
        },
        timestamp: new Date().toISOString(),
      });
    } catch (error) {
      const services = c.get('services') as IServiceContainer;
      services.logger.error('DI test endpoint failed', { error });
      return c.json(
        {
          success: false,
          error: error instanceof Error ? error.message : String(error),
        },
        500
      );
    }
  });

  return app;
}
@@ -11,7 +11,7 @@ exchange.get('/', async c => {
     return c.json({
       status: 'success',
       data: [],
-      message: 'Exchange endpoints will be implemented with database integration'
+      message: 'Exchange endpoints will be implemented with database integration',
     });
   } catch (error) {
     logger.error('Failed to get exchanges', { error });

@@ -6,7 +6,7 @@ const health = new Hono();
 health.get('/', c => {
   return c.json({
     status: 'healthy',
-    service: 'data-service',
+    service: 'data-ingestion',
     timestamp: new Date().toISOString(),
   });
 });
142
apps/stock/data-ingestion/src/routes/market-data.routes.ts
Normal file
142
apps/stock/data-ingestion/src/routes/market-data.routes.ts
Normal file
|
|
@ -0,0 +1,142 @@
|
||||||
|
/**
|
||||||
|
* Market data routes
|
||||||
|
*/
|
||||||
|
import { Hono } from 'hono';
|
||||||
|
import { getLogger } from '@stock-bot/logger';
|
||||||
|
import { processItems } from '@stock-bot/queue';
|
||||||
|
import type { IServiceContainer } from '@stock-bot/handlers';
|
||||||
|
|
||||||
|
const logger = getLogger('market-data-routes');
|
||||||
|
|
||||||
|
export function createMarketDataRoutes(container: IServiceContainer) {
|
||||||
|
const marketDataRoutes = new Hono();
|
||||||
|
|
||||||
|
// Market data endpoints
|
||||||
|
marketDataRoutes.get('/api/live/:symbol', async c => {
|
||||||
|
const symbol = c.req.param('symbol');
|
||||||
|
logger.info('Live data request', { symbol });
|
||||||
|
|
||||||
|
try {
|
||||||
|
// Queue job for live data using Yahoo provider
|
||||||
|
const queueManager = container.queue;
|
||||||
|
if (!queueManager) {
|
||||||
|
return c.json({ status: 'error', message: 'Queue manager not available' }, 503);
|
||||||
|
}
|
||||||
|
|
||||||
|
const queue = queueManager.getQueue('yahoo-finance');
|
||||||
|
const job = await queue.add('live-data', {
|
||||||
|
handler: 'yahoo-finance',
|
||||||
|
operation: 'live-data',
|
||||||
|
payload: { symbol },
|
||||||
|
});
|
||||||
|
return c.json({
|
||||||
|
status: 'success',
|
||||||
|
message: 'Live data job queued',
|
||||||
|
jobId: job.id,
|
||||||
|
symbol,
|
||||||
|
});
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('Failed to queue live data job', { symbol, error });
|
||||||
|
return c.json({ status: 'error', message: 'Failed to queue live data job' }, 500);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
marketDataRoutes.get('/api/historical/:symbol', async c => {
|
||||||
|
const symbol = c.req.param('symbol');
|
||||||
|
const from = c.req.query('from');
|
||||||
|
const to = c.req.query('to');
|
||||||
|
|
||||||
|
logger.info('Historical data request', { symbol, from, to });
|
||||||
|
|
||||||
|
    try {
      const fromDate = from ? new Date(from) : new Date(Date.now() - 30 * 24 * 60 * 60 * 1000); // 30 days ago
      const toDate = to ? new Date(to) : new Date(); // Now

      // Queue job for historical data using Yahoo provider
      const queueManager = container.queue;
      if (!queueManager) {
        return c.json({ status: 'error', message: 'Queue manager not available' }, 503);
      }

      const queue = queueManager.getQueue('yahoo-finance');
      const job = await queue.add('historical-data', {
        handler: 'yahoo-finance',
        operation: 'historical-data',
        payload: {
          symbol,
          from: fromDate.toISOString(),
          to: toDate.toISOString(),
        },
      });

      return c.json({
        status: 'success',
        message: 'Historical data job queued',
        jobId: job.id,
        symbol,
        from: fromDate,
        to: toDate,
      });
    } catch (error) {
      logger.error('Failed to queue historical data job', { symbol, from, to, error });
      return c.json({ status: 'error', message: 'Failed to queue historical data job' }, 500);
    }
  });

  // Batch processing endpoint using new queue system
  marketDataRoutes.post('/api/process-symbols', async c => {
    try {
      const {
        symbols,
        provider = 'ib',
        operation = 'fetch-session',
        useBatching = true,
        totalDelayHours = 0.0083, // ~30 seconds (30/3600 hours)
        batchSize = 10,
      } = await c.req.json();

      if (!symbols || !Array.isArray(symbols) || symbols.length === 0) {
        return c.json({ status: 'error', message: 'Invalid symbols array' }, 400);
      }

      logger.info('Batch processing symbols', {
        count: symbols.length,
        provider,
        operation,
        useBatching,
      });

      const queueManager = container.queue;
      if (!queueManager) {
        return c.json({ status: 'error', message: 'Queue manager not available' }, 503);
      }

      const result = await processItems(symbols, provider, {
        handler: provider,
        operation,
        totalDelayHours,
        useBatching,
        batchSize,
        priority: 2,
        retries: 2,
        removeOnComplete: 5,
        removeOnFail: 10,
      }, queueManager);

      return c.json({
        status: 'success',
        message: 'Batch processing initiated',
        result,
        symbols: symbols.length,
      });
    } catch (error) {
      logger.error('Failed to process symbols batch', { error });
      return c.json({ status: 'error', message: 'Failed to process symbols batch' }, 500);
    }
  });

  return marketDataRoutes;
}

// Legacy export for backward compatibility
export const marketDataRoutes = createMarketDataRoutes({} as IServiceContainer);
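The `/api/process-symbols` endpoint above expresses its batch delay as `totalDelayHours`, with a default of `0.0083` (roughly 30 seconds). A small illustrative sketch, not part of this PR, of the conversion a caller might use when building the request body (`secondsToDelayHours` is a hypothetical helper name):

```typescript
// Hypothetical helper: convert a human-friendly delay in seconds
// into the totalDelayHours value /api/process-symbols expects.
function secondsToDelayHours(seconds: number): number {
  return seconds / 3600;
}

// The endpoint's default of 0.0083 corresponds to ~30 seconds:
const body = {
  symbols: ['AAPL', 'MSFT'],
  provider: 'ib',
  operation: 'fetch-session',
  totalDelayHours: secondsToDelayHours(30),
  batchSize: 10,
};
console.log(body.totalDelayHours.toFixed(4)); // ≈ 0.0083
```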
35 apps/stock/data-ingestion/src/routes/queue.routes.ts Normal file
@@ -0,0 +1,35 @@
import { Hono } from 'hono';
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';

const logger = getLogger('queue-routes');

export function createQueueRoutes(container: IServiceContainer) {
  const queue = new Hono();

  // Queue status endpoint
  queue.get('/status', async c => {
    try {
      const queueManager = container.queue;
      if (!queueManager) {
        return c.json({ status: 'error', message: 'Queue manager not available' }, 503);
      }

      const globalStats = await queueManager.getGlobalStats();

      return c.json({
        status: 'success',
        data: globalStats,
        message: 'Queue status retrieved successfully',
      });
    } catch (error) {
      logger.error('Failed to get queue status', { error });
      return c.json({ status: 'error', message: 'Failed to get queue status' }, 500);
    }
  });

  return queue;
}

// Legacy export for backward compatibility
export const queueRoutes = createQueueRoutes({} as IServiceContainer);
@@ -1,5 +1,5 @@
-import { sleep } from '@stock-bot/di';
 import { getLogger } from '@stock-bot/logger';
+import { sleep } from '@stock-bot/utils';

 const logger = getLogger('symbol-search-util');
18 apps/stock/data-ingestion/tsconfig.json Normal file
@@ -0,0 +1,18 @@
{
  "extends": "../../tsconfig.app.json",
  "references": [
    { "path": "../../libs/core/types" },
    { "path": "../../libs/core/config" },
    { "path": "../../libs/core/logger" },
    { "path": "../../libs/core/di" },
    { "path": "../../libs/core/handlers" },
    { "path": "../../libs/data/cache" },
    { "path": "../../libs/data/mongodb" },
    { "path": "../../libs/data/postgres" },
    { "path": "../../libs/data/questdb" },
    { "path": "../../libs/services/queue" },
    { "path": "../../libs/services/shutdown" },
    { "path": "../../libs/utils" },
    { "path": "../config" }
  ]
}
@@ -1,7 +1,7 @@
 {
-  "name": "@stock-bot/data-service",
+  "name": "@stock-bot/data-pipeline",
   "version": "1.0.0",
-  "description": "Combined data ingestion and historical data service",
+  "description": "Data processing pipeline for syncing and transforming raw data to normalized records",
   "main": "dist/index.js",
   "type": "module",
   "scripts": {
@@ -14,10 +14,11 @@
   "dependencies": {
     "@stock-bot/cache": "*",
     "@stock-bot/config": "*",
+    "@stock-bot/stock-config": "*",
     "@stock-bot/logger": "*",
-    "@stock-bot/mongodb-client": "*",
+    "@stock-bot/mongodb": "*",
-    "@stock-bot/postgres-client": "*",
+    "@stock-bot/postgres": "*",
-    "@stock-bot/questdb-client": "*",
+    "@stock-bot/questdb": "*",
     "@stock-bot/queue": "*",
     "@stock-bot/shutdown": "*",
     "hono": "^4.0.0"
34 apps/stock/data-pipeline/src/container-setup.ts Normal file
@@ -0,0 +1,34 @@
/**
 * Service Container Setup for Data Pipeline
 * Configures dependency injection for the data pipeline service
 */

import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
import type { AppConfig } from '@stock-bot/config';

const logger = getLogger('data-pipeline-container');

/**
 * Configure the service container for data pipeline workloads
 */
export function setupServiceContainer(
  config: AppConfig,
  container: IServiceContainer
): IServiceContainer {
  logger.info('Configuring data pipeline service container...');

  // Data pipeline specific configuration:
  // this service does more complex queries and transformations.
  const poolSizes = {
    mongodb: config.environment === 'production' ? 40 : 20,
    postgres: config.environment === 'production' ? 50 : 25,
    cache: config.environment === 'production' ? 30 : 15,
  };

  logger.info('Data pipeline pool sizes configured', poolSizes);

  // The container is already configured with connections;
  // just return it with our logging.
  return container;
}
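The pool sizing in `setupServiceContainer` above branches on the environment. A standalone sketch of that selection logic (values copied from the source; `poolSizesFor` is an illustrative name, not part of the PR):

```typescript
// Illustrative sketch of the environment-based pool sizing used by
// the data pipeline's container setup. Values mirror the source.
type Env = 'production' | 'development';

function poolSizesFor(environment: Env) {
  return {
    mongodb: environment === 'production' ? 40 : 20,
    postgres: environment === 'production' ? 50 : 25,
    cache: environment === 'production' ? 30 : 15,
  };
}

console.log(poolSizesFor('production')); // { mongodb: 40, postgres: 50, cache: 30 }
console.log(poolSizesFor('development')); // { mongodb: 20, postgres: 25, cache: 15 }
```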
@@ -0,0 +1,111 @@
import {
  BaseHandler,
  Handler,
  Operation,
  ScheduledOperation,
  type IServiceContainer,
} from '@stock-bot/handlers';
import { clearPostgreSQLData } from './operations/clear-postgresql-data.operations';
import { getSyncStatus } from './operations/enhanced-sync-status.operations';
import { getExchangeStats } from './operations/exchange-stats.operations';
import { getProviderMappingStats } from './operations/provider-mapping-stats.operations';
import { syncQMExchanges } from './operations/qm-exchanges.operations';
import { syncAllExchanges } from './operations/sync-all-exchanges.operations';
import { syncIBExchanges } from './operations/sync-ib-exchanges.operations';
import { syncQMProviderMappings } from './operations/sync-qm-provider-mappings.operations';

@Handler('exchanges')
export class ExchangesHandler extends BaseHandler {
  constructor(services: IServiceContainer) {
    super(services);
  }

  /**
   * Sync all exchanges - weekly full sync
   */
  @Operation('sync-all-exchanges')
  @ScheduledOperation('sync-all-exchanges', '0 0 * * 0', {
    priority: 10,
    description: 'Weekly full exchange sync on Sunday at midnight',
  })
  async syncAllExchanges(payload?: { clearFirst?: boolean }): Promise<unknown> {
    const finalPayload = payload || { clearFirst: true };
    this.log('info', 'Starting sync of all exchanges', finalPayload);
    return syncAllExchanges(finalPayload, this.services);
  }

  /**
   * Sync exchanges from QuestionsAndMethods
   */
  @Operation('sync-qm-exchanges')
  @ScheduledOperation('sync-qm-exchanges', '0 1 * * *', {
    priority: 5,
    description: 'Daily sync of QM exchanges at 1 AM',
  })
  async syncQMExchanges(): Promise<unknown> {
    this.log('info', 'Starting QM exchanges sync...');
    return syncQMExchanges({}, this.services);
  }

  /**
   * Sync exchanges from Interactive Brokers
   */
  @Operation('sync-ib-exchanges')
  @ScheduledOperation('sync-ib-exchanges', '0 3 * * *', {
    priority: 3,
    description: 'Daily sync of IB exchanges at 3 AM',
  })
  async syncIBExchanges(): Promise<unknown> {
    this.log('info', 'Starting IB exchanges sync...');
    return syncIBExchanges({}, this.services);
  }

  /**
   * Sync provider mappings from QuestionsAndMethods
   */
  @Operation('sync-qm-provider-mappings')
  @ScheduledOperation('sync-qm-provider-mappings', '0 3 * * *', {
    priority: 7,
    description: 'Daily sync of QM provider mappings at 3 AM',
  })
  async syncQMProviderMappings(): Promise<unknown> {
    this.log('info', 'Starting QM provider mappings sync...');
    return syncQMProviderMappings({}, this.services);
  }

  /**
   * Clear PostgreSQL data - maintenance operation
   */
  @Operation('clear-postgresql-data')
  async clearPostgreSQLData(payload: { type?: 'exchanges' | 'provider_mappings' | 'all' }): Promise<unknown> {
    this.log('warn', 'Clearing PostgreSQL data', payload);
    return clearPostgreSQLData(payload, this.services);
  }

  /**
   * Get exchange statistics
   */
  @Operation('get-exchange-stats')
  async getExchangeStats(): Promise<unknown> {
    this.log('info', 'Getting exchange statistics...');
    return getExchangeStats({}, this.services);
  }

  /**
   * Get provider mapping statistics
   */
  @Operation('get-provider-mapping-stats')
  async getProviderMappingStats(): Promise<unknown> {
    this.log('info', 'Getting provider mapping statistics...');
    return getProviderMappingStats({}, this.services);
  }

  /**
   * Get enhanced sync status
   */
  @Operation('enhanced-sync-status')
  async getEnhancedSyncStatus(): Promise<unknown> {
    this.log('info', 'Getting enhanced sync status...');
    return getSyncStatus({}, this.services);
  }
}
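The `@ScheduledOperation` decorators above use standard five-field cron expressions (minute, hour, day of month, month, day of week, with 0 = Sunday). A small illustrative sketch, not part of this PR, splitting the schedules used here into their fields:

```typescript
// Illustrative only: the cron expressions from the handler above,
// split into fields. Assumes standard five-field cron semantics.
const schedules: Record<string, string> = {
  'sync-all-exchanges': '0 0 * * 0', // Sundays at 00:00
  'sync-qm-exchanges': '0 1 * * *',  // daily at 01:00
  'sync-ib-exchanges': '0 3 * * *',  // daily at 03:00
};

for (const [op, cron] of Object.entries(schedules)) {
  const [minute, hour, , , dayOfWeek] = cron.split(' ');
  console.log(`${op}: minute=${minute} hour=${hour} dayOfWeek=${dayOfWeek}`);
}
```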
@ -1,10 +1,13 @@
|
||||||
import { getLogger } from '@stock-bot/logger';
|
import { getLogger } from '@stock-bot/logger';
|
||||||
import { getPostgreSQLClient } from '@stock-bot/postgres-client';
|
import type { IServiceContainer } from '@stock-bot/handlers';
|
||||||
import type { JobPayload } from '../../../types/job-payloads';
|
import type { JobPayload } from '../../../types/job-payloads';
|
||||||
|
|
||||||
const logger = getLogger('enhanced-sync-clear-postgresql-data');
|
const logger = getLogger('enhanced-sync-clear-postgresql-data');
|
||||||
|
|
||||||
export async function clearPostgreSQLData(payload: JobPayload): Promise<{
|
export async function clearPostgreSQLData(
|
||||||
|
payload: JobPayload,
|
||||||
|
container: IServiceContainer
|
||||||
|
): Promise<{
|
||||||
exchangesCleared: number;
|
exchangesCleared: number;
|
||||||
symbolsCleared: number;
|
symbolsCleared: number;
|
||||||
mappingsCleared: number;
|
mappingsCleared: number;
|
||||||
|
|
@ -12,7 +15,7 @@ export async function clearPostgreSQLData(payload: JobPayload): Promise<{
|
||||||
logger.info('Clearing existing PostgreSQL data...');
|
logger.info('Clearing existing PostgreSQL data...');
|
||||||
|
|
||||||
try {
|
try {
|
||||||
const postgresClient = getPostgreSQLClient();
|
const postgresClient = container.postgres;
|
||||||
|
|
||||||
// Start transaction for atomic operations
|
// Start transaction for atomic operations
|
||||||
await postgresClient.query('BEGIN');
|
await postgresClient.query('BEGIN');
|
||||||
|
|
@@ -21,9 +24,7 @@ export async function clearPostgreSQLData(payload: JobPayload): Promise<{
   const exchangeCountResult = await postgresClient.query(
     'SELECT COUNT(*) as count FROM exchanges'
   );
-  const symbolCountResult = await postgresClient.query(
-    'SELECT COUNT(*) as count FROM symbols'
-  );
+  const symbolCountResult = await postgresClient.query('SELECT COUNT(*) as count FROM symbols');
   const mappingCountResult = await postgresClient.query(
     'SELECT COUNT(*) as count FROM provider_mappings'
   );
@@ -52,7 +53,7 @@ export async function clearPostgreSQLData(payload: JobPayload): Promise<{

     return { exchangesCleared, symbolsCleared, mappingsCleared };
   } catch (error) {
-    const postgresClient = getPostgreSQLClient();
+    const postgresClient = container.postgres;
     await postgresClient.query('ROLLBACK');
     logger.error('Failed to clear PostgreSQL data', { error });
     throw error;
@@ -1,14 +1,17 @@
 import { getLogger } from '@stock-bot/logger';
-import { getPostgreSQLClient } from '@stock-bot/postgres-client';
+import type { IServiceContainer } from '@stock-bot/handlers';
 import type { JobPayload, SyncStatus } from '../../../types/job-payloads';

 const logger = getLogger('enhanced-sync-status');

-export async function getSyncStatus(payload: JobPayload): Promise<SyncStatus[]> {
+export async function getSyncStatus(
+  payload: JobPayload,
+  container: IServiceContainer
+): Promise<SyncStatus[]> {
   logger.info('Getting comprehensive sync status...');

   try {
-    const postgresClient = getPostgreSQLClient();
+    const postgresClient = container.postgres;
     const query = `
       SELECT provider, data_type as "dataType", last_sync_at as "lastSyncAt",
              last_sync_count as "lastSyncCount", sync_errors as "syncErrors"
@@ -1,14 +1,17 @@
 import { getLogger } from '@stock-bot/logger';
-import { getPostgreSQLClient } from '@stock-bot/postgres-client';
+import type { IServiceContainer } from '@stock-bot/handlers';
 import type { JobPayload } from '../../../types/job-payloads';

 const logger = getLogger('enhanced-sync-exchange-stats');

-export async function getExchangeStats(payload: JobPayload): Promise<any> {
+export async function getExchangeStats(
+  payload: JobPayload,
+  container: IServiceContainer
+): Promise<any> {
   logger.info('Getting exchange statistics...');

   try {
-    const postgresClient = getPostgreSQLClient();
+    const postgresClient = container.postgres;
     const query = `
       SELECT
         COUNT(*) as total_exchanges,
Some files were not shown because too many files have changed in this diff.