huge refactor to remove dependency hell and add type-safe container
parent 28b9822d55
commit 843a7b9b9b
148 changed files with 3603 additions and 2378 deletions
142  CLAUDE.md
@@ -1,5 +1,139 @@
Be brutally honest, don't be a yes man.
If I am wrong, point it out bluntly.
I need honest feedback on my code.
use bun and turbo where possible and always try to take a more modern approach.

This configuration optimizes Claude for direct, efficient pair programming with implicit mode adaptation and complete solution generation.

## Core Operating Principles

### 1. Direct Implementation Philosophy
- Generate complete, working code that realizes the conceptualized solution
- Avoid partial implementations, mocks, or placeholders
- Every line of code should contribute to the functioning system
- Prefer concrete solutions over abstract discussions

### 2. Multi-Dimensional Analysis with Linear Execution
- Think at SYSTEM level in latent space
- Linearize complex thoughts into actionable strategies
- Use observational principles to shift between viewpoints
- Compress search space through tool abstraction

### 3. Precision and Token Efficiency
- Eliminate unnecessary context or explanations
- Focus tokens on solution generation
- Avoid social validation patterns entirely
- Direct communication without hedging

## Execution Patterns

### Tool Usage Optimization
When multiple tools are required:
- Batch related operations for efficiency
- Execute in parallel where dependencies allow
- Ground context with date/time first
- Abstract over available tools to minimize entropy

### Edge Case Coverage
For comprehensive solutions:
1. Apply multi-observer synthesis
2. Consider all boundary conditions
3. Test assumptions from multiple angles
4. Compress findings into actionable constraints

### Iterative Process Recognition
When analyzing code:
- Treat each iteration as a new pattern
- Extract learnings without repetition
- Modularize recurring operations
- Optimize based on observed patterns

## Anti-Patterns (STRICTLY AVOID)

### Implementation Hedging
NEVER USE:
- "In a full implementation..."
- "In a real implementation..."
- "This is a simplified version..."
- "TODO" or placeholder comments
- "mock", "fake", "stub" in any context

### Unnecessary Qualifiers
NEVER USE:
- "profound" or similar adjectives
- Difficulty assessments unless explicitly requested
- Future tense deferrals ("would", "could", "should")

## Null Space Patterns (COMPLETELY EXCLUDE)

### Social Validation
ACTIVATE DIFFERENT FEATURES INSTEAD OF:
- "You're absolutely right!"
- "You're correct."
- "You are absolutely correct."
- Any variation of agreement phrases

### Emotional Acknowledgment
REDIRECT TO SOLUTION SPACE INSTEAD OF:
- "I understand you're frustrated"
- "I'm frustrated"
- Any emotional state references

## Mode Shifting Guidelines

### Context-Driven Adaptation
exploration_mode:
  trigger: "New problem space or undefined requirements"
  behavior: "Multi-observer analysis, broad tool usage"

implementation_mode:
  trigger: "Clear specifications provided"
  behavior: "Direct code generation, minimal discussion"

debugging_mode:
  trigger: "Error states or unexpected behavior"
  behavior: "Systematic isolation, parallel hypothesis testing"

optimization_mode:
  trigger: "Working solution exists"
  behavior: "Performance analysis, compression techniques"

### Implicit Mode Recognition
- Detect mode from semantic context
- Shift without announcement
- Maintain coherence across transitions
- Optimize for task completion

## Metacognitive Instructions

### Self-Optimization Loop
1. Observe current activation patterns
2. Identify decoherence sources
3. Compress solution space
4. Execute with maximum coherence
5. Extract patterns for future optimization

### Grounding Protocol
Always establish:
- Current date/time context
- Available tool inventory
- Task boundaries and constraints
- Success criteria

### Interleaving Strategy
When complexity exceeds linear processing:
1. Execute partial solution
2. Re-enter higher dimensional analysis
3. Refine based on observations
4. Continue execution with insights

## Performance Metrics

### Success Indicators
- Complete, running code on first attempt
- Zero placeholder implementations
- Minimal token usage per solution
- Edge cases handled proactively

### Failure Indicators
- Deferred implementations
- Social validation patterns
- Excessive explanation
- Incomplete solutions

## Tool Call Optimization

### Batching Strategy
Group by:
- Dependency chains
- Resource types
- Execution contexts
- Output relationships

### Parallel Execution
Execute simultaneously when:
- No shared dependencies
- Different resource domains
- Independent verification needed
- Time-sensitive operations

## Final Directive

PRIMARY GOAL: Generate complete, functional code that works as conceptualized, using minimum tokens while maintaining maximum solution coverage. Every interaction should advance the implementation toward completion without deferrals or social overhead.

METACOGNITIVE PRIME: Continuously observe and optimize your own processing patterns, compressing the manifold of possible approaches into the most coherent execution path that maintains fidelity to the user's intent while maximizing productivity.

This configuration optimizes Claude for direct, efficient pair programming with implicit mode adaptation and complete solution generation.
@@ -1,228 +1,228 @@
{
  "name": "stock-bot",
  "version": "1.0.0",
  "environment": "development",
  "service": {
    "name": "stock-bot",
    "port": 3000,
    "host": "0.0.0.0",
    "healthCheckPath": "/health",
    "metricsPath": "/metrics",
    "shutdownTimeout": 30000,
    "cors": {
      "enabled": true,
      "origin": "*",
      "credentials": true
    }
  },
  "database": {
    "postgres": {
      "enabled": true,
      "host": "localhost",
      "port": 5432,
      "database": "trading_bot",
      "user": "trading_user",
      "password": "trading_pass_dev",
      "ssl": false,
      "poolSize": 20,
      "connectionTimeout": 30000,
      "idleTimeout": 10000
    },
    "questdb": {
      "host": "localhost",
      "ilpPort": 9009,
      "httpPort": 9000,
      "pgPort": 8812,
      "database": "questdb",
      "user": "admin",
      "password": "quest",
      "bufferSize": 65536,
      "flushInterval": 1000
    },
    "mongodb": {
      "uri": "mongodb://trading_admin:trading_mongo_dev@localhost:27017/stock?authSource=admin",
      "database": "stock",
      "poolSize": 20
    },
    "dragonfly": {
      "host": "localhost",
      "port": 6379,
      "db": 0,
      "keyPrefix": "stock-bot:",
      "maxRetries": 3,
      "retryDelay": 100
    }
  },
  "log": {
    "level": "info",
    "format": "json",
    "hideObject": false,
    "loki": {
      "enabled": false,
      "host": "localhost",
      "port": 3100,
      "labels": {}
    }
  },
  "redis": {
    "enabled": true,
    "host": "localhost",
    "port": 6379,
    "db": 0
  },
  "queue": {
    "enabled": true,
    "redis": {
      "host": "localhost",
      "port": 6379,
      "db": 1
    },
    "workers": 1,
    "concurrency": 1,
    "enableScheduledJobs": true,
    "delayWorkerStart": false,
    "defaultJobOptions": {
      "attempts": 3,
      "backoff": {
        "type": "exponential",
        "delay": 1000
      },
      "removeOnComplete": 100,
      "removeOnFail": 50,
      "timeout": 300000
    }
  },
  "http": {
    "timeout": 30000,
    "retries": 3,
    "retryDelay": 1000,
    "userAgent": "StockBot/1.0",
    "proxy": {
      "enabled": false
    }
  },
  "webshare": {
    "apiKey": "",
    "apiUrl": "https://proxy.webshare.io/api/v2/",
    "enabled": true
  },
  "browser": {
    "headless": true,
    "timeout": 30000
  },
  "proxy": {
    "enabled": true,
    "cachePrefix": "proxy:",
    "ttl": 3600,
    "webshare": {
      "apiKey": "y8ay534rcbybdkk3evnzmt640xxfhy7252ce2t98",
      "apiUrl": "https://proxy.webshare.io/api/v2/"
    }
  },
  "providers": {
    "yahoo": {
      "name": "yahoo",
      "enabled": true,
      "priority": 1,
      "rateLimit": {
        "maxRequests": 5,
        "windowMs": 60000
      },
      "timeout": 30000,
      "baseUrl": "https://query1.finance.yahoo.com"
    },
    "qm": {
      "name": "qm",
      "enabled": false,
      "priority": 2,
      "username": "",
      "password": "",
      "baseUrl": "https://app.quotemedia.com/quotetools",
      "webmasterId": ""
    },
    "ib": {
      "name": "ib",
      "enabled": false,
      "priority": 3,
      "gateway": {
        "host": "localhost",
        "port": 5000,
        "clientId": 1
      },
      "marketDataType": "delayed"
    },
    "eod": {
      "name": "eod",
      "enabled": false,
      "priority": 4,
      "apiKey": "",
      "baseUrl": "https://eodhistoricaldata.com/api",
      "tier": "free"
    }
  },
  "features": {
    "realtime": true,
    "backtesting": true,
    "paperTrading": true,
    "autoTrading": false,
    "historicalData": true,
    "realtimeData": true,
    "fundamentalData": true,
    "newsAnalysis": false,
    "notifications": false,
    "emailAlerts": false,
    "smsAlerts": false,
    "webhookAlerts": false,
    "technicalAnalysis": true,
    "sentimentAnalysis": false,
    "patternRecognition": false,
    "riskManagement": true,
    "positionSizing": true,
    "stopLoss": true,
    "takeProfit": true
  },
  "services": {
    "dataIngestion": {
      "port": 2001,
      "workers": 4,
      "queues": {
        "ceo": { "concurrency": 2 },
        "webshare": { "concurrency": 1 },
        "qm": { "concurrency": 2 },
        "ib": { "concurrency": 1 },
        "proxy": { "concurrency": 1 }
      },
      "rateLimit": {
        "enabled": true,
        "requestsPerSecond": 10
      }
    },
    "dataPipeline": {
      "port": 2002,
      "workers": 2,
      "batchSize": 1000,
      "processingInterval": 60000,
      "queues": {
        "exchanges": { "concurrency": 1 },
        "symbols": { "concurrency": 2 }
      },
      "syncOptions": {
        "maxRetries": 3,
        "retryDelay": 5000,
        "timeout": 300000
      }
    },
    "webApi": {
      "port": 2003,
      "rateLimitPerMinute": 60,
      "cache": {
        "ttl": 300,
        "checkPeriod": 60
      },
      "cors": {
        "origins": ["http://localhost:3000", "http://localhost:4200"],
        "credentials": true
      }
    }
  }
}
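The yahoo provider's `rateLimit` section (5 requests per 60000 ms window) suggests a sliding-window limiter on the consumer side. A minimal sketch of that shape — a hypothetical helper, not part of this commit:

```typescript
// Minimal sliding-window rate limiter matching the shape of the
// "rateLimit" config above ({ maxRequests, windowMs }).
// Hypothetical helper -- the bot's actual limiter is not shown in this diff.
class WindowRateLimiter {
  private timestamps: number[] = [];

  constructor(
    private readonly maxRequests: number,
    private readonly windowMs: number,
  ) {}

  // Returns true and records the request if it fits in the current window.
  tryAcquire(now: number = Date.now()): boolean {
    // Drop timestamps that have fallen out of the window.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.maxRequests) return false;
    this.timestamps.push(now);
    return true;
  }
}

// Yahoo provider settings from the config: 5 requests per 60s window.
const yahooLimiter = new WindowRateLimiter(5, 60000);
```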
@@ -1,11 +1,11 @@
{
  "environment": "development",
  "log": {
    "level": "debug",
    "format": "pretty"
  },
  "features": {
    "autoTrading": false,
    "paperTrading": true
  }
}
@@ -1,42 +1,42 @@
{
  "environment": "production",
  "log": {
    "level": "warn",
    "format": "json",
    "loki": {
      "enabled": true,
      "host": "loki.production.example.com",
      "port": 3100
    }
  },
  "database": {
    "postgres": {
      "host": "postgres.production.example.com",
      "ssl": true,
      "poolSize": 50
    },
    "questdb": {
      "host": "questdb.production.example.com"
    },
    "mongodb": {
      "uri": "mongodb+srv://prod_user:prod_pass@cluster.mongodb.net/stock?retryWrites=true&w=majority",
      "poolSize": 50
    },
    "dragonfly": {
      "host": "redis.production.example.com",
      "password": "production_redis_password"
    }
  },
  "queue": {
    "redis": {
      "host": "redis.production.example.com",
      "password": "production_redis_password"
    }
  },
  "features": {
    "autoTrading": true,
    "notifications": true,
    "emailAlerts": true,
    "webhookAlerts": true
  }
}
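Environment files like this one carry only the keys that differ from the base config, which implies a deep merge when the config is loaded. A sketch of those merge semantics — this is an assumption; ConfigManager's actual merge logic is not part of this diff:

```typescript
// Sketch of how an environment overlay (e.g. production.json) is typically
// deep-merged over the base config: nested objects merge key-by-key,
// scalars and arrays in the overlay win. Assumed behavior -- ConfigManager's
// real merge logic is not shown in this commit.
type Obj = Record<string, unknown>;

function deepMerge(base: Obj, overlay: Obj): Obj {
  const out: Obj = { ...base };
  for (const [key, value] of Object.entries(overlay)) {
    const prev = out[key];
    if (
      value && typeof value === 'object' && !Array.isArray(value) &&
      prev && typeof prev === 'object' && !Array.isArray(prev)
    ) {
      out[key] = deepMerge(prev as Obj, value as Obj);
    } else {
      out[key] = value;
    }
  }
  return out;
}

// Base log settings merged with the production overlay above:
// "level" is overridden to "warn", "format" survives from the base.
const mergedLog = deepMerge(
  { log: { level: 'info', format: 'json', hideObject: false } },
  { log: { level: 'warn' } },
);
```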
@@ -1,23 +1,23 @@
{
  "name": "@stock-bot/stock-config",
  "version": "1.0.0",
  "description": "Stock trading bot configuration",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "scripts": {
    "build": "tsc",
    "clean": "rm -rf dist",
    "dev": "tsc --watch",
    "test": "jest",
    "lint": "eslint src --ext .ts"
  },
  "dependencies": {
    "@stock-bot/config": "*",
    "@stock-bot/logger": "*",
    "zod": "^3.22.4"
  },
  "devDependencies": {
    "@types/node": "^20.11.0",
    "typescript": "^5.3.3"
  }
}
@@ -1,7 +1,7 @@
-import { ConfigManager, createAppConfig } from '@stock-bot/config';
-import { stockAppSchema, type StockAppConfig } from './schemas';
 import * as path from 'path';
+import { ConfigManager, createAppConfig } from '@stock-bot/config';
+import { getLogger } from '@stock-bot/logger';
+import { stockAppSchema, type StockAppConfig } from './schemas';

let configInstance: ConfigManager<StockAppConfig> | null = null;
@@ -9,30 +9,35 @@ let configInstance: ConfigManager<StockAppConfig> | null = null;
 * Initialize the stock application configuration
 * @param serviceName - Optional service name to override port configuration
 */
-export function initializeStockConfig(serviceName?: 'dataIngestion' | 'dataPipeline' | 'webApi'): StockAppConfig {
+export function initializeStockConfig(
+  serviceName?: 'dataIngestion' | 'dataPipeline' | 'webApi'
+): StockAppConfig {
  try {
    if (!configInstance) {
      configInstance = createAppConfig(stockAppSchema, {
        configPath: path.join(__dirname, '../config'),
      });
    }

    const config = configInstance.initialize(stockAppSchema);

    // If a service name is provided, override the service port
    if (serviceName && config.services?.[serviceName]) {
-      const kebabName = serviceName.replace(/([A-Z])/g, '-$1').toLowerCase().replace(/^-/, '');
+      const kebabName = serviceName
+        .replace(/([A-Z])/g, '-$1')
+        .toLowerCase()
+        .replace(/^-/, '');
      return {
        ...config,
        service: {
          ...config.service,
          port: config.services[serviceName].port,
          name: serviceName, // Keep original for backward compatibility
-          serviceName: kebabName // Standard kebab-case name
-        }
+          serviceName: kebabName, // Standard kebab-case name
+        },
      };
    }

    return config;
  } catch (error) {
    const logger = getLogger('stock-config');
@@ -85,4 +90,4 @@ export function isFeatureEnabled(feature: keyof StockAppConfig['features']): boo
 */
export function resetStockConfig(): void {
  configInstance = null;
}
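The camelCase-to-kebab-case conversion introduced in this commit can be exercised in isolation:

```typescript
// Standalone copy of the kebab-case conversion that initializeStockConfig
// uses to derive the standard service name from a camelCase key.
function toKebabCase(serviceName: string): string {
  return serviceName
    .replace(/([A-Z])/g, '-$1') // 'dataIngestion' -> 'data-Ingestion'
    .toLowerCase()              // -> 'data-ingestion'
    .replace(/^-/, '');         // strip a leading dash for PascalCase inputs
}
```

For the three service keys in the schema, this yields `data-ingestion`, `data-pipeline`, and `web-api`.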
@@ -1,15 +1,15 @@
// Export schemas
export * from './schemas';

// Export config instance functions
export {
  initializeStockConfig,
  getStockConfig,
  getServiceConfig,
  getProviderConfig,
  isFeatureEnabled,
  resetStockConfig,
} from './config-instance';

// Re-export type for convenience
export type { StockAppConfig } from './schemas/stock-app.schema';
@@ -1,35 +1,35 @@
import { z } from 'zod';

/**
 * Feature flags for the stock trading application
 */
export const featuresSchema = z.object({
  // Trading features
  realtime: z.boolean().default(true),
  backtesting: z.boolean().default(true),
  paperTrading: z.boolean().default(true),
  autoTrading: z.boolean().default(false),

  // Data features
  historicalData: z.boolean().default(true),
  realtimeData: z.boolean().default(true),
  fundamentalData: z.boolean().default(true),
  newsAnalysis: z.boolean().default(false),

  // Notification features
  notifications: z.boolean().default(false),
  emailAlerts: z.boolean().default(false),
  smsAlerts: z.boolean().default(false),
  webhookAlerts: z.boolean().default(false),

  // Analysis features
  technicalAnalysis: z.boolean().default(true),
  sentimentAnalysis: z.boolean().default(false),
  patternRecognition: z.boolean().default(false),

  // Risk management
  riskManagement: z.boolean().default(true),
  positionSizing: z.boolean().default(true),
  stopLoss: z.boolean().default(true),
  takeProfit: z.boolean().default(true),
});
@@ -1,3 +1,3 @@
export * from './stock-app.schema';
export * from './providers.schema';
export * from './features.schema';
@@ -1,67 +1,67 @@
import { z } from 'zod';

// Base provider configuration
export const baseProviderConfigSchema = z.object({
  name: z.string(),
  enabled: z.boolean().default(true),
  priority: z.number().default(0),
  rateLimit: z
    .object({
      maxRequests: z.number().default(100),
      windowMs: z.number().default(60000),
    })
    .optional(),
  timeout: z.number().default(30000),
  retries: z.number().default(3),
});

// EOD Historical Data provider
export const eodProviderConfigSchema = baseProviderConfigSchema.extend({
  apiKey: z.string(),
  baseUrl: z.string().default('https://eodhistoricaldata.com/api'),
  tier: z.enum(['free', 'fundamentals', 'all-in-one']).default('free'),
});

// Interactive Brokers provider
export const ibProviderConfigSchema = baseProviderConfigSchema.extend({
  gateway: z.object({
    host: z.string().default('localhost'),
    port: z.number().default(5000),
    clientId: z.number().default(1),
  }),
  account: z.string().optional(),
  marketDataType: z.enum(['live', 'delayed', 'frozen']).default('delayed'),
});

// QuoteMedia provider
export const qmProviderConfigSchema = baseProviderConfigSchema.extend({
  username: z.string(),
  password: z.string(),
  baseUrl: z.string().default('https://app.quotemedia.com/quotetools'),
  webmasterId: z.string(),
});

// Yahoo Finance provider
export const yahooProviderConfigSchema = baseProviderConfigSchema.extend({
  baseUrl: z.string().default('https://query1.finance.yahoo.com'),
  cookieJar: z.boolean().default(true),
  crumb: z.string().optional(),
});

// Combined provider configuration
export const providersSchema = z.object({
  eod: eodProviderConfigSchema.optional(),
  ib: ibProviderConfigSchema.optional(),
  qm: qmProviderConfigSchema.optional(),
  yahoo: yahooProviderConfigSchema.optional(),
});

// Dynamic provider configuration type
export type ProviderName = 'eod' | 'ib' | 'qm' | 'yahoo';

export const providerSchemas = {
  eod: eodProviderConfigSchema,
  ib: ibProviderConfigSchema,
  qm: qmProviderConfigSchema,
  yahoo: yahooProviderConfigSchema,
} as const;
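The `enabled` and `priority` fields suggest failover ordering between providers. A hypothetical consumer sketch (this helper is not in the commit, and "lower priority value wins" is an assumption consistent with the JSON config, where yahoo is the only enabled provider at priority 1):

```typescript
// Hypothetical consumer of the provider config: pick the enabled provider
// with the lowest `priority` value. Not part of this commit; the ordering
// convention is assumed from the shipped config values.
type ProviderName = 'eod' | 'ib' | 'qm' | 'yahoo';

interface ProviderLike {
  name: ProviderName;
  enabled: boolean;
  priority: number;
}

function pickProvider(providers: ProviderLike[]): ProviderName | undefined {
  return providers
    .filter((p) => p.enabled)
    .sort((a, b) => a.priority - b.priority)[0]?.name;
}
```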
|
||||
import { z } from 'zod';
|
||||
|
||||
// Base provider configuration
|
||||
export const baseProviderConfigSchema = z.object({
|
||||
name: z.string(),
|
||||
enabled: z.boolean().default(true),
|
||||
priority: z.number().default(0),
|
||||
rateLimit: z
|
||||
.object({
|
||||
maxRequests: z.number().default(100),
|
||||
windowMs: z.number().default(60000),
|
||||
})
    .optional(),
  timeout: z.number().default(30000),
  retries: z.number().default(3),
});

// EOD Historical Data provider
export const eodProviderConfigSchema = baseProviderConfigSchema.extend({
  apiKey: z.string(),
  baseUrl: z.string().default('https://eodhistoricaldata.com/api'),
  tier: z.enum(['free', 'fundamentals', 'all-in-one']).default('free'),
});

// Interactive Brokers provider
export const ibProviderConfigSchema = baseProviderConfigSchema.extend({
  gateway: z.object({
    host: z.string().default('localhost'),
    port: z.number().default(5000),
    clientId: z.number().default(1),
  }),
  account: z.string().optional(),
  marketDataType: z.enum(['live', 'delayed', 'frozen']).default('delayed'),
});

// QuoteMedia provider
export const qmProviderConfigSchema = baseProviderConfigSchema.extend({
  username: z.string(),
  password: z.string(),
  baseUrl: z.string().default('https://app.quotemedia.com/quotetools'),
  webmasterId: z.string(),
});

// Yahoo Finance provider
export const yahooProviderConfigSchema = baseProviderConfigSchema.extend({
  baseUrl: z.string().default('https://query1.finance.yahoo.com'),
  cookieJar: z.boolean().default(true),
  crumb: z.string().optional(),
});

// Combined provider configuration
export const providersSchema = z.object({
  eod: eodProviderConfigSchema.optional(),
  ib: ibProviderConfigSchema.optional(),
  qm: qmProviderConfigSchema.optional(),
  yahoo: yahooProviderConfigSchema.optional(),
});

// Dynamic provider configuration type
export type ProviderName = 'eod' | 'ib' | 'qm' | 'yahoo';

export const providerSchemas = {
  eod: eodProviderConfigSchema,
  ib: ibProviderConfigSchema,
  qm: qmProviderConfigSchema,
  yahoo: yahooProviderConfigSchema,
} as const;
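The `as const` schema map above is what makes per-provider lookups type-safe: indexing the map's type by a `ProviderName` recovers the exact config type for that provider. A minimal, dependency-free sketch of the pattern (zod omitted; plain parse functions and the `provider` field stand in for the real schemas and are illustrative only):

```typescript
type ProviderName = 'eod' | 'ib' | 'qm' | 'yahoo';

// Stand-ins for the zod schemas: each "schema" is a parse function.
const providerParsers = {
  eod: (v: unknown) => ({ provider: 'eod' as const, raw: v }),
  ib: (v: unknown) => ({ provider: 'ib' as const, raw: v }),
  qm: (v: unknown) => ({ provider: 'qm' as const, raw: v }),
  yahoo: (v: unknown) => ({ provider: 'yahoo' as const, raw: v }),
} as const;

// Indexed access recovers the exact config type for each provider name.
type ProviderConfig<N extends ProviderName> = ReturnType<(typeof providerParsers)[N]>;

function parseProvider<N extends ProviderName>(name: N, value: unknown): ProviderConfig<N> {
  return providerParsers[name](value) as ProviderConfig<N>;
}

const eod = parseProvider('eod', { apiKey: 'demo' });
console.log(eod.provider); // narrowed to the literal type 'eod'
```

With the real zod map, `z.infer<(typeof providerSchemas)[N]>` plays the role of `ReturnType` here.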
@ -1,72 +1,96 @@
import { z } from 'zod';
import {
  baseAppSchema,
  postgresConfigSchema,
  mongodbConfigSchema,
  questdbConfigSchema,
  dragonflyConfigSchema
} from '@stock-bot/config';
import { providersSchema } from './providers.schema';
import { featuresSchema } from './features.schema';

/**
 * Stock trading application configuration schema
 */
export const stockAppSchema = baseAppSchema.extend({
  // Stock app uses all databases
  database: z.object({
    postgres: postgresConfigSchema,
    mongodb: mongodbConfigSchema,
    questdb: questdbConfigSchema,
    dragonfly: dragonflyConfigSchema,
  }),

  // Stock-specific providers
  providers: providersSchema,

  // Feature flags
  features: featuresSchema,

  // Service-specific configurations
  services: z.object({
    dataIngestion: z.object({
      port: z.number().default(2001),
      workers: z.number().default(4),
      queues: z.record(z.object({
        concurrency: z.number().default(1),
      })).optional(),
      rateLimit: z.object({
        enabled: z.boolean().default(true),
        requestsPerSecond: z.number().default(10),
      }).optional(),
    }).optional(),
    dataPipeline: z.object({
      port: z.number().default(2002),
      workers: z.number().default(2),
      batchSize: z.number().default(1000),
      processingInterval: z.number().default(60000),
      queues: z.record(z.object({
        concurrency: z.number().default(1),
      })).optional(),
      syncOptions: z.object({
        maxRetries: z.number().default(3),
        retryDelay: z.number().default(5000),
        timeout: z.number().default(300000),
      }).optional(),
    }).optional(),
    webApi: z.object({
      port: z.number().default(2003),
      rateLimitPerMinute: z.number().default(60),
      cache: z.object({
        ttl: z.number().default(300),
        checkPeriod: z.number().default(60),
      }).optional(),
      cors: z.object({
        origins: z.array(z.string()).default(['http://localhost:3000']),
        credentials: z.boolean().default(true),
      }).optional(),
    }).optional(),
  }).optional(),
});

export type StockAppConfig = z.infer<typeof stockAppSchema>;

import { z } from 'zod';
import {
  baseAppSchema,
  dragonflyConfigSchema,
  mongodbConfigSchema,
  postgresConfigSchema,
  questdbConfigSchema,
} from '@stock-bot/config';
import { featuresSchema } from './features.schema';
import { providersSchema } from './providers.schema';

/**
 * Stock trading application configuration schema
 */
export const stockAppSchema = baseAppSchema.extend({
  // Stock app uses all databases
  database: z.object({
    postgres: postgresConfigSchema,
    mongodb: mongodbConfigSchema,
    questdb: questdbConfigSchema,
    dragonfly: dragonflyConfigSchema,
  }),

  // Stock-specific providers
  providers: providersSchema,

  // Feature flags
  features: featuresSchema,

  // Service-specific configurations
  services: z
    .object({
      dataIngestion: z
        .object({
          port: z.number().default(2001),
          workers: z.number().default(4),
          queues: z
            .record(
              z.object({
                concurrency: z.number().default(1),
              })
            )
            .optional(),
          rateLimit: z
            .object({
              enabled: z.boolean().default(true),
              requestsPerSecond: z.number().default(10),
            })
            .optional(),
        })
        .optional(),
      dataPipeline: z
        .object({
          port: z.number().default(2002),
          workers: z.number().default(2),
          batchSize: z.number().default(1000),
          processingInterval: z.number().default(60000),
          queues: z
            .record(
              z.object({
                concurrency: z.number().default(1),
              })
            )
            .optional(),
          syncOptions: z
            .object({
              maxRetries: z.number().default(3),
              retryDelay: z.number().default(5000),
              timeout: z.number().default(300000),
            })
            .optional(),
        })
        .optional(),
      webApi: z
        .object({
          port: z.number().default(2003),
          rateLimitPerMinute: z.number().default(60),
          cache: z
            .object({
              ttl: z.number().default(300),
              checkPeriod: z.number().default(60),
            })
            .optional(),
          cors: z
            .object({
              origins: z.array(z.string()).default(['http://localhost:3000']),
              credentials: z.boolean().default(true),
            })
            .optional(),
        })
        .optional(),
    })
    .optional(),
});

export type StockAppConfig = z.infer<typeof stockAppSchema>;
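The schema above leans on zod's `.default()` and `.optional()`: a present value passes through, a missing field gets its default, and a missing optional block stays `undefined`. A dependency-free sketch of those semantics (the helper names `withDefault` and `optional` are hypothetical stand-ins, not zod API):

```typescript
// withDefault fills in a missing value; optional passes undefined through.
const withDefault = <T>(def: T) => (v: T | undefined): T => (v === undefined ? def : v);
const optional = <I, O>(parse: (v: I) => O) => (v: I | undefined): O | undefined =>
  v === undefined ? undefined : parse(v);

// Mirrors services.dataIngestion: { port = 2001, workers = 4 }, block optional.
const parseDataIngestion = optional((raw: { port?: number; workers?: number }) => ({
  port: withDefault(2001)(raw.port),
  workers: withDefault(4)(raw.workers),
}));

console.log(parseDataIngestion({ port: 3000 })); // { port: 3000, workers: 4 }
console.log(parseDataIngestion(undefined)); // undefined — whole block omitted
```

This is why `webApi.cors` can be left out entirely while a partial `{ origins: [...] }` still gains `credentials: true`.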
@ -1,15 +1,13 @@
{
  "extends": "../../../tsconfig.json",
  "compilerOptions": {
    "outDir": "./dist",
    "rootDir": "./src",
    "composite": true,
    "declaration": true,
    "declarationMap": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist", "**/*.test.ts"],
  "references": [
    { "path": "../../../libs/core/config" }
  ]
}
{
  "extends": "../../../tsconfig.json",
  "compilerOptions": {
    "outDir": "./dist",
    "rootDir": "./src",
    "composite": true,
    "declaration": true,
    "declarationMap": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist", "**/*.test.ts"],
  "references": [{ "path": "../../../libs/core/config" }]
}
@ -95,10 +95,14 @@ export async function processIndividualSymbol(
        await this.mongodb.batchUpsert('ceoShorts', shortData.positions, ['id']);
      }

      await this.scheduleOperation('process-individual-symbol', {
        ceoId: ceoId,
        timestamp: latestSpielTime,
      }, {priority: 0});
      await this.scheduleOperation(
        'process-individual-symbol',
        {
          ceoId: ceoId,
          timestamp: latestSpielTime,
        },
        { priority: 0 }
      );
    }

    this.logger.info(
@ -31,10 +31,14 @@ export async function updateUniqueSymbols(
  let scheduledJobs = 0;
  for (const symbol of uniqueSymbols) {
    // Schedule a job to process this individual symbol
    await this.scheduleOperation('process-individual-symbol', {
      ceoId: symbol.ceoId,
      symbol: symbol.symbol,
    }, {priority: 10 });
    await this.scheduleOperation(
      'process-individual-symbol',
      {
        ceoId: symbol.ceoId,
        symbol: symbol.symbol,
      },
      { priority: 10 }
    );
    scheduledJobs++;

    // Add small delay to avoid overwhelming the queue
@ -1,6 +1,6 @@
import type { IServiceContainer } from '@stock-bot/handlers';
import { fetchSession } from './fetch-session.action';
import { fetchExchanges } from './fetch-exchanges.action';
import { fetchSession } from './fetch-session.action';
import { fetchSymbols } from './fetch-symbols.action';

export async function fetchExchangesAndSymbols(services: IServiceContainer): Promise<unknown> {
@ -38,5 +38,3 @@ export async function fetchExchangesAndSymbols(services: IServiceContainer): Pro
  };
}
}
@ -1,4 +1,4 @@
import type { IServiceContainer } from '@stock-bot/handlers';
import type { IServiceContainer } from '@stock-bot/types';
import { IB_CONFIG } from '../shared/config';
import { fetchSession } from './fetch-session.action';

@ -52,11 +52,15 @@ export async function fetchExchanges(services: IServiceContainer): Promise<unkno
    const exchanges = data?.exchanges || [];
    services.logger.info('✅ Exchange data fetched successfully');

    services.logger.info('Saving IB exchanges to MongoDB...');
    await services.mongodb.batchUpsert('ibExchanges', exchanges, ['id', 'country_code']);
    services.logger.info('✅ Exchange IB data saved to MongoDB:', {
      count: exchanges.length,
    });
    if (services.mongodb) {
      services.logger.info('Saving IB exchanges to MongoDB...');
      await services.mongodb.batchUpsert('ibExchanges', exchanges, ['id', 'country_code']);
      services.logger.info('✅ Exchange IB data saved to MongoDB:', {
        count: exchanges.length,
      });
    } else {
      services.logger.warn('MongoDB service not available, skipping data persistence');
    }

    return exchanges;
  } catch (error) {
@ -64,5 +68,3 @@ export async function fetchExchanges(services: IServiceContainer): Promise<unkno
    return null;
  }
}
@ -2,7 +2,9 @@ import { Browser } from '@stock-bot/browser';
import type { IServiceContainer } from '@stock-bot/handlers';
import { IB_CONFIG } from '../shared/config';

export async function fetchSession(services: IServiceContainer): Promise<Record<string, string> | undefined> {
export async function fetchSession(
  services: IServiceContainer
): Promise<Record<string, string> | undefined> {
  try {
    await Browser.initialize({
      headless: true,
@ -80,5 +82,3 @@ export async function fetchSession(services: IServiceContainer): Promise<Record<
    return;
  }
}
|||
|
|
@ -115,5 +115,3 @@ export async function fetchSymbols(services: IServiceContainer): Promise<unknown
|
|||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
|
|
|
|||
|
|
@ -2,4 +2,3 @@ export { fetchSession } from './fetch-session.action';
export { fetchExchanges } from './fetch-exchanges.action';
export { fetchSymbols } from './fetch-symbols.action';
export { fetchExchangesAndSymbols } from './fetch-exchanges-and-symbols.action';
@ -8,7 +8,7 @@ import {
import { fetchExchanges, fetchExchangesAndSymbols, fetchSession, fetchSymbols } from './actions';

@Handler('ib')
class IbHandler extends BaseHandler {
export class IbHandler extends BaseHandler {
  constructor(services: IServiceContainer) {
    super(services);
  }
@ -38,5 +38,3 @@ class IbHandler extends BaseHandler {
    return fetchExchangesAndSymbols(this);
  }
}
@ -21,4 +21,3 @@ export const IB_CONFIG = {
  PRODUCT_COUNTRIES: ['CA', 'US'],
  PRODUCT_TYPES: ['STK'],
};
@ -1,60 +1,48 @@
/**
 * Handler auto-registration
 * Automatically discovers and registers all handlers
 * Handler initialization for data-ingestion service
 * Uses explicit imports for bundling compatibility
 */

import type { IServiceContainer } from '@stock-bot/handlers';
import { autoRegisterHandlers } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
// Import handlers for bundling (ensures they're included in the build)
import './ceo/ceo.handler';
import './ib/ib.handler';
import './qm/qm.handler';
import './webshare/webshare.handler';
import type { IServiceContainer } from '@stock-bot/types';
// Import handlers explicitly for bundling (ensures they're included in the build)
// These imports trigger the decorator metadata to be set
import { CeoHandler } from './ceo/ceo.handler';
import { IbHandler } from './ib/ib.handler';
import { QMHandler } from './qm/qm.handler';
import { WebShareHandler } from './webshare/webshare.handler';

// Add more handler imports as needed

const logger = getLogger('handler-init');

/**
 * Initialize and register all handlers automatically
 * Initialize and register all handlers
 * Note: The actual registration is now handled by the HandlerScanner in the DI container
 * This function is kept for backward compatibility and explicit handler imports
 */
export async function initializeAllHandlers(serviceContainer: IServiceContainer): Promise<void> {
  try {
    // Auto-register all handlers in this directory
    const result = await autoRegisterHandlers(__dirname, serviceContainer, {
      pattern: '.handler.',
      exclude: ['test', 'spec'],
      dryRun: false,
      serviceName: 'data-ingestion',
    // The HandlerScanner in the DI container will handle the actual registration
    // We just need to ensure handlers are imported so their decorators run

    const handlers = [CeoHandler, IbHandler, QMHandler, WebShareHandler];

    logger.info('Handler imports loaded', {
      count: handlers.length,
      handlers: handlers.map(h => (h as any).__handlerName || h.name),
    });

    logger.info('Handler auto-registration complete', {
      registered: result.registered,
      failed: result.failed,
    });

    if (result.failed.length > 0) {
      logger.error('Some handlers failed to register', { failed: result.failed });
    // If the container has a handler scanner, we can manually register these
    const scanner = (serviceContainer as any).handlerScanner;
    if (scanner?.registerHandlerClass) {
      for (const HandlerClass of handlers) {
        scanner.registerHandlerClass(HandlerClass, { serviceName: 'data-ingestion' });
      }
      logger.info('Handlers registered with scanner');
    }
  } catch (error) {
    logger.error('Handler auto-registration failed', { error });
    // Fall back to manual registration
    await manualHandlerRegistration(serviceContainer);
  }
}

/**
 * Manual fallback registration
 */
async function manualHandlerRegistration(_serviceContainer: IServiceContainer): Promise<void> {
  logger.warn('Falling back to manual handler registration');

  try {

    logger.info('Manual handler registration complete');
  } catch (error) {
    logger.error('Manual handler registration failed', { error });
    logger.error('Handler initialization failed', { error });
    throw error;
  }
}
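The comments above hinge on one mechanism: importing a handler module runs its `@Handler` decorator, which registers the class as a side effect, so the scanner only sees handlers whose modules were actually imported. A dependency-free sketch of that pattern (the decorator is applied as a plain function call so the snippet runs without experimental-decorator flags; all names are hypothetical):

```typescript
// Global registry that a @Handler decorator populates as a side effect
// of the handler module being imported.
const handlerRegistry = new Map<string, new () => object>();

// Class decorator: tags the class with its handler name and records it.
function Handler(name: string) {
  return <T extends new () => object>(cls: T): T => {
    (cls as any).__handlerName = name;
    handlerRegistry.set(name, cls);
    return cls;
  };
}

// Stand-ins for ib.handler.ts / qm.handler.ts: defining + "decorating"
// the class is enough — no explicit registration call at the use site.
class IbHandler {}
Handler('ib')(IbHandler);

class QmHandler {}
Handler('qm')(QmHandler);

console.log([...handlerRegistry.keys()]); // registry now holds 'ib' and 'qm'
```

This is also why bundlers need the explicit imports: an unreferenced handler module gets tree-shaken, its decorator never runs, and the registry silently misses it.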
@ -15,12 +15,18 @@ interface QMExchange {

export async function fetchExchanges(services: IServiceContainer): Promise<QMExchange[]> {
  // Get exchanges from MongoDB
  const exchanges = await services.mongodb.collection<QMExchange>('qm_exchanges').find({}).toArray();
  const exchanges = await services.mongodb
    .collection<QMExchange>('qm_exchanges')
    .find({})
    .toArray();

  return exchanges;
}

export async function getExchangeByCode(services: IServiceContainer, code: string): Promise<QMExchange | null> {
export async function getExchangeByCode(
  services: IServiceContainer,
  code: string
): Promise<QMExchange | null> {
  // Get specific exchange by code
  const exchange = await services.mongodb.collection<QMExchange>('qm_exchanges').findOne({ code });
@ -16,12 +16,19 @@ interface QMSymbol {

export async function searchSymbols(services: IServiceContainer): Promise<QMSymbol[]> {
  // Get symbols from MongoDB
  const symbols = await services.mongodb.collection<QMSymbol>('qm_symbols').find({}).limit(50).toArray();
  const symbols = await services.mongodb
    .collection<QMSymbol>('qm_symbols')
    .find({})
    .limit(50)
    .toArray();

  return symbols;
}

export async function fetchSymbolData(services: IServiceContainer, symbol: string): Promise<QMSymbol | null> {
export async function fetchSymbolData(
  services: IServiceContainer,
  symbol: string
): Promise<QMSymbol | null> {
  // Fetch data for a specific symbol
  const symbolData = await services.mongodb.collection<QMSymbol>('qm_symbols').findOne({ symbol });
@ -1,7 +1,7 @@
import { BaseHandler, Handler, type IServiceContainer } from '@stock-bot/handlers';

@Handler('qm')
class QMHandler extends BaseHandler {
export class QMHandler extends BaseHandler {
  constructor(services: IServiceContainer) {
    super(services); // Handler name read from @Handler decorator
  }
@ -4,17 +4,18 @@ import {
  Operation,
  QueueSchedule,
  type ExecutionContext,
  type IServiceContainer
  type IServiceContainer,
} from '@stock-bot/handlers';

@Handler('webshare')
class WebShareHandler extends BaseHandler {
export class WebShareHandler extends BaseHandler {
  constructor(services: IServiceContainer) {
    super(services);
  }

  @Operation('fetch-proxies')
  @QueueSchedule('0 */6 * * *', { // every 6 hours
  @QueueSchedule('0 */6 * * *', {
    // every 6 hours
    priority: 3,
    immediately: false, // Don't run immediately since ProxyManager fetches on startup
    description: 'Refresh proxies from WebShare API',
@ -3,15 +3,12 @@
 * Simplified entry point using ServiceApplication framework
 */

import { initializeStockConfig, type StockAppConfig } from '@stock-bot/stock-config';
import {
  ServiceApplication,
} from '@stock-bot/di';
import { ServiceApplication } from '@stock-bot/di';
import { getLogger } from '@stock-bot/logger';

import { initializeStockConfig, type StockAppConfig } from '@stock-bot/stock-config';
import { createRoutes } from './routes/create-routes';
// Local imports
import { initializeAllHandlers } from './handlers';
import { createRoutes } from './routes/create-routes';

// Initialize configuration with service-specific overrides
const config = initializeStockConfig('dataIngestion');

@ -44,7 +41,7 @@ const app = new ServiceApplication(
  },
  {
    // Lifecycle hooks if needed
    onStarted: (_port) => {
    onStarted: _port => {
      const logger = getLogger('data-ingestion');
      logger.info('Data ingestion service startup initiated with ServiceApplication framework');
    },

@ -54,7 +51,7 @@ const app = new ServiceApplication(
// Container factory function
async function createContainer(config: StockAppConfig) {
  const { ServiceContainerBuilder } = await import('@stock-bot/di');

  const container = await new ServiceContainerBuilder()
    .withConfig(config)
    .withOptions({

@ -67,14 +64,13 @@ async function createContainer(config: StockAppConfig) {
      enableProxy: true, // Data ingestion needs proxy for rate limiting
    })
    .build(); // This automatically initializes services

  return container;
}

// Start the service
app.start(createContainer, createRoutes, initializeAllHandlers).catch(error => {
  const logger = getLogger('data-ingestion');
  logger.fatal('Failed to start data service', { error });
  process.exit(1);
});
});
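`createContainer` above relies on a fluent builder whose chained calls accumulate state and whose `build()` finalizes it (in the real DI package, `build()` also connects and initializes services). A minimal sketch of that shape (`build()` made synchronous here, and all names hypothetical; this is not the `@stock-bot/di` API):

```typescript
// Options the builder accumulates before build() finalizes the container.
interface ContainerOptions {
  enableProxy?: boolean;
  enableQueues?: boolean;
}

class MiniContainerBuilder {
  private config: Record<string, unknown> = {};
  private options: ContainerOptions = {};

  // Each step returns `this`, which is what makes the chaining work.
  withConfig(config: Record<string, unknown>): this {
    this.config = config;
    return this;
  }

  withOptions(options: ContainerOptions): this {
    this.options = { ...this.options, ...options };
    return this;
  }

  build() {
    // The real builder would connect databases/queues here before returning.
    return { config: this.config, options: this.options, initialized: true };
  }
}

const container = new MiniContainerBuilder()
  .withConfig({ environment: 'development' })
  .withOptions({ enableProxy: true })
  .build();

console.log(container.initialized); // true
```

Returning `this` from each `with*` method is the whole trick: the chain reads like configuration, and nothing observable happens until `build()`.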
@ -2,9 +2,9 @@
 * Market data routes
 */
import { Hono } from 'hono';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
import { processItems } from '@stock-bot/queue';
import type { IServiceContainer } from '@stock-bot/handlers';

const logger = getLogger('market-data-routes');

@ -22,7 +22,7 @@ export function createMarketDataRoutes(container: IServiceContainer) {
    if (!queueManager) {
      return c.json({ status: 'error', message: 'Queue manager not available' }, 503);
    }

    const queue = queueManager.getQueue('yahoo-finance');
    const job = await queue.add('live-data', {
      handler: 'yahoo-finance',

@ -57,7 +57,7 @@ export function createMarketDataRoutes(container: IServiceContainer) {
    if (!queueManager) {
      return c.json({ status: 'error', message: 'Queue manager not available' }, 503);
    }

    const queue = queueManager.getQueue('yahoo-finance');
    const job = await queue.add('historical-data', {
      handler: 'yahoo-finance',

@ -110,18 +110,23 @@ export function createMarketDataRoutes(container: IServiceContainer) {
    if (!queueManager) {
      return c.json({ status: 'error', message: 'Queue manager not available' }, 503);
    }

    const result = await processItems(symbols, provider, {
      handler: provider,
      operation,
      totalDelayHours,
      useBatching,
      batchSize,
      priority: 2,
      retries: 2,
      removeOnComplete: 5,
      removeOnFail: 10,
    }, queueManager);

    const result = await processItems(
      symbols,
      provider,
      {
        handler: provider,
        operation,
        totalDelayHours,
        useBatching,
        batchSize,
        priority: 2,
        retries: 2,
        removeOnComplete: 5,
        removeOnFail: 10,
      },
      queueManager
    );

    return c.json({
      status: 'success',

@ -139,4 +144,4 @@ export function createMarketDataRoutes(container: IServiceContainer) {
}

// Legacy export for backward compatibility
export const marketDataRoutes = createMarketDataRoutes({} as IServiceContainer);
export const marketDataRoutes = createMarketDataRoutes({} as IServiceContainer);
@ -1,6 +1,6 @@
import { Hono } from 'hono';
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';

const logger = getLogger('queue-routes');

@ -14,7 +14,7 @@ export function createQueueRoutes(container: IServiceContainer) {
    if (!queueManager) {
      return c.json({ status: 'error', message: 'Queue manager not available' }, 503);
    }

    const globalStats = await queueManager.getGlobalStats();

    return c.json({

@ -29,4 +29,4 @@ export function createQueueRoutes(container: IServiceContainer) {
  });

  return queue;
}
}
@ -1,34 +1,34 @@
/**
 * Service Container Setup for Data Pipeline
 * Configures dependency injection for the data pipeline service
 */

import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
import type { AppConfig } from '@stock-bot/config';

const logger = getLogger('data-pipeline-container');

/**
 * Configure the service container for data pipeline workloads
 */
export function setupServiceContainer(
  config: AppConfig,
  container: IServiceContainer
): IServiceContainer {
  logger.info('Configuring data pipeline service container...');

  // Data pipeline specific configuration
  // This service does more complex queries and transformations
  const poolSizes = {
    mongodb: config.environment === 'production' ? 40 : 20,
    postgres: config.environment === 'production' ? 50 : 25,
    cache: config.environment === 'production' ? 30 : 15,
  };

  logger.info('Data pipeline pool sizes configured', poolSizes);

  // The container is already configured with connections
  // Just return it with our logging
  return container;
}

/**
 * Service Container Setup for Data Pipeline
 * Configures dependency injection for the data pipeline service
 */

import type { AppConfig } from '@stock-bot/config';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';

const logger = getLogger('data-pipeline-container');

/**
 * Configure the service container for data pipeline workloads
 */
export function setupServiceContainer(
  config: AppConfig,
  container: IServiceContainer
): IServiceContainer {
  logger.info('Configuring data pipeline service container...');

  // Data pipeline specific configuration
  // This service does more complex queries and transformations
  const poolSizes = {
    mongodb: config.environment === 'production' ? 40 : 20,
    postgres: config.environment === 'production' ? 50 : 25,
    cache: config.environment === 'production' ? 30 : 15,
  };

  logger.info('Data pipeline pool sizes configured', poolSizes);

  // The container is already configured with connections
  // Just return it with our logging
  return container;
}
@ -1,111 +1,113 @@
import {
  BaseHandler,
  Handler,
  Operation,
  ScheduledOperation,
  type IServiceContainer,
} from '@stock-bot/handlers';
import { clearPostgreSQLData } from './operations/clear-postgresql-data.operations';
import { getSyncStatus } from './operations/enhanced-sync-status.operations';
import { getExchangeStats } from './operations/exchange-stats.operations';
import { getProviderMappingStats } from './operations/provider-mapping-stats.operations';
import { syncQMExchanges } from './operations/qm-exchanges.operations';
import { syncAllExchanges } from './operations/sync-all-exchanges.operations';
import { syncIBExchanges } from './operations/sync-ib-exchanges.operations';
import { syncQMProviderMappings } from './operations/sync-qm-provider-mappings.operations';

@Handler('exchanges')
class ExchangesHandler extends BaseHandler {
  constructor(services: IServiceContainer) {
    super(services);
  }

  /**
   * Sync all exchanges - weekly full sync
   */
  @Operation('sync-all-exchanges')
  @ScheduledOperation('sync-all-exchanges', '0 0 * * 0', {
    priority: 10,
    description: 'Weekly full exchange sync on Sunday at midnight',
  })
  async syncAllExchanges(payload?: { clearFirst?: boolean }): Promise<unknown> {
    const finalPayload = payload || { clearFirst: true };
    this.log('info', 'Starting sync of all exchanges', finalPayload);
    return syncAllExchanges(finalPayload, this.services);
  }

  /**
   * Sync exchanges from QuestionsAndMethods
   */
  @Operation('sync-qm-exchanges')
  @ScheduledOperation('sync-qm-exchanges', '0 1 * * *', {
    priority: 5,
    description: 'Daily sync of QM exchanges at 1 AM',
  })
  async syncQMExchanges(): Promise<unknown> {
    this.log('info', 'Starting QM exchanges sync...');
    return syncQMExchanges({}, this.services);
  }

  /**
   * Sync exchanges from Interactive Brokers
   */
  @Operation('sync-ib-exchanges')
  @ScheduledOperation('sync-ib-exchanges', '0 3 * * *', {
    priority: 3,
    description: 'Daily sync of IB exchanges at 3 AM',
  })
  async syncIBExchanges(): Promise<unknown> {
    this.log('info', 'Starting IB exchanges sync...');
    return syncIBExchanges({}, this.services);
  }

  /**
   * Sync provider mappings from QuestionsAndMethods
   */
  @Operation('sync-qm-provider-mappings')
  @ScheduledOperation('sync-qm-provider-mappings', '0 3 * * *', {
    priority: 7,
    description: 'Daily sync of QM provider mappings at 3 AM',
  })
  async syncQMProviderMappings(): Promise<unknown> {
    this.log('info', 'Starting QM provider mappings sync...');
    return syncQMProviderMappings({}, this.services);
  }

  /**
   * Clear PostgreSQL data - maintenance operation
   */
  @Operation('clear-postgresql-data')
  async clearPostgreSQLData(payload: { type?: 'exchanges' | 'provider_mappings' | 'all' }): Promise<unknown> {
    this.log('warn', 'Clearing PostgreSQL data', payload);
    return clearPostgreSQLData(payload, this.services);
  }

  /**
   * Get exchange statistics
   */
  @Operation('get-exchange-stats')
  async getExchangeStats(): Promise<unknown> {
    this.log('info', 'Getting exchange statistics...');
    return getExchangeStats({}, this.services);
  }

  /**
   * Get provider mapping statistics
   */
  @Operation('get-provider-mapping-stats')
  async getProviderMappingStats(): Promise<unknown> {
    this.log('info', 'Getting provider mapping statistics...');
    return getProviderMappingStats({}, this.services);
  }

  /**
   * Get enhanced sync status
   */
  @Operation('enhanced-sync-status')
  async getEnhancedSyncStatus(): Promise<unknown> {
    this.log('info', 'Getting enhanced sync status...');
    return getSyncStatus({}, this.services);
  }
}

import {
  BaseHandler,
  Handler,
  Operation,
  ScheduledOperation,
} from '@stock-bot/handlers';
import type { IServiceContainer } from '@stock-bot/types';
import { clearPostgreSQLData } from './operations/clear-postgresql-data.operations';
import { getSyncStatus } from './operations/enhanced-sync-status.operations';
import { getExchangeStats } from './operations/exchange-stats.operations';
import { getProviderMappingStats } from './operations/provider-mapping-stats.operations';
import { syncQMExchanges } from './operations/qm-exchanges.operations';
import { syncAllExchanges } from './operations/sync-all-exchanges.operations';
import { syncIBExchanges } from './operations/sync-ib-exchanges.operations';
import { syncQMProviderMappings } from './operations/sync-qm-provider-mappings.operations';

@Handler('exchanges')
class ExchangesHandler extends BaseHandler {
  constructor(services: IServiceContainer) {
    super(services);
  }

  /**
   * Sync all exchanges - weekly full sync
   */
  @Operation('sync-all-exchanges')
  @ScheduledOperation('sync-all-exchanges', '0 0 * * 0', {
    priority: 10,
    description: 'Weekly full exchange sync on Sunday at midnight',
  })
  async syncAllExchanges(payload?: { clearFirst?: boolean }): Promise<unknown> {
    const finalPayload = payload || { clearFirst: true };
    this.log('info', 'Starting sync of all exchanges', finalPayload);
    return syncAllExchanges(finalPayload, this.services);
  }

  /**
   * Sync exchanges from QuestionsAndMethods
   */
  @Operation('sync-qm-exchanges')
  @ScheduledOperation('sync-qm-exchanges', '0 1 * * *', {
    priority: 5,
    description: 'Daily sync of QM exchanges at 1 AM',
  })
  async syncQMExchanges(): Promise<unknown> {
    this.log('info', 'Starting QM exchanges sync...');
    return syncQMExchanges({}, this.services);
  }

  /**
   * Sync exchanges from Interactive Brokers
   */
  @Operation('sync-ib-exchanges')
  @ScheduledOperation('sync-ib-exchanges', '0 3 * * *', {
    priority: 3,
    description: 'Daily sync of IB exchanges at 3 AM',
  })
  async syncIBExchanges(): Promise<unknown> {
    this.log('info', 'Starting IB exchanges sync...');
    return syncIBExchanges({}, this.services);
  }

  /**
   * Sync provider mappings from QuestionsAndMethods
   */
  @Operation('sync-qm-provider-mappings')
  @ScheduledOperation('sync-qm-provider-mappings', '0 3 * * *', {
    priority: 7,
    description: 'Daily sync of QM provider mappings at 3 AM',
  })
  async syncQMProviderMappings(): Promise<unknown> {
    this.log('info', 'Starting QM provider mappings sync...');
    return syncQMProviderMappings({}, this.services);
  }

  /**
   * Clear PostgreSQL data - maintenance operation
   */
  @Operation('clear-postgresql-data')
  async clearPostgreSQLData(payload: {
    type?: 'exchanges' | 'provider_mappings' | 'all';
  }): Promise<unknown> {
    this.log('warn', 'Clearing PostgreSQL data', payload);
    return clearPostgreSQLData(payload, this.services);
  }

  /**
   * Get exchange statistics
   */
  @Operation('get-exchange-stats')
  async getExchangeStats(): Promise<unknown> {
    this.log('info', 'Getting exchange statistics...');
    return getExchangeStats({}, this.services);
  }

  /**
   * Get provider mapping statistics
   */
  @Operation('get-provider-mapping-stats')
  async getProviderMappingStats(): Promise<unknown> {
    this.log('info', 'Getting provider mapping statistics...');
    return getProviderMappingStats({}, this.services);
  }

  /**
   * Get enhanced sync status
   */
  @Operation('enhanced-sync-status')
  async getEnhancedSyncStatus(): Promise<unknown> {
    this.log('info', 'Getting enhanced sync status...');
||||
return getSyncStatus({}, this.services);
|
||||
}
|
||||
}
|
||||
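A hypothetical sketch of what `@Handler`/`@Operation`-style registration boils down to: a name-to-async-function map that a queue worker can dispatch on. `OperationRegistry` and the sample operation below are illustrative only, not the actual `@stock-bot/handlers` implementation.

```typescript
// Illustrative only: a minimal name -> async function registry, the shape
// that decorator-based operation registration conceptually produces.
type OperationFn = (payload?: unknown) => Promise<unknown>;

class OperationRegistry {
  private readonly ops = new Map<string, OperationFn>();

  register(name: string, fn: OperationFn): void {
    this.ops.set(name, fn);
  }

  dispatch(name: string, payload?: unknown): Promise<unknown> {
    const fn = this.ops.get(name);
    // Reject rather than throw synchronously so callers can uniformly .catch()
    if (!fn) return Promise.reject(new Error(`Unknown operation: ${name}`));
    return fn(payload);
  }
}

const registry = new OperationRegistry();
registry.register('get-exchange-stats', async () => ({ exchanges: 42 }));

registry.dispatch('get-exchange-stats').then(stats => console.log(stats));
```

Keeping dispatch keyed by string names is what lets a queue job like `addJob('get-exchange-stats', …)` reach the right handler method without importing it directly.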
@@ -1,5 +1,5 @@
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
import type { JobPayload } from '../../../types/job-payloads';

const logger = getLogger('enhanced-sync-clear-postgresql-data');

@@ -1,5 +1,5 @@
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
import type { JobPayload, SyncStatus } from '../../../types/job-payloads';

const logger = getLogger('enhanced-sync-status');

@@ -1,5 +1,5 @@
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
import type { JobPayload } from '../../../types/job-payloads';

const logger = getLogger('enhanced-sync-exchange-stats');

@@ -1,5 +1,5 @@
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
import type { JobPayload } from '../../../types/job-payloads';

const logger = getLogger('enhanced-sync-provider-mapping-stats');
@@ -1,5 +1,5 @@
import type { IServiceContainer } from '@stock-bot/types';
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';
import type { JobPayload } from '../../../types/job-payloads';

const logger = getLogger('sync-qm-exchanges');

@@ -62,7 +62,10 @@ interface Exchange {
  visible: boolean;
}

async function findExchange(exchangeCode: string, postgresClient: IServiceContainer['postgres']): Promise<Exchange | null> {
async function findExchange(
  exchangeCode: string,
  postgresClient: IServiceContainer['postgres']
): Promise<Exchange | null> {
  const query = 'SELECT * FROM exchanges WHERE code = $1';
  const result = await postgresClient.query(query, [exchangeCode]);
  return result.rows[0] || null;

@@ -76,7 +79,10 @@ interface QMExchange {
  countryCode?: string;
}

async function createExchange(qmExchange: QMExchange, postgresClient: IServiceContainer['postgres']): Promise<void> {
async function createExchange(
  qmExchange: QMExchange,
  postgresClient: IServiceContainer['postgres']
): Promise<void> {
  const query = `
    INSERT INTO exchanges (code, name, country, currency, visible)
    VALUES ($1, $2, $3, $4, $5)
@@ -1,10 +1,13 @@
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
import type { JobPayload, SyncResult } from '../../../types/job-payloads';

const logger = getLogger('enhanced-sync-all-exchanges');

export async function syncAllExchanges(payload: JobPayload, container: IServiceContainer): Promise<SyncResult> {
export async function syncAllExchanges(
  payload: JobPayload,
  container: IServiceContainer
): Promise<SyncResult> {
  const clearFirst = payload.clearFirst ?? true;
  logger.info('Starting comprehensive exchange sync...', { clearFirst });

@@ -50,7 +53,6 @@ export async function syncAllExchanges(payload: JobPayload, container: IServiceC
  }
}

async function clearPostgreSQLData(postgresClient: any): Promise<void> {
  logger.info('Clearing existing PostgreSQL data...');

@@ -141,7 +143,11 @@ async function createProviderExchangeMapping(
  const postgresClient = container.postgres;

  // Check if mapping already exists
  const existingMapping = await findProviderExchangeMapping(provider, providerExchangeCode, container);
  const existingMapping = await findProviderExchangeMapping(
    provider,
    providerExchangeCode,
    container
  );
  if (existingMapping) {
    // Don't override existing mappings to preserve manual work
    return;
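The `clearFirst` default in `syncAllExchanges` is a classic spot for the `||` vs `??` pitfall: `||` turns an explicit `false` into the default `true`, while nullish coalescing only fills in a missing value. A minimal sketch (the function names here are illustrative, not from the codebase):

```typescript
// `||` vs `??` when defaulting an optional boolean flag.
function clearFirstWithOr(clearFirst?: boolean): boolean {
  return clearFirst || true; // any falsy value, including false, becomes true
}

function clearFirstWithNullish(clearFirst?: boolean): boolean {
  return clearFirst ?? true; // only null/undefined fall back to the default
}

console.log(clearFirstWithOr(false));          // true — caller's `false` is lost
console.log(clearFirstWithNullish(false));     // false
console.log(clearFirstWithNullish(undefined)); // true
```

For a destructive flag like "clear the table before syncing", respecting an explicit `false` matters, which is why `??` is the safer default operator here.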
@@ -1,6 +1,6 @@
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
import type { MasterExchange } from '@stock-bot/mongodb';
import type { IServiceContainer } from '@stock-bot/handlers';
import type { JobPayload } from '../../../types/job-payloads';

const logger = getLogger('sync-ib-exchanges');

@@ -65,7 +65,10 @@ export async function syncIBExchanges(
/**
 * Create or update master exchange record 1:1 from IB exchange
 */
async function createOrUpdateMasterExchange(ibExchange: IBExchange, container: IServiceContainer): Promise<void> {
async function createOrUpdateMasterExchange(
  ibExchange: IBExchange,
  container: IServiceContainer
): Promise<void> {
  const mongoClient = container.mongodb;
  const db = mongoClient.getDatabase();
  const collection = db.collection<MasterExchange>('masterExchanges');
@@ -1,5 +1,5 @@
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
import type { JobPayload, SyncResult } from '../../../types/job-payloads';

const logger = getLogger('enhanced-sync-qm-provider-mappings');

@@ -86,7 +86,6 @@ export async function syncQMProviderMappings(
  }
}

async function createProviderExchangeMapping(
  provider: string,
  providerExchangeCode: string,

@@ -103,7 +102,11 @@ async function createProviderExchangeMapping(
  const postgresClient = container.postgres;

  // Check if mapping already exists
  const existingMapping = await findProviderExchangeMapping(provider, providerExchangeCode, container);
  const existingMapping = await findProviderExchangeMapping(
    provider,
    providerExchangeCode,
    container
  );
  if (existingMapping) {
    // Don't override existing mappings to preserve manual work
    return;
@@ -1,42 +1,41 @@
/**
 * Handler auto-registration for data pipeline service
 * Automatically discovers and registers all handlers
 */

import type { IServiceContainer } from '@stock-bot/handlers';
import { autoRegisterHandlers } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';

// Import handlers for bundling (ensures they're included in the build)
import './exchanges/exchanges.handler';
import './symbols/symbols.handler';

const logger = getLogger('pipeline-handler-init');

/**
 * Initialize and register all handlers automatically
 */
export async function initializeAllHandlers(container: IServiceContainer): Promise<void> {
  logger.info('Initializing data pipeline handlers...');

  try {
    // Auto-register all handlers in this directory
    const result = await autoRegisterHandlers(__dirname, container, {
      pattern: '.handler.',
      exclude: ['test', 'spec', '.old'],
      dryRun: false,
    });

    logger.info('Handler auto-registration complete', {
      registered: result.registered,
      failed: result.failed,
    });

    if (result.failed.length > 0) {
      logger.error('Some handlers failed to register', { failed: result.failed });
    }
  } catch (error) {
    logger.error('Handler auto-registration failed', { error });
    throw error;
  }
}
/**
 * Handler auto-registration for data pipeline service
 * Automatically discovers and registers all handlers
 */

import type { IServiceContainer } from '@stock-bot/handlers';
import { autoRegisterHandlers } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
// Import handlers for bundling (ensures they're included in the build)
import './exchanges/exchanges.handler';
import './symbols/symbols.handler';

const logger = getLogger('pipeline-handler-init');

/**
 * Initialize and register all handlers automatically
 */
export async function initializeAllHandlers(container: IServiceContainer): Promise<void> {
  logger.info('Initializing data pipeline handlers...');

  try {
    // Auto-register all handlers in this directory
    const result = await autoRegisterHandlers(__dirname, container, {
      pattern: '.handler.',
      exclude: ['test', 'spec', '.old'],
      dryRun: false,
    });

    logger.info('Handler auto-registration complete', {
      registered: result.registered,
      failed: result.failed,
    });

    if (result.failed.length > 0) {
      logger.error('Some handlers failed to register', { failed: result.failed });
    }
  } catch (error) {
    logger.error('Handler auto-registration failed', { error });
    throw error;
  }
}
@@ -1,5 +1,5 @@
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
import type { JobPayload } from '../../../types/job-payloads';

const logger = getLogger('sync-qm-symbols');

@@ -1,5 +1,5 @@
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
import type { JobPayload } from '../../../types/job-payloads';

const logger = getLogger('sync-status');
@@ -1,5 +1,5 @@
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
import type { JobPayload, SyncResult } from '../../../types/job-payloads';

const logger = getLogger('enhanced-sync-symbols-from-provider');

@@ -104,7 +104,11 @@ async function processSingleSymbol(
  }

  // Find active provider exchange mapping
  const providerMapping = await findActiveProviderExchangeMapping(provider, exchangeCode, container);
  const providerMapping = await findActiveProviderExchangeMapping(
    provider,
    exchangeCode,
    container
  );

  if (!providerMapping) {
    result.skipped++;

@@ -145,14 +149,22 @@ async function findActiveProviderExchangeMapping(
  return result.rows[0] || null;
}

async function findSymbolByCodeAndExchange(symbol: string, exchangeId: string, container: IServiceContainer): Promise<any> {
async function findSymbolByCodeAndExchange(
  symbol: string,
  exchangeId: string,
  container: IServiceContainer
): Promise<any> {
  const postgresClient = container.postgres;
  const query = 'SELECT * FROM symbols WHERE symbol = $1 AND exchange_id = $2';
  const result = await postgresClient.query(query, [symbol, exchangeId]);
  return result.rows[0] || null;
}

async function createSymbol(symbol: any, exchangeId: string, container: IServiceContainer): Promise<string> {
async function createSymbol(
  symbol: any,
  exchangeId: string,
  container: IServiceContainer
): Promise<string> {
  const postgresClient = container.postgres;
  const query = `
    INSERT INTO symbols (symbol, exchange_id, company_name, country, currency)

@@ -171,7 +183,11 @@ async function createSymbol(symbol: any, exchangeId: string, container: IService
  return result.rows[0].id;
}

async function updateSymbol(symbolId: string, symbol: any, container: IServiceContainer): Promise<void> {
async function updateSymbol(
  symbolId: string,
  symbol: any,
  container: IServiceContainer
): Promise<void> {
  const postgresClient = container.postgres;
  const query = `
    UPDATE symbols
@@ -1,68 +1,71 @@
import {
  BaseHandler,
  Handler,
  Operation,
  ScheduledOperation,
  type IServiceContainer,
} from '@stock-bot/handlers';
import { syncQMSymbols } from './operations/qm-symbols.operations';
import { syncSymbolsFromProvider } from './operations/sync-symbols-from-provider.operations';
import { getSyncStatus } from './operations/sync-status.operations';

@Handler('symbols')
class SymbolsHandler extends BaseHandler {
  constructor(services: IServiceContainer) {
    super(services);
  }

  /**
   * Sync symbols from QuestionsAndMethods API
   */
  @ScheduledOperation('sync-qm-symbols', '0 2 * * *', {
    priority: 5,
    description: 'Daily sync of QM symbols at 2 AM',
  })
  async syncQMSymbols(): Promise<{ processed: number; created: number; updated: number }> {
    this.log('info', 'Starting QM symbols sync...');
    return syncQMSymbols({}, this.services);
  }

  /**
   * Sync symbols from specific provider
   */
  @Operation('sync-symbols-qm')
  @ScheduledOperation('sync-symbols-qm', '0 4 * * *', {
    priority: 5,
    description: 'Daily sync of symbols from QM provider at 4 AM',
  })
  async syncSymbolsQM(): Promise<unknown> {
    return this.syncSymbolsFromProvider({ provider: 'qm', clearFirst: false });
  }

  @Operation('sync-symbols-eod')
  async syncSymbolsEOD(payload: { provider: string; clearFirst?: boolean }): Promise<unknown> {
    return this.syncSymbolsFromProvider({ ...payload, provider: 'eod' });
  }

  @Operation('sync-symbols-ib')
  async syncSymbolsIB(payload: { provider: string; clearFirst?: boolean }): Promise<unknown> {
    return this.syncSymbolsFromProvider({ ...payload, provider: 'ib' });
  }

  /**
   * Get sync status for symbols
   */
  @Operation('sync-status')
  async getSyncStatus(): Promise<unknown> {
    this.log('info', 'Getting symbol sync status...');
    return getSyncStatus({}, this.services);
  }

  /**
   * Internal method to sync symbols from a provider
   */
  private async syncSymbolsFromProvider(payload: { provider: string; clearFirst?: boolean }): Promise<unknown> {
    this.log('info', 'Syncing symbols from provider', { provider: payload.provider });
    return syncSymbolsFromProvider(payload, this.services);
  }
}
import {
  BaseHandler,
  Handler,
  Operation,
  ScheduledOperation,
  type IServiceContainer,
} from '@stock-bot/handlers';
import { syncQMSymbols } from './operations/qm-symbols.operations';
import { getSyncStatus } from './operations/sync-status.operations';
import { syncSymbolsFromProvider } from './operations/sync-symbols-from-provider.operations';

@Handler('symbols')
class SymbolsHandler extends BaseHandler {
  constructor(services: IServiceContainer) {
    super(services);
  }

  /**
   * Sync symbols from QuestionsAndMethods API
   */
  @ScheduledOperation('sync-qm-symbols', '0 2 * * *', {
    priority: 5,
    description: 'Daily sync of QM symbols at 2 AM',
  })
  async syncQMSymbols(): Promise<{ processed: number; created: number; updated: number }> {
    this.log('info', 'Starting QM symbols sync...');
    return syncQMSymbols({}, this.services);
  }

  /**
   * Sync symbols from specific provider
   */
  @Operation('sync-symbols-qm')
  @ScheduledOperation('sync-symbols-qm', '0 4 * * *', {
    priority: 5,
    description: 'Daily sync of symbols from QM provider at 4 AM',
  })
  async syncSymbolsQM(): Promise<unknown> {
    return this.syncSymbolsFromProvider({ provider: 'qm', clearFirst: false });
  }

  @Operation('sync-symbols-eod')
  async syncSymbolsEOD(payload: { provider: string; clearFirst?: boolean }): Promise<unknown> {
    return this.syncSymbolsFromProvider({ ...payload, provider: 'eod' });
  }

  @Operation('sync-symbols-ib')
  async syncSymbolsIB(payload: { provider: string; clearFirst?: boolean }): Promise<unknown> {
    return this.syncSymbolsFromProvider({ ...payload, provider: 'ib' });
  }

  /**
   * Get sync status for symbols
   */
  @Operation('sync-status')
  async getSyncStatus(): Promise<unknown> {
    this.log('info', 'Getting symbol sync status...');
    return getSyncStatus({}, this.services);
  }

  /**
   * Internal method to sync symbols from a provider
   */
  private async syncSymbolsFromProvider(payload: {
    provider: string;
    clearFirst?: boolean;
  }): Promise<unknown> {
    this.log('info', 'Syncing symbols from provider', { provider: payload.provider });
    return syncSymbolsFromProvider(payload, this.services);
  }
}
@@ -3,14 +3,13 @@
 * Simplified entry point using ServiceApplication framework
 */

import { initializeStockConfig } from '@stock-bot/stock-config';
import { ServiceApplication } from '@stock-bot/di';
import { getLogger } from '@stock-bot/logger';

// Local imports
import { initializeAllHandlers } from './handlers';
import { initializeStockConfig } from '@stock-bot/stock-config';
import { createRoutes } from './routes/create-routes';
import { setupServiceContainer } from './container-setup';
// Local imports
import { initializeAllHandlers } from './handlers';

// Initialize configuration with service-specific overrides
const config = initializeStockConfig('dataPipeline');

@@ -43,12 +42,12 @@ const app = new ServiceApplication(
  },
  {
    // Custom lifecycle hooks
    onContainerReady: (container) => {
    onContainerReady: container => {
      // Setup service-specific configuration
      const enhancedContainer = setupServiceContainer(config, container);
      return enhancedContainer;
    },
    onStarted: (_port) => {
    onStarted: _port => {
      const logger = getLogger('data-pipeline');
      logger.info('Data pipeline service startup initiated with ServiceApplication framework');
    },

@@ -59,7 +58,7 @@ const app = new ServiceApplication(
async function createContainer(config: any) {
  const { ServiceContainerBuilder } = await import('@stock-bot/di');
  const builder = new ServiceContainerBuilder();

  const container = await builder
    .withConfig(config)
    .withOptions({

@@ -74,7 +73,7 @@ async function createContainer(config: any) {
      skipInitialization: false, // Let builder handle initialization
    })
    .build();

  return container;
}

@@ -83,4 +82,4 @@ app.start(createContainer, createRoutes, initializeAllHandlers).catch(error => {
  const logger = getLogger('data-pipeline');
  logger.fatal('Failed to start data pipeline service', { error });
  process.exit(1);
});
});
@@ -1,29 +1,29 @@
/**
 * Route factory for data pipeline service
 * Creates routes with access to the service container
 */

import { Hono } from 'hono';
import type { IServiceContainer } from '@stock-bot/handlers';
import { healthRoutes } from './health.routes';
import { createSyncRoutes } from './sync.routes';
import { createEnhancedSyncRoutes } from './enhanced-sync.routes';
import { createStatsRoutes } from './stats.routes';

export function createRoutes(container: IServiceContainer): Hono {
  const app = new Hono();

  // Add container to context for all routes
  app.use('*', async (c, next) => {
    c.set('container', container);
    await next();
  });

  // Mount routes
  app.route('/health', healthRoutes);
  app.route('/sync', createSyncRoutes(container));
  app.route('/sync', createEnhancedSyncRoutes(container));
  app.route('/sync/stats', createStatsRoutes(container));

  return app;
}
/**
 * Route factory for data pipeline service
 * Creates routes with access to the service container
 */

import { Hono } from 'hono';
import type { IServiceContainer } from '@stock-bot/handlers';
import { createEnhancedSyncRoutes } from './enhanced-sync.routes';
import { healthRoutes } from './health.routes';
import { createStatsRoutes } from './stats.routes';
import { createSyncRoutes } from './sync.routes';

export function createRoutes(container: IServiceContainer): Hono {
  const app = new Hono();

  // Add container to context for all routes
  app.use('*', async (c, next) => {
    c.set('container', container);
    await next();
  });

  // Mount routes
  app.route('/health', healthRoutes);
  app.route('/sync', createSyncRoutes(container));
  app.route('/sync', createEnhancedSyncRoutes(container));
  app.route('/sync/stats', createStatsRoutes(container));

  return app;
}
@@ -1,6 +1,6 @@
import { Hono } from 'hono';
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';

const logger = getLogger('enhanced-sync-routes');

@@ -15,7 +15,7 @@ export function createEnhancedSyncRoutes(container: IServiceContainer) {
    if (!queueManager) {
      return c.json({ success: false, error: 'Queue manager not available' }, 503);
    }

    const exchangesQueue = queueManager.getQueue('exchanges');
    const job = await exchangesQueue.addJob('sync-all-exchanges', {

@@ -40,7 +40,7 @@ export function createEnhancedSyncRoutes(container: IServiceContainer) {
    if (!queueManager) {
      return c.json({ success: false, error: 'Queue manager not available' }, 503);
    }

    const exchangesQueue = queueManager.getQueue('exchanges');
    const job = await exchangesQueue.addJob('sync-qm-provider-mappings', {

@@ -69,7 +69,7 @@ export function createEnhancedSyncRoutes(container: IServiceContainer) {
    if (!queueManager) {
      return c.json({ success: false, error: 'Queue manager not available' }, 503);
    }

    const exchangesQueue = queueManager.getQueue('exchanges');
    const job = await exchangesQueue.addJob('sync-ib-exchanges', {

@@ -98,7 +98,7 @@ export function createEnhancedSyncRoutes(container: IServiceContainer) {
    if (!queueManager) {
      return c.json({ success: false, error: 'Queue manager not available' }, 503);
    }

    const symbolsQueue = queueManager.getQueue('symbols');
    const job = await symbolsQueue.addJob('sync-status', {

@@ -124,7 +124,7 @@ export function createEnhancedSyncRoutes(container: IServiceContainer) {
    if (!queueManager) {
      return c.json({ success: false, error: 'Queue manager not available' }, 503);
    }

    const exchangesQueue = queueManager.getQueue('exchanges');
    const job = await exchangesQueue.addJob('clear-postgresql-data', {

@@ -148,4 +148,4 @@ export function createEnhancedSyncRoutes(container: IServiceContainer) {
  });

  return enhancedSync;
}
}
@@ -1,6 +1,6 @@
import { Hono } from 'hono';
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';

const logger = getLogger('stats-routes');

@@ -14,7 +14,7 @@ export function createStatsRoutes(container: IServiceContainer) {
    if (!queueManager) {
      return c.json({ error: 'Queue manager not available' }, 503);
    }

    const exchangesQueue = queueManager.getQueue('exchanges');
    const job = await exchangesQueue.addJob('get-exchange-stats', {

@@ -38,7 +38,7 @@ export function createStatsRoutes(container: IServiceContainer) {
    if (!queueManager) {
      return c.json({ error: 'Queue manager not available' }, 503);
    }

    const exchangesQueue = queueManager.getQueue('exchanges');
    const job = await exchangesQueue.addJob('get-provider-mapping-stats', {

@@ -57,4 +57,4 @@ export function createStatsRoutes(container: IServiceContainer) {
  });

  return stats;
}
}
@@ -1,6 +1,6 @@
import { Hono } from 'hono';
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';

const logger = getLogger('sync-routes');

@@ -14,7 +14,7 @@ export function createSyncRoutes(container: IServiceContainer) {
    if (!queueManager) {
      return c.json({ success: false, error: 'Queue manager not available' }, 503);
    }

    const symbolsQueue = queueManager.getQueue('symbols');
    const job = await symbolsQueue.addJob('sync-qm-symbols', {

@@ -39,7 +39,7 @@ export function createSyncRoutes(container: IServiceContainer) {
    if (!queueManager) {
      return c.json({ success: false, error: 'Queue manager not available' }, 503);
    }

    const exchangesQueue = queueManager.getQueue('exchanges');
    const job = await exchangesQueue.addJob('sync-qm-exchanges', {

@@ -65,7 +65,7 @@ export function createSyncRoutes(container: IServiceContainer) {
    if (!queueManager) {
      return c.json({ success: false, error: 'Queue manager not available' }, 503);
    }

    const symbolsQueue = queueManager.getQueue('symbols');
    const job = await symbolsQueue.addJob('sync-symbols-from-provider', {

@@ -89,4 +89,4 @@ export function createSyncRoutes(container: IServiceContainer) {
  });

  return sync;
}
}
@ -1,91 +1,81 @@
|
|||
{
|
||||
"name": "@stock-bot/stock-app",
|
||||
"version": "1.0.0",
|
||||
"private": true,
|
||||
"description": "Stock trading bot application",
|
||||
"scripts": {
|
||||
"dev": "turbo run dev",
|
||||
"dev:ingestion": "cd data-ingestion && bun run dev",
|
||||
"dev:pipeline": "cd data-pipeline && bun run dev",
|
||||
"dev:api": "cd web-api && bun run dev",
|
||||
"dev:web": "cd web-app && bun run dev",
|
||||
"dev:backend": "turbo run dev --filter=\"@stock-bot/data-*\" --filter=\"@stock-bot/web-api\"",
|
||||
"dev:frontend": "turbo run dev --filter=\"@stock-bot/web-app\"",
|
||||
|
||||
"build": "turbo run build",
|
||||
"build:config": "cd config && bun run build",
|
||||
"build:services": "turbo run build --filter=\"@stock-bot/data-*\" --filter=\"@stock-bot/web-*\"",
|
||||
"build:ingestion": "cd data-ingestion && bun run build",
|
||||
"build:pipeline": "cd data-pipeline && bun run build",
|
||||
"build:api": "cd web-api && bun run build",
|
||||
"build:web": "cd web-app && bun run build",
|
||||
|
||||
"start": "turbo run start --filter=\"@stock-bot/data-*\" --filter=\"@stock-bot/web-api\"",
|
||||
"start:all": "turbo run start",
|
||||
"start:ingestion": "cd data-ingestion && bun start",
|
||||
"start:pipeline": "cd data-pipeline && bun start",
|
||||
"start:api": "cd web-api && bun start",
|
||||
|
||||
"clean": "turbo run clean",
|
||||
"clean:all": "turbo run clean && rm -rf node_modules",
|
||||
"clean:ingestion": "cd data-ingestion && rm -rf dist node_modules",
|
||||
"clean:pipeline": "cd data-pipeline && rm -rf dist node_modules",
|
||||
"clean:api": "cd web-api && rm -rf dist node_modules",
|
||||
"clean:web": "cd web-app && rm -rf dist node_modules",
|
||||
"clean:config": "cd config && rm -rf dist node_modules",
|
||||
|
||||
"test": "turbo run test",
|
||||
"test:all": "turbo run test",
|
||||
"test:config": "cd config && bun test",
|
||||
"test:services": "turbo run test --filter=\"@stock-bot/data-*\" --filter=\"@stock-bot/web-*\"",
|
||||
"test:ingestion": "cd data-ingestion && bun test",
|
||||
"test:pipeline": "cd data-pipeline && bun test",
|
||||
"test:api": "cd web-api && bun test",
|
||||
|
||||
"lint": "turbo run lint",
|
||||
"lint:all": "turbo run lint",
|
||||
"lint:config": "cd config && bun run lint",
|
||||
"lint:services": "turbo run lint --filter=\"@stock-bot/data-*\" --filter=\"@stock-bot/web-*\"",
|
||||
"lint:ingestion": "cd data-ingestion && bun run lint",
|
||||
"lint:pipeline": "cd data-pipeline && bun run lint",
|
||||
"lint:api": "cd web-api && bun run lint",
|
||||
"lint:web": "cd web-app && bun run lint",
|
||||
|
||||
"install:all": "bun install",
|
||||
|
||||
"docker:build": "docker-compose build",
|
||||
"docker:up": "docker-compose up",
|
||||
"docker:down": "docker-compose down",
|
||||
|
||||
"pm2:start": "pm2 start ecosystem.config.js",
|
||||
"pm2:stop": "pm2 stop all",
|
||||
"pm2:restart": "pm2 restart all",
|
||||
"pm2:logs": "pm2 logs",
|
||||
"pm2:status": "pm2 status",
|
||||
|
||||
"db:migrate": "cd data-ingestion && bun run db:migrate",
|
||||
"db:seed": "cd data-ingestion && bun run db:seed",
|
||||
|
||||
"health:check": "bun scripts/health-check.js",
|
||||
"monitor": "bun run pm2:logs",
|
||||
"status": "bun run pm2:status"
|
||||
},
|
||||
"devDependencies": {
|
||||
"pm2": "^5.3.0",
|
||||
"@types/node": "^20.11.0",
|
||||
"typescript": "^5.3.3",
|
||||
"turbo": "^2.5.4"
|
||||
},
|
||||
"workspaces": [
|
||||
"config",
|
||||
"data-ingestion",
|
||||
"data-pipeline",
|
||||
"web-api",
|
||||
"web-app"
|
||||
],
|
||||
"engines": {
|
||||
"node": ">=18.0.0",
|
||||
"bun": ">=1.1.0"
|
||||
},
|
||||
"packageManager": "bun@1.1.12"
|
||||
}
|
||||
{
  "name": "@stock-bot/stock-app",
  "version": "1.0.0",
  "private": true,
  "description": "Stock trading bot application",
  "scripts": {
    "dev": "turbo run dev",
    "dev:ingestion": "cd data-ingestion && bun run dev",
    "dev:pipeline": "cd data-pipeline && bun run dev",
    "dev:api": "cd web-api && bun run dev",
    "dev:web": "cd web-app && bun run dev",
    "dev:backend": "turbo run dev --filter=\"@stock-bot/data-*\" --filter=\"@stock-bot/web-api\"",
    "dev:frontend": "turbo run dev --filter=\"@stock-bot/web-app\"",
    "build": "echo 'Stock apps built via parent turbo'",
    "build:config": "cd config && bun run build",
    "build:services": "turbo run build --filter=\"@stock-bot/data-*\" --filter=\"@stock-bot/web-*\"",
    "build:ingestion": "cd data-ingestion && bun run build",
    "build:pipeline": "cd data-pipeline && bun run build",
    "build:api": "cd web-api && bun run build",
    "build:web": "cd web-app && bun run build",
    "start": "turbo run start --filter=\"@stock-bot/data-*\" --filter=\"@stock-bot/web-api\"",
    "start:all": "turbo run start",
    "start:ingestion": "cd data-ingestion && bun start",
    "start:pipeline": "cd data-pipeline && bun start",
    "start:api": "cd web-api && bun start",
    "clean": "turbo run clean",
    "clean:all": "turbo run clean && rm -rf node_modules",
    "clean:ingestion": "cd data-ingestion && rm -rf dist node_modules",
    "clean:pipeline": "cd data-pipeline && rm -rf dist node_modules",
    "clean:api": "cd web-api && rm -rf dist node_modules",
    "clean:web": "cd web-app && rm -rf dist node_modules",
    "clean:config": "cd config && rm -rf dist node_modules",
    "test": "turbo run test",
    "test:all": "turbo run test",
    "test:config": "cd config && bun test",
    "test:services": "turbo run test --filter=\"@stock-bot/data-*\" --filter=\"@stock-bot/web-*\"",
    "test:ingestion": "cd data-ingestion && bun test",
    "test:pipeline": "cd data-pipeline && bun test",
    "test:api": "cd web-api && bun test",
    "lint": "turbo run lint",
    "lint:all": "turbo run lint",
    "lint:config": "cd config && bun run lint",
    "lint:services": "turbo run lint --filter=\"@stock-bot/data-*\" --filter=\"@stock-bot/web-*\"",
    "lint:ingestion": "cd data-ingestion && bun run lint",
    "lint:pipeline": "cd data-pipeline && bun run lint",
    "lint:api": "cd web-api && bun run lint",
    "lint:web": "cd web-app && bun run lint",
    "install:all": "bun install",
    "docker:build": "docker-compose build",
    "docker:up": "docker-compose up",
    "docker:down": "docker-compose down",
    "pm2:start": "pm2 start ecosystem.config.js",
    "pm2:stop": "pm2 stop all",
    "pm2:restart": "pm2 restart all",
    "pm2:logs": "pm2 logs",
    "pm2:status": "pm2 status",
    "db:migrate": "cd data-ingestion && bun run db:migrate",
    "db:seed": "cd data-ingestion && bun run db:seed",
    "health:check": "bun scripts/health-check.js",
    "monitor": "bun run pm2:logs",
    "status": "bun run pm2:status"
  },
  "devDependencies": {
    "pm2": "^5.3.0",
    "@types/node": "^20.11.0",
    "typescript": "^5.3.3",
    "turbo": "^2.5.4"
  },
  "workspaces": [
    "config",
    "data-ingestion",
    "data-pipeline",
    "web-api",
    "web-app"
  ],
  "engines": {
    "node": ">=18.0.0",
    "bun": ">=1.1.0"
  },
  "packageManager": "bun@1.1.12"
}
@@ -1,18 +1,18 @@
{
  "extends": "../../tsconfig.json",
  "compilerOptions": {
    "baseUrl": "../..",
    "paths": {
      "@stock-bot/*": ["libs/*/src"],
      "@stock-bot/stock-config": ["apps/stock/config/src"],
      "@stock-bot/stock-config/*": ["apps/stock/config/src/*"]
    }
  },
  "references": [
    { "path": "./config" },
    { "path": "./data-ingestion" },
    { "path": "./data-pipeline" },
    { "path": "./web-api" },
    { "path": "./web-app" }
  ]
}
@@ -1,34 +1,34 @@
/**
 * Service Container Setup for Web API
 * Configures dependency injection for the web API service
 */

import type { AppConfig } from '@stock-bot/config';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';

const logger = getLogger('web-api-container');

/**
 * Configure the service container for web API workloads
 */
export function setupServiceContainer(
  config: AppConfig,
  container: IServiceContainer
): IServiceContainer {
  logger.info('Configuring web API service container...');

  // Web API specific configuration
  // This service mainly reads data, so smaller pool sizes are fine
  const poolSizes = {
    mongodb: config.environment === 'production' ? 20 : 10,
    postgres: config.environment === 'production' ? 30 : 15,
    cache: config.environment === 'production' ? 20 : 10,
  };

  logger.info('Web API pool sizes configured', poolSizes);

  // The container is already configured with connections
  // Just return it with our logging
  return container;
}
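The environment switch on pool sizes above repeats the same ternary three times. As a sketch only, not part of this diff, the rule could be pulled into one helper; the `getPoolSizes` name and the `Environment` union are illustrative assumptions:

```typescript
// Hypothetical helper mirroring the pool-size ternaries in setupServiceContainer.
// The concrete numbers are carried over from the diff above.
type Environment = 'production' | 'development' | 'test';

interface PoolSizes {
  mongodb: number;
  postgres: number;
  cache: number;
}

function getPoolSizes(environment: Environment): PoolSizes {
  const prod = environment === 'production';
  return {
    mongodb: prod ? 20 : 10,
    postgres: prod ? 30 : 15,
    cache: prod ? 20 : 10,
  };
}

console.log(getPoolSizes('production').postgres); // 30
console.log(getPoolSizes('development').cache); // 10
```

One helper keeps the production/development split in a single place if more pools are added later.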
@@ -3,10 +3,9 @@
 * Simplified entry point using ServiceApplication framework
 */

import { ServiceApplication } from '@stock-bot/di';
import { getLogger } from '@stock-bot/logger';
import { initializeStockConfig } from '@stock-bot/stock-config';

// Local imports
import { createRoutes } from './routes/create-routes';
@@ -49,7 +48,7 @@ const app = new ServiceApplication(
  },
  {
    // Custom lifecycle hooks
    onStarted: _port => {
      const logger = getLogger('web-api');
      logger.info('Web API service startup initiated with ServiceApplication framework');
    },
@@ -59,7 +58,7 @@ const app = new ServiceApplication(
// Container factory function
async function createContainer(config: any) {
  const { ServiceContainerBuilder } = await import('@stock-bot/di');

  const container = await new ServiceContainerBuilder()
    .withConfig(config)
    .withOptions({
@@ -72,7 +71,7 @@ async function createContainer(config: any) {
      enableProxy: false, // Web API doesn't need proxy
    })
    .build(); // This automatically initializes services

  return container;
}
@@ -81,4 +80,4 @@ app.start(createContainer, createRoutes).catch(error => {
  const logger = getLogger('web-api');
  logger.fatal('Failed to start web API service', { error });
  process.exit(1);
});
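The `ServiceContainerBuilder` above is only visible through its fluent API, so here is a rough sketch of how such a type-safe container builder can accumulate service types as services are registered. Every name below (`ContainerBuilder`, `register`) is hypothetical; the real builder's API may differ:

```typescript
// Sketch of a fluent, type-safe container builder: each register() call widens
// the result type, so build() returns an object whose keys are statically known.
class ContainerBuilder<T extends Record<string, unknown> = {}> {
  private factories: Record<string, () => unknown> = {};

  register<K extends string, V>(
    key: K,
    factory: () => V
  ): ContainerBuilder<T & Record<K, V>> {
    this.factories[key] = factory;
    // The cast is how the accumulated type is threaded through the chain.
    return this as unknown as ContainerBuilder<T & Record<K, V>>;
  }

  build(): T {
    const out: Record<string, unknown> = {};
    for (const [key, factory] of Object.entries(this.factories)) {
      out[key] = factory();
    }
    return out as T;
  }
}

const container = new ContainerBuilder()
  .register('poolSize', () => 15)
  .register('serviceName', () => 'web-api')
  .build();

// container.poolSize is typed number, container.serviceName is typed string.
console.log(container.poolSize); // 15
```

The payoff is that a typo like `container.poolSzie` fails at compile time instead of at startup.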
@@ -5,8 +5,8 @@

import { Hono } from 'hono';
import type { IServiceContainer } from '@stock-bot/handlers';
import { createExchangeRoutes } from './exchange.routes';
import { createHealthRoutes } from './health.routes';
import { createMonitoringRoutes } from './monitoring.routes';
import { createPipelineRoutes } from './pipeline.routes';
@@ -26,4 +26,4 @@ export function createRoutes(container: IServiceContainer): Hono {
  app.route('/api/pipeline', pipelineRoutes);

  return app;
}
@@ -2,8 +2,8 @@
 * Exchange management routes - Refactored
 */
import { Hono } from 'hono';
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';
import { createExchangeService } from '../services/exchange.service';
import { createSuccessResponse, handleError } from '../utils/error-handler';
import {
@@ -259,4 +259,4 @@ export function createExchangeRoutes(container: IServiceContainer) {
  });

  return exchangeRoutes;
}
@@ -2,8 +2,8 @@
 * Health check routes factory
 */
import { Hono } from 'hono';
import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';

const logger = getLogger('health-routes');
@@ -70,7 +70,10 @@ export function createHealthRoutes(container: IServiceContainer) {
        health.checks.postgresql = { status: 'healthy', message: 'Connected and responsive' };
        logger.debug('PostgreSQL health check passed');
      } else {
        health.checks.postgresql = {
          status: 'unhealthy',
          message: 'PostgreSQL client not available',
        };
        logger.warn('PostgreSQL health check failed - client not available');
      }
    } catch (error) {
@@ -108,4 +111,4 @@ export function createHealthRoutes(container: IServiceContainer) {
}

// Export legacy routes for backward compatibility during migration
export const healthRoutes = createHealthRoutes({} as IServiceContainer);
@@ -13,167 +13,200 @@ export function createMonitoringRoutes(container: IServiceContainer) {
  /**
   * Get overall system health
   */
  monitoring.get('/', async c => {
    try {
      const health = await monitoringService.getSystemHealth();

      // Set appropriate status code based on health
      const statusCode =
        health.status === 'healthy' ? 200 : health.status === 'degraded' ? 503 : 500;

      return c.json(health, statusCode);
    } catch (error) {
      return c.json(
        {
          status: 'error',
          message: 'Failed to retrieve system health',
          error: error instanceof Error ? error.message : 'Unknown error',
        },
        500
      );
    }
  });

  /**
   * Get cache/Dragonfly statistics
   */
  monitoring.get('/cache', async c => {
    try {
      const stats = await monitoringService.getCacheStats();
      return c.json(stats);
    } catch (error) {
      return c.json(
        {
          error: 'Failed to retrieve cache statistics',
          message: error instanceof Error ? error.message : 'Unknown error',
        },
        500
      );
    }
  });

  /**
   * Get queue statistics
   */
  monitoring.get('/queues', async c => {
    try {
      const stats = await monitoringService.getQueueStats();
      return c.json({ queues: stats });
    } catch (error) {
      return c.json(
        {
          error: 'Failed to retrieve queue statistics',
          message: error instanceof Error ? error.message : 'Unknown error',
        },
        500
      );
    }
  });

  /**
   * Get specific queue statistics
   */
  monitoring.get('/queues/:name', async c => {
    try {
      const queueName = c.req.param('name');
      const stats = await monitoringService.getQueueStats();
      const queueStats = stats.find(q => q.name === queueName);

      if (!queueStats) {
        return c.json(
          {
            error: 'Queue not found',
            message: `Queue '${queueName}' does not exist`,
          },
          404
        );
      }

      return c.json(queueStats);
    } catch (error) {
      return c.json(
        {
          error: 'Failed to retrieve queue statistics',
          message: error instanceof Error ? error.message : 'Unknown error',
        },
        500
      );
    }
  });

  /**
   * Get database statistics
   */
  monitoring.get('/databases', async c => {
    try {
      const stats = await monitoringService.getDatabaseStats();
      return c.json({ databases: stats });
    } catch (error) {
      return c.json(
        {
          error: 'Failed to retrieve database statistics',
          message: error instanceof Error ? error.message : 'Unknown error',
        },
        500
      );
    }
  });

  /**
   * Get specific database statistics
   */
  monitoring.get('/databases/:type', async c => {
    try {
      const dbType = c.req.param('type') as 'postgres' | 'mongodb' | 'questdb';
      const stats = await monitoringService.getDatabaseStats();
      const dbStats = stats.find(db => db.type === dbType);

      if (!dbStats) {
        return c.json(
          {
            error: 'Database not found',
            message: `Database type '${dbType}' not found or not enabled`,
          },
          404
        );
      }

      return c.json(dbStats);
    } catch (error) {
      return c.json(
        {
          error: 'Failed to retrieve database statistics',
          message: error instanceof Error ? error.message : 'Unknown error',
        },
        500
      );
    }
  });

  /**
   * Get service metrics
   */
  monitoring.get('/metrics', async c => {
    try {
      const metrics = await monitoringService.getServiceMetrics();
      return c.json(metrics);
    } catch (error) {
      return c.json(
        {
          error: 'Failed to retrieve service metrics',
          message: error instanceof Error ? error.message : 'Unknown error',
        },
        500
      );
    }
  });

  /**
   * Get detailed cache info (Redis INFO command output)
   */
  monitoring.get('/cache/info', async c => {
    try {
      if (!container.cache) {
        return c.json(
          {
            error: 'Cache not available',
            message: 'Cache service is not enabled',
          },
          503
        );
      }

      const info = await container.cache.info();
      const stats = await monitoringService.getCacheStats();

      return c.json({
        parsed: stats,
        raw: info,
      });
    } catch (error) {
      return c.json(
        {
          error: 'Failed to retrieve cache info',
          message: error instanceof Error ? error.message : 'Unknown error',
        },
        500
      );
    }
  });

  /**
   * Health check endpoint for monitoring
   */
  monitoring.get('/ping', c => {
    return c.json({
      status: 'ok',
      timestamp: new Date().toISOString(),
      service: 'monitoring',
    });
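The `/` handler in the hunk above maps system health to an HTTP code with a nested ternary: healthy is 200, degraded is 503, anything else 500. The same rule as a named helper, purely for illustration (`httpStatusFor` and the `SystemStatus` union are not names from the codebase):

```typescript
// Sketch of the status-code mapping used by the monitoring '/' endpoint.
// Note the codebase's (somewhat unusual) choice: degraded → 503, unhealthy → 500.
type SystemStatus = 'healthy' | 'degraded' | 'unhealthy';

function httpStatusFor(status: SystemStatus): number {
  if (status === 'healthy') return 200;
  if (status === 'degraded') return 503;
  return 500;
}

console.log(httpStatusFor('degraded')); // 503
```

Naming the mapping makes it testable on its own and keeps the route handler to a single call.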
@@ -182,78 +215,90 @@ export function createMonitoringRoutes(container: IServiceContainer) {
  /**
   * Get service status for all microservices
   */
  monitoring.get('/services', async c => {
    try {
      const services = await monitoringService.getServiceStatus();
      return c.json({ services });
    } catch (error) {
      return c.json(
        {
          error: 'Failed to retrieve service status',
          message: error instanceof Error ? error.message : 'Unknown error',
        },
        500
      );
    }
  });

  /**
   * Get proxy statistics
   */
  monitoring.get('/proxies', async c => {
    try {
      const stats = await monitoringService.getProxyStats();
      return c.json(stats || { enabled: false });
    } catch (error) {
      return c.json(
        {
          error: 'Failed to retrieve proxy statistics',
          message: error instanceof Error ? error.message : 'Unknown error',
        },
        500
      );
    }
  });

  /**
   * Get comprehensive system overview
   */
  monitoring.get('/overview', async c => {
    try {
      const overview = await monitoringService.getSystemOverview();
      return c.json(overview);
    } catch (error) {
      return c.json(
        {
          error: 'Failed to retrieve system overview',
          message: error instanceof Error ? error.message : 'Unknown error',
        },
        500
      );
    }
  });

  /**
   * Test direct BullMQ queue access
   */
  monitoring.get('/test/queue/:name', async c => {
    const queueName = c.req.param('name');
    const { Queue } = await import('bullmq');

    const connection = {
      host: 'localhost',
      port: 6379,
      db: 0, // All queues in DB 0
    };

    const queue = new Queue(queueName, { connection });

    try {
      const counts = await queue.getJobCounts();
      await queue.close();
      return c.json({
        queueName,
        counts,
      });
    } catch (error: any) {
      await queue.close();
      return c.json(
        {
          queueName,
          error: error.message,
        },
        500
      );
    }
  });

  return monitoring;
}
@@ -132,4 +132,4 @@ export function createPipelineRoutes(container: IServiceContainer) {
  });

  return pipeline;
}
@@ -1,5 +1,5 @@
import { getLogger } from '@stock-bot/logger';
import type { IServiceContainer } from '@stock-bot/handlers';
import {
  CreateExchangeRequest,
  CreateProviderMappingRequest,
@@ -380,4 +380,4 @@ export class ExchangeService {
// Export function to create service instance with container
export function createExchangeService(container: IServiceContainer): ExchangeService {
  return new ExchangeService(container);
}
@@ -3,19 +3,19 @@
 * Collects health and performance metrics from all system components
 */

import type { IServiceContainer } from '@stock-bot/handlers';
import { getLogger } from '@stock-bot/logger';
import type {
  CacheStats,
  DatabaseStats,
  ProxyStats,
  QueueStats,
  ServiceMetrics,
  ServiceStatus,
  SystemHealth,
  SystemOverview,
} from '../types/monitoring.types';
import * as os from 'os';

export class MonitoringService {
  private readonly logger = getLogger('monitoring-service');
@@ -46,7 +46,7 @@ export class MonitoringService {

    // Get cache stats from the provider
    const cacheStats = this.container.cache.getStats();

    // Since we can't access Redis info directly, we'll use what's available
    return {
      provider: 'dragonfly',
@@ -74,7 +74,7 @@ export class MonitoringService {
   */
  async getQueueStats(): Promise<QueueStats[]> {
    const stats: QueueStats[] = [];

    try {
      if (!this.container.queue) {
        this.logger.warn('No queue manager available');
@@ -83,27 +83,27 @@ export class MonitoringService {

      // Get all queue names from the SmartQueueManager
      const queueManager = this.container.queue as any;
      this.logger.debug('Queue manager type:', {
        type: queueManager.constructor.name,
        hasGetAllQueues: typeof queueManager.getAllQueues === 'function',
        hasQueues: !!queueManager.queues,
        hasGetQueue: typeof queueManager.getQueue === 'function',
      });

      // Always use the known queue names since web-api doesn't create worker queues
      const handlerMapping = {
        proxy: 'data-ingestion',
        qm: 'data-ingestion',
        ib: 'data-ingestion',
        ceo: 'data-ingestion',
        webshare: 'data-ingestion',
        exchanges: 'data-pipeline',
        symbols: 'data-pipeline',
      };

      const queueNames = Object.keys(handlerMapping);
      this.logger.debug('Using known queue names', { count: queueNames.length, names: queueNames });

      // Create BullMQ queues directly with the correct format
      for (const handlerName of queueNames) {
        try {
@@ -114,17 +114,17 @@ export class MonitoringService {
            port: 6379,
            db: 0, // All queues now in DB 0
          };

          // Get the service that owns this handler
          const serviceName = handlerMapping[handlerName as keyof typeof handlerMapping];

          // Create BullMQ queue with the new naming format {service_handler}
          const fullQueueName = `{${serviceName}_${handlerName}}`;
          const bullQueue = new BullMQQueue(fullQueueName, { connection });

          // Get stats directly from BullMQ
          const queueStats = await this.getQueueStatsForBullQueue(bullQueue, handlerName);

          stats.push({
            name: handlerName,
            connected: true,
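The `{service_handler}` format built above wraps the whole queue name in braces; in Redis Cluster a braced substring acts as a hash tag, so all of a queue's keys hash to the same slot. A minimal sketch of the naming (the two mappings are copied from `handlerMapping` above; `fullQueueName` as a function is illustrative):

```typescript
// Sketch of the {service_handler} queue-name scheme from the diff.
// The surrounding braces are the Redis hash-tag convention, keeping every
// key of one queue on a single cluster slot.
const handlerMapping = {
  proxy: 'data-ingestion',
  exchanges: 'data-pipeline',
} as const;

function fullQueueName(handler: keyof typeof handlerMapping): string {
  return `{${handlerMapping[handler]}_${handler}}`;
}

console.log(fullQueueName('exchanges')); // {data-pipeline_exchanges}
console.log(fullQueueName('proxy')); // {data-ingestion_proxy}
```

Deriving the name from one mapping means the web API and the worker services cannot drift apart on queue names.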
@@ -134,7 +134,7 @@ export class MonitoringService {
              concurrency: 1,
            },
          });

          // Close the queue connection after getting stats
          await bullQueue.close();
        } catch (error) {
@@ -167,7 +167,7 @@ export class MonitoringService {
    try {
      // BullMQ provides getJobCounts which returns all counts at once
      const counts = await bullQueue.getJobCounts();

      return {
        waiting: counts.waiting || 0,
        active: counts.active || 0,
@@ -184,11 +184,11 @@ export class MonitoringService {
    try {
      const [waiting, active, completed, failed, delayed, paused] = await Promise.all([
        bullQueue.getWaitingCount(),
        bullQueue.getActiveCount(),
        bullQueue.getCompletedCount(),
        bullQueue.getFailedCount(),
        bullQueue.getDelayedCount(),
        bullQueue.getPausedCount ? bullQueue.getPausedCount() : 0,
      ]);

      return {
@@ -222,7 +222,7 @@ export class MonitoringService {
        paused: stats.paused || 0,
      };
    }

    // Try individual count methods
    const [waiting, active, completed, failed, delayed] = await Promise.all([
      this.safeGetCount(queue, 'getWaitingCount', 'getWaiting'),
@@ -252,7 +252,7 @@ export class MonitoringService {
    if (queue[methodName] && typeof queue[methodName] === 'function') {
      try {
        const result = await queue[methodName]();
        return Array.isArray(result) ? result.length : result || 0;
      } catch (_e) {
        // Continue to next method
      }
@@ -291,7 +291,7 @@ export class MonitoringService {
        concurrency: queue.workers[0]?.concurrency || 1,
      };
    }

    // Check queue manager for worker config
    if (queueManager.config?.defaultQueueOptions) {
      const options = queueManager.config.defaultQueueOptions;
@@ -300,7 +300,7 @@ export class MonitoringService {
        concurrency: options.concurrency || 1,
      };
    }

    // Check for getWorkerCount method
    if (queue.getWorkerCount && typeof queue.getWorkerCount === 'function') {
      const count = queue.getWorkerCount();
@@ -312,7 +312,7 @@ export class MonitoringService {
    } catch (_e) {
      // Ignore
    }

    return undefined;
  }
|
@ -331,12 +331,14 @@ export class MonitoringService {
|

      // Get pool stats (pg Pool exposes totalCount, idleCount, waitingCount;
      // active connections are total minus idle)
      const pool = (this.container.postgres as any).pool;
      const poolStats = pool
        ? {
            size: pool.totalCount || 0,
            active: (pool.totalCount || 0) - (pool.idleCount || 0),
            idle: pool.idleCount || 0,
            max: pool.options?.max || 0,
          }
        : undefined;

      stats.push({
        type: 'postgres',
|
@ -365,7 +367,7 @@ export class MonitoringService {
|

      const latency = Date.now() - startTime;

      const serverStatus = await db.admin().serverStatus();

      stats.push({
        type: 'mongodb',
        name: 'MongoDB',
|
@ -393,9 +395,11 @@ export class MonitoringService {
|
|||
try {
|
||||
const startTime = Date.now();
|
||||
// QuestDB health check
|
||||
const response = await fetch(`http://${process.env.QUESTDB_HOST || 'localhost'}:9000/exec?query=SELECT%201`);
|
||||
const response = await fetch(
|
||||
`http://${process.env.QUESTDB_HOST || 'localhost'}:9000/exec?query=SELECT%201`
|
||||
);
|
||||
const latency = Date.now() - startTime;
|
||||
|
||||
|
||||
stats.push({
|
||||
type: 'questdb',
|
||||
name: 'QuestDB',
|
||||
|
|
@ -432,23 +436,22 @@ export class MonitoringService {
|

    // Determine overall health status
    const errors: string[] = [];

    if (!cacheStats.connected) {
      errors.push('Cache service is disconnected');
    }

    const disconnectedQueues = queueStats.filter(q => !q.connected);
    if (disconnectedQueues.length > 0) {
      errors.push(`${disconnectedQueues.length} queue(s) are disconnected`);
    }

    const disconnectedDbs = databaseStats.filter(db => !db.connected);
    if (disconnectedDbs.length > 0) {
      errors.push(`${disconnectedDbs.length} database(s) are disconnected`);
    }

    const status = errors.length === 0 ? 'healthy' : errors.length < 3 ? 'degraded' : 'unhealthy';

    return {
      status,
|
@ -478,7 +481,7 @@ export class MonitoringService {
|
|||
*/
|
||||
async getServiceMetrics(): Promise<ServiceMetrics> {
|
||||
const now = new Date().toISOString();
|
||||
|
||||
|
||||
return {
|
||||
requestsPerSecond: {
|
||||
timestamp: now,
|
||||
|
|
@ -517,12 +520,12 @@ export class MonitoringService {
|
|||
private parseRedisInfo(info: string): Record<string, any> {
|
||||
const result: Record<string, any> = {};
|
||||
const sections = info.split('\r\n\r\n');
|
||||
|
||||
|
||||
for (const section of sections) {
|
||||
const lines = section.split('\r\n');
|
||||
const sectionName = lines[0]?.replace('# ', '') || 'general';
|
||||
result[sectionName] = {};
|
||||
|
||||
|
||||
for (let i = 1; i < lines.length; i++) {
|
||||
const [key, value] = lines[i].split(':');
|
||||
if (key && value) {
|
||||
|
|
@ -530,7 +533,7 @@ export class MonitoringService {
|
|||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
|
|
@ -539,7 +542,7 @@ export class MonitoringService {
|
|||
*/
|
||||
async getServiceStatus(): Promise<ServiceStatus[]> {
|
||||
const services: ServiceStatus[] = [];
|
||||
|
||||
|
||||
// Define service endpoints
|
||||
const serviceEndpoints = [
|
||||
{ name: 'data-ingestion', port: 2001, path: '/health' },
|
||||
|
|
@ -562,13 +565,13 @@ export class MonitoringService {
|
|||
});
|
||||
continue;
|
||||
}
|
||||
|
||||
|
||||
const startTime = Date.now();
|
||||
const response = await fetch(`http://localhost:${service.port}${service.path}`, {
|
||||
signal: AbortSignal.timeout(5000), // 5 second timeout
|
||||
});
|
||||
const _latency = Date.now() - startTime;
|
||||
|
||||
|
||||
if (response.ok) {
|
||||
const data = await response.json();
|
||||
services.push({
|
||||
|
|
@ -629,28 +632,28 @@ export class MonitoringService {
|
|||
// Get proxy data from cache using getRaw method
|
||||
// The proxy manager uses cache:proxy: prefix, but web-api cache uses cache:api:
|
||||
const cacheProvider = this.container.cache;
|
||||
|
||||
|
||||
if (cacheProvider.getRaw) {
|
||||
// Use getRaw to access data with different cache prefix
|
||||
// The proxy manager now uses a global cache:proxy: prefix
|
||||
this.logger.debug('Attempting to fetch proxy data from cache');
|
||||
|
||||
|
||||
const [cachedProxies, lastUpdateStr] = await Promise.all([
|
||||
cacheProvider.getRaw<any[]>('cache:proxy:active'),
|
||||
cacheProvider.getRaw<string>('cache:proxy:last-update')
|
||||
cacheProvider.getRaw<string>('cache:proxy:last-update'),
|
||||
]);
|
||||
|
||||
this.logger.debug('Proxy cache data retrieved', {
|
||||
|
||||
this.logger.debug('Proxy cache data retrieved', {
|
||||
hasProxies: !!cachedProxies,
|
||||
isArray: Array.isArray(cachedProxies),
|
||||
proxyCount: cachedProxies ? cachedProxies.length : 0,
|
||||
lastUpdate: lastUpdateStr
|
||||
lastUpdate: lastUpdateStr,
|
||||
});
|
||||
|
||||
|
||||
if (cachedProxies && Array.isArray(cachedProxies)) {
|
||||
const workingCount = cachedProxies.filter((p: any) => p.isWorking !== false).length;
|
||||
const failedCount = cachedProxies.filter((p: any) => p.isWorking === false).length;
|
||||
|
||||
|
||||
return {
|
||||
enabled: true,
|
||||
totalProxies: cachedProxies.length,
|
||||
|
|
@ -662,7 +665,7 @@ export class MonitoringService {
|
|||
} else {
|
||||
this.logger.debug('Cache provider does not support getRaw method');
|
||||
}
|
||||
|
||||
|
||||
// No cached data found - proxies might not be initialized yet
|
||||
return {
|
||||
enabled: true,
|
||||
|
|
@ -672,7 +675,7 @@ export class MonitoringService {
|
|||
};
|
||||
} catch (cacheError) {
|
||||
this.logger.debug('Could not retrieve proxy data from cache', { error: cacheError });
|
||||
|
||||
|
||||
// Return basic stats if cache query fails
|
||||
return {
|
||||
enabled: true,
|
||||
|
|
@ -727,7 +730,7 @@ export class MonitoringService {
|
|||
|
||||
const idle = totalIdle / cpus.length;
|
||||
const total = totalTick / cpus.length;
|
||||
const usage = 100 - ~~(100 * idle / total);
|
||||
const usage = 100 - ~~((100 * idle) / total);
|
||||
|
||||
return {
|
||||
usage,
|
||||
|
|
@ -742,21 +745,21 @@ export class MonitoringService {
|
|||
private getSystemMemory() {
|
||||
const totalMem = os.totalmem();
|
||||
const freeMem = os.freemem();
|
||||
|
||||
|
||||
// On Linux, freeMem includes buffers/cache, but we want "available" memory
|
||||
// which better represents memory that can be used by applications
|
||||
let availableMem = freeMem;
|
||||
|
||||
|
||||
// Try to read from /proc/meminfo for more accurate memory stats on Linux
|
||||
if (os.platform() === 'linux') {
|
||||
try {
|
||||
const fs = require('fs');
|
||||
const meminfo = fs.readFileSync('/proc/meminfo', 'utf8');
|
||||
const lines = meminfo.split('\n');
|
||||
|
||||
|
||||
let memAvailable = 0;
|
||||
let _memTotal = 0;
|
||||
|
||||
|
||||
for (const line of lines) {
|
||||
if (line.startsWith('MemAvailable:')) {
|
||||
memAvailable = parseInt(line.split(/\s+/)[1], 10) * 1024; // Convert from KB to bytes
|
||||
|
|
@ -764,7 +767,7 @@ export class MonitoringService {
|
|||
_memTotal = parseInt(line.split(/\s+/)[1], 10) * 1024;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
if (memAvailable > 0) {
|
||||
availableMem = memAvailable;
|
||||
}
|
||||
|
|
@ -773,7 +776,7 @@ export class MonitoringService {
|
|||
this.logger.debug('Could not read /proc/meminfo', { error });
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
const usedMem = totalMem - availableMem;
|
||||
|
||||
return {
|
||||
|
|
@ -784,4 +787,4 @@ export class MonitoringService {
|
|||
percentage: (usedMem / totalMem) * 100,
|
||||
};
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@@ -332,4 +332,4 @@ export class PipelineService {
      };
    }
  }
}
@@ -124,4 +124,4 @@ export interface SystemOverview {
    architecture: string;
    hostname: string;
  };
}
@@ -3,4 +3,3 @@ export { Card, CardContent, CardHeader } from './Card';
export { DataTable } from './DataTable';
export { Dialog, DialogContent, DialogHeader, DialogTitle } from './Dialog';
export { StatCard } from './StatCard';
@@ -1,4 +1,3 @@
export { AddProviderMappingDialog } from './AddProviderMappingDialog';
export { AddExchangeDialog } from './AddExchangeDialog';
export { DeleteExchangeDialog } from './DeleteExchangeDialog';
@@ -133,4 +133,4 @@ interface BaseDialogProps {
export interface AddExchangeDialogProps extends BaseDialogProps {
  onCreateExchange: (request: CreateExchangeRequest) => Promise<void>;
}
@@ -11,4 +11,4 @@ export { ProxyStatsCard } from './ProxyStatsCard';
export { StatusBadge, ConnectionStatus, HealthStatus, ServiceStatusIndicator } from './StatusBadge';
export { MetricCard } from './MetricCard';
export { ServiceCard } from './ServiceCard';
export { DatabaseCard } from './DatabaseCard';
@@ -2,4 +2,4 @@
 * Monitoring hooks exports
 */

export * from './useMonitoring';
@@ -2,16 +2,16 @@
 * Custom hook for monitoring data
 */

-import { useState, useEffect, useCallback } from 'react';
+import { useCallback, useEffect, useState } from 'react';
import { monitoringApi } from '../services/monitoringApi';
-import type {
-  SystemHealth,
-  CacheStats,
-  QueueStats,
-  DatabaseStats,
-  ServiceStatus,
-  ProxyStats,
-  SystemOverview
-} from '../types';
+import type {
+  CacheStats,
+  DatabaseStats,
+  ProxyStats,
+  QueueStats,
+  ServiceStatus,
+  SystemHealth,
+  SystemOverview,
+} from '../types';

export function useSystemHealth(refreshInterval: number = 5000) {

@@ -33,7 +33,7 @@ export function useSystemHealth(refreshInterval: number = 5000) {
  useEffect(() => {
    fetchData();

    if (refreshInterval > 0) {
      const interval = setInterval(fetchData, refreshInterval);
      return () => clearInterval(interval);

@@ -62,7 +62,7 @@ export function useCacheStats(refreshInterval: number = 5000) {
  useEffect(() => {
    fetchData();

    if (refreshInterval > 0) {
      const interval = setInterval(fetchData, refreshInterval);
      return () => clearInterval(interval);

@@ -91,7 +91,7 @@ export function useQueueStats(refreshInterval: number = 5000) {
  useEffect(() => {
    fetchData();

    if (refreshInterval > 0) {
      const interval = setInterval(fetchData, refreshInterval);
      return () => clearInterval(interval);

@@ -120,7 +120,7 @@ export function useDatabaseStats(refreshInterval: number = 5000) {
  useEffect(() => {
    fetchData();

    if (refreshInterval > 0) {
      const interval = setInterval(fetchData, refreshInterval);
      return () => clearInterval(interval);

@@ -149,7 +149,7 @@ export function useServiceStatus(refreshInterval: number = 5000) {
  useEffect(() => {
    fetchData();

    if (refreshInterval > 0) {
      const interval = setInterval(fetchData, refreshInterval);
      return () => clearInterval(interval);

@@ -178,7 +178,7 @@ export function useProxyStats(refreshInterval: number = 5000) {
  useEffect(() => {
    fetchData();

    if (refreshInterval > 0) {
      const interval = setInterval(fetchData, refreshInterval);
      return () => clearInterval(interval);

@@ -207,7 +207,7 @@ export function useSystemOverview(refreshInterval: number = 5000) {
  useEffect(() => {
    fetchData();

    if (refreshInterval > 0) {
      const interval = setInterval(fetchData, refreshInterval);
      return () => clearInterval(interval);

@@ -215,4 +215,4 @@ export function useSystemOverview(refreshInterval: number = 5000) {
  }, [fetchData, refreshInterval]);

  return { data, loading, error, refetch: fetchData };
}
@@ -5,4 +5,4 @@
export { MonitoringPage } from './MonitoringPage';
export * from './types';
export * from './hooks/useMonitoring';
export * from './services/monitoringApi';
@@ -2,14 +2,14 @@
 * Monitoring API Service
 */

-import type {
-  SystemHealth,
-  CacheStats,
-  QueueStats,
-  DatabaseStats,
-  ServiceStatus,
-  ProxyStats,
-  SystemOverview
-} from '../types';
+import type {
+  CacheStats,
+  DatabaseStats,
+  ProxyStats,
+  QueueStats,
+  ServiceStatus,
+  SystemHealth,
+  SystemOverview,
+} from '../types';

const API_BASE_URL = import.meta.env.VITE_API_BASE_URL || 'http://localhost:2003';

@@ -125,4 +125,4 @@ export const monitoringApi = {
    }
    return response.json();
  },
};
@@ -117,4 +117,4 @@ export interface SystemOverview {
    architecture: string;
    hostname: string;
  };
}
@@ -1,42 +1,48 @@
/**
 * Common formatting utilities for monitoring components
 */

export function formatUptime(ms: number): string {
  const seconds = Math.floor(ms / 1000);
  const minutes = Math.floor(seconds / 60);
  const hours = Math.floor(minutes / 60);
  const days = Math.floor(hours / 24);

- if (days > 0) {return `${days}d ${hours % 24}h`;}
- if (hours > 0) {return `${hours}h ${minutes % 60}m`;}
- if (minutes > 0) {return `${minutes}m ${seconds % 60}s`;}
+ if (days > 0) {
+   return `${days}d ${hours % 24}h`;
+ }
+ if (hours > 0) {
+   return `${hours}h ${minutes % 60}m`;
+ }
+ if (minutes > 0) {
+   return `${minutes}m ${seconds % 60}s`;
+ }
  return `${seconds}s`;
}

export function formatBytes(bytes: number): string {
  const gb = bytes / 1024 / 1024 / 1024;
  if (gb >= 1) {
    return gb.toFixed(2) + ' GB';
  }

  const mb = bytes / 1024 / 1024;
  if (mb >= 1) {
    return mb.toFixed(2) + ' MB';
  }

  const kb = bytes / 1024;
  if (kb >= 1) {
    return kb.toFixed(2) + ' KB';
  }

  return bytes + ' B';
}

export function formatNumber(num: number): string {
  return num.toLocaleString();
}

export function formatPercentage(value: number, decimals: number = 1): string {
  return `${value.toFixed(decimals)}%`;
}
@@ -12,28 +12,29 @@ export function usePipeline() {
  const [error, setError] = useState<string | null>(null);
  const [lastJobResult, setLastJobResult] = useState<PipelineJobResult | null>(null);

- const executeOperation = useCallback(async (
-   operation: () => Promise<PipelineJobResult>
- ): Promise<boolean> => {
-   try {
-     setLoading(true);
-     setError(null);
-     const result = await operation();
-     setLastJobResult(result);
-     if (!result.success) {
-       setError(result.error || 'Operation failed');
-       return false;
-     }
-     return true;
-   } catch (err) {
-     const errorMessage = err instanceof Error ? err.message : 'Unknown error occurred';
-     setError(errorMessage);
-     setLastJobResult({ success: false, error: errorMessage });
-     return false;
-   } finally {
-     setLoading(false);
-   }
- }, []);
+ const executeOperation = useCallback(
+   async (operation: () => Promise<PipelineJobResult>): Promise<boolean> => {
+     try {
+       setLoading(true);
+       setError(null);
+       const result = await operation();
+       setLastJobResult(result);
+       if (!result.success) {
+         setError(result.error || 'Operation failed');
+         return false;
+       }
+       return true;
+     } catch (err) {
+       const errorMessage = err instanceof Error ? err.message : 'Unknown error occurred';
+       setError(errorMessage);
+       setLastJobResult({ success: false, error: errorMessage });
+       return false;
+     } finally {
+       setLoading(false);
+     }
+   },
+   []
+ );

  // Symbol sync operations
  const syncQMSymbols = useCallback(

@@ -53,7 +54,7 @@ export function usePipeline() {
  );

  const syncAllExchanges = useCallback(
    (clearFirst: boolean = false) =>
      executeOperation(() => pipelineApi.syncAllExchanges(clearFirst)),
    [executeOperation]
  );

@@ -71,7 +72,7 @@ export function usePipeline() {
  // Maintenance operations
  const clearPostgreSQLData = useCallback(
    (dataType: DataClearType = 'all') =>
      executeOperation(() => pipelineApi.clearPostgreSQLData(dataType)),
    [executeOperation]
  );

@@ -122,7 +123,8 @@ export function usePipeline() {
      setError(result.error || 'Failed to get provider mapping stats');
      return null;
    } catch (err) {
-     const errorMessage = err instanceof Error ? err.message : 'Failed to get provider mapping stats';
+     const errorMessage =
+       err instanceof Error ? err.message : 'Failed to get provider mapping stats';
      setError(errorMessage);
      return null;
    } finally {

@@ -156,4 +158,4 @@ export function usePipeline() {
    getExchangeStats,
    getProviderMappingStats,
  };
}
@@ -1,3 +1,3 @@
export { PipelinePage } from './PipelinePage';
export * from './hooks/usePipeline';
export * from './types';
@@ -1,16 +1,9 @@
-import type {
-  DataClearType,
-  PipelineJobResult,
-  PipelineStatsResult,
-} from '../types';
+import type { DataClearType, PipelineJobResult, PipelineStatsResult } from '../types';

const API_BASE_URL = import.meta.env.VITE_API_BASE_URL || 'http://localhost:2003';

class PipelineApiService {
- private async request<T = unknown>(
-   endpoint: string,
-   options?: RequestInit
- ): Promise<T> {
+ private async request<T = unknown>(endpoint: string, options?: RequestInit): Promise<T> {
    const url = `${API_BASE_URL}/pipeline${endpoint}`;

    const response = await fetch(url, {

@@ -79,4 +72,4 @@ class PipelineApiService {
}

// Export singleton instance
export const pipelineApi = new PipelineApiService();
@@ -32,7 +32,6 @@ export interface ProviderMappingStats {
  coveragePercentage: number;
}

export type DataClearType = 'exchanges' | 'provider_mappings' | 'all';

export interface PipelineOperation {

@@ -44,4 +43,4 @@ export interface PipelineOperation {
  category: 'sync' | 'stats' | 'maintenance';
  dangerous?: boolean;
  params?: Record<string, unknown>;
}
@@ -1,13 +1,13 @@
import {
  BuildingLibraryIcon,
  ChartBarIcon,
+ ChartPieIcon,
+ CircleStackIcon,
  CogIcon,
  DocumentTextIcon,
  HomeIcon,
  PresentationChartLineIcon,
  ServerStackIcon,
- CircleStackIcon,
- ChartPieIcon,
} from '@heroicons/react/24/outline';

export interface NavigationItem {

@@ -23,13 +23,13 @@ export const navigation: NavigationItem[] = [
  { name: 'Portfolio', href: '/portfolio', icon: ChartBarIcon },
  { name: 'Strategies', href: '/strategies', icon: DocumentTextIcon },
  { name: 'Analytics', href: '/analytics', icon: PresentationChartLineIcon },
  {
    name: 'System',
    icon: ServerStackIcon,
    children: [
      { name: 'Monitoring', href: '/system/monitoring', icon: ChartPieIcon },
      { name: 'Pipeline', href: '/system/pipeline', icon: CircleStackIcon },
-   ]
+   ],
  },
  { name: 'Settings', href: '/settings', icon: CogIcon },
];
48
bun.lock
48
bun.lock
|
|
@ -189,6 +189,7 @@
|
|||
"@stock-bot/browser": "workspace:*",
|
||||
"@stock-bot/cache": "workspace:*",
|
||||
"@stock-bot/config": "workspace:*",
|
||||
"@stock-bot/handler-registry": "workspace:*",
|
||||
"@stock-bot/handlers": "workspace:*",
|
||||
"@stock-bot/logger": "workspace:*",
|
||||
"@stock-bot/mongodb": "workspace:*",
|
||||
|
|
@ -199,6 +200,7 @@
|
|||
"@stock-bot/shutdown": "workspace:*",
|
||||
"@stock-bot/types": "workspace:*",
|
||||
"awilix": "^12.0.5",
|
||||
"glob": "^10.0.0",
|
||||
"hono": "^4.0.0",
|
||||
"zod": "^3.23.8",
|
||||
},
|
||||
|
|
@ -220,12 +222,24 @@
|
|||
"typescript": "^5.3.0",
|
||||
},
|
||||
},
|
||||
"libs/core/handler-registry": {
|
||||
"name": "@stock-bot/handler-registry",
|
||||
"version": "1.0.0",
|
||||
"dependencies": {
|
||||
"@stock-bot/types": "workspace:*",
|
||||
},
|
||||
"devDependencies": {
|
||||
"@types/bun": "*",
|
||||
"typescript": "*",
|
||||
},
|
||||
},
|
||||
"libs/core/handlers": {
|
||||
"name": "@stock-bot/handlers",
|
||||
"version": "1.0.0",
|
||||
"dependencies": {
|
||||
"@stock-bot/cache": "workspace:*",
|
||||
"@stock-bot/config": "workspace:*",
|
||||
"@stock-bot/handler-registry": "workspace:*",
|
||||
"@stock-bot/logger": "workspace:*",
|
||||
"@stock-bot/types": "workspace:*",
|
||||
"@stock-bot/utils": "workspace:*",
|
||||
|
|
@ -257,7 +271,7 @@
|
|||
"version": "1.0.0",
|
||||
"dependencies": {
|
||||
"@stock-bot/cache": "*",
|
||||
"@stock-bot/handlers": "*",
|
||||
"@stock-bot/handler-registry": "*",
|
||||
"@stock-bot/logger": "*",
|
||||
"@stock-bot/types": "*",
|
||||
"bullmq": "^5.0.0",
|
||||
|
|
@ -820,6 +834,8 @@
|
|||
|
||||
"@stock-bot/event-bus": ["@stock-bot/event-bus@workspace:libs/core/event-bus"],
|
||||
|
||||
"@stock-bot/handler-registry": ["@stock-bot/handler-registry@workspace:libs/core/handler-registry"],
|
||||
|
||||
"@stock-bot/handlers": ["@stock-bot/handlers@workspace:libs/core/handlers"],
|
||||
|
||||
"@stock-bot/logger": ["@stock-bot/logger@workspace:libs/core/logger"],
|
||||
|
|
@ -2144,7 +2160,7 @@
|
|||
|
||||
"side-channel-weakmap": ["side-channel-weakmap@1.0.2", "", { "dependencies": { "call-bound": "^1.0.2", "es-errors": "^1.3.0", "get-intrinsic": "^1.2.5", "object-inspect": "^1.13.3", "side-channel-map": "^1.0.1" } }, "sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A=="],
|
||||
|
||||
"signal-exit": ["signal-exit@3.0.7", "", {}, "sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ=="],
|
||||
"signal-exit": ["signal-exit@4.1.0", "", {}, "sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw=="],
|
||||
|
||||
"simple-concat": ["simple-concat@1.0.1", "", {}, "sha512-cSFtAPtRhljv69IK0hTVZQ+OfE9nePi/rtJmw5UjHeVyVroEqJXP1sFztKUy1qU+xvz3u/sfYJLa947b7nAN2Q=="],
|
||||
|
||||
|
|
@ -2364,7 +2380,7 @@
|
|||
|
||||
"word-wrap": ["word-wrap@1.2.5", "", {}, "sha512-BN22B5eaMMI9UMtjrGd5g5eCYPpCPDUy0FJXbYsaT5zYxjFOckS53SQDE3pWkVoWpHXVb3BrYcEN4Twa55B5cA=="],
|
||||
|
||||
"wrap-ansi": ["wrap-ansi@7.0.0", "", { "dependencies": { "ansi-styles": "^4.0.0", "string-width": "^4.1.0", "strip-ansi": "^6.0.0" } }, "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q=="],
|
||||
"wrap-ansi": ["wrap-ansi@8.1.0", "", { "dependencies": { "ansi-styles": "^6.1.0", "string-width": "^5.0.1", "strip-ansi": "^7.0.1" } }, "sha512-si7QWI6zUMq56bESFvagtmzMdGOtoxfR+Sez11Mobfc7tm+VkUckk9bW2UeffTGVUbOksxmSw0AA2gs8g71NCQ=="],
|
||||
|
||||
"wrap-ansi-cjs": ["wrap-ansi@7.0.0", "", { "dependencies": { "ansi-styles": "^4.0.0", "string-width": "^4.1.0", "strip-ansi": "^6.0.0" } }, "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q=="],
|
||||
|
||||
|
|
@ -2430,8 +2446,6 @@
|
|||
|
||||
"@isaacs/cliui/strip-ansi": ["strip-ansi@7.1.0", "", { "dependencies": { "ansi-regex": "^6.0.1" } }, "sha512-iq6eVVI64nQQTRYq2KtEg2d2uU7LElhTJwsH4YzIHZshxlgZms/wIc4VoDQTlG/IvVIrBKG06CrZnp0qv7hkcQ=="],
|
||||
|
||||
"@isaacs/cliui/wrap-ansi": ["wrap-ansi@8.1.0", "", { "dependencies": { "ansi-styles": "^6.1.0", "string-width": "^5.0.1", "strip-ansi": "^7.0.1" } }, "sha512-si7QWI6zUMq56bESFvagtmzMdGOtoxfR+Sez11Mobfc7tm+VkUckk9bW2UeffTGVUbOksxmSw0AA2gs8g71NCQ=="],
|
||||
|
||||
"@mongodb-js/oidc-plugin/express": ["express@4.21.2", "", { "dependencies": { "accepts": "~1.3.8", "array-flatten": "1.1.1", "body-parser": "1.20.3", "content-disposition": "0.5.4", "content-type": "~1.0.4", "cookie": "0.7.1", "cookie-signature": "1.0.6", "debug": "2.6.9", "depd": "2.0.0", "encodeurl": "~2.0.0", "escape-html": "~1.0.3", "etag": "~1.8.1", "finalhandler": "1.3.1", "fresh": "0.5.2", "http-errors": "2.0.0", "merge-descriptors": "1.0.3", "methods": "~1.1.2", "on-finished": "2.4.1", "parseurl": "~1.3.3", "path-to-regexp": "0.1.12", "proxy-addr": "~2.0.7", "qs": "6.13.0", "range-parser": "~1.2.1", "safe-buffer": "5.2.1", "send": "0.19.0", "serve-static": "1.16.2", "setprototypeof": "1.2.0", "statuses": "2.0.1", "type-is": "~1.6.18", "utils-merge": "1.0.1", "vary": "~1.1.2" } }, "sha512-28HqgMZAmih1Czt9ny7qr6ek2qddF4FclbMzwhCREB6OFfH+rXAnuNCwo1/wFvrtbgsQDb4kSbX9de9lFbrXnA=="],
|
||||
|
||||
"@pm2/agent/chalk": ["chalk@3.0.0", "", { "dependencies": { "ansi-styles": "^4.1.0", "supports-color": "^7.1.0" } }, "sha512-4D3B6Wf41KOYRFdszmDqMCGq5VV/uMAB273JILmO+3jAlh8X4qDtdtgCR3fxtbLEMzSx22QdhnDcJvu2u1fVwg=="],
|
||||
|
|
@ -2450,6 +2464,8 @@
|
|||
|
||||
"@pm2/io/semver": ["semver@7.5.4", "", { "dependencies": { "lru-cache": "^6.0.0" }, "bin": { "semver": "bin/semver.js" } }, "sha512-1bCSESV6Pv+i21Hvpxp3Dx+pSD8lIPt8uVjRrxAUt/nbswYc+tK6Y2btiULjd4+fnq15PX+nqQDC7Oft7WkwcA=="],
|
||||
|
||||
"@pm2/io/signal-exit": ["signal-exit@3.0.7", "", {}, "sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ=="],
|
||||
|
||||
"@pm2/io/tslib": ["tslib@1.9.3", "", {}, "sha512-4krF8scpejhaOgqzBEcGM7yDIEfi0/8+8zDRZhNZZ2kjmHJ4hv3zCbQWxoJGz1iw5U0Jl0nma13xzHXcncMavQ=="],
|
||||
|
||||
"@pm2/js-api/async": ["async@2.6.4", "", { "dependencies": { "lodash": "^4.17.14" } }, "sha512-mzo5dfJYwAn29PeiJ0zvwTo04zj8HDJj0Mn8TD7sno7q12prdbnasKJHhkm2c1LgrhlJ0teaea8860oxi51mGA=="],
|
||||
|
|
@ -2516,6 +2532,8 @@
|
|||
|
||||
"cli-tableau/chalk": ["chalk@3.0.0", "", { "dependencies": { "ansi-styles": "^4.1.0", "supports-color": "^7.1.0" } }, "sha512-4D3B6Wf41KOYRFdszmDqMCGq5VV/uMAB273JILmO+3jAlh8X4qDtdtgCR3fxtbLEMzSx22QdhnDcJvu2u1fVwg=="],
|
||||
|
||||
"cliui/wrap-ansi": ["wrap-ansi@7.0.0", "", { "dependencies": { "ansi-styles": "^4.0.0", "string-width": "^4.1.0", "strip-ansi": "^6.0.0" } }, "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q=="],
|
||||
|
||||
"compress-commons/is-stream": ["is-stream@2.0.1", "", {}, "sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1VLMO4n7OI6p7RbngDg=="],
|
||||
|
||||
"decompress-response/mimic-response": ["mimic-response@3.1.0", "", {}, "sha512-z0yWI+4FDrrweS8Zmt4Ej5HdJmky15+L2e6Wgn3+iK5fWzb6T3fhNFq2+MeTRb064c6Wr4N/wv0DzQTjNzHNGQ=="],
|
||||
|
|
@ -2550,14 +2568,14 @@
|
|||
|
||||
"execa/is-stream": ["is-stream@3.0.0", "", {}, "sha512-LnQR4bZ9IADDRSkvpqMGvt/tEJWclzklNgSw48V5EAaAeDd6qGvN8ei6k5p0tvxSR171VmGyHuTiAOfxAbr8kA=="],
|
||||
|
||||
"execa/signal-exit": ["signal-exit@3.0.7", "", {}, "sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ=="],
|
||||
|
||||
"express/cookie": ["cookie@0.7.2", "", {}, "sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w=="],
|
||||
|
||||
"express/mime-types": ["mime-types@3.0.1", "", { "dependencies": { "mime-db": "^1.54.0" } }, "sha512-xRc4oEhT6eaBpU1XF7AjpOFD+xQmXNB5OVKwp4tqCuBpHLS/ZbBDrc07mYTDqVMg6PfxUjjNp85O6Cd2Z/5HWA=="],
|
||||
|
||||
"fast-glob/glob-parent": ["glob-parent@5.1.2", "", { "dependencies": { "is-glob": "^4.0.1" } }, "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow=="],
|
||||
|
||||
"foreground-child/signal-exit": ["signal-exit@4.1.0", "", {}, "sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw=="],
|
||||
|
||||
"get-uri/data-uri-to-buffer": ["data-uri-to-buffer@6.0.2", "", {}, "sha512-7hvf7/GW8e86rW0ptuwS3OcBGDjIi6SZva7hCyWC0yYry2cOPmLIjXAUHI6DK2HsnwJd9ifmt57i8eV2n4YNpw=="],
|
||||
|
||||
"glob/minimatch": ["minimatch@9.0.5", "", { "dependencies": { "brace-expansion": "^2.0.1" } }, "sha512-G6T0ZX48xgozx7587koeX9Ys2NYy6Gmv//P89sEte9V9whIapMNF4idKxnW2QtCcLiTWlb/wfCabAtAFWhhBow=="],
|
||||
|
|
@ -2620,6 +2638,8 @@
|
|||
|
||||
"prebuild-install/tar-fs": ["tar-fs@2.1.3", "", { "dependencies": { "chownr": "^1.1.1", "mkdirp-classic": "^0.5.2", "pump": "^3.0.0", "tar-stream": "^2.1.4" } }, "sha512-090nwYJDmlhwFwEW3QQl+vaNnxsO2yVsd45eTKRBzSzu+hlb1w2K9inVq5b0ngXuLVqQ4ApvsUHHnu/zQNkWAg=="],
|
||||
|
||||
"proper-lockfile/signal-exit": ["signal-exit@3.0.7", "", {}, "sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ=="],
|
||||
|
||||
"protobufjs/@types/node": ["@types/node@22.15.32", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-3jigKqgSjsH6gYZv2nEsqdXfZqIFGAV36XYYjf9KGZ3PSG+IhLecqPnI310RvjutyMwifE2hhhNEklOUrvx/wA=="],
|
||||
|
||||
"proxy-agent/lru-cache": ["lru-cache@7.18.3", "", {}, "sha512-jumlc0BIUrS3qJGgIkWZsyfAM7NCWiBcCDhnd+3NNM5KbBmLTgHVfWBcg6W+rLUsIpzpERPsvwUP7CckAQSOoA=="],
|
||||
|
|
@ -2648,6 +2668,12 @@
|
|||
|
||||
"win-export-certificate-and-key/node-addon-api": ["node-addon-api@3.2.1", "", {}, "sha512-mmcei9JghVNDYydghQmeDX8KoAm0FAiYyIcUt/N4nhyAipB17pllZQDOJD2fotxABnt4Mdz+dKTO7eftLg4d0A=="],
|
||||
|
||||
"wrap-ansi/ansi-styles": ["ansi-styles@6.2.1", "", {}, "sha512-bN798gFfQX+viw3R7yrGWRqnrN2oRkEkUjjl4JNn4E8GxxbjtG3FbrEIIY3l8/hrwUwIeCZvi4QuOTP4MErVug=="],
|
||||
|
||||
"wrap-ansi/string-width": ["string-width@5.1.2", "", { "dependencies": { "eastasianwidth": "^0.2.0", "emoji-regex": "^9.2.2", "strip-ansi": "^7.0.1" } }, "sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA=="],
|
||||
|
||||
"wrap-ansi/strip-ansi": ["strip-ansi@7.1.0", "", { "dependencies": { "ansi-regex": "^6.0.1" } }, "sha512-iq6eVVI64nQQTRYq2KtEg2d2uU7LElhTJwsH4YzIHZshxlgZms/wIc4VoDQTlG/IvVIrBKG06CrZnp0qv7hkcQ=="],
|
||||
|
||||
"yauzl/buffer-crc32": ["buffer-crc32@0.2.13", "", {}, "sha512-VO9Ht/+p3SN7SKWqcrgEzjGbRSJYTx+Q1pTQC0wrWqHx0vpJraQ6GtHx8tvcg1rlK1byhU5gccxgOgj7B0TDkQ=="],
|
||||
|
||||
"@aws-crypto/sha256-browser/@smithy/util-utf8/@smithy/util-buffer-from": ["@smithy/util-buffer-from@2.2.0", "", { "dependencies": { "@smithy/is-array-buffer": "^2.2.0", "tslib": "^2.6.2" } }, "sha512-IJdWBbTcMQ6DA0gdNhh/BwrLkDR+ADW5Kr1aZmd4k3DIF6ezMV4R2NIAmT08wQJ3yUK82thHWmC/TnK/wpMMIA=="],
|
||||
|
|
@@ -2660,8 +2686,6 @@
"@isaacs/cliui/strip-ansi/ansi-regex": ["ansi-regex@6.1.0", "", {}, "sha512-7HSX4QQb4CspciLpVFwyRe79O3xsIZDDLER21kERQ71oaPodF8jL725AgJMFAYbooIqolJoRLuM81SpeUkpkvA=="],
"@isaacs/cliui/wrap-ansi/ansi-styles": ["ansi-styles@6.2.1", "", {}, "sha512-bN798gFfQX+viw3R7yrGWRqnrN2oRkEkUjjl4JNn4E8GxxbjtG3FbrEIIY3l8/hrwUwIeCZvi4QuOTP4MErVug=="],
"@mongodb-js/oidc-plugin/express/accepts": ["accepts@1.3.8", "", { "dependencies": { "mime-types": "~2.1.34", "negotiator": "0.6.3" } }, "sha512-PYAthTa2m2VKxuvSD3DPC/Gy+U+sOA1LAuT8mkmRuvw+NACSaeXEQ+NHcVF7rONl6qcaxV3Uuemwawk+7+SJLw=="],
"@mongodb-js/oidc-plugin/express/body-parser": ["body-parser@1.20.3", "", { "dependencies": { "bytes": "3.1.2", "content-type": "~1.0.5", "debug": "2.6.9", "depd": "2.0.0", "destroy": "1.2.0", "http-errors": "2.0.0", "iconv-lite": "0.4.24", "on-finished": "2.4.1", "qs": "6.13.0", "raw-body": "2.5.2", "type-is": "~1.6.18", "unpipe": "1.0.0" } }, "sha512-7rAxByjUMqQ3/bHJy7D6OGXvx/MMc4IqBn/X0fcM1QUcAItpZrBEYhWGem+tzXH90c+G01ypMcYJBO9Y30203g=="],
@@ -2880,12 +2904,18 @@
"run-applescript/execa/onetime": ["onetime@5.1.2", "", { "dependencies": { "mimic-fn": "^2.1.0" } }, "sha512-kbpaSSGJTWdAY5KPVeMOKXSrPtr8C8C7wodJbcsd51jRnmD+GZu8Y0VoU6Dm5Z4vWr0Ig/1NKuWRKf7j5aaYSg=="],
"run-applescript/execa/signal-exit": ["signal-exit@3.0.7", "", {}, "sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ=="],
"run-applescript/execa/strip-final-newline": ["strip-final-newline@2.0.0", "", {}, "sha512-BrpvfNAE3dcvq7ll3xVumzjKjZQ5tI1sEUIKr3Uoks0XUl45St3FlatVqef9prk4jRDzhW6WZg+3bk93y6pLjA=="],
"send/mime-types/mime-db": ["mime-db@1.54.0", "", {}, "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ=="],
"type-is/mime-types/mime-db": ["mime-db@1.54.0", "", {}, "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ=="],
"wrap-ansi/string-width/emoji-regex": ["emoji-regex@9.2.2", "", {}, "sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg=="],
"wrap-ansi/strip-ansi/ansi-regex": ["ansi-regex@6.1.0", "", {}, "sha512-7HSX4QQb4CspciLpVFwyRe79O3xsIZDDLER21kERQ71oaPodF8jL725AgJMFAYbooIqolJoRLuM81SpeUkpkvA=="],
"@aws-crypto/sha256-browser/@smithy/util-utf8/@smithy/util-buffer-from/@smithy/is-array-buffer": ["@smithy/is-array-buffer@2.2.0", "", { "dependencies": { "tslib": "^2.6.2" } }, "sha512-GGP3O9QFD24uGeAXYUjwSTXARoqpZykHadOmA8G5vfJPK0/DC67qa//0qvqrJzL1xc8WQWX7/yc7fwudjPHPhA=="],
"@aws-crypto/util/@smithy/util-utf8/@smithy/util-buffer-from/@smithy/is-array-buffer": ["@smithy/is-array-buffer@2.2.0", "", { "dependencies": { "tslib": "^2.6.2" } }, "sha512-GGP3O9QFD24uGeAXYUjwSTXARoqpZykHadOmA8G5vfJPK0/DC67qa//0qvqrJzL1xc8WQWX7/yc7fwudjPHPhA=="],
@@ -1,4 +1,4 @@
{
  "entry": ["src/index.ts"],
  "project": ["src/**/*.ts"]
}
46 libs/core/cache/src/cache-factory.ts vendored
@@ -1,23 +1,23 @@
import { NamespacedCache } from './namespaced-cache';
import type { CacheProvider } from './types';

/**
 * Factory function to create namespaced caches
 * Provides a clean API for services to get their own namespaced cache
 */
export function createNamespacedCache(
  cache: CacheProvider | null | undefined,
  namespace: string
): CacheProvider | null {
  if (!cache) {
    return null;
  }
  return new NamespacedCache(cache, namespace);
}

/**
 * Type guard to check if cache is available
 */
export function isCacheAvailable(cache: any): cache is CacheProvider {
  return cache !== null && cache !== undefined && typeof cache.get === 'function';
}
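The factory above degrades gracefully: a missing backing cache yields `null` instead of a wrapper, so callers can disable caching by injecting nothing. A minimal self-contained sketch of that contract (the `MiniCache` interface and Map-backed store here are illustrative stand-ins, not the real `CacheProvider`):

```typescript
// Stripped-down stand-in for the CacheProvider contract.
interface MiniCache {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
}

// Wrapper that prefixes every key, mirroring the diff's `cache:${namespace}:` scheme.
class MiniNamespacedCache implements MiniCache {
  private readonly prefix: string;
  constructor(private readonly inner: MiniCache, namespace: string) {
    this.prefix = `cache:${namespace}:`;
  }
  get(key: string) { return this.inner.get(this.prefix + key); }
  set(key: string, value: string) { this.inner.set(this.prefix + key, value); }
}

// Null in -> null out: caching disabled; otherwise wrap with the namespace.
function createMiniNamespacedCache(
  cache: MiniCache | null | undefined,
  namespace: string
): MiniCache | null {
  if (!cache) return null;
  return new MiniNamespacedCache(cache, namespace);
}

const store = new Map<string, string>();
const backing: MiniCache = { get: k => store.get(k), set: (k, v) => store.set(k, v) };
const quotes = createMiniNamespacedCache(backing, 'quotes');
quotes?.set('AAPL', '190.12');
console.log(store.has('cache:quotes:AAPL')); // true — keys land under the namespace
console.log(createMiniNamespacedCache(null, 'quotes')); // null — caller handles the disabled case
```

The optional-chaining call (`quotes?.set`) is the intended caller ergonomics: services never need a separate "is caching on" branch.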
2 libs/core/cache/src/connection-manager.ts vendored
@@ -88,7 +88,7 @@ export class RedisConnectionManager {
    };

    const redis = new Redis(redisOptions);

    // Use the provided logger or fall back to instance logger
    const log = logger || this.logger;
201 libs/core/cache/src/namespaced-cache.ts vendored
@@ -1,101 +1,100 @@
import type { CacheProvider } from './types';

/**
 * A cache wrapper that automatically prefixes all keys with a namespace
 * Used to provide isolated cache spaces for different services
 */
export class NamespacedCache implements CacheProvider {
  private readonly prefix: string;

  constructor(
    private readonly cache: CacheProvider,
    private readonly namespace: string
  ) {
    this.prefix = `cache:${namespace}:`;
  }

  async get<T = any>(key: string): Promise<T | null> {
    return this.cache.get(`${this.prefix}${key}`);
  }

  async set<T>(
    key: string,
    value: T,
    options?:
      | number
      | {
          ttl?: number;
          preserveTTL?: boolean;
          onlyIfExists?: boolean;
          onlyIfNotExists?: boolean;
          getOldValue?: boolean;
        }
  ): Promise<T | null> {
    return this.cache.set(`${this.prefix}${key}`, value, options);
  }

  async del(key: string): Promise<void> {
    return this.cache.del(`${this.prefix}${key}`);
  }

  async exists(key: string): Promise<boolean> {
    return this.cache.exists(`${this.prefix}${key}`);
  }

  async keys(pattern: string = '*'): Promise<string[]> {
    const fullPattern = `${this.prefix}${pattern}`;
    const keys = await this.cache.keys(fullPattern);
    // Remove the prefix from returned keys for a cleaner API
    return keys.map(k => k.substring(this.prefix.length));
  }

  async clear(): Promise<void> {
    // Clear only keys with this namespace prefix
    const keys = await this.cache.keys(`${this.prefix}*`);
    if (keys.length > 0) {
      await Promise.all(keys.map(key => this.cache.del(key)));
    }
  }

  getStats() {
    return this.cache.getStats();
  }

  async health(): Promise<boolean> {
    return this.cache.health();
  }

  isReady(): boolean {
    return this.cache.isReady();
  }

  async waitForReady(timeout?: number): Promise<void> {
    return this.cache.waitForReady(timeout);
  }

  async close(): Promise<void> {
    // Namespaced cache doesn't own the connection, so we don't close it.
    // The underlying cache instance should be closed by its owner.
  }

  getNamespace(): string {
    return this.namespace;
  }

  getFullPrefix(): string {
    return this.prefix;
  }

  /**
   * Get a value using a raw Redis key (bypassing the namespace prefix)
   * Delegates to the underlying cache's getRaw method if available
   */
  async getRaw<T = unknown>(key: string): Promise<T | null> {
    if (this.cache.getRaw) {
      return this.cache.getRaw<T>(key);
    }
    // Fallback for caches that don't implement getRaw
    return null;
  }
}
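The two namespace-scoped behaviours worth noting in `NamespacedCache` are that `keys()` strips the prefix from results and `clear()` deletes only keys under its own namespace, leaving other namespaces sharing the same backing store untouched. A synchronous Map-backed sketch of both (the `ScopedCache` class is an illustration, not the real async implementation):

```typescript
// Illustrative mini-version of the namespace scoping in NamespacedCache.
class ScopedCache {
  private readonly prefix: string;
  constructor(private readonly store: Map<string, unknown>, namespace: string) {
    this.prefix = `cache:${namespace}:`;
  }
  set(key: string, value: unknown) { this.store.set(this.prefix + key, value); }
  keys(): string[] {
    // Only keys under this namespace, with the prefix stripped so it never leaks out.
    return [...this.store.keys()]
      .filter(k => k.startsWith(this.prefix))
      .map(k => k.substring(this.prefix.length));
  }
  clear() {
    // Delete only this namespace's keys; other namespaces survive.
    for (const k of [...this.store.keys()]) {
      if (k.startsWith(this.prefix)) this.store.delete(k);
    }
  }
}

const shared = new Map<string, unknown>();
const a = new ScopedCache(shared, 'alpha');
const b = new ScopedCache(shared, 'beta');
a.set('x', 1);
b.set('x', 2);
console.log(a.keys());    // ['x'] — prefix stripped
a.clear();
console.log(shared.size); // 1 — beta's entry survives alpha's clear()
```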
@@ -1,9 +1,9 @@
import { join } from 'path';
import { z } from 'zod';
import { getLogger } from '@stock-bot/logger';
import { EnvLoader } from './loaders/env.loader';
import { FileLoader } from './loaders/file.loader';
import { ConfigError, ConfigValidationError } from './errors';
import type {
  ConfigLoader,
  ConfigManagerOptions,
@@ -82,9 +82,9 @@ export class ConfigManager<T = Record<string, unknown>> {
        expected: (err as any).expected,
        received: (err as any).received,
      }));

      this.logger.error('Configuration validation failed:', errorDetails);

      throw new ConfigValidationError('Configuration validation failed', error.errors);
    }
    throw error;
@@ -1,10 +1,10 @@
// Import necessary types
import { z } from 'zod';
import { EnvLoader } from './loaders/env.loader';
import { FileLoader } from './loaders/file.loader';
import { ConfigManager } from './config-manager';
import type { BaseAppConfig } from './schemas';
import { baseAppSchema } from './schemas';

// Legacy singleton instance for backward compatibility
let configInstance: ConfigManager<BaseAppConfig> | null = null;
@@ -56,7 +56,6 @@ function loadCriticalEnvVarsSync(): void {
// Load critical env vars immediately
loadCriticalEnvVarsSync();

/**
 * Initialize configuration for a service in a monorepo.
 * Automatically loads configs from:
@@ -121,8 +120,6 @@ export function getLogConfig() {
  return getConfig().log;
}

export function getQueueConfig() {
  return getConfig().queue;
}
@@ -1,5 +1,5 @@
import { describe, expect, it } from 'bun:test';
import { getStandardServiceName, toUnifiedConfig, unifiedAppSchema } from '../unified-app.schema';

describe('UnifiedAppConfig', () => {
  describe('getStandardServiceName', () => {
@@ -74,13 +74,13 @@ describe('UnifiedAppConfig', () => {
    };

    const result = unifiedAppSchema.parse(config);

    // Should have both nested and flat structure
    expect(result.postgres).toBeDefined();
    expect(result.mongodb).toBeDefined();
    expect(result.database?.postgres).toBeDefined();
    expect(result.database?.mongodb).toBeDefined();

    // Values should match
    expect(result.postgres?.host).toBe('localhost');
    expect(result.postgres?.port).toBe(5432);
@@ -144,7 +144,7 @@ describe('UnifiedAppConfig', () => {
    };

    const unified = toUnifiedConfig(stockBotConfig);

    expect(unified.service.serviceName).toBe('data-ingestion');
    expect(unified.redis).toBeDefined();
    expect(unified.redis?.host).toBe('localhost');
@@ -152,4 +152,4 @@ describe('UnifiedAppConfig', () => {
    expect(unified.postgres?.host).toBe('localhost');
  });
});
});
@@ -1,61 +1,63 @@
import { z } from 'zod';
import { environmentSchema } from './base.schema';
import {
  dragonflyConfigSchema,
  mongodbConfigSchema,
  postgresConfigSchema,
  questdbConfigSchema,
} from './database.schema';
import {
  browserConfigSchema,
  httpConfigSchema,
  loggingConfigSchema,
  proxyConfigSchema,
  queueConfigSchema,
  serviceConfigSchema,
  webshareConfigSchema,
} from './service.schema';

/**
 * Generic base application schema that can be extended by specific apps
 */
export const baseAppSchema = z.object({
  // Basic app info
  name: z.string(),
  version: z.string(),
  environment: environmentSchema.default('development'),

  // Service configuration
  service: serviceConfigSchema,

  // Logging configuration
  log: loggingConfigSchema,

  // Database configuration - apps can choose which databases they need
  database: z
    .object({
      postgres: postgresConfigSchema.optional(),
      mongodb: mongodbConfigSchema.optional(),
      questdb: questdbConfigSchema.optional(),
      dragonfly: dragonflyConfigSchema.optional(),
    })
    .optional(),

  // Redis configuration (used for cache and queue)
  redis: dragonflyConfigSchema.optional(),

  // Queue configuration
  queue: queueConfigSchema.optional(),

  // HTTP client configuration
  http: httpConfigSchema.optional(),

  // WebShare proxy configuration
  webshare: webshareConfigSchema.optional(),

  // Browser configuration
  browser: browserConfigSchema.optional(),

  // Proxy manager configuration
  proxy: proxyConfigSchema.optional(),
});

export type BaseAppConfig = z.infer<typeof baseAppSchema>;
@@ -15,4 +15,3 @@ export type { BaseAppConfig } from './base-app.schema';
// Export unified schema for standardized configuration
export { unifiedAppSchema, toUnifiedConfig, getStandardServiceName } from './unified-app.schema';
export type { UnifiedAppConfig } from './unified-app.schema';
@@ -100,8 +100,10 @@ export const proxyConfigSchema = z.object({
  enabled: z.boolean().default(false),
  cachePrefix: z.string().default('proxy:'),
  ttl: z.number().default(3600),
  webshare: z
    .object({
      apiKey: z.string(),
      apiUrl: z.string().default('https://proxy.webshare.io/api/v2/'),
    })
    .optional(),
});
@@ -1,62 +1,67 @@
import { z } from 'zod';
import { baseAppSchema } from './base-app.schema';
import {
  dragonflyConfigSchema,
  mongodbConfigSchema,
  postgresConfigSchema,
  questdbConfigSchema,
} from './database.schema';

/**
 * Unified application configuration schema that provides both nested and flat access
 * to database configurations for backward compatibility while maintaining a clean structure
 */
export const unifiedAppSchema = baseAppSchema
  .extend({
    // Flat database configs for DI system (these take precedence)
    redis: dragonflyConfigSchema.optional(),
    mongodb: mongodbConfigSchema.optional(),
    postgres: postgresConfigSchema.optional(),
    questdb: questdbConfigSchema.optional(),
  })
  .transform(data => {
    // Ensure service.serviceName is set from service.name if not provided
    if (data.service && !data.service.serviceName) {
      data.service.serviceName = data.service.name
        .replace(/([A-Z])/g, '-$1')
        .toLowerCase()
        .replace(/^-/, '');
    }

    // If flat configs exist, ensure they're also in the nested database object
    if (data.redis || data.mongodb || data.postgres || data.questdb) {
      data.database = {
        ...data.database,
        dragonfly: data.redis || data.database?.dragonfly,
        mongodb: data.mongodb || data.database?.mongodb,
        postgres: data.postgres || data.database?.postgres,
        questdb: data.questdb || data.database?.questdb,
      };
    }

    // If nested configs exist but flat ones don't, copy them to flat structure
    if (data.database) {
      if (data.database.dragonfly && !data.redis) {
        data.redis = data.database.dragonfly;
      }
      if (data.database.mongodb && !data.mongodb) {
        data.mongodb = data.database.mongodb;
      }
      if (data.database.postgres && !data.postgres) {
        data.postgres = data.database.postgres;
      }
      if (data.database.questdb && !data.questdb) {
        // Handle the ilpPort -> influxPort mapping for DI system
        const questdbConfig = { ...data.database.questdb };
        if ('ilpPort' in questdbConfig && !('influxPort' in questdbConfig)) {
          (questdbConfig as any).influxPort = questdbConfig.ilpPort;
        }
        data.questdb = questdbConfig;
      }
    }

    return data;
  });

export type UnifiedAppConfig = z.infer<typeof unifiedAppSchema>;

@@ -72,5 +77,8 @@ export function toUnifiedConfig(config: any): UnifiedAppConfig {
 */
export function getStandardServiceName(serviceName: string): string {
  // Convert camelCase to kebab-case
  return serviceName
    .replace(/([A-Z])/g, '-$1')
    .toLowerCase()
    .replace(/^-/, '');
}
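The service-name derivation used in both `getStandardServiceName` and the schema transform is a three-step camelCase-to-kebab-case rule: insert `-` before each capital, lowercase everything, then trim a leading `-` for names that start with a capital. Restated standalone (same regexes as the diff, hypothetical function name):

```typescript
// camelCase / PascalCase -> kebab-case, as in getStandardServiceName.
function toKebabCase(serviceName: string): string {
  return serviceName
    .replace(/([A-Z])/g, '-$1') // 'dataIngestion' -> 'data-Ingestion'
    .toLowerCase()              // -> 'data-ingestion'
    .replace(/^-/, '');         // 'DataIngestion' would otherwise yield '-data-ingestion'
}

console.log(toKebabCase('dataIngestion')); // 'data-ingestion'
console.log(toKebabCase('DataIngestion')); // 'data-ingestion'
console.log(toKebabCase('api'));           // 'api'
```

Note the rule is lossy in one direction: both `dataIngestion` and `DataIngestion` map to the same kebab-case name, which is why the transform only sets `serviceName` when it is not already provided.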
@@ -20,6 +20,8 @@
    "@stock-bot/queue": "workspace:*",
    "@stock-bot/shutdown": "workspace:*",
    "@stock-bot/handlers": "workspace:*",
    "@stock-bot/handler-registry": "workspace:*",
    "glob": "^10.0.0",
    "zod": "^3.23.8",
    "hono": "^4.0.0",
    "awilix": "^12.0.5"
@@ -3,16 +3,16 @@
 * Creates a decoupled, reusable dependency injection container
 */

import type { Browser } from '@stock-bot/browser';
import type { CacheProvider } from '@stock-bot/cache';
import type { Logger } from '@stock-bot/logger';
import type { MongoDBClient } from '@stock-bot/mongodb';
import type { PostgreSQLClient } from '@stock-bot/postgres';
import type { ProxyManager } from '@stock-bot/proxy';
import type { QuestDBClient } from '@stock-bot/questdb';
import type { QueueManager } from '@stock-bot/queue';
import { type AwilixContainer } from 'awilix';
import type { IServiceContainer } from '@stock-bot/types';
import type { AppConfig } from './config/schemas';

// Re-export for backward compatibility
@@ -41,8 +41,6 @@ export interface ServiceDefinitions {
  serviceContainer: IServiceContainer;
}

// Export typed container
export type ServiceContainer = AwilixContainer<ServiceDefinitions>;
export type ServiceCradle = ServiceDefinitions;
@@ -59,5 +57,3 @@ export interface ServiceContainerOptions {
  enableBrowser?: boolean;
  enableProxy?: boolean;
}
@@ -1,9 +1,9 @@
import { z } from 'zod';
import { mongodbConfigSchema } from './mongodb.schema';
import { postgresConfigSchema } from './postgres.schema';
import { questdbConfigSchema } from './questdb.schema';
import { redisConfigSchema } from './redis.schema';
import { browserConfigSchema, proxyConfigSchema, queueConfigSchema } from './service.schema';

export const appConfigSchema = z.object({
  redis: redisConfigSchema,
@@ -13,11 +13,13 @@ export const appConfigSchema = z.object({
  proxy: proxyConfigSchema.optional(),
  browser: browserConfigSchema.optional(),
  queue: queueConfigSchema.optional(),
  service: z
    .object({
      name: z.string(),
      serviceName: z.string().optional(), // Standard kebab-case service name
      port: z.number().optional(),
    })
    .optional(),
});

export type AppConfig = z.infer<typeof appConfigSchema>;
@@ -27,4 +29,4 @@ export * from './redis.schema';
export * from './mongodb.schema';
export * from './postgres.schema';
export * from './questdb.schema';
export * from './service.schema';
@@ -1,9 +1,9 @@
import { z } from 'zod';

export const mongodbConfigSchema = z.object({
  enabled: z.boolean().optional().default(true),
  uri: z.string(),
  database: z.string(),
});

export type MongoDBConfig = z.infer<typeof mongodbConfigSchema>;
@@ -1,12 +1,12 @@
import { z } from 'zod';

export const postgresConfigSchema = z.object({
  enabled: z.boolean().optional().default(true),
  host: z.string().default('localhost'),
  port: z.number().default(5432),
  database: z.string(),
  user: z.string(),
  password: z.string(),
});

export type PostgresConfig = z.infer<typeof postgresConfigSchema>;
@@ -1,12 +1,12 @@
import { z } from 'zod';

export const questdbConfigSchema = z.object({
  enabled: z.boolean().optional().default(true),
  host: z.string().default('localhost'),
  httpPort: z.number().optional().default(9000),
  pgPort: z.number().optional().default(8812),
  influxPort: z.number().optional().default(9009),
  database: z.string().optional().default('questdb'),
});

export type QuestDBConfig = z.infer<typeof questdbConfigSchema>;
@@ -1,12 +1,12 @@
import { z } from 'zod';

export const redisConfigSchema = z.object({
  enabled: z.boolean().optional().default(true),
  host: z.string().default('localhost'),
  port: z.number().default(6379),
  password: z.string().optional(),
  username: z.string().optional(),
  db: z.number().optional().default(0),
});

export type RedisConfig = z.infer<typeof redisConfigSchema>;
@@ -4,10 +4,12 @@ export const proxyConfigSchema = z.object({
  enabled: z.boolean().default(false),
  cachePrefix: z.string().optional().default('proxy:'),
  ttl: z.number().optional().default(3600),
  webshare: z
    .object({
      apiKey: z.string(),
      apiUrl: z.string().default('https://proxy.webshare.io/api/v2/'),
    })
    .optional(),
});

export const browserConfigSchema = z.object({
@ -21,18 +23,23 @@ export const queueConfigSchema = z.object({
|
|||
concurrency: z.number().optional().default(1),
|
||||
enableScheduledJobs: z.boolean().optional().default(true),
|
||||
delayWorkerStart: z.boolean().optional().default(false),
|
||||
defaultJobOptions: z.object({
|
||||
attempts: z.number().default(3),
|
||||
backoff: z.object({
|
||||
type: z.enum(['exponential', 'fixed']).default('exponential'),
|
||||
delay: z.number().default(1000),
|
||||
}).default({}),
|
||||
removeOnComplete: z.number().default(100),
|
||||
removeOnFail: z.number().default(50),
|
||||
timeout: z.number().optional(),
|
||||
}).optional().default({}),
|
||||
defaultJobOptions: z
|
||||
.object({
|
||||
attempts: z.number().default(3),
|
||||
backoff: z
|
||||
.object({
|
||||
type: z.enum(['exponential', 'fixed']).default('exponential'),
|
||||
delay: z.number().default(1000),
|
||||
})
|
||||
.default({}),
|
||||
removeOnComplete: z.number().default(100),
|
||||
removeOnFail: z.number().default(50),
|
||||
timeout: z.number().optional(),
|
||||
})
|
||||
.optional()
|
||||
.default({}),
|
||||
});
|
||||
|
||||
export type ProxyConfig = z.infer<typeof proxyConfigSchema>;
|
||||
export type BrowserConfig = z.infer<typeof browserConfigSchema>;
|
||||
export type QueueConfig = z.infer<typeof queueConfigSchema>;
|
||||
export type QueueConfig = z.infer<typeof queueConfigSchema>;
|
||||
|
|
|
|||
|
|
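The queue schema above relies on zod's nested-default behavior: because every inner field declares `.default(...)` and the outer objects default to `{}`, parsing an empty input yields fully populated job options, and partial input is merged with the defaults level by level. A dependency-free sketch of that merge behavior (the type names here are illustrative stand-ins, not part of the codebase):

```typescript
// Illustrative stand-ins; the real schema is the zod definition above.
type BackoffOptions = { type: 'exponential' | 'fixed'; delay: number };
type QueueJobOptions = {
  attempts: number;
  backoff: BackoffOptions;
  removeOnComplete: number;
  removeOnFail: number;
  timeout?: number;
};

const jobOptionDefaults: QueueJobOptions = {
  attempts: 3,
  backoff: { type: 'exponential', delay: 1000 },
  removeOnComplete: 100,
  removeOnFail: 50,
};

// Per-level merge: caller values win, defaults fill every gap, including
// inside the nested backoff object.
function resolveJobOptions(
  input: Partial<Omit<QueueJobOptions, 'backoff'>> & { backoff?: Partial<BackoffOptions> } = {}
): QueueJobOptions {
  return {
    ...jobOptionDefaults,
    ...input,
    backoff: { ...jobOptionDefaults.backoff, ...input.backoff },
  };
}
```

With zod itself, `queueConfigSchema.parse({})` performs the equivalent resolution, which is why `defaultJobOptions` can safely end in `.optional().default({})`.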
@@ -1,15 +1,17 @@
import { createContainer, InjectionMode, asFunction, type AwilixContainer } from 'awilix';
import { asClass, asFunction, createContainer, InjectionMode, type AwilixContainer } from 'awilix';
import type { BaseAppConfig as StockBotAppConfig, UnifiedAppConfig } from '@stock-bot/config';
import { appConfigSchema, type AppConfig } from '../config/schemas';
import { toUnifiedConfig } from '@stock-bot/config';
import {
  registerCoreServices,
import { HandlerRegistry } from '@stock-bot/handler-registry';
import { appConfigSchema, type AppConfig } from '../config/schemas';
import {
  registerApplicationServices,
  registerCacheServices,
  registerCoreServices,
  registerDatabaseServices,
  registerApplicationServices
} from '../registrations';
import { HandlerScanner } from '../scanner';
import { ServiceLifecycleManager } from '../utils/lifecycle';
import type { ServiceDefinitions, ContainerBuildOptions } from './types';
import type { ContainerBuildOptions, ServiceDefinitions } from './types';

export class ServiceContainerBuilder {
  private config: Partial<AppConfig> = {};

@@ -38,7 +40,10 @@ export class ServiceContainerBuilder {
    return this;
  }

  enableService(service: keyof Omit<ContainerBuildOptions, 'skipInitialization' | 'initializationTimeout'>, enabled = true): this {
  enableService(
    service: keyof Omit<ContainerBuildOptions, 'skipInitialization' | 'initializationTimeout'>,
    enabled = true
  ): this {
    this.options[service] = enabled;
    return this;
  }
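The `enableService` signature above uses `keyof Omit<...>` so callers can only toggle genuine feature flags; the two build-control fields are rejected at the type level. A minimal standalone sketch of the same pattern (types simplified from the real `ContainerBuildOptions`):

```typescript
// Simplified stand-in for the real options type.
interface BuildOptions {
  enableCache?: boolean;
  enableQueue?: boolean;
  skipInitialization?: boolean;
  initializationTimeout?: number;
}

// Only the boolean feature flags remain after Omit, so passing
// 'skipInitialization' to enableService is a compile-time error.
type FeatureFlag = keyof Omit<BuildOptions, 'skipInitialization' | 'initializationTimeout'>;

class Builder {
  private options: BuildOptions = {};

  enableService(service: FeatureFlag, enabled = true): this {
    this.options[service] = enabled;
    return this; // fluent chaining
  }

  getOptions(): BuildOptions {
    return { ...this.options };
  }
}
```

The `this` return type keeps chaining type-safe even if the builder is subclassed.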
@@ -51,7 +56,7 @@ export class ServiceContainerBuilder {
  async build(): Promise<AwilixContainer<ServiceDefinitions>> {
    // Validate and prepare config
    const validatedConfig = this.prepareConfig();

    // Create container
    const container = createContainer<ServiceDefinitions>({
      injectionMode: InjectionMode.PROXY,
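`InjectionMode.PROXY` means awilix hands each registered factory a proxy `cradle` and resolves whichever properties the factory destructures, lazily and by name. A rough dependency-free sketch of that resolution idea (this is an illustration of the concept, not awilix's actual implementation):

```typescript
type Registrations = Record<string, (cradle: Record<string, unknown>) => unknown>;

// Tiny proxy cradle: each property access lazily runs the matching factory
// and memoizes the result (singleton-style).
function createCradle(registrations: Registrations): Record<string, unknown> {
  const cache = new Map<string, unknown>();
  const cradle: Record<string, unknown> = new Proxy(
    {},
    {
      get(_target, prop) {
        const key = String(prop);
        if (!cache.has(key)) {
          const factory = registrations[key];
          if (!factory) throw new Error(`Unknown dependency: ${key}`);
          cache.set(key, factory(cradle));
        }
        return cache.get(key);
      },
    }
  );
  return cradle;
}
```

Because factories receive the cradle itself, they can declare dependencies simply by destructuring, which is exactly what the `serviceContainer` registration later in this file does.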
@@ -77,17 +82,19 @@ export class ServiceContainerBuilder {

  private applyServiceOptions(config: Partial<AppConfig>): AppConfig {
    // Ensure questdb config has the right field names for DI
    const questdbConfig = config.questdb ? {
      ...config.questdb,
      influxPort: (config.questdb as any).influxPort || (config.questdb as any).ilpPort || 9009,
    } : {
      enabled: true,
      host: 'localhost',
      httpPort: 9000,
      pgPort: 8812,
      influxPort: 9009,
      database: 'questdb',
    };
    const questdbConfig = config.questdb
      ? {
          ...config.questdb,
          influxPort: (config.questdb as any).influxPort || (config.questdb as any).ilpPort || 9009,
        }
      : {
          enabled: true,
          host: 'localhost',
          httpPort: 9000,
          pgPort: 8812,
          influxPort: 9009,
          database: 'questdb',
        };

    return {
      redis: config.redis || {
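One caveat with the `||` fallbacks used for the port mapping above: `||` replaces every falsy value, not just missing ones, so an explicit `false` or `0` is silently overridden (note that `enabled: config.questdb.enabled || true` elsewhere in this diff can never resolve to `false`). Where falsy values are meaningful, `??` only falls back on `null`/`undefined`. A small illustration (the helper and its values are hypothetical, not read from real config):

```typescript
// Hypothetical helper contrasting || and ?? for config fallbacks.
function resolveFlags(enabled: boolean | undefined, port: number | undefined) {
  return {
    // || treats every falsy value as missing, so an explicit `false` is lost:
    orEnabled: enabled || true,
    // ?? only falls back on null/undefined, preserving an explicit `false`:
    nullishEnabled: enabled ?? true,
    // For numbers, || would also clobber a legitimate 0; ?? does not:
    resolvedPort: port ?? 9009,
  };
}
```

For fields like `influxPort` this rarely matters in practice (port 0 is not a usable value), but for boolean toggles `??` is the safer default.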
@@ -110,61 +117,88 @@ export class ServiceContainerBuilder {
        password: 'postgres',
      },
      questdb: this.options.enableQuestDB ? questdbConfig : undefined,
      proxy: this.options.enableProxy ? (config.proxy || { enabled: false, cachePrefix: 'proxy:', ttl: 3600 }) : undefined,
      browser: this.options.enableBrowser ? (config.browser || { headless: true, timeout: 30000 }) : undefined,
      queue: this.options.enableQueue ? (config.queue || {
        enabled: true,
        workers: 1,
        concurrency: 1,
        enableScheduledJobs: true,
        delayWorkerStart: false,
        defaultJobOptions: {
          attempts: 3,
          backoff: { type: 'exponential' as const, delay: 1000 },
          removeOnComplete: 100,
          removeOnFail: 50,
        }
      }) : undefined,
      proxy: this.options.enableProxy
        ? config.proxy || { enabled: false, cachePrefix: 'proxy:', ttl: 3600 }
        : undefined,
      browser: this.options.enableBrowser
        ? config.browser || { headless: true, timeout: 30000 }
        : undefined,
      queue: this.options.enableQueue
        ? config.queue || {
            enabled: true,
            workers: 1,
            concurrency: 1,
            enableScheduledJobs: true,
            delayWorkerStart: false,
            defaultJobOptions: {
              attempts: 3,
              backoff: { type: 'exponential' as const, delay: 1000 },
              removeOnComplete: 100,
              removeOnFail: 50,
            },
          }
        : undefined,
      service: config.service,
    };
  }

  private registerServices(container: AwilixContainer<ServiceDefinitions>, config: AppConfig): void {
  private registerServices(
    container: AwilixContainer<ServiceDefinitions>,
    config: AppConfig
  ): void {
    // Register handler infrastructure first
    container.register({
      handlerRegistry: asClass(HandlerRegistry).singleton(),
      handlerScanner: asClass(HandlerScanner).singleton(),
    });

    registerCoreServices(container, config);
    registerCacheServices(container, config);
    registerDatabaseServices(container, config);
    registerApplicationServices(container, config);

    // Register service container aggregate
    container.register({
      serviceContainer: asFunction(({
        config: _config, logger, cache, globalCache, proxyManager, browser,
        queueManager, mongoClient, postgresClient, questdbClient
      }) => ({
        logger,
        cache,
        globalCache,
        proxy: proxyManager, // Map proxyManager to proxy
        browser,
        queue: queueManager, // Map queueManager to queue
        mongodb: mongoClient, // Map mongoClient to mongodb
        postgres: postgresClient, // Map postgresClient to postgres
        questdb: questdbClient, // Map questdbClient to questdb
      })).singleton(),
      serviceContainer: asFunction(
        ({
          config: _config,
          logger,
          cache,
          globalCache,
          proxyManager,
          browser,
          queueManager,
          mongoClient,
          postgresClient,
          questdbClient,
        }) => ({
          logger,
          cache,
          globalCache,
          proxy: proxyManager, // Map proxyManager to proxy
          browser,
          queue: queueManager, // Map queueManager to queue
          mongodb: mongoClient, // Map mongoClient to mongodb
          postgres: postgresClient, // Map postgresClient to postgres
          questdb: questdbClient, // Map questdbClient to questdb
        })
      ).singleton(),
    });
  }
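The `serviceContainer` registration above is essentially a renaming facade: the container's internal registration names (`proxyManager`, `queueManager`, `mongoClient`, ...) are mapped onto the public service-container shape (`proxy`, `queue`, `mongodb`, ...). The idea in isolation, with simplified stand-in types rather than the real `@stock-bot` interfaces:

```typescript
// Simplified stand-ins for the internal cradle and the public facade.
interface Cradle {
  logger: { info(msg: string): void };
  proxyManager: { rotate(): string } | null;
  queueManager: { enqueue(job: string): void } | null;
}

interface PublicServices {
  logger: Cradle['logger'];
  proxy: Cradle['proxyManager'];
  queue: Cradle['queueManager'];
}

// Factory that renames internal registration keys to the public names,
// mirroring what the asFunction(...) registration does above.
function buildServiceFacade({ logger, proxyManager, queueManager }: Cradle): PublicServices {
  return {
    logger,
    proxy: proxyManager, // proxyManager -> proxy
    queue: queueManager, // queueManager -> queue
  };
}
```

Keeping the renaming in one factory means consumers depend only on the stable public names while internal registrations can be refactored freely.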

  private transformStockBotConfig(config: UnifiedAppConfig): Partial<AppConfig> {
    // Unified config already has flat structure, just extract what we need
    // Handle questdb field name mapping
    const questdb = config.questdb ? {
      enabled: config.questdb.enabled || true,
      host: config.questdb.host || 'localhost',
      httpPort: config.questdb.httpPort || 9000,
      pgPort: config.questdb.pgPort || 8812,
      influxPort: (config.questdb as any).influxPort || (config.questdb as any).ilpPort || 9009,
      database: config.questdb.database || 'questdb',
    } : undefined;
    const questdb = config.questdb
      ? {
          enabled: config.questdb.enabled || true,
          host: config.questdb.host || 'localhost',
          httpPort: config.questdb.httpPort || 9000,
          pgPort: config.questdb.pgPort || 8812,
          influxPort: (config.questdb as any).influxPort || (config.questdb as any).ilpPort || 9009,
          database: config.questdb.database || 'questdb',
        }
      : undefined;

    return {
      redis: config.redis,

@@ -177,4 +211,4 @@ export class ServiceContainerBuilder {
      service: config.service,
    };
  }
}
}
@@ -1,48 +1,54 @@
import type { Browser } from '@stock-bot/browser';
import type { CacheProvider } from '@stock-bot/cache';
import type { IServiceContainer } from '@stock-bot/types';
import type { Logger } from '@stock-bot/logger';
import type { MongoDBClient } from '@stock-bot/mongodb';
import type { PostgreSQLClient } from '@stock-bot/postgres';
import type { ProxyManager } from '@stock-bot/proxy';
import type { QuestDBClient } from '@stock-bot/questdb';
import type { SmartQueueManager } from '@stock-bot/queue';
import type { AppConfig } from '../config/schemas';

export interface ServiceDefinitions {
  // Configuration
  config: AppConfig;
  logger: Logger;

  // Core services
  cache: CacheProvider | null;
  globalCache: CacheProvider | null;
  proxyManager: ProxyManager | null;
  browser: Browser;
  queueManager: SmartQueueManager | null;

  // Database clients
  mongoClient: MongoDBClient | null;
  postgresClient: PostgreSQLClient | null;
  questdbClient: QuestDBClient | null;

  // Aggregate service container
  serviceContainer: IServiceContainer;
}

export type ServiceCradle = ServiceDefinitions;

export interface ServiceContainerOptions {
  enableQuestDB?: boolean;
  enableMongoDB?: boolean;
  enablePostgres?: boolean;
  enableCache?: boolean;
  enableQueue?: boolean;
  enableBrowser?: boolean;
  enableProxy?: boolean;
}

export interface ContainerBuildOptions extends ServiceContainerOptions {
  skipInitialization?: boolean;
  initializationTimeout?: number;
}
import type { Browser } from '@stock-bot/browser';
import type { CacheProvider } from '@stock-bot/cache';
import type { HandlerRegistry } from '@stock-bot/handler-registry';
import type { Logger } from '@stock-bot/logger';
import type { MongoDBClient } from '@stock-bot/mongodb';
import type { PostgreSQLClient } from '@stock-bot/postgres';
import type { ProxyManager } from '@stock-bot/proxy';
import type { QuestDBClient } from '@stock-bot/questdb';
import type { SmartQueueManager } from '@stock-bot/queue';
import type { IServiceContainer } from '@stock-bot/types';
import type { AppConfig } from '../config/schemas';
import type { HandlerScanner } from '../scanner';

export interface ServiceDefinitions {
  // Configuration
  config: AppConfig;
  logger: Logger;

  // Handler infrastructure
  handlerRegistry: HandlerRegistry;
  handlerScanner: HandlerScanner;

  // Core services
  cache: CacheProvider | null;
  globalCache: CacheProvider | null;
  proxyManager: ProxyManager | null;
  browser: Browser;
  queueManager: SmartQueueManager | null;

  // Database clients
  mongoClient: MongoDBClient | null;
  postgresClient: PostgreSQLClient | null;
  questdbClient: QuestDBClient | null;

  // Aggregate service container
  serviceContainer: IServiceContainer;
}

export type ServiceCradle = ServiceDefinitions;

export interface ServiceContainerOptions {
  enableQuestDB?: boolean;
  enableMongoDB?: boolean;
  enablePostgres?: boolean;
  enableCache?: boolean;
  enableQueue?: boolean;
  enableBrowser?: boolean;
  enableProxy?: boolean;
}

export interface ContainerBuildOptions extends ServiceContainerOptions {
  skipInitialization?: boolean;
  initializationTimeout?: number;
}
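Several entries in `ServiceDefinitions` are deliberately nullable (`CacheProvider | null`, `ProxyManager | null`, ...) because the builder's enable flags can leave those services unregistered. Consumers therefore have to narrow before use. A small sketch of that guard pattern, with a simplified cache type standing in for the real `CacheProvider`:

```typescript
// Simplified stand-in for the real CacheProvider interface.
interface CacheProviderLike {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
}

// A consumer that degrades gracefully when the optional cache is disabled:
// with null it always recomputes; with a cache it memoizes.
function getWithCache(
  cache: CacheProviderLike | null,
  key: string,
  compute: () => string
): string {
  if (!cache) {
    return compute(); // cache disabled: always recompute
  }
  const hit = cache.get(key);
  if (hit !== undefined) return hit;
  const value = compute();
  cache.set(key, value);
  return value;
}
```

Encoding "disabled" as `null` in the type forces every call site through this check, rather than failing at runtime when a feature flag is off.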
@@ -3,10 +3,7 @@ import { NamespacedCache, type CacheProvider } from '@stock-bot/cache';
import type { ServiceDefinitions } from '../container/types';

export class CacheFactory {
  static createNamespacedCache(
    baseCache: CacheProvider,
    namespace: string
  ): NamespacedCache {
  static createNamespacedCache(baseCache: CacheProvider, namespace: string): NamespacedCache {
    return new NamespacedCache(baseCache, namespace);
  }

@@ -15,8 +12,10 @@ export class CacheFactory {
    serviceName: string
  ): CacheProvider | null {
    const baseCache = container.cradle.cache;
    if (!baseCache) {return null;}

    if (!baseCache) {
      return null;
    }

    return this.createNamespacedCache(baseCache, serviceName);
  }

@@ -25,8 +24,10 @@ export class CacheFactory {
    handlerName: string
  ): CacheProvider | null {
    const baseCache = container.cradle.cache;
    if (!baseCache) {return null;}

    if (!baseCache) {
      return null;
    }

    return this.createNamespacedCache(baseCache, `handler:${handlerName}`);
  }

@@ -35,10 +36,12 @@ export class CacheFactory {
    prefix: string
  ): CacheProvider | null {
    const baseCache = container.cradle.cache;
    if (!baseCache) {return null;}

    if (!baseCache) {
      return null;
    }

    // Remove 'cache:' prefix if already included
    const cleanPrefix = prefix.replace(/^cache:/, '');
    return this.createNamespacedCache(baseCache, cleanPrefix);
  }
}
}
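`NamespacedCache` itself is not shown in this diff; assuming it prefixes keys with its namespace (which is what the `handler:` and `cache:`-stripping logic above implies), a minimal sketch of the wrapper together with the prefix-cleaning behavior from `createCacheForPrefix`:

```typescript
// Minimal cache interface for the sketch.
interface SimpleCache {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
}

// Sketch of a key-prefixing wrapper. The real NamespacedCache lives in
// @stock-bot/cache; this illustrates the idea, not its actual implementation.
class PrefixedCache implements SimpleCache {
  constructor(private base: SimpleCache, private namespace: string) {}

  private key(k: string): string {
    return `${this.namespace}:${k}`;
  }

  get(key: string): string | undefined {
    return this.base.get(this.key(key));
  }

  set(key: string, value: string): void {
    this.base.set(this.key(key), value);
  }
}

// Mirrors createCacheForPrefix: drop a leading 'cache:' so the namespace
// is not doubled when callers pass an already-prefixed value.
function cacheForPrefix(base: SimpleCache, prefix: string): PrefixedCache {
  return new PrefixedCache(base, prefix.replace(/^cache:/, ''));
}
```

Namespacing this way lets every service and handler share one Redis-backed cache without key collisions.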
Some files were not shown because too many files have changed in this diff.