Release v.0.3.0 #496
* detect external IP using stun servers
* bump alloy from 0.9.0 to 0.15.7
* move work submission to retry handler
* improve network gas price fetching and placement of gas price increases
* add anvil tests, adjust nonce management to be non-cached
* add foundry to test pipeline
* add additional test for replacement tx
* Keep all tasks in the orchestrator store, add get_all functionality, change route from /tasks to /tasks/current to get the running task
* introduce server modes to improve scalability of the service
* automatically deploy multiple versions of orchestrator (API and processor) via helm charts
* add basic scheduler and plugins
* Introduce plugins in orchestrator status updates that allow reacting to status changes
* Introduce webhook plugin that allows firing webhooks when node status changes
* add basic implementation for status update plugins and initial webhook plugin
* introduce webhook plugin
* Introduce Node grouping plugin for status update & scheduler
* automatically generate p2p id on worker
* improve redis tests, fix node group assignments
* add groups to api endpoint
* rebuild groups when one node dies
Release: 0.2.11
…k-to-group Improvement: Ensure a group is only working on a single task
* ability to have multiple node group configurations
* ability to set compute requirements per node group
* introduce very basic task_scheduling configuration during task creation
* allow node_groups plugin to schedule tasks based on matching plugin config
* restructure plugin approach and move all plugins to base plugins folder
* introduce storage config with file_name_template to allow custom file names per task
* simple integration of the node_groups plugin into the storage configuration
* move s3 credentials to env var only
* allow passing multiple toploc configs to validator in json format
* add node_group_size in file_name_template
* load node group config for orchestrator from env
* introduce a simple file counter based on upload requests that can be reused in storage config using `${upload_count}`
* adjust interfaces to new prime lib standards (Socket env var, event handling)
* implement group validation calls for toploc
* large scale storage rewrite, introduce mock storage provider for better testing
* update deployment, load toploc configs from env
* setup toploc group e2e tests
* add ability to customize the socket path env var
* setup p2p relay
* align smart contracts repo
* initial implementation for group invalidation
* fix group tracking test
* enhance tests
* collect metrics for soft invalidations
* fix running multiple nodes locally for testing, adjust orchestrator arg, fix soft validation
* introduce rewards_distributor_contract, automatically set rewards rate at bootup, log worker rewards to console
Co-authored-by: varun-doshi <[email protected]>
* automatically try to increase group size from single nodes (#479)
* optimize status handling, include status updates in discovery as well
* Avoid having tasks with the same name
* Improve task response sorting
* avoid worker cleanup on failure restart, remove exponential backoff (#483)
* improve validator redis performance
* fix circular docker volume dependency when restarting worker (#492)
* allow to set threshold of same ips per discovery node
* add nonce to auth middleware
* allow loading data from multiple disc svc
* add reason to toploc output capture, add rejections api
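One of the changelog entries above introduces a `${upload_count}` counter that can be reused in the storage config's `file_name_template`. As a minimal illustration of that style of substitution (the function name and template shown here are hypothetical, not taken from the PR):

```rust
/// Illustrative sketch of file_name_template expansion: a per-task
/// upload counter substituted into the template string. Only the
/// `${upload_count}` placeholder is handled here.
fn render_file_name(template: &str, upload_count: u64) -> String {
    template.replace("${upload_count}", &upload_count.to_string())
}

fn main() {
    // e.g. third upload of a task produces "task-output-3.bin"
    println!("{}", render_file_name("task-output-${upload_count}.bin", 3));
}
```

The real implementation would also handle the other placeholders mentioned in the changelog, such as `node_group_size`.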
Pull Request Overview
This PR prepares the v0.3.0 release by refactoring and improving multiple core modules across the validator, shared, and orchestrator crates. Key changes include:
- Migrating blocking Redis calls to asynchronous ones with proper error handling and connection management.
- Updating contract interfaces and wallet provider usage along with new p2p and webhook plugin functionality.
- Enhancing task scheduling and volume mount processing with improved variable replacement and configuration validation.
Reviewed Changes
Copilot reviewed 103 out of 165 changed files in this pull request and generated 2 comments.
File | Description
---|---
crates/orchestrator/src/store/core/redis.rs | Added a flush command in the connection test that may affect production data.
crates/shared/src/models/task.rs | Updated volume mount logic to support variable replacement and validation with regex.
crates/orchestrator/src/store/domains/node_store.rs | Transitioned multiple node update functions to async and introduced repeated connection acquisition.
(Other files) | Numerous updates to async contracts, p2p client, request signing, and webhook plugins supporting overall feature and performance improvements.
Comments suppressed due to low confidence (1)
crates/orchestrator/src/store/core/redis.rs:60
- Using FLUSHALL during the connection check can be dangerous in production as it clears all Redis data; consider removing or conditioning this call only for testing environments.
```rust
let _ = redis::cmd("FLUSHALL").query::<String>(&mut conn);
```
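One way to address the concern above is to gate the destructive command behind an explicit opt-in. The helper and the `REDIS_ALLOW_FLUSH` variable below are illustrative, not part of the PR:

```rust
/// Illustrative helper: permit FLUSHALL only when explicitly opted in,
/// e.g. via an environment variable that is set solely in test
/// environments, so a routine connection check can never wipe
/// production data.
fn flush_allowed(opt_in: Option<&str>) -> bool {
    matches!(opt_in.map(str::trim), Some("1") | Some("true"))
}

fn main() {
    // In the connection check, the flush would then be guarded:
    // if flush_allowed(std::env::var("REDIS_ALLOW_FLUSH").ok().as_deref()) { ... }
    assert!(flush_allowed(Some("true")));
    assert!(!flush_allowed(None));
}
```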
```rust
    "${NODE_ADDRESS}",
];

let re = regex::Regex::new(r"\$\{[^}]+\}").unwrap();
```
The regex for detecting variable placeholders is compiled on every validation call; caching the compiled Regex instance could improve performance when validating many volume mounts.
```rust
    node_address: &Address,
    status: NodeStatus,
) -> Result<()> {
    let mut con = self.redis.client.get_multiplexed_async_connection().await?;
```
Repeatedly acquiring a new multiplexed Redis connection in node update functions may introduce unnecessary overhead; consider caching or reusing the connection across multiple operations if possible.
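A sketch of the suggested reuse, with a stdlib stand-in for the redis handle (the real `redis::aio::MultiplexedConnection` is likewise acquired once and cheaply cloned per operation; the type and method names below are illustrative):

```rust
use std::sync::Arc;

// Stand-in for an expensive-to-create connection; with redis-rs, the
// multiplexed connection would be acquired once at store construction.
struct Connection {
    id: u64,
}

struct NodeStore {
    conn: Arc<Connection>, // shared handle, created once
}

impl NodeStore {
    fn new() -> Self {
        NodeStore { conn: Arc::new(Connection { id: 1 }) }
    }

    // Each update operation clones the cheap shared handle instead of
    // re-acquiring a fresh connection.
    fn connection(&self) -> Arc<Connection> {
        Arc::clone(&self.conn)
    }
}

fn main() {
    let store = NodeStore::new();
    assert_eq!(store.connection().id, store.connection().id);
}
```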