Dux scalable implementation (TO BE UPDATED)
The Assignment type derives serde's Serialize and Deserialize traits:
```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Assignment {
    // ...
}
```
Then, using the serde_json crate, we can do this:
```rust
// Serialize
let serialized_assignment: String = serde_json::to_string(&assignment).unwrap();

// Send this String to another host via a TcpStream or anything else...

// A serialized Assignment is received.
// Deserialize it
let deserialized_assignment: Assignment =
    serde_json::from_str(&serialized_assignment).unwrap();
```
This means the work can be split across multiple machines. One machine generates an Assignment based on a TaskList and a HostList and sends it to another machine, which actually runs it on the targeted host. The results can then be sent to a third machine which displays them, as part of a web interface for instance.
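As a rough illustration, here is a minimal sketch of moving a serialized Assignment between two hosts over plain TCP with the standard library. The address, port and the "write everything then close the connection" framing are assumptions made for this example, not Dux's actual transport:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};

/// Sender side: ship the JSON produced by serde_json to a remote host,
/// then close the connection so the receiver sees EOF.
fn send_assignment(serialized_assignment: &str) -> std::io::Result<()> {
    // Placeholder address: this would be the host in charge of running the Assignment.
    let mut stream = TcpStream::connect("worker.example.org:7070")?;
    stream.write_all(serialized_assignment.as_bytes())?;
    Ok(())
}

/// Receiver side: read one whole payload and return it as a String,
/// ready to be handed to serde_json::from_str::<Assignment>().
fn receive_assignment(listener: &TcpListener) -> std::io::Result<String> {
    let (mut stream, _peer) = listener.accept()?;
    let mut buf = String::new();
    // Terminates when the sender closes its end of the connection.
    stream.read_to_string(&mut buf)?;
    Ok(buf)
}
```

Since all that travels is a JSON string, the same payload could just as well go over HTTP or, as in the scalable version described below, through a message broker.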
As an example, in this Dux scalable version, the work is divided between controllers and workers, with a message broker in the middle. The Dux controller publishes Assignments on a RabbitMQ queue and consumes results on another queue. The Dux workers consume Assignments, run them and publish the results.
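For illustration only, the controller's publishing side could look roughly like the sketch below. It assumes the lapin AMQP client (2.x API) and the async-global-executor runtime; the connection URI and the "assignments" and "results" queue names are placeholders rather than the real Dux scalable configuration:

```rust
use lapin::{
    options::{BasicPublishOptions, QueueDeclareOptions},
    types::FieldTable,
    BasicProperties, Connection, ConnectionProperties,
};

fn main() -> Result<(), lapin::Error> {
    async_global_executor::block_on(async {
        // Placeholder broker URI: point it at the actual RabbitMQ instance.
        let conn = Connection::connect(
            "amqp://guest:guest@localhost:5672/%2f",
            ConnectionProperties::default(),
        )
        .await?;
        let channel = conn.create_channel().await?;

        // Declare the queue the workers consume from (name is illustrative).
        channel
            .queue_declare(
                "assignments",
                QueueDeclareOptions::default(),
                FieldTable::default(),
            )
            .await?;

        // In the real flow this JSON comes from serde_json::to_string(&assignment).
        let payload = r#"{"placeholder":"serialized Assignment"}"#;

        // Publish on the default exchange, routed to the "assignments" queue.
        let _confirm = channel
            .basic_publish(
                "",
                "assignments",
                BasicPublishOptions::default(),
                payload.as_bytes(),
                BasicProperties::default(),
            )
            .await?;

        // A worker would basic_consume from "assignments", deserialize and run
        // each Assignment, then publish the outcome on a separate "results" queue.
        Ok(())
    })
}
```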
This architecture makes it possible to scale the operation when needed by increasing the number of workers and/or their multithreading capacity.