# The Dux crate

## Principle
Based on Rust's type system, the workflow is as follows:

- Get a task list: what is the expected state of the managed hosts? This step produces a `TaskList` struct.
- Get a hosts list: which hosts are in the scope of this task list? This step produces a `HostList` struct.
- Generate `Job`s: a `Job` represents a host and lets you track what happens to that host. It contains everything needed to handle the host and apply the expected state.
- Dry run or directly apply the task list on the host by leveraging the `Job` (sketched just below).
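Very roughly, and using only the `Job` methods demonstrated in the Usage example below, this workflow could look like the sketch that follows. The host addresses, key file, and minimal ping task list are placeholders, and the dedicated `TaskList` and `HostList` types have their own constructors that are not shown here.

```rust
use duxcore::prelude::*;

fn main() {
    // Step 1: the task list describes the expected state (a single ping here, for brevity).
    let tasklist = r#"---
- name: Minimal check
  steps:
    - name: Is the host reachable ?
      ping:
"#;

    // Step 2: the hosts in scope (a plain array as a stand-in for a real HostList).
    let hosts = ["10.20.0.203", "10.20.0.204"];

    // Steps 3 and 4: one Job per host, carrying the connection details and the
    // task list, then applied (a dry run could be used here instead).
    for address in hosts {
        let mut job = Job::new();
        job.set_address(address)
            .set_connection(HostConnectionInfo::ssh2_with_key_file("dux", "./controller_key"))
            .unwrap();
        job.set_tasklist_from_str(tasklist, TaskListFileType::Yaml).unwrap();
        job.apply();
        println!("{}", job.display_pretty());
    }
}
```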
## Usage
Import the crate:

```shell
cargo add duxcore
```
Now let's perform the usual example: set up a web server (but, this time, right from your Rust code!)
```rust
use duxcore::prelude::*;

fn main() {
    // First we need to define what the expected state of the target host is.
    let my_tasklist = r#"---
- name: Let's install a web server !
  steps:
    - name: First, we test the connectivity and authentication with the host.
      ping:

    - name: Then we can install the package...
      with_sudo: true
      apt:
        package: '{{ package_name }}'
        state: present

    - name: ... and start & enable the service.
      with_sudo: true
      service:
        name: '{{ service_name }}'
        state: started
        enabled: true

    - name: What date is it on this host by the way ?
      register: host_date
      command:
        content: date +%Y-%m-%d" "%Hh%M

    - name: Let's see...
      debug:
        msg: 'date: {{ host_date.output }}'
"#;

    // Then we create a 'Job'.
    let mut my_job = Job::new();

    // We set who the target host of this Job is, and how to connect to it.
    my_job
        .set_address("10.20.0.203")
        .set_connection(HostConnectionInfo::ssh2_with_key_file("dux", "./controller_key"))
        .unwrap();

    // We give it some context and the task list.
    my_job
        .add_var("package_name", "apache2")
        .add_var("service_name", "apache2")
        .set_tasklist_from_str(my_tasklist, TaskListFileType::Yaml)
        .unwrap();

    // We can finally apply the task list to this host.
    my_job.apply();

    // Let's see the result.
    println!("{}", my_job.display_pretty());
}
```
This is the basic workflow of Dux. It is up to you to parallelize, distribute the work, display the results in some web interface, or send them to a RabbitMQ queue... whatever suits you best! The whole point is to let you adapt this automation engine to the context of your already-existing infrastructure. Adapt the tool to the job!
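As one illustration of that freedom, here is a minimal sketch of parallelizing over hosts with plain threads from the standard library. It reuses the same placeholder addresses, key file, and ping task list as above; each thread builds, applies, and reports its own `Job`, so only the rendered report crosses the thread boundary.

```rust
use duxcore::prelude::*;
use std::thread;

fn main() {
    // Shared expected state for every host (placeholder: a single ping).
    let tasklist = r#"---
- name: Minimal check
  steps:
    - name: Is the host reachable ?
      ping:
"#;

    let hosts = ["10.20.0.203", "10.20.0.204"];

    // One thread per host: each thread owns its Job from construction to report.
    let handles: Vec<_> = hosts
        .into_iter()
        .map(|address| {
            thread::spawn(move || {
                let mut job = Job::new();
                job.set_address(address)
                    .set_connection(HostConnectionInfo::ssh2_with_key_file("dux", "./controller_key"))
                    .unwrap();
                job.set_tasklist_from_str(tasklist, TaskListFileType::Yaml).unwrap();
                job.apply();
                // Return the rendered report to the main thread.
                format!("{}", job.display_pretty())
            })
        })
        .collect();

    for handle in handles {
        println!("{}", handle.join().unwrap());
    }
}
```

The same pattern transfers to a thread pool, an async runtime, or a message queue: because each `Job` is self-contained, the surrounding orchestration is entirely up to you.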