Todemy currently has several moving parts, including multiple mini “web apps” and server-side pieces for both sides of the marketplace. Deployment is tedious and steps can easily be missed. Sounds like a good excuse to write some Rust…

Coordinating a production release

Well, with payments right around the corner (coming soon!) there are many parts to deploy for all of Todemy to be live. Bits and pieces can easily be missed, and there currently isn’t a build pipeline set up or anything in the way of continuous delivery, which makes sense for a one-developer shop. However, I love the opportunity to write additional tooling, as it makes for a great project to write some Rust. While the same could be achieved with a shell script, a higher-level language is much easier to maintain and expand upon (at least for me; shell isn’t that natural for me to read and I usually miss some nuances).

Simple Rust CLI tool

The idea is quite simple: provide a JSON config file describing the commands required to deploy. This led to a great learning experience with Rust, and I’ll go through implementing some of the initial basic concepts. The tool is open source, so feel free to re-use some of the code and concepts for your own use cases :).

The Clap crate handles all the CLI set-up, so I’ll skip that section as their docs are quite comprehensive; take a look through the tody-cli repo for my set-up. Next is parsing the JSON file, so assuming we have the arguments set up, let’s start with step 1, reading and parsing a config file:

use std::error::Error;
use std::fs::File;
use std::path::Path;

use serde::Deserialize;

// A single deploy step: the command to run plus its optional arguments.
#[derive(Debug, Deserialize)]
struct Step {
    command: String,
    args: Option<Vec<String>>,
}

// The whole deploy config is just an ordered list of steps.
#[derive(Debug, Deserialize)]
struct DeployConfig {
    steps: Vec<Step>,
}

fn retrieve_config<P: AsRef<Path>>(path: P) -> Result<DeployConfig, Box<dyn Error>> {
    let file = File::open(path)?;
    let config = serde_json::from_reader(file)?;
    Ok(config)
}

let config = retrieve_config("example-config-path.json").expect("Failed to find config file!");

Quite simple: we read some config from a file and utilize serde to deserialize it into our defined structs. Our config consists of some steps, each with a command and optional arguments.
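
For reference, a config that would deserialize into these structs could look something like the snippet below (the commands here are just placeholders, not my actual deploy steps):

// Hypothetical example config, just to show the shape the structs expect.
let example = r#"
{
    "steps": [
        { "command": "terraform", "args": ["apply"] },
        { "command": "ls" }
    ]
}
"#;

let config: DeployConfig = serde_json::from_str(example).expect("Example config should parse");
println!("Parsed {} steps", config.steps.len());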

The short-handed “?” syntax is just sugar for error handling: it unwraps the Ok value or returns the error early from the function. I’m quite a fan of the error handling methodology within the language, and while tedious at first, I believe it becomes second nature as you get more comfortable with the short-hands and see the further abstractions possible.
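
To make that concrete, a line like let file = File::open(path)?; roughly desugars to an explicit match that returns early on the error case (with the error converted into the function’s error type, Box<dyn Error> here):

// Roughly what `?` expands to inside retrieve_config.
let file = match File::open(path) {
    Ok(file) => file,
    Err(err) => return Err(err.into()),
};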

Next, given our struct, let’s iterate and run some commands. Rust’s standard library Command struct makes it quite simple.

// Assuming the initial set-up is done and `config` is available
use std::process::Command;

for step in config.steps.into_iter() {
    // Run each step's command and capture its output.
    let result = Command::new(step.command)
                    .output()
                    .expect("Our command to not fail :(");

    println!("Yay our step output: {}", String::from_utf8_lossy(&result.stdout));
}

Next, we can refactor it slightly to include the optional arguments:

// Assuming the initial set-up is done and `config` is available
use std::process::Command;

for step in config.steps.into_iter() {
    let mut command = Command::new(&step.command);

    // Append the optional arguments, if any were provided.
    if let Some(ref args) = step.args {
        command.args(args);
    }

    let result = command
                    .output()
                    .expect("Our command to not fail :(");

    println!("Yay our step output: {}", String::from_utf8_lossy(&result.stdout));
}

Now this would work fine, but we might decide we’re greedy and want to run all the steps in parallel, just like a good pipeline: when steps can run in parallel, most of the time we’ll gain a performance boost. Not to mention it’s a great opportunity to introduce and play with some threading logic.

use std::process::Command;
use std::thread;

let mut step_threads = vec![];

for step in config.steps.into_iter() {
    // Spawn a thread per step; each thread takes ownership of its step via `move`.
    step_threads.push(thread::spawn(move || {
        let mut command = Command::new(&step.command);

        if let Some(ref args) = step.args {
            command.args(args);
        }

        let result = command
                        .output()
                        .expect("Our command to not fail :(");

        println!("Yay our step output: {}", String::from_utf8_lossy(&result.stdout));
    }));
}

// Wait for every step to finish before declaring the deploy done.
for thread in step_threads {
    thread.join().unwrap();
}

println!("All steps have finished!");

That’s pretty much all the tool did. There’s a small abstraction of ‘sections’ on top of steps, which chain a bunch of steps together. It allowed me, for example, to bundle and upload all the web apps for each side of the app while also installing the serverless side, and to describe ordering, such as ensuring that Terraform was run first.
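
I won’t reproduce the real config format here, but conceptually the sections layer is just another struct wrapped around the same steps; something along these lines (the names and fields are illustrative, not the actual tody-cli schema):

// Illustrative sketch only: sections run in order, and the steps inside a
// section are the bits that can be fanned out onto threads.
#[derive(Debug, Deserialize)]
struct Section {
    name: String,
    steps: Vec<Step>,
}

#[derive(Debug, Deserialize)]
struct SectionedConfig {
    sections: Vec<Section>, // e.g. terraform first, then web apps and serverless
}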

I can see the Todemy tooling growing to potentially also facilitate some automated testing orchestration, but that’s for another blog post.