# Implementing a Backend Service in Rust — Part 1
I’ve been using Rust for quite a while now. Most notably, I’ve been involved in the development of `eventually-rs`, a Rust crate providing support for Event-sourced applications.
My background is in Systems Programming and Backend Development, which was the main drive for creating `eventually`.
In the process of writing the crate, I’ve been reimplementing a couple of the microservices I was working on at my day job, as a sort of testing project to validate the soundness of the crate and its design decisions.
During this time, I’ve gradually learned how to implement many of the features a service needs to be production-ready in the Rust ecosystem, such as: instrumentation, distributed tracing, package structure, containerization and so on.
With this series, I aim to lay those concepts down for you, whether or not you’re already familiar with them, and apply them using stable Rust.
By the end of the series, we should have an (almost) production-ready Rust service, ready to be deployed somewhere in the cloud.
But… before jumping into the code, let’s talk about what we are going to build.
## Dumbbit: A dumb Reddit clone
Well, the title says it all…
For personal projects, I usually draw inspiration from things I use a lot. As a matter of fact, I’m pretty active on Reddit, so why not try to recreate a very small MVP of Reddit?
This is what we’re going to implement in the service:
- Topics (i.e. Subreddits)
  - Create a new Topic
  - Visualize a Topic
- Homepage
  - Follow a Topic
  - Display new Posts from followed Topics
- Posts
  - Create a Post in a Topic
  - Upvote or Downvote a Post
We can keep things fairly small, while still having enough content to cover many of the aspects you might find in your typical service development endeavors.
Now that we know what we’re going to build, let’s talk about how to set up the repository that is going to host our code!
## Repository Setup
This is probably many people’s least favorite part of the development cycle. However, I find getting it right just as important as the actual code we’re going to write: it will help us write code that actually matters, faster.
In this section, we’re going to discuss package structure and the Continuous Integration pipeline.
### Create a new crate
Let’s use `cargo` to create the new crate:

```shell
cargo new --lib dumbbit
```
This will create a library crate, which means the first and only Rust source code file will be in `src/lib.rs`.
I personally prefer having the main crate as a library, and adding as many binary artifacts as needed under `src/bin`. In our case, let’s create an initial `dumbbit.rs` file in `src/bin` with a simple "Hello World":
```rust
//! src/bin/dumbbit.rs

fn main() {
    println!("Hello World!")
}
```
If we run `cargo run` we should get the following output:

```shell
$ cargo run
Hello World!
```
### Why not virtual workspaces?
A Cargo Workspace allows packaging multiple crates in the same repository space, sharing a single `Cargo.lock` instead of one per crate member.
There are some notable examples of workspaces, such as `futures-rs`.
A valid package structure idea might be to split the layers of the service application into separate crates. This has the following advantages:
- Incremental compilation for the updated crates
- Concurrent crate compilation from Cargo during `cargo run` or `cargo build`
- Flat and tidy package structure
However, this makes things a bit more complicated with Docker images, and in most cases there are no concrete benefits to such a structure (unless, of course, your application has a high number of modules and dependencies).
Suggestion: start with a single crate, split layers using modules, and if the number of modules reaches a “big enough” number, you can:

- Move the modules from `src/` to the root level
- Add a `Cargo.toml` for each module at the root level
- Change the `Cargo.toml` in the root level to a Cargo Workspace one
- Import crates as needed
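As a sketch of that final step, the root-level workspace `Cargo.toml` might look like this (the member names here are purely hypothetical, chosen to mirror the layers we’ll talk about later):

```toml
# Hypothetical root-level Cargo.toml after the workspace conversion.
[workspace]
members = [
    "domain",         # domain layer, moved out of src/
    "infrastructure", # database connections, HTTP API, etc.
]
```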
### Set up a Continuous Integration pipeline
Continuing with setting up the development environment, the next thing we have to do is set up a Continuous Integration pipeline.
At the very least, the CI should perform the following checks:
- Check that the code compiles successfully
- Run all kinds of testing suites
- Make sure the code conforms to our style guides using a linter
There might be additional steps you want to include in your CI pipeline (e.g. generating documentation), but for now this should be enough.
You can use whatever tool you like the most, but in our case, since the code is going to be hosted in Github, we’re going to use Github Actions.
Luckily, there are a number of very useful Actions already out there for Rust, such as:
- actions-rs/toolchain, to target a specific Rust version
- actions-rs/tarpaulin, to use `cargo-tarpaulin` for running tests and collecting coverage profiles
- actions-rs/clippy-check, to run the Clippy linter
In our particular case, we can use the following Workflow configuration to enable testing with coverage, compilation and linting, by adding this file to `.github/workflows`:
```yaml
name: Rust (stable)

on:
  pull_request:
  push:
    branches:
      - master

jobs:
  # Make sure the project compiles, without building the whole project.
  check:
    name: Check
    runs-on: ubuntu-latest
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2
      - name: Install stable toolchain
        uses: actions-rs/toolchain@v1
        with:
          profile: minimal
          toolchain: stable
          override: true
      - name: Run cargo check
        uses: actions-rs/cargo@v1
        with:
          command: check

  # Run unit and integration tests in the project.
  # Using tarpaulin to collect coverage reports and upload to Codecov (requires token).
  tests:
    name: Test Suite
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: Install stable toolchain
        uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
          override: true
      - name: Run cargo-tarpaulin (main)
        uses: actions-rs/tarpaulin@v0.1
      - name: Upload to codecov.io
        uses: codecov/codecov-action@v1
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          flags: unit
      - name: Archive code coverage results
        uses: actions/upload-artifact@v2
        with:
          name: code-coverage-report
          path: cobertura.xml

  # Run rustfmt and clippy to highlight warnings and style errors.
  lints:
    name: Lints
    runs-on: ubuntu-latest
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2
      - name: Install stable toolchain
        uses: actions-rs/toolchain@v1
        with:
          profile: minimal
          toolchain: stable
          override: true
          components: rustfmt, clippy
      - name: Run cargo fmt
        uses: actions-rs/cargo@v1
        with:
          command: fmt
          args: --all -- --check
      - name: Run cargo clippy
        uses: actions-rs/clippy-check@v1
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          args: --all-features
```
This pipeline will work whether we push directly to `master` or open a Pull Request.
## Our first Dumb HTTP Server
The application transport choice for now is HTTP, so let’s create a simple “Hello World” application!
Remember: we want this application to run on stable, so… no Rocket. However, a nice stable-compatible HTTP library we can use is `tide`.
Let’s add the dependency in `Cargo.toml` like so:

```toml
[dependencies]
tide = "0.14.0"
```
To use `tide`, however, we need the `async-std` executor to be able to run Futures in the `main()` entrypoint. Let’s add it to the project:

```toml
[dependencies]
async-std = { version = "1.7.0", features = ["attributes"] }
```
Last but not least, I’m a huge fan of `anyhow` for generic errors, since I don’t particularly like the `Box<dyn std::error::Error>` notation. Let’s add that one to `Cargo.toml` too:

```toml
[dependencies]
anyhow = "1.0"
```
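For comparison, here is what that trait-object notation looks like in practice; a self-contained, stdlib-only sketch (the `MyError` type and `might_fail` function are purely illustrative):

```rust
use std::error::Error;
use std::fmt;

// A tiny custom error type, just to have something to box.
#[derive(Debug)]
struct MyError(String);

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "MyError: {}", self.0)
    }
}

impl Error for MyError {}

// The Box<dyn std::error::Error> notation that anyhow lets us avoid
// spelling out on every fallible function.
fn might_fail(fail: bool) -> Result<u32, Box<dyn Error>> {
    if fail {
        Err(Box::new(MyError("something went wrong".into())))
    } else {
        Ok(42)
    }
}

fn main() {
    assert_eq!(might_fail(false).unwrap(), 42);
    assert!(might_fail(true).is_err());
    println!("ok");
}
```

With `anyhow`, the return type above would simply read `anyhow::Result<u32>`, and any error implementing `std::error::Error` converts into it via `?`.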
With the new dependencies, we can put the new HTTP server code in `src/bin/dumbbit.rs`. The file should look like this:

```rust
#[async_std::main]
async fn main() -> anyhow::Result<()> {
    let mut app = tide::new();
    app.at("/").get(|_| async move { Ok("Hello, world!\n") });
    Ok(app.listen("0.0.0.0:8080").await?)
}
```
As you can probably tell, the HTTP server is listening for incoming connections on port `8080`. `tide::new()` creates a new router, and `at("/").get(..)` means that every `GET http://localhost:8080/` request will get `Hello, world!` in response.
Pretty neat, huh?
`async_std::main` is a macro that runs the `async fn main()` on the `async_std` executor (if you don’t know what an executor is, you can check this link).
We can run the application using `cargo run`. Let’s test it out with `curl`:
```shell
$ curl http://localhost:8080/ -v
*   Trying ::1:8080...
* connect to ::1 port 8080 failed: Connection refused
*   Trying 127.0.0.1:8080...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.72.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-length: 14
< date: Wed, 04 Nov 2020 23:12:40 GMT
< content-type: text/plain;charset=utf-8
<
Hello, world!
* Connection #0 to host localhost left intact
```
🎉 It works! 🎉
We now have a pretty decent HTTP server. Bet it can also handle quite a lot of traffic 😛
## What’s next?
Now that we have a functioning HTTP server, the next tasks are:
- Implementing the domain layer
- Implementing the infrastructure layer (e.g. database connections, HTTP API, etc.)
- Package the application in a Docker image
- Deploy it somewhere!
In the next blog post, we’re going to tackle the Domain implementation. We’ll start with some light domain modeling using Event Storming, and see how that translates into our Rust code. You’ll find out that Rust’s type system allows for very expressive, and safe, domain modeling.
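As a tiny teaser of what that might look like (a hypothetical sketch, not the actual design we’ll end up with), Rust lets us push validation rules into constructors, so invalid values never exist at all:

```rust
// Hypothetical sketch: a newtype whose constructor enforces its invariant,
// so any TopicName value in the program is guaranteed to be valid.
#[derive(Debug, Clone, PartialEq)]
struct TopicName(String);

impl TopicName {
    // Validation lives here; the rest of the code can rely on it.
    fn new(name: &str) -> Result<Self, String> {
        let trimmed = name.trim();
        if trimmed.is_empty() {
            return Err("topic name cannot be empty".to_string());
        }
        Ok(TopicName(trimmed.to_string()))
    }
}

fn main() {
    assert!(TopicName::new("   ").is_err());
    assert_eq!(
        TopicName::new("rust").unwrap(),
        TopicName("rust".to_string())
    );
}
```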
For now, you’ll find the Dumbbit code explained in this blog post so far in this repository, at the `blog-post-part-1` tag: https://github.com/ar3s3ru/dumbbit/tree/blog-post-part-1
Let me know what you think!
You can reach out to me on Github, or my email account. You can find me on Twitter or LinkedIn, all the places!