# Hosting quickly 3 - A Neon database
This is the third (or third-and-a-halfth) article in a series called "Hosting quickly". I want to launch Rust apps quickly and often, and I want to have stable, common infrastructure for all of them.
- Hosting quickly 1 - Setting up Terraform, Github, and 1Password
- Hosting quickly 2 - Dioxus to the web on Fly.io
- Hosting quickly 2.5 - Discovering and calling a backend from Dioxus
To get to the point where I can develop my very first application off of Cochrane, I'm adding a database to my setup. Keeping with the free-tier, low-setup goals here, I'm using Neon. Additionally, I want this to work for local development, so we'll add a Docker Compose configuration!
## Adding Neon to Terraform
There seem to be two Neon providers (1, 2), at near feature parity with each other. The first one, by Dmitry Kisler, appears to be more active and more widely used, so let's go with that one.
A tiny bit of config needs to be added to `infra/main.tf` to get a functioning connection URL to a database:
```hcl
resource "neon_project" "db_project" {
  name = var.github_name

  branch {
    name          = "main"
    database_name = "core"
    role_name     = "backend"
  }
}
```
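One note: the provider itself also needs to be declared before Terraform can use this resource. A sketch of what that looks like, assuming the `kislerdm/neon` registry source and an API key supplied out-of-band (typically via a `NEON_API_KEY` environment variable; check the provider docs for the exact setup):

```hcl
terraform {
  required_providers {
    neon = {
      source = "kislerdm/neon"
    }
  }
}

# Authenticates using the API key from the environment by default.
provider "neon" {}
```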
And when that's working, we can add it to the environment variables for our backend:
```hcl
resource "fly_machine" "machine_backend" {
  // ...
  env = {
    DATABASE_URL = neon_project.db_project.connection_uri
  }
}
```
## Connecting from our backend
Let's add two new files, `backend/src/database.rs` and `backend/src/config.rs`, and register them in `main.rs` with `mod` statements. In `config.rs` we'll load configuration from file or environment, and in `database.rs` we'll connect to the actual database.

In `config.rs`, let's define our database configuration:
```rust
use getset::Getters;
use secrecy::{ExposeSecret, SecretString};
use serde::Deserialize;
use sqlx::postgres::PgConnectOptions;

#[derive(Deserialize, Getters)]
pub struct Config {
    #[getset(get = "pub")]
    database: DatabaseConfig,
}

#[derive(Deserialize)]
pub struct DatabaseConfig {
    username: SecretString,
    password: SecretString,
    port: u16,
    host: String,
    database_name: String,
}

impl DatabaseConfig {
    pub fn as_connect_options(&self) -> PgConnectOptions {
        PgConnectOptions::new()
            .host(&self.host)
            .username(self.username.expose_secret())
            .password(self.password.expose_secret())
            .port(self.port)
            .database(&self.database_name)
    }
}
```
Two things are notable here. First, the `Getters` derive and `getset` attribute, from the getset crate: these enforce the read-only nature of our `Config` struct, which should only ever be constructed, never modified. Second, the `SecretString`s in the `DatabaseConfig`. This type comes from secrecy and gives some stronger guarantees around application secrets, including making it harder to accidentally log them.
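To see why that matters, here is a toy stand-in for the idea (not the real secrecy API): a wrapper whose `Debug` output never prints the inner value, so an accidental `{:?}` in a log line leaks nothing, while deliberate access stays possible through an explicit method:

```rust
use std::fmt;

// Toy stand-in for secrecy's SecretString, for illustration only:
// Debug is redacted, and reading the value takes an explicit call.
struct Redacted(String);

impl fmt::Debug for Redacted {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("Redacted([REDACTED])")
    }
}

impl Redacted {
    fn expose(&self) -> &str {
        &self.0
    }
}

fn main() {
    let password = Redacted("hunter2".to_string());
    // Logging the wrapper does not leak the secret...
    assert_eq!(format!("{:?}", password), "Redacted([REDACTED])");
    // ...but deliberate access still works.
    assert_eq!(password.expose(), "hunter2");
}
```

The real `SecretString` adds more on top (like zeroing memory on drop), but the redacted `Debug` is the part that saves you from leaky log lines.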
We'll load our config from environment only for now, using the excellent config crate.
```rust
pub fn get_configuration() -> Result<Config, config::ConfigError> {
    let settings = config::Config::builder()
        .add_source(config::Environment::default().separator("__"))
        .build()?;

    settings.try_deserialize::<Config>()
}
```
Connecting to the database is simple enough, just a few lines of code in `database.rs`:
```rust
pub async fn connect_to_postgres(settings: &DatabaseConfig) -> Result<PgPool, sqlx::Error> {
    let connection = PgPool::connect_with(settings.as_connect_options()).await?;
    Ok(connection)
}
```
## Running a query
Let's add a silly little query for now:
```rust
async fn get_data(State(pool): State<PgPool>) -> Json<GetResult> {
    let res = query!("SELECT 5 + 5 AS sum;")
        .fetch_one(&pool)
        .await
        .unwrap();

    Json(GetResult {
        foo: res.sum.unwrap_or_default().to_string(),
    })
}

#[tokio::main]
async fn main() {
    let config = get_configuration().unwrap();
    let db_pool = connect_to_postgres(config.database())
        .await
        .unwrap();
    let state = ServerState::new(db_pool);
    // ...
    let app = Router::new()
        .route("/", get(get_data))
        .layer(cors)
        .with_state(state);
}
```
Passing state to an Axum router is sort of out of scope here, so I'll leave that as an exercise for the reader. Or for the reader to look up in the code on GitHub.
## Local development
Right now, to test the connection, I have to deploy. But I'd like to test it beforehand. I could keep running Docker images manually, using `justfile` commands to string them together. That'll get complicated fast though, so maybe I should consider alternatives.

I'm pretty unfamiliar with making these kinds of complex services run in different environments, especially locally. I have some options, like Docker Compose or running Kubernetes locally, potentially with Helm. Some quick searching shows that lots of people are excited about Docker Compose. Let's go with that!
We'll create a `local-dev/` folder, with a `justfile` and a `compose.yaml`:
```just
start:
    docker-compose -f compose.yaml up --build

stop:
    docker-compose -f compose.yaml down
```
The `--build` flag for the `start` command is important! Without it, Docker will not try to rebuild your image, even if the filesystem has changed.
```yaml
version: '3.1'

services:
  backend:
    build:
      context: ../
      dockerfile: backend/Dockerfile
    ports:
      - "8002:3000"
    depends_on:
      - postgres
    environment:
      - DATABASE__HOST=postgres
      - DATABASE__PORT=5432
      - DATABASE__USERNAME=postgres
      - DATABASE__PASSWORD=example
      - DATABASE__DATABASE_NAME=cochrane

  frontend:
    build:
      context: ../
      dockerfile: frontend/Dockerfile
    ports:
      - "8001:8080"
    environment:
      - 'DIOXUS_ENV={ "backend_url": "http://localhost:8002" }'

  postgres:
    image: "postgres:16-alpine"
    restart: always
    ports:
      - "8101:5432"
    environment:
      POSTGRES_PASSWORD: example
      POSTGRES_DB: cochrane
```
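One caveat worth knowing: `depends_on` as written only orders container *startup*; it does not wait for Postgres to actually accept connections, so the backend can race the database on boot. If that bites, a healthcheck plus a `condition` fixes it. A sketch (not part of my setup yet):

```yaml
services:
  postgres:
    image: "postgres:16-alpine"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
      timeout: 2s
      retries: 10
  backend:
    depends_on:
      postgres:
        condition: service_healthy
```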
Some special things are worth mentioning! You can see we're setting up the configuration properties here using the `environment` property on the `backend` service, and pointing it to our Postgres instance. For each service that we have to build, we also set both a `build.context`, relative to the `compose.yaml` itself, and then a `build.dockerfile` relative to that context.
But to compile our sqlx queries, we need to have run `cargo sqlx prepare`. And for that, we need a running database. So let's set up a way to boot our Postgres machine by itself and run the command against it:
```just
start-service service:
    docker-compose -f compose.yaml up -d {{service}}

stop-service service:
    docker-compose -f compose.yaml down {{service}}

sqlx-prepare: (start-service "postgres")
    cd .. && cargo sqlx prepare --database-url=postgres://postgres:example@localhost:8101/cochrane --workspace
```
Running `up` without the `-d` flag (for "detach") would attach to the Docker container and log its output to the terminal. We want it to run in the background in this case.
This way we can boot one specific service, keeping in mind that the backend Docker image will not build without running `sqlx prepare` first. Then, using the appropriate Postgres URI, sqlx can connect to the database and prepare its compilation files.
Now we're ready to run the magical `just local-dev/start` and hit our frontend at http://localhost:8001/. It should display `{"foo":"10"}` on your front page. Awesome!
## Fine, an actual query
To run a real query against a real database, we'll first want to set up sqlx migrations to actually create a table. Let's run `cargo sqlx migrate add init` to set up our first migration. Then we add to the justfile:
```just
sqlx-migrate: (start-service "postgres")
    cd .. && cargo sqlx migrate run --database-url=postgres://postgres:example@localhost:8101/cochrane

sqlx-prepare: (start-service "postgres") sqlx-migrate
    # ...
```
Let's fill in our actual migration:
```sql
CREATE TABLE test (
    id UUID NOT NULL PRIMARY KEY DEFAULT gen_random_uuid(),
    name TEXT NOT NULL
);
```
Now all you need to do is run `just local-dev/sqlx-migrate` and you'll see it migrate. We've also added `sqlx-migrate` as a dependency of `sqlx-prepare`, so running that after creating a new migration will make sure everything's ready for building against the database!
Let's use our table from the API:
```rust
#[derive(Deserialize)]
struct PostRequest {
    name: String,
}

#[debug_handler]
async fn post_data(
    State(pool): State<PgPool>,
    Path(id): Path<Uuid>,
    Json(request): Json<PostRequest>,
) -> StatusCode {
    query!(
        "INSERT INTO test (id, name) VALUES ($1, $2) ON CONFLICT (id) DO UPDATE SET name = $2",
        id,
        request.name
    )
    .execute(&pool)
    .await
    .unwrap();

    StatusCode::OK
}
```
I will, again, leave part of this to the reader: specifically, reading the data back out (the GET request) and adding the handler to the router. We now just need to call this from our frontend...
```rust
async fn api_post(url: url::Url, uuid: uuid::Uuid, name: String) {
    let client = reqwest::Client::new();
    client
        .post(url.join(uuid.to_string().as_str()).unwrap())
        .json(&PostRequest { name })
        .send()
        .await
        .map_err(|e| e.to_string())
        .unwrap();
}
```
And use it:
```rust
let set_name = move |_| {
    cx.spawn({
        to_owned![backend_url, uuid];
        let new_uuid = uuid::Uuid::new_v4();
        async move {
            api_post(backend_url, new_uuid, "Foo bar".to_string()).await;
            uuid.set(Some(new_uuid));
        }
    })
};
```
And before you know it... it works!
## Let's recap...
Wow, we did a lot here! We tackled adding a Neon database to our Rust applications and streamlined our development with Docker Compose. With a real database connection, we can finally start building applications with Cochrane. Stay tuned for some first examples of that! 👀
In the upcoming post, I'll add a basic version of running background tasks to Cochrane. Keep coming back!