Hot damn, I think I have the PERFECT workflow for building an application!!! Ok, probably not, BUT I have settled on what I think is a pretty straightforward process that lets you build things real fast. In the end, you'll be able to build:

  • bug-free
  • SQL-backed
  • Go applications
  • with great speed.

After all, isn’t that the answer to the ultimate question of life, the universe, and everything fun? Let’s get into it.

The Tools

As alluded to above, the goal is to (quickly) spin up a (bug-free) Go application that stores its state in a SQL database. Now, most of us have done this and/or worked on a number of similar applications, and every time it feels like it requires a bunch of error-prone boilerplate, which means you can kiss those “bug-free” and “great speed” points above buhbye 👋. For this reason, I’ve always loathed writing the database layer in my Go applications. The developers yearn for the FastAPI.

So how do we build a bug-free SQL-backed Go application with great speed? In the end, we’re going to write pure SQL for our database interface, generate the corresponding Go code, and finally declaratively migrate the database. The process I’ll explain here does require a couple external tools, but I promise the tools are trivial to install, widely used, and very simple to get started with. You’ve probably heard of at least one of them and if you do any reading online about this space, you’ll find that they’re loved 💖. We’re going to use sqlc to generate the Go code from our SQL queries, and we’ll use atlas to run the migrations. It’s also worth mentioning that I’m using Postgres with the https://github.com/jackc/pgx driver (did someone say LISTEN/NOTIFY??).

Honestly the documentation for these tools is so good that I, having already imparted unto you invaluable knowledge, could stop here; you could go look at the examples and you’d be off to the races. But let’s go through an example here because it’ll be nice to have it all in one place.

First let’s write our schema and SQL queries. Put your declarative schema definition in schema.sql. Then write the plain ol’ SQL queries you want to use in query.sql:

-- schema.sql
CREATE TABLE entry (
  id VARCHAR(16) PRIMARY KEY,
  address VARCHAR(128),
  data text
);

-- query.sql
-- name: GetEntry :one
SELECT * FROM entry
WHERE id = $1 LIMIT 1;

-- name: ListEntries :many
SELECT * FROM entry
ORDER BY id;

-- name: CreateEntry :one
INSERT INTO entry (
  id, address, data
) VALUES (
  $1, $2, $3
)
RETURNING *;

-- name: UpdateEntry :exec
UPDATE entry
SET address = $2,
    data = $3
WHERE id = $1;

-- name: DeleteEntry :exec
DELETE FROM entry
WHERE id = $1;

Now let's generate our Go code. First you need to specify some sqlc configuration in sqlc.yaml. This is pretty straightforward:

version: "2"
sql:
  - engine: "postgresql"
    queries: "query.sql"
    schema: "schema.sql"
    gen:
      go:
        package: "sqlc"
        out: "sqlc"
        sql_package: "pgx/v5"
        emit_json_tags: true

Now you can generate the corresponding Go code:

sqlc generate

Now let’s set up a simple server with a handler. You can run this server to verify that your database is alive and you can connect to it, but don’t expect to be able to fetch from it yet, since we haven’t run the migrations.

package main

import (
    "context"
    "encoding/json"
    "fmt"
    "net/http"
    "os"

    "example.com/brojonat/server/sqlc"
    "github.com/jackc/pgx/v5/pgxpool"
)

func getDB(ctx context.Context, url string) (*pgxpool.Pool, error) {
    db, err := pgxpool.New(ctx, url)
    if err != nil {
        return nil, fmt.Errorf("unable to connect to database: %v", err)
    }
    if err = db.Ping(ctx); err != nil {
        return nil, err
    }
    return db, nil
}

func handleTest(q *sqlc.Queries) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        entries, err := q.ListEntries(r.Context())
        if err != nil {
            http.Error(w, "internal error", http.StatusInternalServerError)
            return
        }
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusOK)
        json.NewEncoder(w).Encode(entries)
    }
}

func main() {
    db, err := getDB(context.Background(), os.Getenv("DATABASE_URL"))
    if err != nil {
        fmt.Fprintf(os.Stderr, "could not connect to db: %s\n", err)
        os.Exit(1)
    }
    q := sqlc.New(db)
    http.HandleFunc("/test", handleTest(q))
    fmt.Println("listening on :8080...")
    if err := http.ListenAndServe(":8080", nil); err != nil {
        fmt.Fprintf(os.Stderr, "server error: %s\n", err)
        os.Exit(1)
    }
}
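Once the database is migrated (next section) and the server is up, a quick smoke test is just a curl away (hypothetical session; what comes back depends on what you've inserted):

```shell
# assumes the server above is running on :8080
curl -s localhost:8080/test
# returns a JSON array of entries, e.g. [] on a fresh database
```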

Here’s the Makefile I use to run this, in case this env setup bit is useful:

define setup_env
    $(eval ENV_FILE := $(1))
    $(eval include $(1))
    $(eval export)
endef

run:
    $(call setup_env, .env)
    CGO_ENABLED=0 GOOS=darwin go build -o server .
    ./server
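The `.env` file it pulls in is just plain variable assignments; something like this (the connection string is an assumption — swap in your own host, credentials, and database name):

```shell
# .env (hypothetical values)
DATABASE_URL=postgres://postgres:postgres@localhost:5432/app?sslmode=disable
```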

Finally, let's declaratively migrate our database. I love this because I don’t have to maintain some ad hoc sequence of migrations. You’ve already declared the schema you want in schema.sql for your sqlc stuff; you can simply point atlas at that same file and it will figure out how to migrate you there, ezpz:

atlas schema apply \
    --url "${DATABASE_URL}" \
    --dev-url "docker://postgres" \
    --to "file://schema.sql"

Boom, done. The cherry on top is that if you do want to support versioned migrations that you check in to a repository, atlas lets you do that too, and it plays perfectly nicely with the process I’ve outlined here. It really is that simple. Now, go forth and build something!