Host your music portfolio for free with Faircamp and GitLab

Music streaming is volatile #

Amidst the era of streaming, owning music you care about is still important in my opinion. Streaming providers or labels can take down a release anytime in your location, or even globally. Plenty of music I discovered as a kid is missing from Spotify and its competitors, although I admit the culprit might be my taste: I like many underground and underrated artists from times when streaming and DRM weren’t a thing. By no means am I saying ditch streaming as an artist; it’s one of, if not the most, effective ways to promote your music. Offering a copy can coexist with it.

Last year, I broadened my outlook from being strictly part of music bands and started creating songs on my own, which also involves loosely collaborating with my musician friends. What I mean here is a bit more sophisticated than my usual “grab my phone to record this great guitar riff I found while fiddling around on the neck” workflow. I had just finished a track, and several others were in the making at the time of writing.

I was looking for a solution to showcase my own music project with the following features:

Bandcamp would fit most of these needs, but it lacks decentralized ownership over my presence. Also, there’s nothing to tinker with, so I wouldn’t be writing this if I had selected a fully out-of-the-box SaaS.

A wise man once said to me, “if you can’t do it from the terminal, it’s not worth doing”.

Simon Repp’s awesome Faircamp is exactly what I was looking for: a platform designed to help artists distribute their music, emphasizing independence and control over their content.

I don’t intend this post to become a fully detailed guide; it’s more like a chain of thought where I want to highlight decisions, implementation parts, and sneaky bits. I won’t touch upon the artistic side of things but rather try to keep an engineering mindset.

Adding content #

Faircamp is fairly easy to get started with. I just needed to populate a directory structure according to its comprehensive documentation.

Let’s create a Git repository (as you normally would upon entering any kind of software project) in an empty folder to start tracking files:

git init

My folder structure looks like this after adding my single record:

.
└── catalog
    ├── catalog.eno                 # landing page, describes artist
    ├── logo.jpg                    # cover photo on landing page
    └── Dopamine Routine            # release title
        ├── cover.jpg               # album art
        ├── Dopamine Routine.flac   # audio file
        └── release.eno             # describes release

Avoiding large diffs #

Even though audio files and artwork meant for release should normally be immutable, we already know the future might bring changes, and then you have to rename your record “Slave One” to “Secondary One”.

Luck is on your side if the phrase isn’t in the lyrics: then you can slip by without altering the prosody and re-recording the vocals; just change the release title in the code and update the 60000x60000 pixel album art you made 10 years ago.

*Slave I from Star Wars renamed to Secondary I*

In such a scenario, Git LFS (Large File Storage) is a great utility to prevent bloating your repository by replacing large files with lightweight pointers, storing the actual file contents in a separate LFS server. This keeps the repository size smaller and more manageable.

Git LFS can be configured in a few steps:

  1. Installed it on my host: apt install git-lfs

  2. Enabled it in the local Git repository: git lfs install

  3. Populated the .gitattributes file. Here I routed every file with a flac or jpg extension through LFS:

    *.flac filter=lfs diff=lfs merge=lfs -text
    *.jpg filter=lfs diff=lfs merge=lfs -text
    
  4. GitLab projects are configured with LFS enabled by default.

  5. I just committed my changes at this point; a quick sanity check before pushing is shown below.
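
These standard Git LFS subcommands show what the filter actually picked up:

git lfs track       # lists the patterns routed through LFS
git lfs ls-files    # lists the files currently stored as LFS pointers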

Getting the site up locally #

When it comes to developing software locally, the essentials are ease of use (manager term for automation), reproducible builds, and a short feedback cycle (you got me, I’m skipping testing on purpose; not because it’s not important, but because it’s a whole topic on its own).

Anyone else with basic knowledge should be able to get involved or take over from me without going through a number of mental breakdowns, given the business logic is fairly simple. Of course, this is a fun-sized project, but these principles become increasingly important as project scope grows and more people contribute. Catching issues early in the software development lifecycle can heavily reduce the cost of fixing them. It’s not just about convenience, but about saving time and money.

Building the site #

While Faircamp has native runtimes, I prefer using a docker image for portability and its reproducible nature.

Let’s build the site for the first time:

docker run                      \
    --name faircamp-build       \
    --volume ./:/data           \
    --interactive               \
    --tty                       \
    --rm                        \
    n3wjack/faircamp:1.2.0      \
        --catalog-dir ./catalog \
        --build-dir ./public

Okay, let’s decode what’s going on here. I mounted the project folder (the current directory, ./) on the host to the working directory /data within the container so it can reach our carefully made music catalog. I set --tty so the container’s output is attached to my terminal and I can see build logs, and --interactive so I can send an interrupt signal if I want to cancel the build for some reason. I passed --rm because I want the container removed after it finishes (or fails); the data is persisted to the host through the volume mount, so build artifacts are kept despite the --rm switch.

About referencing the docker image: always pin dependency versions, my friends. It’s the only way I can be truly confident that running the build will yield the same results at any other point in time. On the last lines, I passed arguments to Faircamp to specify which directories to use for its work.
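
Tags can technically be re-pushed, while image digests are immutable, so pinning by digest is an even stricter option. A quick sketch of looking one up with standard Docker commands (output omitted here):

docker pull n3wjack/faircamp:1.2.0
docker images --digests n3wjack/faircamp

The image can then be referenced as n3wjack/faircamp@sha256:&lt;digest&gt; in the run command, where &lt;digest&gt; is a placeholder for the value printed above.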

At this point the build is reproducible, thanks to Git, Docker, and pinning versions down to patch level, so I dived into the ease-of-use part.

Excluding build artifacts from source control #

It’s a nice touch to prevent accidentally adding generated content to my repository. Doing so would quickly multiply its size and also bloat the structure for no good reason.

I set up a .gitignore file to achieve this:

catalog/.faircamp_cache/
public/

Git matches the defined patterns against every untracked file’s path in the repository and hides matches from the list of changes, so generated files won’t accidentally end up in commits.
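
If a path unexpectedly stops showing up in git status, git check-ignore reveals which rule matched it (the path here is just an example from the generated site):

git check-ignore --verbose public/index.html

This prints the matching source file, line number, and pattern, e.g. .gitignore:2:public/ followed by the path.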

Serving the site locally #

Let’s find an unused port on the host machine so we can use it for spinning up the web server, without praying for the de facto default ports 3000 or 8080 to be available:

find_available_port() {
    local port
    # Start checking from port 1024 (ports below 1024 are reserved)
    for ((port = 1024; port <= 65535; port++)); do
        # Check if the port is available
        if ! nc -z localhost "$port" &>/dev/null; then
            echo "$port"
            return 0
        fi
    done
    echo "No available port found." >&2
    return 1
}

# Store an available port
AVAILABLE_PORT=$(find_available_port)

It’s time to launch the site locally. I did a workaround here: Faircamp’s preview mode binds to 127.0.0.1 inside the container, which is not reachable from the host machine, so I used another container instead. The sole purpose of this service is to serve the static site generated by Faircamp. I selected the lipanski/docker-static-website docker image because it’s unbelievably small, claimed to fit in around 80 KB, packing just enough functionality to do the job.

docker run                                                  \
    --name faircamp-serve                                   \
    --volume ./public:/home/static                          \
    --volume ./config/httpd.conf:/home/static/httpd.conf:ro \
    --publish "${AVAILABLE_PORT}:3000"                      \
    --rm                                                    \
    lipanski/docker-static-website:2.4.0                    &

I mounted the public folder on the host to /home/static in the container, which is the path the server serves static content from. I don’t need any fancy routing, yet the container expects a config file, so I mounted an empty httpd.conf. To make the site reachable, I published the previously saved port on the host with --publish, forwarding requests to port 3000 of the container, where the server listens.

Note that the command is started in the background with & so I can open the site in the default browser right after:

xdg-open "http://localhost:${AVAILABLE_PORT}"

Starting the web server in the background left me with one downside: the background job does not receive SIGINT or SIGTERM signals, so it keeps running after my script stops, becoming an orphaned process. I dealt with this problem by setting up a trap that stops the rogue container.

function cleanup {
  echo "Stopping webserver..."
  docker stop faircamp-serve
}

trap cleanup EXIT

Putting it all together in a shell script, typing ./scripts/serve.sh is all it takes to build and bring up the site.
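
For reference, here’s a minimal sketch of how the snippets above could be stitched together into scripts/serve.sh (the real script may differ slightly; the final wait is my assumption, keeping the script in the foreground so the EXIT trap fires on Ctrl+C):

#!/usr/bin/env bash
# Sketch of scripts/serve.sh: build the site, serve it, open the browser.

find_available_port() {
    local port
    for ((port = 1024; port <= 65535; port++)); do
        if ! nc -z localhost "$port" &>/dev/null; then
            echo "$port"
            return 0
        fi
    done
    echo "No available port found." >&2
    return 1
}

function cleanup {
  echo "Stopping webserver..."
  docker stop faircamp-serve
}
trap cleanup EXIT

# Build the static site with the pinned Faircamp image.
docker run --name faircamp-build --volume ./:/data \
    --interactive --tty --rm \
    n3wjack/faircamp:1.2.0 --catalog-dir ./catalog --build-dir ./public

AVAILABLE_PORT=$(find_available_port)

# Serve the generated site in the background.
docker run --name faircamp-serve \
    --volume ./public:/home/static \
    --volume ./config/httpd.conf:/home/static/httpd.conf:ro \
    --publish "${AVAILABLE_PORT}:3000" --rm \
    lipanski/docker-static-website:2.4.0 &

xdg-open "http://localhost:${AVAILABLE_PORT}"

# Stay in the foreground until the server stops, so the trap stays useful.
wait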

Integrating with VS Code #

To further increase convenience and lower the entry barrier for Visual Studio Code users like myself, I’d like to build and serve the site from the Run and Debug menu. A launch configuration is what we’re looking for.

I used the bash-debug extension to feed the serve.sh script into the launch configuration. Contents of launch.json inside the .vscode directory:

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "bashdb",
      "request": "launch",
      "name": "serve",
      "cwd": "${workspaceFolder}",
      "program": "${workspaceFolder}/scripts/serve.sh",
      "args": [],
      "showDebugOutput": true,
      "terminalKind": "integrated"
    }
  ]
}

Remember I had a trap set to clean up the web server instance once serve.sh receives a termination signal? It works fine when I call the script natively, but debug mode kills the script forcefully when the session is stopped, so it never receives the signal. This leaves us with an orphaned container running in the background, consuming resources and spitting logs to the terminal. The solution here was to introduce a VS Code task that is triggered after the debug session is stopped and stops the web server.

I added tasks.json inside the .vscode directory:

{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "stop webserver",
      "command": "docker stop faircamp-serve",
      "type": "shell",
      "problemMatcher": []
    }
  ]
}

And I extended the launch.json configuration with a trigger pointing to the added task:

"postDebugTask": "stop webserver"

Lastly, let’s define extensions, so VS Code can install the recommended set when I (or anyone else) open the project for the first time. Contents of extensions.json in the .vscode directory:

{
  "recommendations": [
    "mhutchie.git-graph",
    "GitLab.gitlab-workflow",
    "rogalmic.bash-debug",
    "timonwong.shellcheck",
    "davidanson.vscode-markdownlint",
    "esbenp.prettier-vscode"
  ]
}

There is still a minor inconvenience I could not solve: The debugger always pauses at the first command, so I have to click “continue” every time I start it.

Wondering how that will influence my sleep quality…

Publishing to the audience #

The site is ready, but it’s not really practical as long as it’s kept in the drawer.

There are a dozen solutions for hosting such a static site online. I settled on GitLab because:

I quickly created a new project with their SaaS offering.

Syncing local and remote repository #

To connect my local Git repository to the empty remote one in the GitLab project, I set up a Git remote:

git remote add origin git@gitlab.com:hrvthmrk/librium-delta.git
git branch -M main
git push -uf origin main

Local commits are now pushed to GitLab, so I can start automating around it.

Automating with pipelines #

Every time a commit hits the main branch in the remote Git repository, I’d like to build and deploy the site without any further interaction.

I populated the .gitlab-ci.yml pipeline descriptor file:

workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

build:
  stage: build
  image:
    name: n3wjack/faircamp:1.2.0
    entrypoint: [""]
  pages: true
  rules:
    - when: always
  script:
    - faircamp --catalog-dir ./catalog --build-dir ./public
  artifacts:
    paths:
      - public

The workflow: section defines the trigger: run this pipeline only if changes are committed to the default (main) branch.

The CI runner automatically checks the project out inside the build container, so faircamp can reach the music catalog in ./catalog and output the generated site contents to the ./public folder, just as it would when run on my host. Note that I used the very same version of the Faircamp docker image, so I can be confident the results will match my local builds.

To grab a plain shell within the container, I overrode its default entrypoint, faircamp, so I could write the script: section, defining the commands to execute.

The deployment part is surprisingly short: pages: true and the artifacts: section are the only relevant bits. The story behind it is that GitLab handles this part under the hood; I just needed to place the site contents under the public folder. You can read more on how exactly it works in the GitLab Pages documentation.

BYOD (Bring Your Own Domain) #

Once the pipeline instance is glowing green, the site appears under an internal GitLab subdomain; for me, it’s https://librium-delta-5a0c68.gitlab.io. I think owning a domain gives the site a more professional appeal.

As described in GitLab’s Deploy -> Pages menu, I created two DNS records in my domain provider’s admin panel: an ALIAS record pointing my subdomain libriumdelta.hrvthmrk.com to GitLab’s hrvthmrk.gitlab.io, and a TXT record to verify ownership of my domain. It takes a bit of time for the entries to propagate across the web, a feature teaching patience without showing you ads.
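
While waiting, the records can be polled from the terminal instead of refreshing an online checker. A small example using dig (part of the dnsutils package on Debian-based systems; this assumes the TXT verification record sits directly on the subdomain):

dig +short libriumdelta.hrvthmrk.com
dig +short TXT libriumdelta.hrvthmrk.com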

*Trying to stick to the topic*

I have a business idea about a simple progress bar that estimates global DNS propagation status. Revenue could be ad-based.

I also selected the “automatic certificate management using Let’s Encrypt” option because

Lastly, in the settings section, I enabled “Force HTTPS” to keep connections to my site secure via redirection, and set libriumdelta.hrvthmrk.com as the primary domain so requests targeting the internal domain land on the public address too.

Gathering analytics data #

I definitely became overly excited at this point.

But hey, why not gather some amusing intel about visitors to my art?!

You can find all the details in Google’s documentation on how to get started with their Analytics product. First, I created the analytics tag and stored it in a file named gtag.html. Then I added this code snippet to every HTML page’s <head> section with a script that handles the injection.
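
The injection script itself isn’t listed in this post, so here is a minimal sketch of what scripts/inject-google-analytics.sh could look like, assuming gtag.html sits in the repository root and perl is available in the build image:

#!/usr/bin/env bash
# Splice the analytics snippet into every generated page's <head>.

SNIPPET=$(<gtag.html)
export SNIPPET

find ./public -type f -name '*.html' -print0 |
    while IFS= read -r -d '' page; do
        # Insert the snippet right before the closing </head> tag.
        perl -0777 -pi -e 's{</head>}{$ENV{SNIPPET}</head>}' "$page"
    done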

I just added one additional line to .gitlab-ci.yml after the build command to execute the injection:

script:
  - faircamp --catalog-dir ./catalog --build-dir ./public
  - bash -c ./scripts/inject-google-analytics.sh

Analytics data started showing right after the fresh release went live, just as expected.

Closing thoughts #

The goal is accomplished, my music is available on the World Wide Web, and I had a great time interconnecting my hobbies with this project. I could definitely get by with less effort, but the benefits are clear: I can manage content from code easily, deployments of the site are fully automated, and I hopefully showed a few best practices along the way, applicable to a wide range of software projects.

The source code of this project is available publicly in this repository.

My greatest gratitude goes to my friend Péter Bozsó, who developed the site generator behind my blog. He also likes excellent solutions, so check out what’s on his smart mind!