How to deploy a self-hosted Vaultwarden instance

Nowadays everyone has several accounts and credentials to take care of; that’s why everyone needs a decent and, possibly, open source password manager. When it comes to managing passwords there are many choices available on Linux: in the past, for example, we talked about “pass”, a great, command-line oriented password manager based on standard tools such as GPG and git. In this article we explore an alternative which can be the ideal solution for individuals and small organizations: Vaultwarden.

In this tutorial we learn how to deploy a self-hosted Vaultwarden instance and how to obtain a Let’s Encrypt certificate using Caddy as a reverse proxy.

In this tutorial you will learn:

  • How to deploy a Vaultwarden instance using Docker
  • How to obtain a Let’s Encrypt TLS certificate using Caddy as a reverse proxy

Software Requirements and Linux Command Line Conventions

Category        Requirements, Conventions or Software Version Used
System          Distribution-independent
Software        docker
Other           Administrative privileges, familiarity with Docker, and a publicly accessible domain
Conventions     # – requires given linux-commands to be executed with root privileges either directly as a root user or by use of sudo command
                $ – requires given linux-commands to be executed as a regular non-privileged user

Bitwarden vs. Vaultwarden

Bitwarden is a feature-rich, open source password manager which comes with all the features one nowadays expects from this type of software, such as two-factor authentication, support for the random generation of strong passwords, autofill of login forms via dedicated browser extensions, client applications for mobile devices, and much more.



Although creating a basic Bitwarden account is free of charge, due to its open source nature it is also possible to self-host a Bitwarden server, which is mostly written in C# and uses MS SQL Server as its database. A Bitwarden instance, however, can be pretty resource-hungry and complex to maintain. For these reasons an alternative, more lightweight implementation of the Bitwarden API was created: enter Vaultwarden. Vaultwarden, previously known as Bitwarden_RS, is written in Rust and uses SQLite as its database. The recommended way to deploy a Vaultwarden instance is to use the official Docker image; in this tutorial we learn how to do that, and how to obtain a Let’s Encrypt TLS certificate using Caddy as a reverse-proxy service.

What is Caddy?

Caddy is an open source web server written in Go which automatically enables HTTPS for the sites it serves. It can work as a reverse proxy, and is able to automatically obtain and renew TLS certificates using the ACME (Automatic Certificate Management Environment) protocol.
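As a quick, standalone illustration of this behavior (not needed for the rest of the tutorial, and using example.com and localhost:8080 purely as placeholders), Caddy can act as a reverse proxy with automatic HTTPS from a single command, provided the domain already points at the machine:

$ caddy reverse-proxy --from example.com --to localhost:8080

In this tutorial, however, we drive Caddy through a Caddyfile, as we will see in a moment.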

Creating the docker-compose file

Since we need our dockerized Vaultwarden instance to work together with Caddy, which is itself available as a Docker image, for easier service orchestration and container communication we describe our little infrastructure in a docker-compose file. Here is how it should look:

version: "3"
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /srv/vaultwarden:/data
    environment:
      # Older Vaultwarden releases need this to start the separate WebSocket
      # server on port 3012, which we proxy in the Caddyfile below; recent
      # releases serve WebSockets on the main port and ignore the variable.
      WEBSOCKET_ENABLED: "true"

  caddy:
    image: caddy:2
    container_name: caddy
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - /srv/caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - /srv/caddy/config:/config
      - /srv/caddy/data:/data
      - /srv/caddy/logs:/logs
    environment:
      DOMAIN: "<yourdomain>"
      EMAIL: "<youremail>"
      # Path as seen inside the container: the /logs directory is the bind
      # mount defined above (/srv/caddy/logs on the host)
      LOG_FILE: /logs/access.log

Let’s analyze it.

The “vaultwarden” service

The first service we defined in the docker-compose file is “vaultwarden”, and it is based on the vaultwarden/server image. As part of the service definition we specified that we want the resulting container to be called “vaultwarden” and, very important, we defined a restart policy so that the service is always restarted if it stops. We also set the WEBSOCKET_ENABLED environment variable, which older Vaultwarden releases need in order to expose WebSocket notifications on a dedicated port (3012); we will proxy that port with Caddy later.



We also defined two bind mounts to map certain files on our server to files and directories inside the container. The first of those files is /etc/localtime: we mapped it to the corresponding file inside the container to ensure Vaultwarden uses the same timezone as the host. With the second bind mount we mapped the /data directory inside the container to a directory of our choice on the host; in this case I chose /srv/vaultwarden, but the path is arbitrary. The “data” directory is where, among other things, Vaultwarden stores the SQLite database file, which we absolutely want to back up periodically (by setting up a dedicated cron job, for example; see the sketch below).
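Here is a minimal sketch of such a cron job; the schedule, the /srv/backups destination and the use of tar are just example choices to adapt to your setup:

$ sudo crontab -e

# archive the Vaultwarden data directory every day at 02:00
0 2 * * * tar -czf /srv/backups/vaultwarden-$(date +\%F).tar.gz -C /srv vaultwarden

For a fully consistent copy of the SQLite database it is safer to stop the container first, or to use the sqlite3 “.backup” command against the db.sqlite3 file.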

The “caddy” service

The second service we defined in the docker-compose file is “caddy”. We used the official Caddy Docker image, named the container “caddy” and, as for the “vaultwarden” service, we established an “always” restart policy. In order to use Caddy as a reverse proxy and let it act as an ACME client to obtain a Let’s Encrypt certificate, we also mapped ports 80 and 443 on the host to the same ports inside the container.
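Keep in mind that these two ports must actually be reachable from the Internet for the ACME HTTP challenge to succeed. If a firewall is running on the host, the ports must be opened accordingly; with firewalld, for instance (adapt the commands to whatever firewall your distribution uses), we would run:

$ sudo firewall-cmd --permanent --add-service=http --add-service=https
$ sudo firewall-cmd --reload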

We then defined four bind mounts: with the first one we mapped /etc/caddy/Caddyfile inside the container to an arbitrary path on the host system (/srv/caddy/Caddyfile in this case). For those not familiar with Caddy, Caddyfiles are the human-friendly configuration files used to define one or multiple websites (more on this in a moment).



The second, third, and fourth bind mounts we defined map the /data, /config and /logs directories inside the container to the /srv/caddy/data, /srv/caddy/config and /srv/caddy/logs directories on the host system, respectively.

Finally, we defined some environment variables which will later be used in the Caddyfile: DOMAIN, which is the domain we want to use for Vaultwarden and for which we want to obtain a certificate, EMAIL, which is the email address to associate with the certificate request, and LOG_FILE, which defines the path, inside the container, of the file where Caddy logs should be written.
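Once the stack is running (we will start it shortly), we can double-check that the variables are actually visible inside the container with a command such as the following; the values shown will, of course, reflect what we put in the docker-compose file:

$ sudo docker exec caddy env | grep -E 'DOMAIN|EMAIL|LOG_FILE'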

Creating the Caddyfile

Caddyfiles are used to define websites when using Caddy. In this case our Caddyfile should look like this:

{$DOMAIN}:443 {
  log {
    output file {$LOG_FILE} {
      roll_size 10MB
      roll_keep 10
    }
  }
  
  tls {$EMAIL}
  reverse_proxy /notifications/hub vaultwarden:3012
  reverse_proxy vaultwarden:80 {
    header_up X-Real-IP {remote_host}
  }
}

The first thing we specify in a Caddyfile is the website address. In this case we used the {$DOMAIN} notation, which will be substituted with the value of the DOMAIN environment variable we defined in the docker-compose file for the “caddy” service. The site definition is then written between curly braces.

Logging setup

The first thing we defined is the logging behavior, via the log directive. By using output we specified where logs should be written. Caddy manages log output using “modules”: the default is the “stderr” module, which writes logs to the console (to the standard error file descriptor). Since in this case we want logs to be written to a file instead, we instructed Caddy to use the “file” module. Finally, instead of hardcoding the log file path, we used the LOG_FILE environment variable we previously defined.

We then proceeded to define log rotation via the roll_size and roll_keep directives. The former defines at what size a file should be “rolled” (10MB in this case), the latter how many rolled log files should be kept.
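Once the stack is up, the log file and its rolled copies will appear under the host directory we mapped to /logs (/srv/caddy/logs in our docker-compose file), so they can be inspected with the usual tools:

$ sudo tail -f /srv/caddy/logs/access.log

Note that Caddy writes structured (JSON) logs by default, so a tool such as jq can make them easier to read.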

Reverse proxy setup

The next thing we did was to set up the reverse proxy behavior of Caddy, by using the reverse_proxy directive. The syntax of the directive is:

reverse_proxy [<matcher-token>][<upstreams...>]

The “matcher-token” is used to limit when the directive is applied. In this case we used a path (starting with a slash): the request will be handled by the directive only if its path matches. The second argument used with the directive is the backend the request should be proxied to.



We redirect requests with the /notifications/hub path to “vaultwarden” on port 3012 (in this case “vaultwarden” is the name of the service we defined in the docker-compose file, which is also used as the hostname of the container), and everything else to the same backend on port 80. The header_up directive is used to manipulate request headers: in this case we set “X-Real-IP” to “{remote_host}”, which will be substituted with the IP of the client. This is done so that Vaultwarden can log the real client address, and so that services such as fail2ban are able to act on it.

Consistently with what we defined in the docker-compose file, we need to save this content to a file named “Caddyfile” (with no extension) under the /srv/caddy directory, which must be created beforehand:

$ sudo mkdir /srv/caddy
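Docker creates missing bind-mount paths on its own, but always as directories; since /srv/caddy/Caddyfile must be a regular file, it is safer to create the whole layout explicitly before starting the stack (the paths are the arbitrary ones we chose in the docker-compose file):

$ sudo mkdir -p /srv/vaultwarden /srv/caddy/config /srv/caddy/data /srv/caddy/logs
$ sudo touch /srv/caddy/Caddyfile

We then paste the configuration shown above into /srv/caddy/Caddyfile.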

Starting the services

With the Caddyfile in place, we can now start the containers. We just need to run the docker-compose up command, either from the same directory where the docker-compose file is located, or by passing the path of the docker-compose file as an argument to the -f option. Since we want the services to run in the background, we also use the -d option:

$ sudo docker-compose up -d
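To verify that everything came up correctly, we can check the status of the containers and follow the Caddy logs, where the outcome of the certificate request is reported:

$ sudo docker-compose ps
$ sudo docker-compose logs -f caddy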



As we previously stated, in order for Caddy to obtain a TLS certificate from Let’s Encrypt via the HTTP challenge, the domain we specified must be publicly accessible. If everything goes as expected, we should be able to reach Vaultwarden at the specified address:

Vaultwarden login page

It’s now possible to create an account and set up an encrypted vault.

Conclusions

In this tutorial we learned the difference between Bitwarden and Vaultwarden, and we saw how to deploy a self-hosted Vaultwarden instance on a private server with Docker. We also saw how to use Caddy as a reverse-proxy to obtain a TLS certificate from Let’s Encrypt.


