Terminating TLS in Varnish with Hitch

Tags: ops, tls

Varnish Cache, the open source version of Varnish, does not support Transport Layer Security (TLS) natively. If you want to use HTTPS for encrypted traffic, you will need to terminate the TLS session on a separate TLS proxy.

Although there are plenty of suitable proxy servers out there that can terminate TLS, we recommend that you use Hitch.

What is Hitch?

Hitch is a dedicated TLS proxy server that is developed and maintained by Varnish Software. It is an open source project: its code can be downloaded from the Hitch website and there are even official packages available.

Unlike other proxy servers that are used for TLS termination, Hitch only does TLS termination and nothing else. This focus makes Hitch considerably more lightweight and efficient.

Hitch has no understanding of HTTP and can also be used for TLS termination in a non-HTTP context.

Architecture

When we look at the architecture of a typical Varnish setup with a Hitch TLS proxy, Hitch is usually installed on the Varnish server.

TLS termination diagram with Hitch

All HTTPS requests will pass through Hitch, whereas plain HTTP requests are sent directly to Varnish. Because all HTTPS requests will eventually reach Varnish, it makes sense to keep Hitch and Varnish close together.

By hosting Hitch on the Varnish server, you avoid an extra network hop and improve performance. To further boost performance, you can connect Hitch and Varnish using UNIX domain sockets (UDS). Regular TCP connections are also supported.

Because an extra proxy server is added, Varnish will always consider Hitch to be the client and won’t have access to the original client IP address. This limitation can be circumvented by using the PROXY protocol.

The PROXY protocol prepends a header to the request payload containing meta information about the forwarded stream. This meta information ranges from the source and destination addresses and ports of the client connection to TLS-related information. The PROXY header is transported through the various proxy servers in the chain. The PROXY protocol needs to be supported by the receiving server and is configurable in both Hitch and Varnish.
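
To illustrate, version 1 of the PROXY protocol prepends a single human-readable line to the stream, while version 2 uses a binary equivalent. The addresses and ports below are made-up example values:

PROXY TCP4 203.0.113.45 192.0.2.10 51432 443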

All the configuration that Hitch requires is stored in the hitch.conf file that we’ll discuss in detail throughout this tutorial.

Features

As described on https://hitch-tls.org/, these are some of the Hitch features:

  • ALPN/NPN for HTTP/2
  • Support for TLS1.2 and TLS1.3 and legacy TLS 1.0/1.1
  • SNI, with and without wildcard certificates
  • Automatic OCSP stapling support
  • Client certificate authentication
  • PROXY protocol to signal client IP/port to backend
  • Supports UNIX domain socket connections to origin
  • Safe for large installations: performant up to 15 000 listening sockets and 500 000 certificates
  • Support for seamless run-time configuration reloads of certificates and listen endpoints

This means that Hitch can terminate TLS sessions for multiple hostnames. Server Name Indication (SNI) support will ensure the right certificate is used.

This also means that multiple certificates can be loaded into Hitch, either through explicit includes or through glob patterns.

Hitch also supports the Online Certificate Status Protocol (OCSP): it checks the revocation status of a certificate by performing an HTTP call to the certificate authority and staples the resulting status onto the TLS handshake. This removes the need for the client to perform an OCSP call itself.

Application-Layer Protocol Negotiation (ALPN), a TLS extension, is also supported by Hitch. This allows Hitch to inform Varnish about potential HTTP/2 support without having any awareness of a layer 7 protocol like HTTP.

As mentioned earlier, the PROXY protocol is supported, which transports client connection information through the request chain.

Besides TCP support for backend communication, UNIX domain sockets (UDS) are also supported by Hitch.

Hitch doesn’t just support server TLS certificates, but also TLS certificates presented by the client. This results in support for mutual TLS (mTLS) which ensures that both the client and the server guarantee their respective identities.

Installing Hitch

You can install Hitch by compiling the source code, which is available through https://hitch-tls.org/source/. But it makes a lot more sense to use the official packages that are provided by Varnish Software.

The way you install Hitch will depend on your Linux distribution. In this tutorial we’ll cover the installation procedure for the following distributions:

  • Ubuntu
  • Debian
  • CentOS
  • Red Hat Enterprise Linux

We’ll also explain how to run Hitch in the cloud or on Docker without having to install the software.

Installing Hitch on Ubuntu and Debian

On Ubuntu and Debian the apt-get command can be used to install additional software. Before we install Hitch, we need to update the package list:

sudo apt-get update

Installing the Hitch package requires some dependencies to be installed first:

sudo apt-get install debian-archive-keyring curl gnupg apt-transport-https

The next command will import the GPG key into the package manager configuration:

curl -s -L https://packagecloud.io/varnishcache/hitch/gpgkey | sudo apt-key add -

Now that the dependencies are in place, we can register the package repository:

. /etc/os-release
sudo tee /etc/apt/sources.list.d/hitch.list > /dev/null <<-EOF
deb https://packagecloud.io/varnishcache/hitch/$ID/ $VERSION_CODENAME main
EOF
sudo tee /etc/apt/preferences.d/hitch > /dev/null <<-EOF
Package: hitch
Pin: release o=packagecloud.io/varnishcache/*
Pin-Priority: 1000
EOF

Update the package list based on the repository configuration:

sudo apt-get update

And finally you can install Hitch:

sudo apt-get install hitch

Installing Hitch on CentOS and Red Hat Enterprise Linux

Before you can install Hitch on CentOS or Red Hat Enterprise Linux, you need to register the package repository by running the following commands:

. /etc/os-release
sudo tee /etc/yum.repos.d/hitch.repo > /dev/null <<-EOF
[varnishcache_hitch]
name=varnishcache_hitch
baseurl=https://packagecloud.io/varnishcache/hitch/el/${VERSION_ID%%.*}/$(arch)
repo_gpgcheck=0
gpgcheck=0
enabled=1
gpgkey=https://packagecloud.io/varnishcache/hitch/gpgkey
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
EOF

Then run the following command to install Hitch:

sudo yum install hitch
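
The official packages ship a systemd service for Hitch on both distribution families. Assuming systemd is in use, you can enable and start the service after installation:

sudo systemctl enable --now hitch

Keep in mind that Hitch needs at least one valid certificate configured in /etc/hitch/hitch.conf before it will start successfully.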

Running Hitch in the cloud

When you run Varnish in the cloud using one of the official cloud images, Hitch is automatically installed and configured with a self-signed certificate.

View the official Varnish cloud images

Run the official Hitch Docker container

There is not only an official Varnish image for Docker, but also an official Hitch image for Docker.

Learn how to run Varnish inside a Docker container

You can download the latest version of the Docker image up front by running the following command:

docker pull hitch:latest

Running the Hitch container is as simple as running the following command:

docker run --name=hitch -p 443:443 hitch

Configuring the Hitch Docker container

Unfortunately, chances are small that the default backend settings will suffice. The standard Hitch backend configuration is 127.0.0.1 on port 8443, using version 2 of the PROXY protocol.

The following command overrides these settings by using Hitch runtime parameters:

docker run --name=hitch -p 443:443 \
    -v $(pwd)/cert.pem:/etc/hitch/cert.pem:ro \
    hitch --backend=\[varnish\]:8443 /etc/hitch/cert.pem

This command will run a Hitch container that exposes port 443 and mounts the cert.pem certificate file into the container at /etc/hitch/cert.pem with read-only permissions. This mounted certificate is used by passing its path as the last argument.

By adding the --backend=\[varnish\]:8443 option, the backend that Hitch connects to is now varnish on port 8443.

Instead of using runtime parameters you can also mount a complete hitch.conf file that will be read by the hitch program. You can use the following command for that:

docker run --name=hitch -p 443:443 \
    -v $(pwd)/hitch.conf:/etc/hitch/hitch.conf:ro hitch

Docker Compose orchestration

You can use docker compose to orchestrate the container setup for Hitch, Varnish and the origin web server.

The example below is based on the Docker Compose example from the Running Varnish on Docker tutorial:

version: "3"
services:
  hitch:
    image: "hitch:latest"
    hostname: "hitch"
    container_name: "hitch"
    volumes:
      - "./hitch.conf:/etc/hitch/hitch.conf"
      - "./cert.pem:/etc/hitch/cert.pem"
    ports:
      - "443:443"
    depends_on:
      - "varnish"
  varnish:
    image: varnish:stable
    container_name: varnish
    volumes:
      - "./default.vcl:/etc/varnish/default.vcl"
    ports:
      - "80:80"
    tmpfs:
      - /var/lib/varnish:exec
    environment:
      - VARNISH_SIZE=2G  
    depends_on:
      - "httpd" 
  httpd:
    image: httpd:latest
    container_name: httpd
    ports:
      - "8080:80"

This docker-compose.yml file simply adds the hitch container that listens on port 443 and mounts the certificate as well as a custom hitch.conf configuration file.

The hitch.conf file is expected to contain a backend = "[varnish]:8443" configuration line, as well as a pem-file = "/etc/hitch/cert.pem" line to use the mounted certificate.
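
Here’s a minimal hitch.conf sketch for this Compose setup. It assumes the Varnish container accepts PROXY protocol traffic on port 8443, which is how Varnish is configured later in this tutorial:

frontend = "[*]:443"
backend = "[varnish]:8443"
pem-file = "/etc/hitch/cert.pem"
write-proxy-v2 = on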

Configuring Hitch

Hitch can be configured through a configuration file that is loaded through the --config runtime parameter. Systemd setups as well as the Docker container will automatically load the /etc/hitch/hitch.conf configuration file.

Standard configuration directives can also be overridden through a collection of runtime parameters. You can run man hitch to see the available runtime parameters.

Let’s have a look at the various ways Hitch can be configured.

Listening address

Hitch is a TLS proxy and while TLS is not strictly tied to HTTPS, in most cases you will run Hitch with a listening address that listens for incoming connections on port 443.

Here’s what the frontend configuration could look like in a Hitch configuration file:

frontend = {
    host = "*"
    port = "443"
}

The frontend can also be defined as a one-liner:

frontend = "[*]:443"

The frontend configuration can also be set through the --frontend runtime parameter as illustrated below:

hitch --frontend="[*]:443"

Your Hitch configuration can have multiple frontends. Here’s such an example:

hitch --frontend="[10.10.10.131]:443" --frontend="[10.10.10.132]:443"

Backend definition

Once Hitch has terminated the TLS session, the incoming request must be forwarded to a backend server. This will usually be Varnish.

Here’s an example of a backend directive that connects to port 8443 on the local machine:

backend = "[127.0.0.1]:8443"

For local connections you can also use UNIX domain sockets:

backend = "/var/run/varnish.sock"

This example features a UDS connection with a Varnish server that listens for incoming connections on the /var/run/varnish.sock socket.

Here’s how to use hitch runtime parameters to configure a backend:

hitch --backend="[127.0.0.1]:8443"

Certificate settings

Hitch is a TLS proxy, which involves x509 certificates. The location of these certificates can be specified by the pem-file configuration directive. If multiple certificates are loaded, Server Name Indication (SNI) is used to match the right certificate with the requested hostname.

Here’s an example of a pem-file configuration directive:

pem-file = "/etc/hitch/cert.pem"

This PEM file contains both the certificate and the private key. If these happen to be stored in separate files, you can load them using the following pem-file directive:

pem-file = {
    cert = "/etc/hitch/cert.pem"
    private-key = "/etc/hitch/key.pem"
}

Multiple pem-file directives can be added to your Hitch configuration, as illustrated below:

pem-file = "/etc/hitch/cert.pem"
pem-file = "/etc/hitch/cert2.pem"

If specific certificates should be mapped to specific frontends, you can include the pem-file directive inside those frontends:

frontend = {
    host = "10.10.10.131"
    port = "443"
    pem-file = "/etc/hitch/cert.pem"
}

frontend = {
    host = "10.10.10.132"
    port = "443"
    pem-file = "/etc/hitch/cert2.pem"
}

If you want to load multiple certificates, you can also use pem-dir to define one or more folders from which certificates should be loaded. You can combine this with the pem-dir-glob directive to define a glob pattern.

Here’s an example:

pem-dir = "/etc/hitch/cert.d"
pem-dir-glob = "*.pem"
pem-file = "/etc/hitch/fallback.pem"

This example will load all files that match the .pem extension in the /etc/hitch/cert.d folder. A fallback certificate is defined that loads /etc/hitch/fallback.pem in case SNI cannot match any of the certificates in the cert.d folder.

Here’s how you can set the certificate location through runtime parameters:

hitch -f "[*]:443" -b "[127.0.0.1]:8443" /etc/hitch/cert.pem

And here’s how you load multiple certificates:

hitch -f "[*]:443" -b "[127.0.0.1]:8443" /etc/hitch/cert.pem /etc/hitch/cert2.pem

It’s also possible to link certificates to your frontends through the -f parameter:

hitch -f "[*]:443+/etc/hitch/testcert.pem" -b "[127.0.0.1]:8443"

TLS protocol settings

Let’s start off by saying that SSL is dead. For the sake of clarity, we talk about TLS/SSL, but in reality we’re no longer using the SSL protocol. It’s all TLS.

The tls-protos directive allows us to set the TLS protocols that Hitch is willing to support. And although it is technically possible to mention SSLv3 as a potential protocol, the best way to configure this directive nowadays is as follows:

tls-protos = TLSv1.2 TLSv1.3

The --tls-protos runtime parameter can be used to override whatever is stored in your configuration file. Here’s an example of how to define the TLS protocols via the command line:

hitch -f "[*]:443" -b "[127.0.0.1]:80" \
    --tls-protos="TLSv1.3" /etc/hitch/cert.pem

PROXY protocol settings

As mentioned in the architecture section of this tutorial, the PROXY protocol can be used to transport meta information about the client connection to the backend.

This backend will probably be Varnish, so you’ll need to enable PROXY support in Varnish as well. This is explained in the Varnish configuration section of this tutorial.

The write-proxy-v2 configuration directive can be used to attach a PROXY version 2 header to the forwarded request data:

write-proxy-v2 = on

This is the equivalent using runtime parameters:

hitch -f "[*]:443" -b "[127.0.0.1]:8443" \
    --write-proxy-v2=on /etc/hitch/cert.pem

If your backend doesn’t support PROXY protocol version 2, it is possible to attach a PROXY version 1 header instead. We can use the write-proxy-v1 directive for that:

write-proxy-v1 = on

This is the equivalent using runtime parameters:

hitch -f "[*]:443" -b "[127.0.0.1]:8443" \
    --write-proxy-v1=on /etc/hitch/cert.pem

If Hitch is positioned behind another proxy server that also supports the PROXY protocol, Hitch can forward the PROXY header to Varnish and still handle TLS termination.

We can enable incoming PROXY protocol connections by enabling the proxy-proxy configuration directive:

proxy-proxy = on

This can also be done by adding the --proxy-proxy=on runtime parameter to the hitch program.

ALPN settings

ALPN is short for Application-Layer Protocol Negotiation. It is a TLS extension that presents the supported application protocols during the TLS handshake.

From an HTTP perspective ALPN can be used to negotiate HTTP/2 support during the TLS handshake, instead of having to waste an extra round trip to upgrade an HTTP/1.1 connection to HTTP/2.

Here’s how you can configure ALPN through the alpn-protos configuration directive:

alpn-protos = "h2, http/1.1"

Through this configuration Hitch will announce to clients that it supports both HTTP/2 and HTTP/1.1. Here’s the equivalent using the --alpn-protos runtime parameter:

hitch -f "[*]:443" -b "[127.0.0.1]:80" \
    --alpn-protos="h2, http/1.1" /etc/hitch/cert.pem

Cipher settings

When TLSv1.3 is used as the TLS protocol, the ciphersuites directive determines which cryptographic algorithms are used.

Here’s the default value:

ciphersuites = "TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256"

These algorithms are exclusive to TLSv1.3 and have no overlap with other TLS versions. If you’re on TLSv1.2 or older, you can use the ciphers directive to describe the accepted cryptographic algorithms:

ciphers = "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"

By specifying both ciphersuites and ciphers, you remain compatible with both TLSv1.2 and TLSv1.3. The reason for these two settings being separate is that TLSv1.3 brought with it a completely new set of cipher suites, none of which are compatible with older versions of TLS.

We advise sticking with the default values, and when in doubt, have a look at https://wiki.mozilla.org/Security/Server_Side_TLS.

You can also choose who is in charge of selecting the used ciphers. The prefer-server-ciphers directive is responsible for that.

This is the default value:

prefer-server-ciphers = off

This means the client chooses which cipher is used. If you set it to on, Hitch chooses instead. In that case the order of the specified ciphers is significant: a cipher specified earlier takes precedence over the ones specified later.
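
In the configuration file this is simply:

prefer-server-ciphers = on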

The ciphers can also be set via the --ciphers runtime parameter:

hitch -f "[*]:443" -b "[127.0.0.1]:80" \
    --ciphers="EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH" /etc/hitch/cert.pem

You can also add the --prefer-server-ciphers=on runtime parameter to Hitch if you want the server to be in charge of cipher selection.

OCSP stapling

OCSP is short for Online Certificate Status Protocol, a protocol that checks the revocation status of TLS certificates. OCSP checks the status of a certificate by performing an HTTP call to the Certificate Authority’s OCSP server; the corresponding URL is stored inside the certificate. Although OCSP is a lot more efficient than its predecessor, the Certificate Revocation List (CRL), which required downloading a full list of revoked certificates, it still has potential performance implications.

Not only does the OCSP call to the CA add latency for the end user on every TLS connection, it also puts a lot of stress on the OCSP servers. These servers can end up serving millions of requests at the same time and might crumble under the heavy load. Failed OCSP calls can result in errors being displayed in the browser.

OCSP stapling is an alternative mechanism that shifts the responsibility for the OCSP call from the client to the server. The server performs occasional OCSP calls for the certificates it manages on behalf of the client. The resulting status is stapled onto the TLS handshake, which removes the need for the client to perform an OCSP call itself.

Hitch supports OCSP stapling and has some configuration directives to control certain aspects of this mechanism.

The ocsp-dir directive sets the directory in which OCSP responses are cached. The default directory is /var/lib/hitch:

ocsp-dir = "/var/lib/hitch"

The lifetime of a cached OCSP response is determined by the refresh information inside the response, if any. When no such information is provided, Hitch will refresh the status of a certificate at a fixed interval, determined by the ocsp-refresh-interval directive. By default this is 1800 seconds.

Here’s an example of how to lower the value to 500 seconds:

ocsp-refresh-interval = 500

When connecting to an OCSP server, the ocsp-connect-tmo and ocsp-resp-tmo settings are respected. They represent the connect timeout and the last-byte timeout for these connections. Their respective default values are 4 seconds and 10 seconds.

Here’s an example in which we add some leniency by increasing the timeouts:

ocsp-connect-tmo = 6
ocsp-resp-tmo = 15

The stapled OCSP response is normally verified by the client, but by enabling ocsp-verify-staple, Hitch will also verify the response and remove the staple when verification fails. In that case it is up to the client to perform the OCSP check itself.
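
In hitch.conf this verification is enabled as follows:

ocsp-verify-staple = on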

Setting the OCSP stapling directory via the command line can be done by setting the --ocsp-dir runtime parameter:

hitch -f "[*]:443" -b "[127.0.0.1]:80" \
    --ocsp-dir="/var/lib/hitch" /etc/hitch/cert.pem

Security settings

In terms of security, the hitch program offers various configuration directives and runtime parameters to restrict access.

The user and group configuration parameters, as well as their -u and -g command line counterparts, can be used to run the hitch worker process as a user other than root.

For older versions of Hitch this is optional, but newer versions will refuse to run the worker process as the root user.

Here’s how you can set the user and group in the configuration file:

user = hitch
group = hitch

And here’s an example on how to set the user and group with command line options:

hitch -f "[*]:443" -b "[127.0.0.1]:80" \
   -u hitch -g hitch /etc/hitch/cert.pem

The chroot configuration directive and its -r and --chroot command line counterparts can be used to restrict access to the file system. By assigning a directory, Hitch will not be able to access files outside of that designated directory.
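
Here’s a sketch of what this could look like in hitch.conf; the directory itself is just an example value:

chroot = "/var/lib/hitch"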

Example Hitch configuration

Here’s a basic hitch.conf configuration file with the options you need to connect to backend.example.com on port 8443 using version 2 of the PROXY protocol:

frontend = "[*]:443"
daemon = on
user = "hitch"
group = "hitch"
backend = "[backend.example.com]:8443"
pem-dir = "/etc/hitch/cert.d"
pem-dir-glob = "*.pem"
pem-file = "/etc/hitch/cert.pem"
alpn-protos = "h2, http/1.1"
write-proxy-v2 = on

This configuration also announces HTTP/2 and HTTP/1.1 support through the ALPN TLS extension.

The TLS certificates can be found in the /etc/hitch/cert.d folder and /etc/hitch/cert.pem acts as the fallback certificate.

When you only use command line options, you will not be able to exactly match the configuration, but you’ll get pretty close if you use the following command:

hitch -f "[*]:443" -b "[localhost]:8443" \
    --alpn-protos="h2, http/1.1" --write-proxy-v2=on \
    --daemon=on -u hitch -g hitch /etc/hitch/cert.pem

The only difference is that there are no pem-dir and pem-dir-glob options on the command line: /etc/hitch/cert.pem is the only certificate that is loaded, and any extra certificates have to be added manually.
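
Before (re)starting Hitch it’s a good idea to validate the configuration file. Recent Hitch versions offer a test mode for this; assuming the --test option is available in your version, the following command checks the configuration and exits:

hitch --test --config=/etc/hitch/hitch.conf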

Varnish configuration

Hitch and Varnish are a perfect match. If you want to put Hitch in front of Varnish, you’ll need to modify the configuration of Varnish:

  • You should enable the PROXY protocol in Varnish
  • You should enable HTTP/2 support
  • You should offer UNIX domain socket support

Here’s how to do this.

Enable the PROXY protocol in Varnish

As described in the PROXY protocol tutorial, you should register a Varnish listening port that listens for PROXY protocol traffic:

varnishd \
    -a :80 \
    -a :8443,PROXY \
    -f /etc/varnish/default.vcl \
    -s malloc,2g

This example configuration uses -a :8443,PROXY to register a listening port that listens on port 8443 on all network interfaces for PROXY traffic.

To prevent malicious clients from sending bogus PROXY headers, it makes sense to restrict PROXY access to the local machine by binding the listening port to localhost:

varnishd \
    -a :80 \
    -a localhost:8443,PROXY \
    -f /etc/varnish/default.vcl \
    -s malloc,2g

Enable HTTP/2 support

When Hitch is configured with alpn-protos = "h2, http/1.1", Varnish needs to have HTTP/2 support as well. The -p feature=+http2 runtime parameter for varnishd needs to be active, as illustrated below:

varnishd \
    -a :80 \
    -a localhost:8443,PROXY \
    -p feature=+http2 \
    -f /etc/varnish/default.vcl \
    -s malloc,2g

Enable UNIX domain sockets

If you’re connecting to Varnish locally from Hitch, you might as well skip regular TCP/IP and use UNIX domain sockets (UDS) instead.

Here’s an example that offers both solutions:

varnishd \
    -a :80 \
    -a localhost:8443,PROXY \
    -a /var/run/varnish.sock,PROXY,user=varnish,group=varnish,mode=660 \
    -p feature=+http2 \
    -f /etc/varnish/default.vcl \
    -s malloc,2g

Naming listening addresses

At this point we’re dealing with a varnishd configuration that has 3 listening addresses. Since we didn’t assign a name to them, Varnish will name them a0, a1 and a2.

You can access these names in VCL through the local.socket variable. These names also appear in the VSL logs and in the VSC counters.

To make it easier to determine which listening address was used, we can name them ourselves.

Here’s how you can do that:

varnishd \
    -a http=:80 \
    -a proxy=localhost:8443,PROXY \
    -a uds=/var/run/varnish.sock,PROXY,user=varnish,group=varnish,mode=660 \
    -p feature=+http2 \
    -f /etc/varnish/default.vcl \
    -s malloc,2g

These are the names that were assigned:

  • http for the listening address on port 80 that accepts regular HTTP traffic
  • proxy for the listening address on port 8443 that accepts PROXY traffic
  • uds for the listening address on the /var/run/varnish.sock UDS that accepts PROXY traffic
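
As mentioned earlier, these names are available in VCL through the local.socket variable. Here’s a minimal vcl 4.1 fragment you could add to your existing VCL; the X-Listener header name is just an example:

sub vcl_recv {
    # Tag the request with the name of the listening address it arrived on
    if (local.socket == "proxy" || local.socket == "uds") {
        set req.http.X-Listener = local.socket;
    }
}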

Retrieve TLS information with vmod_proxy

Once Hitch is connected to Varnish using version 2 of the PROXY protocol, vmod_proxy can be used to retrieve the Type-Length-Value (TLV) attributes that the PROXY header carries.

As described in the PROXY protocol tutorial, you can use the following VCL snippet to set the X-Forwarded-Proto header to either https or http:

vcl 4.1;

import proxy;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    if (!req.http.X-Forwarded-Proto) {
        if (proxy.is_ssl()) {
            set req.http.X-Forwarded-Proto = "https";
        } else {
            set req.http.X-Forwarded-Proto = "http";
        }
    }    
}

This header can be forwarded to the origin web server to inform it about the original protocol. The origin web server can use this header value when generating URLs, preventing http:// URLs from being used in an HTTPS context and vice versa.
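
vmod_proxy exposes more TLS details from the PROXY TLV attributes than just proxy.is_ssl(). The snippet below is a sketch that extends the VCL example above (which already imports vmod_proxy) and relies on the getter functions documented for vmod_proxy; check the vmod_proxy manual of your Varnish version for the exact set of functions that is available:

sub vcl_recv {
    if (proxy.is_ssl()) {
        # Store the negotiated TLS version and cipher for logging or debugging
        set req.http.X-TLS-Version = proxy.ssl_version();
        set req.http.X-TLS-Cipher = proxy.ssl_cipher();
    }
}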

Mutual TLS

Mutual TLS (mTLS) is a concept where both the server and the client must prove their respective identities via TLS certificates. For client authentication, the same X.509 standard is used to issue client certificates.

It is up to the client to present the certificate when connecting to the server using TLS. The server can then verify the validity of that certificate, based on a CA chain that is stored on the server.

Hitch supports mTLS and offers two configuration directives to enable this:

client-verify = required
client-verify-ca = "/etc/hitch/certs/client-ca.pem"

This example requires the client to authenticate itself using a client certificate, which is enforced by setting client-verify = required. If the client doesn’t provide a certificate, or the certificate verification fails, the connection is aborted.

If client-verify = optional is set, a client that does not send a client certificate will still be allowed to connect. But if a client sends a certificate that cannot be verified, the connection will be aborted.

The default value is client-verify = none, which means no client verification is required.

The client-verify-ca parameter refers to a file where the certificate chain is stored. The server will use this chain of certificates to verify the incoming client certificate.

Here’s an example of how to use client certificates with curl:

curl --cacert ca.crt \
     --key client.key \
     --cert client.crt \
     https://example.com

In this example, curl will connect to https://example.com, which may require the client to authenticate itself using a certificate.

  • The --cacert parameter points to the CA certificate chain that curl uses to verify the server’s certificate.
  • The --key parameter contains the location of the private key that belongs to the client certificate.
  • The --cert parameter refers to the actual client certificate that is used for authentication.

Native TLS support in Varnish Enterprise

The reason why we developed Hitch is to compensate for the lack of native TLS support in Varnish Cache, the open source version of Varnish. However, Varnish Enterprise, the commercial version of Varnish, does offer native TLS support.

Read the documentation for Varnish Enterprise’s native TLS feature

The feature was developed to circumvent the throughput limits of Hitch. Don’t get us wrong: Hitch is still very fast and can process up to 100 Gbps on a single instance. However, some Varnish users have a use case where they need to go way beyond that. Varnish Enterprise’s native TLS implementation delivers up to 500 Gbps of throughput.

In terms of configuration, the native TLS feature is strongly inspired by Hitch’s configuration format.

Enabling native TLS

The -A command line option can be used to register the TLS configuration file. As mentioned, it uses the Hitch configuration format:

varnishd \
    -A /etc/varnish/tls.conf \
    -a :80 \
    -a localhost:8443,PROXY \
    -a /var/run/varnish.sock,PROXY,user=varnish,group=varnish,mode=660 \
    -p feature=+http2 \
    -f /etc/varnish/default.vcl \
    -s malloc,2g

The /etc/varnish/tls.conf file can literally be a regular Hitch configuration file: Varnish Enterprise will use the configuration directives it recognizes and ignore the rest.

Example configuration

When we take the example Hitch configuration from this tutorial and strip off the configuration directives we don’t need, we end up with the following example configuration:

frontend = "[*]:443"
pem-dir = "/etc/varnish/cert.d"
pem-dir-glob = "*.pem"
pem-file = "/etc/varnish/cert.pem"

For native TLS we only really care about the listening address and the location of the certificate.