Deploy

HTTP/1.1, the most popular version of HTTP, has been out for nearly two decades. While it has received several updates since its inception, it has remained largely unchanged in how it operates.

In the meantime, the Internet has seen explosive growth, websites have become larger and more complex, and many performance issues with HTTP/1.1 have been uncovered.

The main problem is that HTTP/1.1 cannot efficiently deliver many resources in parallel over the same connection. The specification does allow for it (via pipelining), but in practice it has proven hard to implement well.

The technique was perfected by SPDY, a protocol developed by Google and integrated into browsers and servers. The protocol performed well in practice, so version 2 of SPDY was taken as the basis for HTTP/2, and several aspects of SPDY were improved along the way to produce a workable standard (notably header compression).

HTTP/2 builds on and refines this technique, and draft versions of the protocol have already made their way into some browsers. As of writing (May 15th, 2015), Google Chrome supports HTTP/2 draft 14, and support also exists in other browsers.

Most browsers that support HTTP/2 will only do so if the connection to the server is encrypted and the server is correctly configured. Until HTTP/2 support lands in popular webservers like nginx and Apache, the recommended approach is to use SPDY; with nginx, you still get the benefit of multiplexed connections. Throughout the rest of this guide, we’re going to show you how to deploy SPDY in an optimal fashion with nginx. If you’re not using nginx, remember that you can always proxy connections from nginx to your current webserver and still keep the performance benefits, as sketched below.
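
As a rough sketch of that proxying arrangement (the backend address 127.0.0.1:8080 is a placeholder for wherever your existing webserver listens), it can look something like this:

server {
    listen 443 ssl spdy;
    server_name www.example.com;

    # Certificate and TLS directives covered later in this guide go here.

    location / {
        # Hand decrypted requests to the existing webserver over plain HTTP
        # on the local machine, preserving the original host and client details.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}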

Adopting HTTPS

To deploy SPDY, you’ll need to use HTTPS. If you aren’t already using HTTPS, you should be wary of the following problems that adoption can cause:

  • Mixed content — if you include an external HTTP resource inside an HTTPS page, it will be blocked by the browser. For example, if your site is visited over https://www.example.com, and you include a script at http://foo.example.com/script.js, then it will be blocked by the browser, potentially breaking functionality. A warning will also appear in the browser, which can scare away potential users. In short, ensure that all the resources referenced in your page are accessed over a valid HTTPS connection.
  • Referrer stripping — if you link to an HTTP page from an HTTPS page, the referrer will not be propagated to the HTTP page when the user clicks the link. For example, if your HTTPS site directs users to other people’s sites, and some of those sites use HTTP, then the referrer header will not be passed on, and the people who run those sites will not see you referring to them (in Google Analytics, for instance, it will appear as though the user visited the site directly rather than via your site). This may or may not matter to you. There is currently a standard in progress to override this behaviour, known as referrer policies, which lets you include a meta tag or HTTP header in your web page telling the browser to send the referrer anyway (even if the destination site is HTTP). It is a working draft, so the loose ends haven’t been completely ironed out yet, but it has seen adoption by some popular browsers, and we expect it will eventually become the recommended way to propagate referrer headers from HTTPS pages. If this is important to you, please test your set-up!

Many people consider HTTPS/TLS to be slow. In reality, the encryption of data itself is usually very fast and has a negligible effect on performance. The main overhead is the handshake, which can take several round-trips before you’re ready to start exchanging data. We’ll show you how to improve these negotiation times.

Others consider a TLS certificate to be expensive and not worth the cost. In fact, a certificate protecting a single domain can usually be purchased for under 20 USD a year, or even for free, depending on which certificate authority you choose.

Generate a certificate signing request

If you don’t already have a certificate signed by a certificate authority, you’ll need to get one so you can present a trusted certificate to your users. Begin by generating a private key:

openssl genrsa -out example.com.key 2048

Keep this key absolutely secret. In this case, the key is not encrypted, which means that anyone who manages to retrieve the example.com.key file could decrypt traffic. We won’t cover encrypting the private key, as most people choose to leave it unencrypted and store it in a safe place on the server: encrypting it means that every time you start the webserver, you need to enter the passphrase so that the webserver can decrypt and use the key. If you’d like to encrypt your private key, please contact us for more information.
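
If you do want an encrypted key, a minimal sketch using standard OpenSSL options looks like this (nginx will then prompt for the passphrase every time it starts):

# Generate a new 2048-bit key, encrypted with AES-256 and a passphrase.
openssl genrsa -aes256 -out example.com.key 2048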

Now that you have a private key, you can generate a certificate signing request:

openssl req -new -sha256 -key example.com.key -out example.com.csr

You’ll be asked to enter some information. Fill it in to the best of your ability, leaving the challenge password blank (it usually isn’t needed by most certificate authorities). The common name (CN) should match the domain name you want to serve your HTTPS site from. Once done, there’ll be a file named example.com.csr, whose contents you should give to your chosen certificate authority. They’ll take you through a validation process, and you’ll be given a certificate at the end of it all.
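
Before you send the CSR off, you can double-check what you entered (the common name in particular) by printing it back out:

openssl req -text -noout -in example.com.csr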

You can generate a private key and certificate signing request in one command as follows:

openssl req -new -newkey rsa:2048 -nodes -keyout example.com.key -out example.com.csr

Configuring TLS

For the rest of this guide, we’ll assume that you already have a certificate signed by a certificate authority. However, almost everything here applies equally to self-signed certificates, so you can use one to test your configuration. The simplest TLS configuration in nginx is as follows:

server {
    server_name www.example.com;
    listen 443 ssl;

    ssl_certificate /etc/ssl/certs/example.com.bundle.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;
}

/etc/ssl/certs/example.com.bundle.pem should contain, in order, the certificate for your site, followed by the intermediate certificates required to link your certificate to the root certificates bundled with browsers and operating systems. Try to include as few certificates as possible to keep the TLS handshake small. You should never need to include the root certificate itself, as it is already installed in browsers. If you need a hand doing this, contact us!
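
As a sketch, assuming your certificate authority handed you your certificate as example.com.crt and its intermediate as intermediate.crt (file names vary between authorities), the bundle is just the two concatenated, with your own certificate first:

cat example.com.crt intermediate.crt > /etc/ssl/certs/example.com.bundle.pem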

Great, so your server can now encrypt connections between client and server, but it’s not well optimised, and it definitely doesn’t meet the requirements HTTP/2 places on TLS. nginx specifies default values for the other TLS settings, so we’ll need to tweak them. SSL is the original name of the protocol developed by Netscape; it was superseded by TLS to create an open standard. SSL is now considered completely insecure, so let’s disable it:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
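
After reloading nginx, you can confirm that SSLv3 is refused. Assuming your local OpenSSL build still includes SSLv3 support, the following handshake should fail:

openssl s_client -connect www.example.com:443 -ssl3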

Now we’ll want to specify a list of ciphers. The ciphers dictate which algorithms will be used to secure the connection. Without going too far into the details, most people will find the following sufficient:

ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA";
ssl_prefer_server_ciphers on;
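
If you’re curious what a cipher string expands to on your system, OpenSSL can list the matching suites. For example, to see the ECDHE AES-GCM suites at the front of the list above:

openssl ciphers -v 'ECDHE+AESGCM'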

We’ve enabled what are known as ephemeral Diffie-Hellman ciphers. We won’t go into the details, but it’s advisable to generate your own Diffie-Hellman parameters for these ciphers to use. You can generate them with the following shell command:

openssl dhparam -out /etc/ssl/dhparam.pem 2048

And finally include them in your nginx configuration as follows:

ssl_dhparam /etc/ssl/dhparam.pem;

TLS session reuse

Next, we can take advantage of TLS session reuse. When a client first connects to an HTTPS site, it goes through a full handshake, during which certain cryptographic parameters are established. TLS session reuse allows these parameters to be saved and used again when the client reconnects. This saves time on subsequent connections, giving your users a quicker experience.

ssl_session_timeout 5m;
ssl_session_cache shared:SSL:16m;

This configuration allows sessions to persist for five minutes before expiring, with session parameters stored in a sixteen-megabyte cache. According to the nginx documentation, one megabyte can store approximately 4000 sessions, so tune these values to suit your traffic if you wish.
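
You can check that reuse is working with OpenSSL’s -reconnect option, which completes a handshake and then reconnects several times with the same session; the later connections should be reported as "Reused" rather than "New":

openssl s_client -connect www.example.com:443 -reconnect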

OCSP stapling

When you purchase a TLS certificate, you will usually be given the ability to revoke it. For example, if the private key is leaked, you need a way to tell browsers that they should no longer trust the certificate, as anyone with the private key can now decrypt traffic. When a browser connects to an HTTPS-enabled site and is presented with a certificate, it checks whether that certificate has been revoked, which adds extra steps to the TLS negotiation. A standard known as OCSP stapling lets your server present the certificate’s revocation status itself during the initial TLS handshake, sparing the browser that separate lookup. nginx supports this and can be configured as follows:

ssl_stapling on;
resolver 8.8.8.8 8.8.4.4;

Simple! Almost. The resolver directive is needed so that nginx can look up the hostname of your certificate authority’s OCSP responder. The remaining problem is that OCSP is normally communicated over plain, unencrypted HTTP, and we don’t want an attacker to be able to hijack this communication and tell the server that the certificate has been revoked, leaving our users staring at a browser warning.

To combat this, the OCSP response is signed. In order to verify the signature and confirm that the response is valid, we need to give nginx the appropriate certificates, which will usually be the root and intermediate certificates concatenated together. The config:

ssl_stapling_verify on;
ssl_trusted_certificate /etc/ssl/certs/example.com.trusted.pem;

With /etc/ssl/certs/example.com.trusted.pem containing the appropriate certificates.
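
To confirm that the stapled response is being served, request the certificate status during the handshake; a working setup prints an "OCSP Response Status: successful" block. Note that nginx fetches the response lazily, so the very first connection after a restart may not include it:

openssl s_client -connect www.example.com:443 -servername www.example.com -status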

Strict transport security

HTTP strict transport security (HSTS) is a way to force a browser to always visit a given site over HTTPS. If an HTTP link to the same site is encountered, the browser automatically rewrites it to HTTPS before making the request. This is usually used as a security measure, but it helps with speed too: if you’ve configured redirects from HTTP to HTTPS, and a resource on your site is still referenced over HTTP, HSTS avoids the redirect’s round-trip.

You could take it a step further and submit your site to the HSTS preload list. This is where HSTS is configured for your site inside the browser itself, rather than the browser learning about it from your site’s HTTP headers. When a user visits your site, the browser notices the HSTS entry in its database and automatically upgrades the URL to HTTPS. This chops off a round-trip when a user types your address into the address bar without HTTPS, or discovers an old HTTP link to your site skulking around the web.

While only a minor improvement, it’s still something to think about. You can configure normal HSTS as follows:

add_header Strict-Transport-Security "max-age=31536000";

This will tell the browser to visit the site exclusively over HTTPS for one year. To configure for the preload list, contact us and we’ll give you a hand.
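
A quick way to confirm the header is being sent:

curl -sI https://www.example.com/ | grep -i strict-transport-security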

SEO

Once you’ve configured TLS to your liking, you’ll want to consider how your site is presented to search engines and what happens when users visit it.

You may want to redirect all users to HTTPS. This ensures that only one version of your site is presented to search engines, and that all your visitors use SPDY if it is available to them. This can easily be done in nginx as follows:

server {
    listen 80;
    server_name example.com www.example.com;

    return 301 https://www.example.com$request_uri;
}
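
You can verify the redirect with curl; the response should be a 301 with a Location header pointing at the HTTPS address:

curl -sI http://www.example.com/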

Enabling SPDY

And finally, the all-important cherry on top. To enable SPDY, simply update the listen directive in your main server block to look like this:

listen 443 ssl spdy;
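
Once nginx has been reloaded, you can check which protocols the server advertises via NPN (this needs an OpenSSL build with NPN support, 1.0.1 or later); spdy should appear alongside http/1.1 in the list:

openssl s_client -connect www.example.com:443 -nextprotoneg ''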

It was never so easy! SPDY also has the great feature of header compression. Compression of the headers a client sends is vulnerable to certain security-related attacks, but you can still freely use it for the headers the server sends:

spdy_headers_comp 6;

Feel free to change the value: 9 provides the best compression but uses more processing power, while 1 compresses the least (transmitting more bytes) but uses the least processing power. Adjust accordingly based on your system and throughput.

Check out Appendix A for the entire configuration. Please review the more resources section below for some extra information.

More resources

Appendix A

This is just a skeleton configuration that bundles all the configuration above into a single piece. Add your own directives, such as gzip, and change it to fit your system.

server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}

server {
    listen 443 ssl spdy;
    server_name www.example.com;
    resolver 8.8.8.8 8.8.4.4;

    ssl_certificate /etc/ssl/certs/example.com.bundle.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS -AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA";
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/dhparam.pem; # Pre-generated by our OpenSSL command

    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:16m;

    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/ssl/certs/example.com.trusted.pem;

    spdy_headers_comp 6;

    add_header Strict-Transport-Security "max-age=31536000";
}