Nginx tuning tips: HTTPS/TLS – Turbocharge TTFB/Latency
Are you looking to optimize the performance of Nginx? One way is to tune Nginx to support the latest TLS (Transport Layer Security) protocols (TLS 1.2 and TLS 1.3). In this article, we’ll explore how optimizing Nginx’s TLS configuration can reduce Time To First Byte (TTFB), cut latency, and turbocharge website speed, providing a better user experience.
Are SSL certificates using SSL or TLS?
Online, we still use the term “SSL” (Secure Sockets Layer) to refer to the encryption protocol used for secure communication, even though the protocol actually in use is TLS. For example, SSL certificates are used to establish a secure connection between a client and a server over the internet. However, SSL itself is considered deprecated and insecure, and modern encryption protocols such as TLS are used instead.
Also, the term “SSL certificate” still lives on and is used informally to refer to digital certificates even though the actual protocol being used is TLS. That said, it’s important to note that TLS should be used instead of SSL as it is more secure and provides better protection against attacks.
To promote the use of secure encryption protocols, it’s recommended to start using terms like “TLS” and “TLS certificate” in our emails and communications. In fact, as I update this article, I’ll be removing the term SSL.
The importance of TLS 1.2 & TLS 1.3
In the era of digitalization, online security is a major concern for both individuals and businesses. Encryption plays a vital role in safeguarding online data, and one of the most widely used encryption protocols is the Transport Layer Security (TLS) protocol. Let’s briefly look at the importance of TLS 1.2 and TLS 1.3 (TLS 1.2+).
Since 30th June 2018, the PCI Security Standards Council has required that support for SSL 3.0 and TLS 1.0 be disabled and, more recently, TLS 1.1 as well. As of this article’s latest update, using TLS 1.2 and 1.3 is strongly recommended. In addition, starting in 2018 and more aggressively in the past year or two, Google Chrome/Chromium has marked ‘HTTP’ websites as “not secure.” Over the past few years, the internet has swiftly transitioned to HTTPS: over 95% of Chrome’s traffic now loads over HTTPS, and 100% of the web’s top 100 websites use HTTPS by default!
With this in mind, let’s look at Nginx TLS tuning tips to improve the performance of Nginx + HTTPS for better TTFB and reduced latency.
Enable HTTP/2 or HTTP/3 & QUIC on Nginx
The first step in tuning Nginx for faster TTFB/latency with HTTPS is to ensure that at least HTTP/2 is enabled. HTTP/2 was first implemented in Nginx version 1.9.5 to replace SPDY. Enabling HTTP/2 on Nginx is simple: add the http2 parameter to the listen directive in the server block of your Nginx config file (ex. /etc/nginx/sites-enabled/sitename). (Remember: HTTP/2 requires HTTPS.)
Look for this line:
listen 443 ssl;
change it to:
listen 443 ssl http2;
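For context, a minimal server block with HTTP/2 enabled might look like the following sketch; the domain and certificate paths are placeholders:

```nginx
server {
    # HTTP/2 requires HTTPS, so TLS must be configured on the same listener
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;                        # placeholder domain

    ssl_certificate     /path/to/full_chain.pem;    # placeholder paths
    ssl_certificate_key /path/to/private_key.pem;
}
```

Note that on Nginx 1.25.1 and newer, the http2 parameter of listen is deprecated in favor of a standalone `http2 on;` directive.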
That’s it! HTTP/2 is used by about 40% of all websites, and HTTP/3 by only about 20%. (Source) You can enable HTTP/3 & QUIC by following these (French) guides.
Check if HTTP/2 or HTTP/3 is enabled using Google Chrome
To confirm if HTTP/2 or HTTP/3 is enabled:
> open your website in Google Chrome
> right-click anywhere on the web page and select Inspect
> click the Network tab
> press F5 (on your keyboard) or refresh your web page manually
> the Protocol column should now show h2 (or h3-29) for all assets loaded via your server
> If the Protocol column is missing, right-click any column header and enable it.
Check if HTTP/2 or HTTP/3 is enabled using the command line
Test from your Linux/Mac command line with curl:
(Also remember to curl test your CDN-hosted requests, for example cdn.domain.com. Compare KeyCDN, BunnyCDN, and other CDN providers that support HTTP/2.)
curl --http2 -I https://domain.com/
curl --http3 -I https://domain.com/
If the --http3 option does not work (curl must be built with HTTP/3 support), you can also check here: https://www.http3check.net/
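If you’re unsure whether your local curl build can run these tests at all, you can check its compiled-in feature list first; many distro builds of curl lack HTTP/3 support:

```shell
# List curl's compiled-in features; HTTP2 and HTTP3 appear in the
# Features line only when curl was built with nghttp2 / a QUIC library.
features=$(curl --version)
case "$features" in
  *HTTP2*) echo "curl supports --http2" ;;
  *)       echo "curl lacks HTTP/2 support" ;;
esac
case "$features" in
  *HTTP3*) echo "curl supports --http3" ;;
  *)       echo "curl lacks HTTP/3 support" ;;
esac
```
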
Enable SSL session cache
With HTTPS connections, instead of end-users connecting via one round trip (request sent, then the server responds), the connection needs an extra TLS handshake. However, using HTTP/2 and enabling Nginx’s ssl_session_cache lets returning clients resume a previous TLS session and skip the full handshake, ensuring faster HTTPS performance on repeat connections.
Using the option ssl_session_cache shared:SSL:[size], you can configure Nginx to share the cache between all worker processes. One megabyte can store about 4000 sessions. You’ll also want to specify the timeout (cache TTL) during which sessions can be reused:
ssl_session_cache shared:SSL:1m; # holds approx 4000 sessions
ssl_session_timeout 1h; # 1 hour during which sessions can be reused
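For reference, these directives normally live in the http context so the cache is shared across all server blocks; the sizes below are illustrative, not a recommendation:

```nginx
http {
    # Shared across all worker processes; ~4000 sessions per megabyte
    ssl_session_cache   shared:SSL:10m;  # room for roughly 40,000 sessions
    ssl_session_timeout 1h;              # reuse window (cache TTL)
}
```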
Disable SSL session tickets
Because proper rotation of the session ticket encryption key is not yet implemented in Nginx, you should turn session tickets off for now.
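Disabling them is a one-line directive in your TLS configuration:

```nginx
# Session tickets: off until Nginx rotates ticket keys automatically
ssl_session_tickets off;
```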
Disable TLS versions 1.0 and 1.1
As we’ve discussed in the opening, HTTPS and HTTP/2(3) are a move toward the latest, fastest, and most secure web technology. In light of this, TLS 1.0 should be disabled. Update May 2022: it’s now recommended to disable TLSv1 and TLSv1.1, enabling only TLSv1.2 and TLSv1.3 (Nginx 1.13+ required for TLSv1.3).
For legacy versions of Nginx, look for:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
modify the line to:
ssl_protocols TLSv1.2;
For Nginx 1.13+ with TLSv1.3 enabled, look for:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
modify the line to:
ssl_protocols TLSv1.2 TLSv1.3;
Enable OCSP Stapling
OCSP (Online Certificate Status Protocol) stapling is an alternative approach to OCSP for checking the revocation status of X.509 certificates. Enabling OCSP stapling allows Nginx to bear the resource cost of providing OCSP responses by appending (“stapling”) a time-stamped OCSP response signed by the CA to the initial TLS handshake, eliminating the need for clients to contact the CA. Also see: Using OCSP Stapling to Improve Response Time and Privacy.
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /path/to/full_chain.pem;
resolver 1.1.1.1 8.8.8.8 valid=300s; # example public DNS resolvers; substitute your own
resolver_timeout 5s;
Note: ssl_trusted_certificate specifies the trusted CA certificates chain file, in PEM format, used to verify client certificates and OCSP responses.
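To confirm stapling is working after a reload, you can ask openssl to print the stapled response. A small sketch (the function name is mine; it needs network access to your server):

```shell
# Check whether a server staples OCSP responses to the TLS handshake.
# Usage: check_stapling yourdomain.com
check_stapling() {
  # -status asks openssl s_client to request and print the stapled
  # OCSP response; grep for a successful response status.
  echo | openssl s_client -connect "$1:443" -servername "$1" -status 2>/dev/null \
    | grep -q "OCSP Response Status: successful" \
    && echo "OCSP stapling OK for $1" \
    || echo "no stapled OCSP response from $1"
}
```

Note that Nginx fetches the OCSP response lazily, so the first connection after a reload may report no stapled response; run the check twice.
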
Reduce SSL buffer size
The Nginx ssl_buffer_size config option sets the size of the buffer used for sending data via HTTPS. By default, the buffer is set to 16k, a one-size-fits-all approach geared toward big responses. However, to minimize TTFB (Time To First Byte), it is often better to use a smaller value, for example:
ssl_buffer_size 4k;
(I was able to shave about 30 – 50ms off TTFB. Your mileage may vary.)
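To measure the effect yourself, curl can report time-to-first-byte directly; compare readings before and after changing ssl_buffer_size (the helper name is mine):

```shell
# Print TTFB and total time for a URL using curl's timing variables.
# Usage: ttfb https://yourdomain.com/
ttfb() {
  curl -so /dev/null \
       -w "TTFB: %{time_starttransfer}s  total: %{time_total}s\n" \
       "$1"
}
```
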
Full Nginx TLS config for improved TTFB
Below is my full tuned Nginx TLS config for convenience:
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_ecdh_curve secp384r1; # see here and here (pg. 485)
ssl_session_cache shared:SSL:5m;
ssl_session_timeout 24h;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /path/to/your/CA/chain.pem;
resolver 1.1.1.1 8.8.8.8 valid=300s; # example public DNS resolvers; substitute your own
resolver_timeout 5s;
ssl_buffer_size 4k; # I've since found 8k works best for this blog. (test!!) Default = 16k
Test config, then reload Nginx after changes:
nginx -t
nginx -s reload
Enable HTTP Strict Transport Security (HSTS)
Another Nginx HTTPS tip is to enable HSTS preload. HTTP Strict Transport Security (HSTS) is a header that allows a web server to declare a policy that browsers should only connect to it using secure HTTPS connections, ensuring end users cannot “click through” critical security warnings. (It locks clients to HTTPS.) This policy enforcement protects secure websites from downgrade attacks, SSL stripping, and cookie hijacking. Also, see https://hstspreload.org/#submission-requirements.
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
Other headers I use in my Nginx config for this blog are:
add_header X-Frame-Options sameorigin; # read here
add_header X-Content-Type-Options nosniff; # read here
add_header X-Xss-Protection "1; mode=block"; # read here
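You can verify these headers are actually being sent with curl; a quick sketch (the helper name is mine, and it needs network access to your site):

```shell
# Print the security-related response headers a site sends.
# Usage: check_headers https://yourdomain.com/
check_headers() {
  curl -sI "$1" \
    | grep -iE 'strict-transport-security|x-frame-options|x-content-type-options|x-xss-protection'
}
```
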
Also, see Analyze Your Website’s TTFB (Time to First Byte)
HTTP/2 reference and useful reading
- https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers – HTTP headers
- https://weakdh.org/sysadmin.html – Guide to Deploying Diffie-Hellman for TLS
- https://mozilla.github.io/server-side-tls/ssl-config-generator/ – Mozilla SSL Configuration Generator
- https://www.ssllabs.com/ssltest/ – SSL Server Test
- https://www.nginx.com/blog/http2-module-nginx/ – The HTTP/2 Module
- https://istlsfastyet.com – Is TLS Fast Yet?
- http://www.httpvshttps.com – HTTP vs HTTPS Test
- https://haydenjames.io/free-linux-server-monitoring-apm-sysadmins/ – Free web server monitoring
HTTP/3 & QUIC reference and useful reading
- https://daniel.haxx.se/http3-explained/ – HTTP/3 explained
- https://developer.akamai.com/blog/2020/04/14/quick-introduction-http3 – A QUICk Introduction to HTTP/3
- https://www.chromium.org/quic – Chromium: QUIC, a multiplexed stream transport over UDP.
- https://github.com/quicwg/base-drafts/wiki/Implementations – List of QUIC implementations.
- https://blog.cloudflare.com/http3-the-past-present-and-future/ – HTTP/3: the past, the present, and the future.
- https://engineering.fb.com/2020/10/21/networking-traffic/how-facebook-is-bringing-quic-to-billions/ – Facebook QUIC & HTTP/3.
- https://datatracker.ietf.org/doc/html/draft-ietf-quic-http-34 – Hypertext Transfer Protocol Version 3, draft-ietf-quic-http-34.
Published: June 30th, 2018
Last updated: May 16th, 2022