NodeJS, Varnish + NginX

I am building a web service, and I chose NodeJS as the backend due to its efficiency and simplicity. However, since NodeJS is single-threaded and runs as a single process, other means of keeping the service responsive under high server load must be implemented.

There are many ways of achieving high concurrency with a NodeJS server:

  • Use the NodeJS cluster module (a minimal sketch follows this list). That module, however, is considered unstable.
  • Use JXCore. JXCore only supports NodeJS 0.10.x, which is less than ideal for those wishing to take advantage of all the goodies NodeJS 0.12.x and io.js (once it is merged back into NodeJS) have to offer. JXCore has since been abandoned.
  • Write Better Code.

… but no matter which methodology you use, you will surely need a load balancer and a caching layer.
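
For completeness, here is a minimal sketch of the cluster approach. The port and the response body are placeholder assumptions, unrelated to the setup described below.

// A minimal sketch of the NodeJS cluster module.
var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
    // Fork one worker per CPU core; the workers share the listening socket.
    for (var i = 0; i < os.cpus().length; i++) {
        cluster.fork();
    }
    // Replace any worker that dies.
    cluster.on('exit', function(worker) {
        console.log('worker ' + worker.process.pid + ' died, restarting');
        cluster.fork();
    });
} else {
    http.createServer(function(req, res) {
        res.end('served by worker ' + process.pid); // placeholder response
    }).listen(3000);
}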

Why do I need caching?

One of the main problems with running a server is its performance under heavy load. When a device makes a request to an endpoint, this is the usual process followed (sketched in code below the list):

  • The endpoint associates the request with a function that is going to serve it.
  • Usually, that function involves making a request to the database (SQL / Mongo, etc.).
  • The data is fetched and manipulated.
  • The data is returned to the client that requested it.
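
In express.js terms, the uncached flow looks roughly like this; db.findArticles is a hypothetical database call standing in for your own query:

app.get('/articles', function(req, res) {
    // 1-2) The endpoint's handler queries the database.
    db.findArticles(function(err, rows) {
        if (err) { return res.status(500).end(); }
        // 3) The data is manipulated.
        var payload = rows.map(function(r) {
            return { id: r.id, title: r.title };
        });
        // 4) The data is returned to the client.
        res.json(payload);
    });
});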

When a service starts delaying its responses, it is usually due to heavy traffic (provided that the underlying code is correct). The bottlenecks are usually the database i/o, and the CPU load of queueing and servicing the requests. With caching, you can avoid much of this load: instead of processing a request, querying the database, formatting the data and sending it back, you can return the response directly from the server's cache storage (usually in-memory).

Varnish Cache

Enter Varnish. Varnish is an HTTP accelerator which stores responses in the server's RAM; if a request arrives that is identical to one that was served a while ago, it will be answered directly from RAM and will never reach the server you have written. That leaves CPU and disk resources free for you to use.

Our setup

Before we begin, you should note that there are many things to adjust when you finish reading this post, since you must customise the setup to your own needs.

So we have a web server written in NodeJS, which is running on port 3000, and we have installed Varnish on an Ubuntu machine, where it is up and running.

1) Open /etc/default/varnish

See the following lines?

DAEMON_OPTS="-a :6081 \
             -T localhost:6082 \
             -b localhost:8080 \
             -u varnish -g varnish \
             -S /etc/varnish/secret \
             -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"

This means that while Varnish is running, it will listen on port 6081 (this is the port where external requests will be made), its administration interface will listen on port 6082, and when a request from a user arrives, it will fetch the data from a backend running on port 8080 on the same machine. This is the part you need to change (or comment out).

Change it to this:

# Listen on port 80, administration on localhost:6082, and forward to
# one content server selected by the vcl file, based on the request.
# Use a 256MB memory based cache.
#
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"

We want to use a .vcl file to perform the additional configuration.

2) Edit /etc/varnish/default.vcl

# Marker to tell the VCL compiler that this VCL has been adapted to the
# new 4.0 format.
vcl 4.0;


backend default {
    .host = "127.0.0.1";
    .port = "3000";
    .connect_timeout = 10s;
    .first_byte_timeout = 15s;
    .between_bytes_timeout = 60s;
    .max_connections = 800;
}

sub vcl_recv {
    # Strip cookies from incoming requests; by default Varnish will not
    # cache requests that carry cookies.
    unset req.http.cookie;
}
sub vcl_backend_response {
    # Happens after we have read the response headers from the backend.
    # Here you can clean the response headers, removing silly Set-Cookie
    # headers and other mistakes your backend makes.

    # Set a 2-minute cache lifetime if the backend did not set one.
    # Important: you shouldn't rely on this; SET YOUR HEADERS in the backend.
    if (beresp.ttl <= 0s) {
        set beresp.ttl = 120s;
        set beresp.uncacheable = false;
    }
    return (deliver);
}

sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to send
    # the response to the client.
    #
    # You can do accounting or modify the final object here.
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}
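
Before restarting, you can check that the file compiles:

varnishd -C -f /etc/varnish/default.vcl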

We told Varnish to forward requests it has no cache for to localhost, on port 3000. vcl_backend_response will be called when the backend has responded with the content to be delivered. There we set a 120-second cache lifetime that is served with the headers to the client. However, this is something that should primarily be handled by the backend server; I will show you how this is done in a short while.

In ‘vcl_deliver’, I am setting an ‘X-Cache’ header that reports whether the response was a cache HIT or a MISS. Use this for debugging, and remove it when you go to production.

Restart the varnish server when you are done.

sudo service varnish restart
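
Then request the same URL twice and watch the X-Cache header flip from MISS to HIT (the URL below is a placeholder for one of your own endpoints):

curl -I http://localhost/your-endpoint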

Now we have a NodeJS server backend, and a Varnish caching server that points to it. There is one final touch: you need to tell your node server to return a ‘Cache-Control’ header, in order to tell Varnish and the clients that they should cache the response. How you do that depends on your server implementation; here's how to do it using express.js:

app.all('/*', function(req, res, next) {
    // Allow Varnish (and clients) to cache every response for 2 minutes.
    res.header('Cache-Control', 'public, max-age=120');
    next();
});

This sets the cache lifetime to 2 minutes. If the backend sends no such header, the vcl_backend_response snippet above forces a 120-second lifetime as a fallback, but the backend header remains the authoritative place to control caching.
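
You can also vary the lifetime per route. A minimal sketch, assuming a hypothetical /countries endpoint whose data rarely changes:

// Hypothetical endpoint with slow-changing data; cache it for an hour.
app.get('/countries', function(req, res) {
    res.header('Cache-Control', 'public, max-age=3600');
    res.json(['GR', 'DE', 'FR']); // placeholder payload
});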

3) Adding nginx for HTTPS support

Even though nginx is not strictly necessary here, it is a nice addition if you want to serve static content like videos and images (something NodeJS does very poorly), and especially if you want HTTPS.

Below is a sample nginx configuration file that serves content over HTTPS, using certificates from a specific location.

# nginx terminates TLS and proxies to Varnish, which listens on port 80
# (see step 1).
upstream node_server {
    server 127.0.0.1:80;
}

# Map the Origin header so it can be echoed back in a CORS header for
# subdomains of oramind.net (not used further in this sample).
map $http_origin $cors_header {
    default "";
    "~^https?://[^/]+\.oramind\.net(:[0-9]+)?$" "$http_origin";
}

# HTTPS server
#
server {
    listen 443 ssl;
    server_name localhost;

    root html;
    index index.html index.htm;

    ssl_certificate /etc/nginx/certs/ssl-unified.crt;
    ssl_certificate_key /etc/nginx/certs/ssl.key;

    ssl_session_timeout 5m;

    # SSLv3 is vulnerable to POODLE; enable only TLS.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "HIGH:!aNULL:!MD5:!3DES";
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://node_server;
        # Preserve the original host and client address for the backend.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

With this setup, your content will be available both over HTTP (directly from Varnish on port 80) and over HTTPS (through nginx on port 443).
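
If you also want nginx to serve the static content mentioned above directly, here is a minimal sketch to drop inside the server block; the /static/ prefix and the /var/www path are assumptions:

# Serve files under /static/ straight from disk, bypassing Varnish and node.
location /static/ {
    root /var/www;
    expires 7d; # let clients cache static assets for a week
}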

Where do we go from here

This was just a short tutorial on speeding up your web service using Varnish and nginx. I happen to use Node, but Varnish works everywhere, with any backend.

If you really want to improve scalability, you should consider the following additions to your setup.

  • Use memcached. Memcached is an in-memory cache, but it works slightly differently from Varnish. When your server runs a long query, it can store the results in memory, on a memcached server. This is especially handy when you have cron jobs that produce a certain set of results which users will later request: you can pre-fill memcached with those results, and forward requests through nginx to memcached if Varnish doesn't have them cached yet. (A minimal sketch follows this list.)
  • Set up multiple backend servers, with multiple nginx instances and one Varnish server on a separate machine that your users will use to request information.
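
As an illustration of the memcached idea, here is a minimal sketch using the memcached npm client; the 'articles' key, the 60-second lifetime and runLongQuery are placeholder assumptions:

var Memcached = require('memcached');
var memcached = new Memcached('127.0.0.1:11211');

function getArticles(callback) {
    memcached.get('articles', function(err, cached) {
        if (!err && cached) { return callback(null, cached); } // cache hit
        runLongQuery(function(err, rows) {                     // cache miss
            if (err) { return callback(err); }
            // Store the result for 60 seconds so the next caller skips the query.
            memcached.set('articles', rows, 60, function() {});
            callback(null, rows);
        });
    });
}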

 

I hope I helped people set up a simple Varnish server, and avoid the mistakes I made while trying.