We are all living in a technological wasteland.


Slitaz project – Expansion – Part 1 – Varnish: HTTP cache

So, it’s been a while since I created some content about Slitaz. Remember our first server farm?

This additional part expands the existing Slitaz network with a Varnish HTTP cache.

So where do we place this HTTP cache? There are two strategies for determining its location:
a) Place the cache(s) up front, so a cache hit will not stress the load balancers.
b) Place the cache(s) after the load balancers, so each cache lookup is balanced too.

It simply depends on which component is the most powerful one. You might consider multiple caches for redundancy and performance, but I’ll be making just one and placing it up front. (See image above.)

Let’s get started, shall we?
This part continues from the base system built in Slitaz project – Part 1 – Building a base system.
To start, let’s edit the hardware of our base system: add a disk of 1.5 GB (this will be used for caching) and increase the RAM to 128 MB. The extra RAM is needed because the poor little thing can’t seem to manage Varnish on 64 MB; it keeps throwing memory errors when starting the service at boot.

For this machine I’ll assign the IP address. Remember the ip script? Let’s make a call:

/home/base/ip.sh httpcache 192.168.1 10 1

Now let’s format the newly added 1.5 GB disk.

fdisk /dev/hdb

For convenience: press o, n, p, 1, enter, enter, w
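For reference, those keystrokes mean: o creates a new empty DOS partition table, n/p/1 creates primary partition 1, the two enters accept the default first and last sectors, and w writes the table. They can also be expressed non-interactively, though piping into fdisk is fragile, so treat this as a sketch and prefer the interactive session:

```shell
# The same fdisk keystrokes as a shell string: o = new DOS table,
# n, p, 1 = new primary partition 1, two blank lines accept the
# default start/end, w = write.
FDISK_KEYS='o
n
p
1


w
'
# On the real machine: printf '%s' "$FDISK_KEYS" | fdisk /dev/hdb
printf '%s' "$FDISK_KEYS"
```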

Create a filesystem on the new partition and a mount point for the cache.

mkfs.ext2 -b 4096 /dev/hdb1
mkdir /mnt/data

And edit fstab to add this disk to be mounted.

nano /etc/fstab
/dev/hdb1	/mnt/data	ext2	defaults	0	0
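If you want to sanity-check the entry before rebooting, a sketch like this (using a scratch copy so the real /etc/fstab stays untouched) verifies the line is well-formed; note the leading slash on /dev/hdb1:

```shell
# Append the mount entry to a scratch file and check it; on the real
# system this line goes into /etc/fstab and `mount -a` picks it up.
FSTAB=$(mktemp)
printf '/dev/hdb1\t/mnt/data\text2\tdefaults\t0\t0\n' >> "$FSTAB"
grep -c '^/dev/hdb1' "$FSTAB"   # prints 1 if the entry is present
```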

Once this is done you need to reconfigure the Lighttpd server, which still runs on port 80. Varnish will need that port for serving cached content.

nano /etc/lighttpd/lighttpd.conf
server.port = 81

Once this is done install the required dependencies and the toolchain to build Varnish.

tazpkg get-install slitaz-toolchain
tazpkg get-install pkg-config
tazpkg get-install libedit
tazpkg get-install libedit-dev
tazpkg get-install readline-dev
tazpkg get-install readline
tazpkg get-install ncurses-dev
tazpkg get-install ncurses-extra
tazpkg clean-cache

Now download Varnish and build it. Once the build is completed, create symbolic links in ‘/usr/bin’, as Varnish will be installed into ‘/usr/local/bin’.

cd /home/base/
wget http://repo.varnish-cache.org/source/varnish-3.0.4.tar.gz
tar xzvf /home/base/varnish-3.0.4.tar.gz
cd /home/base/varnish-3.0.4
./configure
make
make install
cd /
rm -rf /home/base/varnish-3.0.4
rm /home/base/varnish-3.0.4.tar.gz
ln -s /usr/local/bin/varnishadm /usr/bin/varnishadm 
ln -s /usr/local/bin/varnishlog /usr/bin/varnishlog
ln -s /usr/local/bin/varnishreplay /usr/bin/varnishreplay
ln -s /usr/local/bin/varnishstat /usr/bin/varnishstat
ln -s /usr/local/bin/varnishtop /usr/bin/varnishtop
ln -s /usr/local/bin/varnishhist /usr/bin/varnishhist
ln -s /usr/local/bin/varnishncsa /usr/bin/varnishncsa
ln -s /usr/local/bin/varnishsizes /usr/bin/varnishsizes
ln -s /usr/local/bin/varnishtest /usr/bin/varnishtest
ln -s /usr/local/sbin/varnishd /usr/sbin/varnishd
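The nine ln -s calls above can also be written as a loop. In this sketch the links go into a scratch directory so it can be dry-run safely; substitute /usr/bin (and /usr/sbin for varnishd) on the real machine:

```shell
# Create the Varnish tool symlinks in one loop. DEST is a scratch
# directory here; on the real system it would be /usr/bin.
DEST=$(mktemp -d)
for tool in varnishadm varnishlog varnishreplay varnishstat varnishtop \
            varnishhist varnishncsa varnishsizes varnishtest; do
  ln -s "/usr/local/bin/$tool" "$DEST/$tool"
done
ln -s /usr/local/sbin/varnishd "$DEST/varnishd"   # /usr/sbin on the real box
ls "$DEST" | wc -l   # 10 links in total
```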

Now let’s create the configuration directory and config files for Varnish. Varnish requires two files: a secret file (for CLI access) and a configuration file. The configuration file is quite complex and somewhat resembles C. In this example I just used a quick example script found on the web. The important properties are .host and .port, which redirect traffic to the virtual load balancer address. Tip: with Varnish it’s also possible to load balance the servers without the need for an external balancer; just google ‘varnish multiple sites config’. However, that is outside the scope of this example.

mkdir /etc/varnish
nano /etc/varnish/default.vcl

The script:

backend default {
  .host = "";    // the load balancer’s virtual address goes here
  .port = "80";
}

sub vcl_recv {
  if (req.request != "GET" &&
      req.request != "HEAD" &&
      req.request != "PUT" &&
      req.request != "POST" &&
      req.request != "TRACE" &&
      req.request != "OPTIONS" &&
      req.request != "DELETE") {
    /* Non-RFC2616 or CONNECT which is weird. */
    return (pipe);
  }
  if (req.request != "GET" && req.request != "HEAD") {
    /* We only deal with GET and HEAD by default */
    return (pass);
  }
  // Remove has_js and Google Analytics cookies.
  set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|__utm*|has_js|_chartbeat2)=[^;]*", "");
  // To users: if you have additional cookies being set by your system (e.g.
  // from a javascript analytics file or similar) you will need to add VCL
  // at this point to strip these cookies from the req object, otherwise
  // Varnish will not cache the response. This is safe for cookies that your
  // backend (Drupal) doesn't process.
  // Again, the common example is an analytics or other Javascript add-on.
  // You should do this here, before the other cookie stuff, or by adding
  // to the regular-expression above.
  // Remove a ";" prefix, if present.
  set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
  // Remove empty cookies.
  if (req.http.Cookie ~ "^\s*$") {
    unset req.http.Cookie;
  }
  if (req.http.Authorization || req.http.Cookie) {
    /* Not cacheable by default */
    return (pass);
  }
  // Skip the Varnish cache for install, update, and cron
  if (req.url ~ "install\.php|update\.php|cron\.php") {
    return (pass);
  }
  // Normalize the Accept-Encoding header
  // as per: http://varnish-cache.org/wiki/FAQ/Compression
  if (req.http.Accept-Encoding) {
    if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
      # No point in compressing these
      remove req.http.Accept-Encoding;
    } elsif (req.http.Accept-Encoding ~ "gzip") {
      set req.http.Accept-Encoding = "gzip";
    } else {
      # Unknown or deflate algorithm
      remove req.http.Accept-Encoding;
    }
  }
  // Let's have a little grace
  set req.grace = 30s;
  return (lookup);
}

// Strip any cookies before an image/js/css is inserted into cache.
sub vcl_fetch {
  if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
    // For Varnish 2.0 or earlier, replace beresp with obj:
    // unset obj.http.set-cookie;
    unset beresp.http.set-cookie;
  }
}

sub vcl_deliver {
  if (obj.hits > 0) {
    set resp.http.X-Cache = "HIT";
    // set resp.http.X-Cache-Hits = obj.hits;
  } else {
    set resp.http.X-Cache = "MISS";
  }
}

sub vcl_error {
  // Let's deliver a friendlier error page.
  // You can customize this as you wish.
  set obj.http.Content-Type = "text/html; charset=utf-8";
  synthetic {"
  <?xml version="1.0" encoding="utf-8"?>
  <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
  <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
      <title>"} + obj.status + " " + obj.response + {"</title>
      <style type="text/css">
      #page {width: 400px; padding: 10px; margin: 20px auto; border: 1px solid black; background-color: #FFF;}
      p {margin-left:20px;}
      body {background-color: #DDD; margin: auto;}
      </style>
    </head>
    <body>
    <div id="page">
    <h1>Page Could Not Be Loaded</h1>
    <p>We're very sorry, but the page could not be loaded properly. This should be fixed very soon, and we apologize for any inconvenience.</p>
    <hr />
    <h4>Debug Info:</h4>
    <p>Status: "} + obj.status + {" Response: "} + obj.response + {" XID: "} + req.xid + {"</p></div>
    </body>
  </html>
  "};
  return (deliver);
}
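As for the tip about letting Varnish balance the backends itself: in Varnish 3 that is done with a director. A minimal hedged sketch, with made-up backend addresses standing in for your real web servers, would look roughly like this:

```vcl
# Hypothetical Varnish 3 round-robin director; the .host values
# are placeholders for your actual web server addresses.
backend web1 { .host = "192.168.1.21"; .port = "80"; }
backend web2 { .host = "192.168.1.22"; .port = "80"; }

director balancer round-robin {
  { .backend = web1; }
  { .backend = web2; }
}

sub vcl_recv {
  set req.backend = balancer;
}
```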

Once this is done, create the secret file.

uuidgen > /etc/varnish/secret && chmod 0600 /etc/varnish/secret
mkdir /mnt/data/varnish/

And let’s add the properties for our daemon to the daemons.conf file. In this case the options define a 1 GB ‘varnish_storage.bin’ file on our added hard drive, and Varnish listens on all interfaces on port 80. A CLI is also enabled on the localhost interface, port 6082 (just an example).

nano /etc/daemons.conf
#Varnish HTTP cache options.
VARNISH_OPTIONS="-a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s file,/mnt/data/varnish/$INSTANCE/varnish_storage.bin,1G"
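For reference, the individual varnishd flags in that line do the following:

```
# -a :80                        listen on all interfaces, port 80
# -T localhost:6082             management CLI on localhost, port 6082
# -f /etc/varnish/default.vcl   the VCL configuration created above
# -S /etc/varnish/secret        shared secret for CLI authentication
# -s file,/mnt/data/varnish/$INSTANCE/varnish_storage.bin,1G
#                               a 1 GB file-backed cache on the data disk
```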

Now let’s create the Varnish daemon service.

nano /etc/init.d/varnishd
#!/bin/sh
# /etc/init.d/varnishd: Start, stop and restart the Varnish HTTP cache on
# SliTaz, at boot time or with the command line. Daemon options are
# configured with /etc/daemons.conf
. /etc/init.d/rc.functions
. /etc/daemons.conf

NAME=Varnish
DESC="Varnish HTTP cache"
DAEMON=/usr/sbin/varnishd
OPTIONS=$VARNISH_OPTIONS
PIDFILE=/var/run/varnishd.pid

case "$1" in
  start)
    if active_pidfile $PIDFILE varnish ; then
      echo "$NAME already running."
      exit 1
    fi
    echo -n "Starting $DESC: $NAME... "
    $DAEMON -P $PIDFILE $OPTIONS
    status
    ;;
  stop)
    if ! active_pidfile $PIDFILE varnish ; then
      echo "$NAME is not running."
      exit 1
    fi
    echo -n "Stopping $DESC: $NAME... "
    kill `cat $PIDFILE`
    rm $PIDFILE
    status
    ;;
  restart)
    if ! active_pidfile $PIDFILE varnish ; then
      echo "$NAME is not running."
      exit 1
    fi
    echo -n "Restarting $DESC: $NAME... "
    kill `cat $PIDFILE`
    rm $PIDFILE
    sleep 2
    $DAEMON -P $PIDFILE $OPTIONS
    status
    ;;
  *)
    echo ""
    echo -e "\033[1mUsage:\033[0m /etc/init.d/`basename $0` [start|stop|restart]"
    echo ""
    exit 1
    ;;
esac
exit 0

Assign execute rights to the script.

chmod +x /etc/init.d/varnishd

And add it to the daemons on startup.

nano /etc/rcS.conf
RUN_DAEMONS="dbus hald slim firewall dropbear varnishd lighttpd"

Now reboot the machine and you should see Varnish start at boot. A ps aux should show it running.

ps aux | grep -i varnish

If it’s not running try starting the application manually with the -d option (short for debug). This will show the exact cause of the failure.

If it’s running you should see the ‘varnish_storage.bin’ being created in ‘/mnt/data/varnish/’.

cd /mnt/data/varnish/
ls -l
root@httpcache:/mnt/data/varnish# ls -l
total 0
-rwxrwxrwx    1 root     root     1073741824 Jun 19 21:26 varnish_storage.bin

To test if your setup is functional, just point a web browser at the cache and check the response headers. I’m using Firebug (a Firefox extension) to view the headers in this case; they show the added Varnish values.
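The same check can be scripted on the command line. The headers below are a simulated example of a cache hit; on the real setup you would feed the output of `curl -sI` against the cache’s address into the same grep (X-Cache is the header set in vcl_deliver above):

```shell
# Simulated response headers standing in for: curl -sI http://<cache address>/
HEADERS='HTTP/1.1 200 OK
Content-Type: text/html
Via: 1.1 varnish
X-Cache: HIT'
printf '%s\n' "$HEADERS" | grep -i '^X-Cache'   # prints: X-Cache: HIT
```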

So that’s about it. Easy, no? And as always, here are the download files: httpcache.7z (38,3 MB)