This additional part expands the existing Slitaz network with a Varnish HTTP cache.
So where should this HTTP cache be placed? There are two strategies for determining its location:
a) Place the cache(s) up front so a cache hit will not stress the load balancers.
b) Place the cache(s) after the load balancers, so each cache search will be balanced too.
The right choice depends on which component is the most powerful one. You might consider multiple caches for redundancy and performance, but I’ll be building just one and placing it up front. (See image above.)
Let’s get started, shall we?
This part continues from the base system built in Slitaz project – Part 1 – Building a base system.
As a starter, let’s edit the hardware of our base system: add a disk of 1.5 GB (this will be used for caching) and increase the RAM to 128 MB. The extra RAM is needed because I noticed the poor little thing can’t manage Varnish on 64 MB; it keeps throwing memory errors when starting the service at boot.
For this machine I’ll assign the IP 192.168.1.10. So remember the IP script? Let’s make a call:
/home/base/ip.sh httpcache 192.168.1 10 1 reboot
Now let’s partition and format the newly added 1.5 GB disk. The key presses below assume fdisk running against the new disk:

fdisk /dev/hdb

For convenience: press o, n, p, 1, enter, enter, w

Then create the filesystem:

mkfs.ext2 -b 4096 /dev/hdb1

Create a mount point for the cache.

mkdir /mnt/data
And edit fstab to add this disk to be mounted.
/dev/hdb1 /mnt/data ext2 defaults 0 0
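You can verify the fstab entry without rebooting. A quick sketch, assuming the /dev/hdb1 entry above:

```shell
# Mount everything listed in fstab, then confirm the data disk is attached.
mount -a
grep /mnt/data /etc/fstab
df -h /mnt/data
```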
Once this is done you need to change the Lighttpd server, which still runs on port 80; Varnish will need that port for serving the cache. Edit the Lighttpd configuration and change the port:
server.port = 81
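After changing the port, restart Lighttpd and check that it now listens on 81, leaving 80 free for Varnish. A quick check (the init script path follows the usual SliTaz layout; netstat flags may vary with your BusyBox build):

```shell
# Restart lighttpd so the new port takes effect, then list listening sockets.
/etc/init.d/lighttpd restart
netstat -tln | grep ':81'
```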
Once this is done install the required dependencies and the toolchain to build Varnish.
tazpkg get-install slitaz-toolchain
tazpkg get-install pkg-config
tazpkg get-install libedit
tazpkg get-install libedit-dev
tazpkg get-install readline-dev
tazpkg get-install readline
tazpkg get-install ncurses-dev
tazpkg get-install ncurses-extra
tazpkg clean-cache
Now download Varnish and build it. Once the build completes, create symbolic links in ‘/usr/bin’, since Varnish installs into ‘/usr/local/bin’.
cd /home/base/
wget http://repo.varnish-cache.org/source/varnish-3.0.4.tar.gz
tar xzvf /home/base/varnish-3.0.4.tar.gz
cd /home/base/varnish-3.0.4
./configure
make
make install
cd /
rm -rf /home/base/varnish-3.0.4
rm /home/base/varnish-3.0.4.tar.gz
ln -s /usr/local/bin/varnishadm /usr/bin/varnishadm
ln -s /usr/local/bin/varnishlog /usr/bin/varnishlog
ln -s /usr/local/bin/varnishreplay /usr/bin/varnishreplay
ln -s /usr/local/bin/varnishstat /usr/bin/varnishstat
ln -s /usr/local/bin/varnishtop /usr/bin/varnishtop
ln -s /usr/local/bin/varnishhist /usr/bin/varnishhist
ln -s /usr/local/bin/varnishncsa /usr/bin/varnishncsa
ln -s /usr/local/bin/varnishsizes /usr/bin/varnishsizes
ln -s /usr/local/bin/varnishtest /usr/bin/varnishtest
ln -s /usr/local/sbin/varnishd /usr/sbin/varnishd
Now let’s create the configuration directory and config files for Varnish. Varnish requires two files: a secret file (for CLI access) and a configuration file. The configuration file is quite complex and somewhat resembles C. In this example I just used a quick example script found on the web. The important parts are the .host and .port properties, which redirect traffic to the virtual load balancer address. Tip: Varnish can also load-balance the servers itself, without requiring an external balancer; just google for ‘varnish multiple sites config’. However, that is outside the scope of this example.
mkdir /etc/varnish
nano /etc/varnish/default.vcl
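For reference, a minimal default.vcl in Varnish 3.0 syntax. The backend address below is a hypothetical placeholder for the virtual load balancer from the earlier parts; substitute your own:

```vcl
# Hypothetical backend: point .host/.port at the virtual load balancer address.
backend default {
    .host = "192.168.1.20";
    .port = "80";
}
```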
Once this is done, create the secret file.
uuidgen > /etc/varnish/secret && chmod 0600 /etc/varnish/secret
And let’s add the properties for our daemon to the daemons.conf file. In this case the options define a 1 GB ‘varnish_storage.bin’ file on the added data disk, with Varnish listening on all interfaces on port 80. A CLI is also enabled on the localhost interface, port 6082 (just an example).
#Varnish HTTP cache options.
VARNISH_OPTIONS="-a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s file,/mnt/data/varnish/$INSTANCE/varnish_storage.bin,1G"
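Note that the file storage backend creates varnish_storage.bin itself, but not its parent directories. If startup fails with a storage error, make sure the directory on the data disk exists first (a sketch, assuming $INSTANCE expands to empty as in the listing further down):

```shell
# Create the storage directory on the mounted data disk.
mkdir -p /mnt/data/varnish
```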
Now let’s create the Varnish daemon service.
#!/bin/sh
# /etc/init.d/varnishd: Start, stop and restart the Varnish HTTP cache on
# SliTaz, at boot time or with the command line. Daemon options are
# configured with /etc/daemons.conf
#
. /etc/init.d/rc.functions
. /etc/daemons.conf

NAME=Varnish
DESC="Varnish HTTP cache"
DAEMON=/usr/sbin/varnishd
OPTIONS=$VARNISH_OPTIONS
PIDFILE=/var/run/varnishd.pid

case "$1" in
  start)
    if active_pidfile $PIDFILE varnish ; then
      echo "$NAME already running."
      exit 1
    fi
    echo -n "Starting $DESC: $NAME... "
    $DAEMON $OPTIONS
    status
    ;;
  stop)
    if ! active_pidfile $PIDFILE varnish ; then
      echo "$NAME is not running."
      exit 1
    fi
    echo -n "Stopping $DESC: $NAME... "
    kill `cat $PIDFILE`
    rm $PIDFILE
    status
    ;;
  restart)
    if ! active_pidfile $PIDFILE varnish ; then
      echo "$NAME is not running."
      exit 1
    fi
    echo -n "Restarting $DESC: $NAME... "
    kill `cat $PIDFILE`
    rm $PIDFILE
    sleep 2
    $DAEMON $OPTIONS
    status
    ;;
  *)
    echo ""
    echo -e "\033[1mUsage:\033[0m /etc/init.d/`basename $0` [start|stop|restart]"
    echo ""
    exit 1
    ;;
esac
exit 0
Assign execute rights to the script.
chmod +x /etc/init.d/varnishd
And add it to the daemons on startup.
RUN_DAEMONS="dbus hald slim firewall dropbear varnishd lighttpd"
Now reboot the machine and you should see Varnish starting with it. Running ps aux should show Varnish:
ps aux | grep -i varnish
If it’s not running, try starting the application manually with the -d option (short for debug). This will show the exact cause of the failure.
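For example, you can reuse the exact options from daemons.conf for a foreground debug run (a sketch; -d keeps varnishd attached to the terminal so errors are printed directly):

```shell
# Source the daemon options and run varnishd in debug mode.
. /etc/daemons.conf
/usr/sbin/varnishd -d $VARNISH_OPTIONS
```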
If it’s running you should see the ‘varnish_storage.bin’ being created in ‘/mnt/data/varnish/’.
cd /mnt/data/varnish/
ls -l
root@httpcache:/mnt/data/varnish# ls -l
total 0
-rwxrwxrwx 1 root root 1073741824 Jun 19 21:26 varnish_storage.bin
To test whether your setup is functional, just point a web browser to http://192.168.1.10 and check the response headers. I’m using Firebug (a Firefox extension) to view the headers in this case; they show values added by Varnish, such as Via and X-Varnish.
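If you prefer the command line over a browser extension, curl can show the same headers (assuming curl is available on a machine in the same network):

```shell
# Fetch only the response headers and filter for the ones Varnish adds.
curl -sI http://192.168.1.10/ | grep -i -E '^(Via|X-Varnish)'
```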
So that’s about it. Easy, no? And as always, here are the download files: httpcache.7z (38,3 MB)