Slitaz project – Part 5 – Web cluster nodes

So now we have our file storage and our database. Time to start on the front-end layer: the web cluster nodes. These nodes will run Lighttpd, which we previously installed.

For our nodes I will use the following IP scheme:

webnode1 192.168.1.211
webnode2 192.168.1.212
webnode3 192.168.1.213
webnode4 192.168.1.214

We start by preparing webnode1, which is a copy of our base image we made in part 1.

/home/base/ip.sh webnode1 192.168.1 211 1
reboot
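
After the reboot it doesn't hurt to confirm the new address stuck (ifconfig is part of Busybox on Slitaz):

ifconfig eth0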

Install GlusterFS on this webnode; we need it for accessing the data cluster. I am not going to explain this again. (For more info see: Slitaz project – Part 2 – Cluster storage nodes (GlusterFS))

tazpkg get-install flex 
tazpkg get-install python
tazpkg get-install readline-dev
tazpkg get-install mpc-library
tazpkg get-install elfutils
tazpkg get-install openssl-dev
tazpkg get-install slitaz-toolchain
 
tazpkg clean-cache
 
cd /home/base
wget http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.0/glusterfs-3.3.0.tar.gz
tar xzvf /home/base/glusterfs-3.3.0.tar.gz
cd /home/base/glusterfs-3.3.0
 
./configure
make
make install
 
cd /
rm -rf /home/base/glusterfs-3.3.0
rm /home/base/glusterfs-3.3.0.tar.gz
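
A quick sanity check that the build landed where we expect it:

/usr/local/sbin/glusterfs --version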

Let’s create a mount point at ‘/var/domains/web’ for the cluster. This is the directory on which the cluster will be mounted, and it will double as the virtual domain directory.

mkdir -p /var/domains/web
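
If you want to try the mount by hand before wiring it into the startup script, the client can be run directly (192.168.1.221 is the storage node from part 2):

/usr/local/sbin/glusterfs -s 192.168.1.221 --volfile-id slitaz-volume /var/domains/web
mount | grep /var/domains/web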

Now add the GlusterFS mount to the startup script so the cluster is mounted at boot. Append the following at the end of the file:

nano /etc/init.d/local.sh
echo "Starting network storage... "
/usr/local/sbin/glusterfs -s 192.168.1.221 --volfile-id slitaz-volume /var/domains/web
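
Reboot, and a quick df will tell you whether the volume came up (df is part of Busybox on Slitaz):

df -h /var/domains/web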

Once the file system mounts, it's time to install the MySQL dependencies for connecting with the database. I want database support for my web server.

tazpkg get-install php-mysql
tazpkg get-install php-mysqli 
tazpkg clean-cache
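
If the Slitaz php package put the command-line binary on the box, a one-liner confirms the extensions are registered (the module names are an assumption; check the full list yourself if grep comes up empty):

# both the mysql and mysqli modules should show up
php -m | grep -i mysql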

Now let's create a new virtual host on port 80.

nano /etc/lighttpd/vhosts.conf
$SERVER["socket"] == ":80" {
    server.document-root = "/var/domains/web"
    server.errorlog = "/var/domains/web-error.log"
}
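
One caveat: lighttpd only reads vhosts.conf if the main config pulls it in. The Slitaz package may already do this, so check /etc/lighttpd/lighttpd.conf first; if the line is missing, add it:

include "/etc/lighttpd/vhosts.conf"

Then restart the daemon so it picks up the new virtual host:

/etc/init.d/lighttpd restart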

Almost done. This is a little test script I've made to test the functionality with the backend servers. Just put it on the cluster and each node will show its own name and whether it has a connection with the database. This will be the temporary index page.

nano /var/domains/web/index.php
<?php
echo "<h1>Welcome,</h1><br>";
echo "You are now connected with server: ".exec('hostname');
echo "@".$_SERVER['SERVER_ADDR']."<br>";
echo "Connection with the database is: ";
$link = mysql_connect("192.168.1.231","web","web");
if (!$link) {
    die("<font color=\"#FF0000\">inactive</font>");
}
echo "<font color=\"00FF00\">active</font>";
?>
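
Because the script lives on the shared GlusterFS volume, writing it once on webnode1 makes it appear on every node. You can fetch the page without a browser; Busybox wget does the job:

wget -qO - http://192.168.1.211/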

Now clone the machine three times and change the IP of each node to reflect the configuration discussed at the beginning of the post. (Tip: start with the last machine and work your way up to the first to avoid IP conflicts with 192.168.1.211.)

/home/base/ip.sh webnode4 192.168.1 214 1
/home/base/ip.sh webnode3 192.168.1 213 1
/home/base/ip.sh webnode2 192.168.1 212 1
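
A small loop from any machine that can reach the nodes fetches the test page from each one in turn (assuming the index.php from above is in place):

# each response should report a different hostname
for i in 211 212 213 214; do
  wget -qO - http://192.168.1.$i/
done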

Voilà, all servers responding as they should.

And as usual, the VMware images: webnodes.7z (159 MB)

Note: this archive doesn't contain the test PHP script, as that lives on the cluster volume. You will need to recreate this file yourself!
