Slitaz project – Expansion – Part 2 – FTP Server

This FTP machine will get the ip 192.168.1.241. Still remember the ip script? Let’s make a call and reboot the server.

/home/base/ip.sh ftp 192.168.1 241 1
reboot

Now install the server and clean the cache.

tazpkg get-install pure-ftpd
tazpkg clean-cache

Once pure-ftpd is installed you will need to create a dedicated system group and user for it. In this case I will use ftpgroup and ftpuser.

addgroup -S ftpgroup
adduser -SH ftpuser -G ftpgroup

Tip: to verify that the accounts exist, cat the passwd and group files. If they mention ‘ftpuser’ and ‘ftpgroup’, everything is fine.

cat /etc/passwd 
cat /etc/group
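Or, if you'd rather not scroll through the whole files, a quick grep does the same job:

grep ftpuser /etc/passwd
grep ftpgroup /etc/group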

So once the unix user is made, create the home directory for the virtual user (contributor in this case) in ‘/home/ftpusers’ and change the owner of the contributor directory to ftpuser.

mkdir /home/ftpusers
mkdir /home/ftpusers/contributor
chown ftpuser -R /home/ftpusers/contributor

Now add a new contributor user to pure-ftpd and map this virtual FTP user to the unix user ftpuser.

pure-pw useradd contributor -u ftpuser -d /home/ftpusers/contributor
pure-pw mkdb

To verify that the user exists you can run:

pure-pw show contributor

You can see that this user operates under the ftpuser/ftpgroup.

root@base:/# pure-pw show contributor
 
Login              : contributor
Password           : $1$yljZ5iF0$RSfbAJ4ZDtyAQtjOYSKwg.
UID                : 100 (ftpuser)
GID                : 101 (ftpgroup)
Directory          : /home/ftpusers/contributor/./
Full name          :
Download bandwidth : 0 Kb (unlimited)
Upload   bandwidth : 0 Kb (unlimited)
Max files          : 0 (unlimited)
Max size           : 0 Mb (unlimited)
Ratio              : 0:0 (unlimited:unlimited)
Allowed local  IPs :
Denied  local  IPs :
Allowed client IPs :
Denied  client IPs :
Time restrictions  : 0000-0000 (unlimited)
Max sim sessions   : 0 (unlimited)

Now that the user is done, it's time for the startup configuration. First tell the system to load the service during startup. This can be done in the rcS.conf file.

nano /etc/rcS.conf
RUN_DAEMONS="dbus hald slim firewall dropbear lighttpd pure-ftpd"

Once this is done, edit the daemons file to add the options that will be passed to the service.

nano /etc/daemons.conf
# Pure FTPd
PUREFTPD_OPTIONS="-4 -H -A -B -j -l puredb:/etc/pureftpd.pdb"

Once this is done, edit the /etc/init.d/pure-ftpd file. This is the service start file. Apparently there is a fuckup in this file: the pure-ftpd service isn't set up like the others. It doesn't read daemon options from daemons.conf; everything is hard-coded in the startup script. And I don't like that >:(. So let's change it.

nano /etc/init.d/pure-ftpd

Find this line:

OPTIONS="-4 -H -A -B"

and replace it with:

OPTIONS=$PUREFTPD_OPTIONS

Voila, the FTP server is done. Now reboot the machine and check whether the service is running after the reboot:

ps | grep -i pure-ftpd
root@base:~# ps | grep -i pure-ftpd
 1392 root       0:00 pure-ftpd (SERVER)
 1436 root       0:00 grep -i pure-ftpd

If everything is running you should be able to connect to ftp://contributor@192.168.1.241/ with Windows Explorer (or FileZilla, or whatever client you are using).
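If you prefer testing from the command line and have curl on your workstation (an assumption; any FTP client will do), a directory listing makes a quick smoke test. Replace PASSWORD with whatever you entered for pure-pw useradd:

curl --list-only ftp://contributor:PASSWORD@192.168.1.241/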

That’s it, below is the download link for this small tutorial:
ftp.7z (12,4 MB)


Slitaz project – Expansion – Part 1 – Varnish: HTTP cache

So, it’s been a while since I created some content about Slitaz. Remember our first server farm?

This additional part expands the existing Slitaz network with a Varnish HTTP cache.

So where to place this HTTP cache? To determine the location of our cache there are two strategies:
a) Place the cache(s) up front so a cache hit will not stress the load balancers.
b) Place the cache(s) after the load balancers, so each cache search will be balanced too.

It just depends on which component is the most powerful one. You might consider multiple caches for redundancy and performance reasons, but I’ll be making just one and placing it up front. (See image above.)

Let’s get started shall we?
This part continues from the base system built in Slitaz project – Part 1 – Building a base system.
As a starter let’s edit the hardware of our base system. Add a disk of 1.5 GB (this will be used for caching) and increase the RAM to 128 MB. This is needed because I noticed that the poor little thing can’t seem to manage Varnish on 64 MB of RAM; it keeps throwing memory errors when starting the service at boot.

For this machine I’ll assign the ip 192.168.1.10. So remember the ip script? Let’s make a call:

/home/base/ip.sh httpcache 192.168.1 10 1
reboot

Now let’s format the newly added 1.5 GB disk.

fdisk /dev/hdb

For convenience: press o, n, p, 1, enter, enter, w
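If you prefer to script the partitioning instead of typing the keystrokes, piping them into fdisk usually works as well (a sketch; test it on a throwaway disk first):

printf 'o\nn\np\n1\n\n\nw\n' | fdisk /dev/hdb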

Format the new partition with ext2 and create a mount point for the cache.

mkfs.ext2 -b 4096 /dev/hdb1
mkdir /mnt/data

And edit fstab so this disk gets mounted at boot.

nano /etc/fstab
/dev/hdb1	/mnt/data	ext2	defaults	0	0

Once this is done you need to move the Lighttpd server, which still runs on port 80. We will need that port for serving from our cache.

nano /etc/lighttpd/lighttpd.conf
server.port = 81

Once this is done install the required dependencies and the toolchain to build Varnish.

tazpkg get-install slitaz-toolchain
tazpkg get-install pkg-config
tazpkg get-install libedit
tazpkg get-install libedit-dev
tazpkg get-install readline-dev
tazpkg get-install readline
tazpkg get-install ncurses-dev
tazpkg get-install ncurses-extra
 
tazpkg clean-cache

Now download Varnish and build it. Once the build is completed, also create symbolic links in ‘/usr/bin’ (and ‘/usr/sbin’ for varnishd), as our Varnish will be installed under ‘/usr/local’.

cd /home/base/
wget http://repo.varnish-cache.org/source/varnish-3.0.4.tar.gz
tar xzvf /home/base/varnish-3.0.4.tar.gz
cd /home/base/varnish-3.0.4
 
./configure
make
make install
 
cd /
rm -rf /home/base/varnish-3.0.4
rm /home/base/varnish-3.0.4.tar.gz
 
ln -s /usr/local/bin/varnishadm /usr/bin/varnishadm 
ln -s /usr/local/bin/varnishlog /usr/bin/varnishlog
ln -s /usr/local/bin/varnishreplay /usr/bin/varnishreplay
ln -s /usr/local/bin/varnishstat /usr/bin/varnishstat
ln -s /usr/local/bin/varnishtop /usr/bin/varnishtop
ln -s /usr/local/bin/varnishhist /usr/bin/varnishhist
ln -s /usr/local/bin/varnishncsa /usr/bin/varnishncsa
ln -s /usr/local/bin/varnishsizes /usr/bin/varnishsizes
ln -s /usr/local/bin/varnishtest /usr/bin/varnishtest
ln -s /usr/local/sbin/varnishd /usr/sbin/varnishd

Now let’s create the configuration directory and config files for Varnish. Varnish requires two files: a secret file (for CLI access) and a configuration file. The configuration file is quite complex and kind of resembles C. In this example I just used a quick example script found on the web. The important parts are the .host and .port properties; these point the cache to the virtual load balancer address. Tip: with Varnish it’s also possible to load balance the servers without needing an external load balancer. Just google for ‘varnish multiple sites config’. However this is not in the scope of this example.

mkdir /etc/varnish
nano /etc/varnish/default.vcl

The script:

backend default {
  .host = "192.168.1.200";
  .port = "80";
}
 
sub vcl_recv {
  if (req.request != "GET" &&
    req.request != "HEAD" &&
    req.request != "PUT" &&
    req.request != "POST" &&
    req.request != "TRACE" &&
    req.request != "OPTIONS" &&
    req.request != "DELETE") {
      /* Non-RFC2616 or CONNECT which is weird. */
      return (pipe);
  }
 
  if (req.request != "GET" && req.request != "HEAD") {
    /* We only deal with GET and HEAD by default */
    return (pass);
  }
 
  // Remove has_js and Google Analytics cookies.
  set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|__utm*|has_js|_chartbeat2)=[^;]*", "");
 
 
  // To users: if you have additional cookies being set by your system (e.g.
  // from a javascript analytics file or similar) you will need to add VCL
  // at this point to strip these cookies from the req object, otherwise
  // Varnish will not cache the response. This is safe for cookies that your
  // backend (Drupal) doesn't process.
  //
  // Again, the common example is an analytics or other Javascript add-on.
  // You should do this here, before the other cookie stuff, or by adding
  // to the regular-expression above.
 
 
  // Remove a ";" prefix, if present.
  set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
  // Remove empty cookies.
  if (req.http.Cookie ~ "^\s*$") {
    unset req.http.Cookie;
  }
 
  if (req.http.Authorization || req.http.Cookie) {
    /* Not cacheable by default */
    return (pass);
  }
 
  // Skip the Varnish cache for install, update, and cron
  if (req.url ~ "install\.php|update\.php|cron\.php") {
    return (pass);
  }
 
  // Normalize the Accept-Encoding header
  // as per: http://varnish-cache.org/wiki/FAQ/Compression
 
  if (req.http.Accept-Encoding) {
    if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
      # No point in compressing these
      remove req.http.Accept-Encoding;
    }
    elsif (req.http.Accept-Encoding ~ "gzip") {
      set req.http.Accept-Encoding = "gzip";
    }
    else {
      # Unknown or deflate algorithm
      remove req.http.Accept-Encoding;
    }
  }
 
  // Let's have a little grace
  set req.grace = 30s;
 
  return (lookup);
}
 
// Strip any cookies before an image/js/css is inserted into cache.
sub vcl_fetch {
 
  if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
    // For Varnish 2.0 or earlier, replace beresp with obj:
    // unset obj.http.set-cookie;
    unset beresp.http.set-cookie;
 
  }
 
}
 
sub vcl_deliver {
  if (obj.hits > 0) {
          set resp.http.X-Cache = "HIT";
        // set resp.http.X-Cache-Hits = obj.hits;
 
  } else {
          set resp.http.X-Cache = "MISS";
  }
}
 
sub vcl_error {
  // Let's deliver a friendlier error page.
  // You can customize this as you wish.
  set obj.http.Content-Type = "text/html; charset=utf-8";
  synthetic {"
  <?xml version="1.0" encoding="utf-8"?>
  <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
   "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
  <html>
    <head>
      <title>"} + obj.status + " " + obj.response + {"</title>
      <style type="text/css">
      #page {width: 400px; padding: 10px; margin: 20px auto; border: 1px solid black; background-color: #FFF;}
      p {margin-left:20px;}
      body {background-color: #DDD; margin: auto;}
      </style>
    </head>
    <body>
    <div id="page">
    <h1>Page Could Not Be Loaded</h1>
    <p>We're very sorry, but the page could not be loaded properly. This should be fixed very soon, and we apologize for any inconvenience.</p>
    <hr />
    <h4>Debug Info:</h4>
    <p>Status: "} + obj.status + {" Response: "} + obj.response + {" XID: "} + req.xid + {"</p></div>
    </body>
   </html>
  "};
  return(deliver);
}

Once this is done, create the secret file and the storage directory on the data disk.

uuidgen > /etc/varnish/secret && chmod 0600 /etc/varnish/secret
mkdir /mnt/data/varnish/

And let’s add the options for our daemon to the daemons.conf file. In this case the options define a 1 GB ‘varnish_storage.bin’ file on our added data disk, and Varnish listens on all interfaces on port 80. The management CLI is also enabled on the localhost interface on port 6082 (just an example).

nano /etc/daemons.conf
#Varnish HTTP cache options.
VARNISH_OPTIONS="-a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s file,/mnt/data/varnish/$INSTANCE/varnish_storage.bin,1G"

Now let’s create the Varnish daemon service.

nano /etc/init.d/varnishd
#!/bin/sh
# /etc/init.d/varnishd: Start, stop and restart web server on SliTaz,
# at boot time or with the command line. Daemons options are configured
# with /etc/daemons.conf
#
. /etc/init.d/rc.functions
. /etc/daemons.conf
 
NAME=Varnish
DESC="Varnish HTTP cache"
DAEMON=/usr/sbin/varnishd
OPTIONS=$VARNISH_OPTIONS
PIDFILE=/var/run/varnishd.pid
 
case "$1" in
  start)
    if active_pidfile $PIDFILE varnish ; then
      echo "$NAME already running."
      exit 1
    fi
    echo -n "Starting $DESC: $NAME... "
    $DAEMON $OPTIONS
    status
    ;;
  stop)
    if ! active_pidfile $PIDFILE varnish ; then
      echo "$NAME is not running."
      exit 1
    fi
    echo -n "Stopping $DESC: $NAME... "
    kill `cat $PIDFILE`
    rm $PIDFILE
    status
    ;;
  restart)
    if ! active_pidfile $PIDFILE varnish ; then
      echo "$NAME is not running."
      exit 1
    fi
    echo -n "Restarting $DESC: $NAME... "
    kill `cat $PIDFILE`
    rm $PIDFILE
    sleep 2
    $DAEMON $OPTIONS
    status
    ;;
  *)
    echo ""
    echo -e "\033[1mUsage:\033[0m /etc/init.d/`basename $0` [start|stop|restart]"
    echo ""
    exit 1
    ;;
esac
 
exit 0

Assign execute rights to the script.

chmod +x /etc/init.d/varnishd

And add it to the daemons on startup.

nano /etc/rcS.conf
RUN_DAEMONS="dbus hald slim firewall dropbear varnishd lighttpd"

Now reboot and Varnish should start with the machine. A ps aux should show it running.

ps aux | grep -i varnish

If it’s not running try starting the application manually with the -d option (short for debug). This will show the exact cause of the failure.
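A minimal sketch of such a manual debug run, assuming you source the options from daemons.conf first so varnishd starts with exactly the same parameters as the init script:

. /etc/daemons.conf
/usr/sbin/varnishd -d $VARNISH_OPTIONS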

If it’s running you should see the ‘varnish_storage.bin’ being created in ‘/mnt/data/varnish/’.

cd /mnt/data/varnish/
ls -l
root@httpcache:/mnt/data/varnish# ls -l
total 0
-rwxrwxrwx    1 root     root     1073741824 Jun 19 21:26 varnish_storage.bin

To test if your setup is functional, just point a web browser to http://192.168.1.10 and check the headers. I’m using FireBug (a Firefox extension) to view the headers in this case. They show the added Varnish headers.
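If you'd rather check from a terminal, curl (assuming it is installed on your workstation; it is not part of the Slitaz base image) shows the same headers. Request the page twice: the X-Cache header set in vcl_deliver should go from MISS to HIT for cacheable content:

curl -I http://192.168.1.10/
curl -I http://192.168.1.10/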

So that’s about it. Easy no? And as always here are the download files: httpcache.7z (38,3 MB)


Slitaz project – Part 6 – Load balancer

This last part will focus on the loadbalancers. These loadbalancers will balance the load over our four webnodes. Because I couldn’t get lvs working for heartbeat, I’ve created a simple python script which takes over the virtual IP. Basically the same functionality, only no pain in the ass with missing dependencies for compiling an open-source package.

Let’s start by prepping ‘loadbalancer1’. I will use IP 192.168.1.200 as the virtual IP for incoming connections, IP 192.168.1.201 for loadbalancer1 and IP 192.168.1.202 for loadbalancer2.

/home/base/ip.sh loadbalancer1 192.168.1 201 1
reboot

Now before we can use HAProxy, we must free port 80, which is still used by the default Lighttpd instance on this machine. Just edit ‘lighttpd.conf’ and change the default port to 81 (our admin instance).

nano /etc/lighttpd/lighttpd.conf
# Port, default for HTTP traffic is 80.
#
server.port = 81

Now we can start with HAProxy. Grab the toolchain (we need to compile), plus python and fcron, which we will need later on for the heartbeat script.

tazpkg get-install slitaz-toolchain
tazpkg get-install python
tazpkg get-install fcron

It is possible that you receive the following error:

Installation of : fcron
================================================================================
Copying fcron...                                                     [ OK ]
Extracting fcron...                                                  [ OK ]
Extracting the pseudo fs... (lzma)                                   [ OK ]
Installing fcron... cp: can't create '/etc/init.d': File exists
                                                                     [ Failed ]
Removing all tmp files...                                            [ OK ]
================================================================================
fcron (3.0.4) is installed.

This is because the service file inside the package is called ‘init.d’ instead of ‘fcron’, so the installer tries to copy it over the startup folder. I provided a renamed copy of this file on this site. If you wish to do it manually, extract the fcron tazpkg; it contains a small lzma filesystem in which you will find the ‘init.d’ file.
Grabbing it from this site:

wget http://enira.net/wp-content/uploads/2012/07/fcron.txt -O /etc/init.d/fcron

Allow execute:

chmod +x /etc/init.d/fcron

Now let’s cleanup again:

tazpkg clean-cache

And let’s also grab our sources and make the application. I found that pcre is already installed so I can use the ‘USE_PCRE=1’ flag.

cd /home/base
wget http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.21.tar.gz
tar xzvf /home/base/haproxy-1.4.21.tar.gz
cd /home/base/haproxy-1.4.21
 
make TARGET=linux26 USE_PCRE=1 ARCH=i386
make install

And a little bit of cleaning.

cd /
rm -rf /home/base/haproxy-1.4.21
rm /home/base/haproxy-1.4.21.tar.gz

Now it’s time to make our configuration file. I will place this in ‘/etc/haproxy/haproxy.conf’.

mkdir /etc/haproxy
nano /etc/haproxy/haproxy.conf

So now this is the configuration file for HAProxy. I defined my gluster nodes and MySQL servers as backends too, so I can get a quick view of the ‘slitaz farm’ health on ‘http://192.168.1.201/stats’ or ‘http://192.168.1.202/stats’.

global
	daemon
        maxconn 4096
	pidfile /var/run/haproxy.pid
 
defaults
        mode http
        timeout connect 5000ms
        timeout client 50000ms
        timeout server 50000ms
 
frontend http
        bind *:80
        default_backend webnodes
 
backend loadbalancers
        stats enable
        server loadbalancer1 192.168.1.201:81 check
        server loadbalancer2 192.168.1.202:81 check
 
backend webnodes
        stats enable
        stats auth slitaz:slitaz
        stats uri /stats
        balance leastconn

        option httpclose
        option forwardfor

        server webnode1 192.168.1.211:80 weight 1 maxconn 512 check
        server webnode2 192.168.1.212:80 weight 1 maxconn 512 check
        server webnode3 192.168.1.213:80 weight 1 maxconn 512 check
        server webnode4 192.168.1.214:80 weight 1 maxconn 512 check
 
backend storage
        stats enable
	server glusternode1 192.168.1.221:81 check
	server glusternode2 192.168.1.222:81 check
	server glusternode3 192.168.1.223:81 check
	server glusternode4 192.168.1.224:81 check
	server glusterclient 192.168.1.229:81 check
 
backend mysql
        stats enable
	server mysqlmaster 192.168.1.231:81 check
	server mysqlslave 192.168.1.232:81 check
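
Before wiring this into a service, it may be worth letting HAProxy validate the file; the -c flag only checks the configuration and exits:

/usr/local/sbin/haproxy -c -f /etc/haproxy/haproxy.conf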

Anyway, now that this is over, it's time to make a service script.

nano /etc/init.d/haproxy
#!/bin/sh
# /etc/init.d/haproxy: Start, stop and restart haproxy loadbalancer on SliTaz,
# at boot time or with the command line. Daemons options are configured
# with /etc/daemons.conf
#
. /etc/init.d/rc.functions
. /etc/daemons.conf
 
NAME=HAproxy
DESC="load balancer"
DAEMON=/usr/local/sbin/haproxy
OPTIONS=$HAPROXY_OPTIONS
PIDFILE=/var/run/haproxy.pid
 
case "$1" in
  start)
    if active_pidfile $PIDFILE haproxy ; then
      echo "$NAME already running."
      exit 1
    fi
    echo -n "Starting $DESC: $NAME... "
    $DAEMON $OPTIONS
    status
    ;;
  stop)
    if ! active_pidfile $PIDFILE haproxy ; then
      echo "$NAME is not running."
      exit 1
    fi
    echo -n "Stopping $DESC: $NAME... "
    kill `cat $PIDFILE`
    rm $PIDFILE
    status
    ;;
  restart)
    if ! active_pidfile $PIDFILE haproxy ; then
      echo "$NAME is not running."
      exit 1
    fi
    echo -n "Restarting $DESC: $NAME... "
    kill `cat $PIDFILE`
    rm $PIDFILE
    sleep 2
    $DAEMON $OPTIONS
    status
    ;;
  *)
    echo ""
    echo -e "\033[1mUsage:\033[0m /etc/init.d/`basename $0` [start|stop|restart]"
    echo ""
    exit 1
    ;;
esac
 
exit 0

Add rights to it:

chmod +x /etc/init.d/haproxy

Now let’s edit the daemon config file:

nano /etc/daemons.conf

And add the line:

# HAproxy options.
HAPROXY_OPTIONS="-f /etc/haproxy/haproxy.conf"

And edit our start up rcS file.

nano /etc/rcS.conf

Also add the haproxy and crontabs to the services:

RUN_DAEMONS="dbus hald slim firewall dropbear lighttpd haproxy fcron"

Now let’s add the heartbeat service. For this I created a small script in Python (that’s why you needed to install python at the beginning of this post).

nano /home/base/heartbeat.py
#! /usr/bin/python
 
import socket
import fcntl
import struct
import array
import os
import time
import datetime
 
#Variables
########################
VIRTUAL = "192.168.1.200"
MASK = "255.255.255.0"
VIRTUAL_IFACE = "eth0:0"
LOOP_PING = 1
LOOP_SLEEP = 5
LOOP_EXECUTE = 9
########################
 
def all_interfaces():
    max_possible = 128  # arbitrary. raise if needed.
    bytes = max_possible * 32
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    names = array.array('B', '\0' * bytes)
    outbytes = struct.unpack('iL', fcntl.ioctl(
        s.fileno(),
        0x8912,  # SIOCGIFCONF
        struct.pack('iL', bytes, names.buffer_info()[0])
    ))[0]
    namestr = names.tostring()
    return [namestr[i:i+32].split('\0', 1)[0] for i in range(0, outbytes, 32)]
 
def get_ip_address(ifname):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    return socket.inet_ntoa(fcntl.ioctl(
        s.fileno(),
        0x8915,  # SIOCGIFADDR
        struct.pack('256s', ifname[:15])
    )[20:24])
 
t0 = datetime.datetime.now()
 
for num in range(LOOP_EXECUTE):
        interfaces = all_interfaces()
        virtual = False
 
        for interface in interfaces:
                ip = get_ip_address(interface)
 
                if interface == VIRTUAL_IFACE:
                        # we found a virtual interface
                        virtual = True
 
        # Ping and see if the virtual works
        response = os.system("ping -c 1 -W " + str(LOOP_PING) + " " + VIRTUAL)
        if response == 0:
                print "Virtual responding... done"
        else:
                print "Virtual not responding .. binding"
                os.system("ifconfig " + VIRTUAL_IFACE + " " + VIRTUAL + " netmask " + MASK + " up")
 
        time.sleep(LOOP_SLEEP)
 
        print "Execution: " + str(datetime.datetime.now() - t0) +"s"

Add the script to the crontab:

fcrontab -u root -e

press ‘i’ and add:

@ 1 /usr/bin/python /home/base/heartbeat.py >> /dev/null

Quit by pressing: ‘:’, ‘w’, ‘q’

To view your cron jobs type:

fcrontab -u root -l

Now shut down the server and copy loadbalancer1. The copy will function as loadbalancer2:

/home/base/ip.sh loadbalancer2 192.168.1 202 1
reboot

Start loadbalancer1 again and you should now be balancing connections.

You should see the HAProxy stats at http://192.168.1.201/stats for loadbalancer1 or http://192.168.1.202/stats for loadbalancer2 (username: slitaz, password: slitaz for both).

(Screenshot: the HAProxy stats page on loadbalancer2.)

(As you can see from this stats page example, glusterclient and loadbalancer1 are down. Easy no?)

Now you can connect to the virtual IP too; it will forward the request to one of our four nodes: http://192.168.1.200/
(Screenshot: the test page served by webnode3.)

Now it’s time to test the loadbalancer. Find the loadbalancer which has ‘eth0:0’ (our virtual interface).
Tip:

ifconfig
root@loadbalancer1:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:0E:AC:A0
          inet addr:192.168.1.201  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2144 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2937 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:150148 (146.6 KiB)  TX bytes:201556 (196.8 KiB)
          Interrupt:19 Base address:0x2000
 
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:0E:AC:A0
          inet addr:192.168.1.200  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:19 Base address:0x2000
 
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:383 errors:0 dropped:0 overruns:0 frame:0
          TX packets:383 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:21900 (21.3 KiB)  TX bytes:21900 (21.3 KiB)

Once you’ve found it, shut it down or reboot it. (In my case loadbalancer1.)

C:\Users\Enira>ping 192.168.1.200 -t
 
Pinging 192.168.1.200 with 32 bytes of data:
Reply from 192.168.1.200: bytes=32 time=492ms TTL=64
Reply from 192.168.1.200: bytes=32 time

As you can see, the timeouts occur when I rebooted the machine. And an ifconfig on loadbalancer2 clearly shows that it has taken over the virtual eth0:0 interface:

root@loadbalancer2:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:DB:4B:CF
          inet addr:192.168.1.202  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6551 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9197 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:457143 (446.4 KiB)  TX bytes:641346 (626.3 KiB)
          Interrupt:19 Base address:0x2000
 
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:DB:4B:CF
          inet addr:192.168.1.200  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:19 Base address:0x2000
 
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:1119 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1119 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:61932 (60.4 KiB)  TX bytes:61932 (60.4 KiB)

That’s it, for now. And the VMWare images: loadbalancers.7z (67.1 MB)


Slitaz project – Part 5 – Web cluster nodes

So now we have our file storage and our database. Time to start on the front-end layer: the web nodes. These will be running Lighttpd, which we previously installed.

For our nodes I will use the following IP scheme:

webnode1 192.168.1.211
webnode2 192.168.1.212
webnode3 192.168.1.213
webnode4 192.168.1.214

We start by preparing webnode1, which is a copy of our base image we made in part 1.

/home/base/ip.sh webnode1 192.168.1 211 1
reboot

Install GlusterFS on this webnode; we need it for accessing the data cluster. I am not going to explain this again. (For more info see: Slitaz project – Part 2 – Cluster storage nodes (GlusterFS))

tazpkg get-install flex 
tazpkg get-install python
tazpkg get-install readline-dev
tazpkg get-install mpc-library
tazpkg get-install elfutils
tazpkg get-install openssl-dev
tazpkg get-install slitaz-toolchain
 
tazpkg clean-cache
 
cd /home/base
wget http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.0/glusterfs-3.3.0.tar.gz
tar xzvf /home/base/glusterfs-3.3.0.tar.gz
cd /home/base/glusterfs-3.3.0
 
./configure
make
make install
 
cd /
rm -rf /home/base/glusterfs-3.3.0
rm /home/base/glusterfs-3.3.0.tar.gz

Let’s create a mount point at ‘/var/domains/web’ and mount the cluster there. This directory will double as the virtual host’s document root.

mkdir /var/domains/web

Now add the glusterfs cluster to the startup script.

nano /etc/init.d/local.sh
echo "Starting network storage... "
/usr/local/sbin/glusterfs -s 192.168.1.221 --volfile-id slitaz-volume /var/domains/web

Once this is done, reboot the server to see if the file system mounts. After that it’s time to install the PHP MySQL extensions for connecting to the database; I want database support on my webserver.

tazpkg get-install php-mysql
tazpkg get-install php-mysqli 
tazpkg clean-cache

Now let's create a new virtual host on port 80.

nano /etc/lighttpd/vhosts.conf
$SERVER["socket"] == ":80" {
server.document-root = "/var/domains/web"
server.errorlog = "/var/domains/web-error.log"
}

Almost done. This is a little test script I’ve made to test the connectivity with the backend servers. Just put it on the cluster and each node will show its own name and whether it has a connection with the database. This will be the temporary index page.

nano /var/domains/web/index.php
<?php
echo "<h1>Welcome,</h1><br>";
echo "You are now connected with server: ".exec('hostname');
echo "@".$_SERVER['SERVER_ADDR']."<br>";
echo "Connection with the database is: ";
$link = mysql_connect("192.168.1.231","web","web");
if (!$link) {
    die("<font color=\"#FF0000\">inactive</font>");
}
echo "<font color=\"00FF00\">active</font>";
?>

Now copy the machine three times and change the IP of each node to reflect the configuration discussed at the beginning of the post. (Tip: start with the last machine and work back to the first to avoid IP conflicts with 192.168.1.211.)

/home/base/ip.sh webnode4 192.168.1 214 1
/home/base/ip.sh webnode3 192.168.1 213 1
/home/base/ip.sh webnode2 192.168.1 212 1
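
A quick way to check that all four nodes answer is a small wget loop (run it from any box on the LAN; busybox wget on the base image works too):

for i in 211 212 213 214; do wget -q -O - http://192.168.1.$i/; echo; done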

Voila, all servers responding as they should be.

And as usual, the vmware images: webnodes.7z (159 MB)

Note: this archive doesn’t contain the test PHP script, as that lives on the cluster volume. You will need to recreate this file yourself!


Slitaz project – Part 4 – Mysql replication

So this post will focus on creating a MySQL master server and a slave replicating that master. MySQL replication is a good way to have real-time backups.

First let’s start with the master server. I will use the IP address 192.168.1.231 for the master and 192.168.1.232 for the slave.
Create a copy of our base and add a 1 GB IDE disk drive to it. This disk is where our database will be written. (Keeping it on the 512 MB main disk seems wrong to me.)

Fire up our script, and reboot (as usual)

/home/base/ip.sh mysqlmaster 192.168.1 231 1
reboot

So now let’s add our disk. I noticed that in the startup script, Slitaz uses ‘/var/lib/mysql’ a lot. To avoid any problems later on, I will just mount the disk on the place where the MySQL data is kept.

fdisk /dev/hdb

Press: o, n, p, 1, enter, enter, w

mkfs.ext2 -b 4096 /dev/hdb1
mkdir /var/lib/mysql
nano /etc/fstab
/dev/hdb1	/var/lib/mysql	ext2	defaults	0	0

Now that our storage is prepped, it’s time to install the MySQL server. On the master server I will install the ‘php-mysqli’ package too because it is required for phpMyAdmin. (Note: also agree to any additional required packages.)

tazpkg get-install mysql
tazpkg get-install php-mysqli 
tazpkg clean-cache

Now let’s add it as a service (easy peasy stuff, you should be quite familiar with this by now.)

nano /etc/rcS.conf
RUN_DAEMONS="dbus hald slim firewall dropbear lighttpd mysql"

Conveniently, the default Slitaz installation ships a MySQL configuration for machines with a low amount of memory. So I will delete the default configuration and use that one instead.

rm /etc/mysql/my.cnf
mv /etc/mysql/my-small.cnf /etc/mysql/my.cnf

Now before we can start mysql, the config file needs a little bit of tweaking. The ‘bind-address’ setting is used to allow external machines to make a connection. This is needed because we want our webnodes to connect to this database. Also the setting ‘log-bin’ needs to be enabled. This allows our master mysql server to keep logs for our slave.

nano /etc/mysql/my.cnf
[mysqld]
bind-address = 192.168.1.231
log-bin=mysql-bin

Now reboot the server. This will cause Slitaz/MySQL to generate the needed files for the MySQL database. These are generated the first time the service is started.

Now let’s login to our newly created server. Normally the password is left empty (just press ENTER).

mysql -u root -p

Now this server needs a few extra accounts: one for our slave server at 192.168.1.232 (restricted to that IP), one for phpMyAdmin (called myadmin here), and one to hand out to our web servers so they can connect to the database (optionally restricted to this subnet).

GRANT ALL ON *.* TO slave@'192.168.1.232' IDENTIFIED BY 'slave';
GRANT ALL ON *.* TO myadmin@'localhost' IDENTIFIED BY 'myadmin';
GRANT ALL ON *.* TO web@'192.168.1.0/255.255.255.0' IDENTIFIED BY 'web';
exit
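
If you want to double-check that the accounts were created, you can list them from the shell; the -e flag runs a single statement and exits:

mysql -u root -p -e "SELECT user, host FROM mysql.user;"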

So now as promised, the installation of phpMyAdmin. I like this little piece of software, and because there is already a web server installed it’s easy to include. So just download it and unpack it to the admin domain.

cd /home/base
wget http://downloads.sourceforge.net/project/phpmyadmin/phpMyAdmin/3.5.1/phpMyAdmin-3.5.1-all-languages.tar.gz
tar xzvf /home/base/phpMyAdmin-3.5.1-all-languages.tar.gz
mkdir /var/domains/admin/phpMyAdmin
mv /home/base/phpMyAdmin-3.5.1-all-languages/* /var/domains/admin/phpMyAdmin
rm -rf /home/base/phpMyAdmin-3.5.1-all-languages
rm /home/base/phpMyAdmin-3.5.1-all-languages.tar.gz

So now let’s configure this instance. A configuration example can be found as ‘config.sample.inc.php’. We will use this to configure our phpMyAdmin.

mkdir /var/domains/admin/phpMyAdmin/config
cp /var/domains/admin/phpMyAdmin/config.sample.inc.php /var/domains/admin/phpMyAdmin/config/config.inc.php

Just edit the newly created config file and change the blowfish secret. (Otherwise phpMyAdmin will complain.)

nano /var/domains/admin/phpMyAdmin/config/config.inc.php
$cfg['blowfish_secret'] = 'slitazsecret';

Done, easy no? Now you can use the link ‘http://192.168.1.231:81/phpMyAdmin/‘ to access phpMyAdmin. Use the username/password combination: myadmin/myadmin.

Part one, our master server, is done. Time to work on our slave. This slave will replicate all changes made in the master database, which makes it a great backup solution: if our master server fails, the slave can be reconfigured as a master and bring the system up and running again in no time.

Like with the master, we will continue from our base system and also add a 1GB IDE disk. I will be using the ip 192.168.1.232 for the slave.

/home/base/ip.sh mysqlslave 192.168.1 232 1
reboot
fdisk /dev/hdb

Press: o, n, p, 1, enter, enter, w

This disk also needs to be mounted on ‘/var/lib/mysql’, like on the master.

mkfs.ext2 -b 4096 /dev/hdb1
mkdir /var/lib/mysql
nano /etc/fstab
/dev/hdb1	/var/lib/mysql	ext2	defaults	0	0

On our slave I will not install phpMyAdmin. (Installing it is optional; the instructions are the same as for mysqlmaster.)

tazpkg get-install mysql
tazpkg clean-cache
nano /etc/rcS.conf
RUN_DAEMONS="dbus hald slim firewall dropbear lighttpd mysql"

Now let’s use our configuration file for small memory servers.

rm /etc/mysql/my.cnf
mv /etc/mysql/my-small.cnf /etc/mysql/my.cnf 
 
nano /etc/mysql/my.cnf

Of course, this file also needs some changes, namely the slave options. The ‘server-id’ must be different from the master’s and the instance must know how to connect to the master server.

[mysqld]
bind-address = 192.168.1.232
server-id = 2
master-host=192.168.1.231
master-port=3306
master-user=slave
master-password=slave
master-connect-retry=60

Now let’s reboot this server too and let it generate all needed files.

Now the difficult part: creating a data snapshot so the servers can synchronize. First the master tables have to be locked and a data dump has to be made. Then this dump needs to be loaded on the slave, the slave needs to be started, and finally the tables need to be unlocked on the master again.

This requires two putty sessions to each machine: one on the master to lock the database and one to dump it; one on the slave to stop (and later restart) the slave instance and one to restore the backup.

session1:mysqlmaster

mysql -u root -p
flush tables with read lock;

session2:mysqlmaster

mysqldump --all-databases --master-data > /var/domains/admin/dbdump.db

Now on our slave we need to reset the state of our slave to accept the data dump.
session1:mysqlslave

mysql -u root -p
stop slave;
reset slave;

Now let’s download the dump on mysqlslave and load it into the database.
session2:mysqlslave

wget -O /home/base/dbdump.db http://192.168.1.231:81/dbdump.db
mysql -u root -p < /home/base/dbdump.db

Now start our slave again in session1 (which is still connected to the mysql database.)
session1:mysqlslave

start slave;

Now unlock our tables again in the master.
session2:mysqlmaster

unlock tables;

And delete the database dumps on our slave server.

rm /home/base/dbdump.db

And on our master server.

rm /var/domains/admin/dbdump.db

All done, our tables should now perfectly synchronize. Open a MySQL Workbench on both databases and watch them synchronize! (I am not going to explain this program to you right now. Look it up if you don’t know how to use it.)
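
If you don't have MySQL Workbench handy, the slave status output on mysqlslave tells the same story; both the IO and SQL threads should report Yes:

mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -i Running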

Finished!

And here are the files (contains mysqlmaster and mysqlslave): mysql.7z (20.7 MB)


Slitaz project – Part 3 – Samba client mounting GlusterFS

–This article continues from the base.7z made in ‘Slitaz project – Part 1 – Building a base system‘.

Before building the webservers, I’d like to take some time to build a GlusterFS client and Samba server. This machine is optional, but it is great practice for setting up a simple GlusterFS client and it shows a lot about the client side. It’s also fun to test your nodes with.

This machine I’ll call ‘glusterclient’ and I will use the IP 192.168.1.229. Just copy another base machine and change the IP/hostname:

/home/base/ip.sh glusterclient 192.168.1 229 1

As installing GlusterFS is explained in part two, I won’t go over all the details. I’ll just give you the commands:

tazpkg get-install flex 
tazpkg get-install python
tazpkg get-install readline-dev
tazpkg get-install mpc-library
tazpkg get-install elfutils
tazpkg get-install openssl-dev
tazpkg get-install slitaz-toolchain
 
tazpkg clean-cache
 
cd /home/base
wget http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.0/glusterfs-3.3.0.tar.gz
tar xzvf /home/base/glusterfs-3.3.0.tar.gz
cd /home/base/glusterfs-3.3.0
 
./configure
make
make install
 
cd /
rm -rf /home/base/glusterfs-3.3.0
rm /home/base/glusterfs-3.3.0.tar.gz

So once GlusterFS is installed, we make a mount point for the partition:

mkdir /mnt/glusterfs

For some strange reason the command ‘mount -t glusterfs 192.168.1.221:/slitaz-volume /mnt/glusterfs’ doesn’t work. Also adding the mount point to /etc/fstab seems not to function at all.
The only workaround I have found thus far is adding a call to the glusterfs executable to the startup in ‘local.sh’. (If somebody knows a way to make fstab work: comment or contact me!)

nano /etc/init.d/local.sh

And add these lines at the bottom of the file:

echo "Starting network storage... "
/usr/local/sbin/glusterfs -s 192.168.1.221 --volfile-id slitaz-volume /mnt/glusterfs

When we reboot we should find our glusterfs file system mounted.

root@glusterclient:~# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/root               494.8M    264.6M    204.7M  56% /
tmpfs                    28.6M         0     28.6M   0% /dev/shm
192.168.1.221:slitaz-volume
                         31.5G     72.0M     29.8G   0% /mnt/glusterfs

You can also see that the ‘df -h’ command produces a line break after the mounted GlusterFS volume name. This is why I changed the ‘class.parseProgs.inc.php’ file back in part 1; without that change phpSysInfo would generate errors at http://192.168.1.229:81/

Now let’s continue and install Samba. (Samba seems to need cups to work.)

tazpkg get-install cups
tazpkg get-install samba
 
tazpkg clean-cache

For Samba I always like to create a separate user. (Just a security habit, I guess…)

addgroup smb
adduser -G smb smb #set the unix password when prompted
smbpasswd -a smb #set the samba password

Samba comes with a preloaded config file, but I prefer to use my own; just replace the whole thing.

nano /etc/samba/smb.conf
[global]
workgroup = SLITAZ
security = USER
 
[glusterfs]
comment = GlusterFS
valid users = smb
read only = No
browseable = Yes
path = /mnt/glusterfs
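
If the samba package ships testparm (it usually does), it is worth letting it check the file for typos before rebooting:

testparm /etc/samba/smb.conf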

The last step is adding it to the daemons to start.

nano /etc/rcS.conf
RUN_DAEMONS="dbus hald slim firewall dropbear lighttpd samba"

Once you reboot you just need to set the file permissions:

chmod 777 -R /mnt/glusterfs

After this you can go to your samba share at \\192.168.1.229\glusterfs (and log in using the smb account). It’s fun to add a few files and watch the cluster nodes. (Tip: try ‘ls -l’ in the ‘/mnt/data’ folder of a node and watch.)

And of course: glusterclient.7z (50.9 MB)


Slitaz project – Part 2 – Cluster storage nodes (GlusterFS)

–This article continues from the base.7z made in ‘Slitaz project – Part 1 – Building a base system‘.

Anyway, now that our base is made, it’s time to do something with it.

In this part I will create four GlusterFS nodes that will spread the files across the servers. GlusterFS recommends a minimum of 1 GB of RAM and 8 GB of disk space. Soooo… about the requirements: I’ll add a new disk of 8 GB, but I’m ignoring the memory recommendation. I like a challenge; 64 megs ought to be enough (for now).

So just copy the base system and rename it to ‘glusternode1’. Add a new 8 GB disk (or more) to it, again IDE, and boot the machine.

Remember the ip script? Time to put it to some use. The first thing I like to do is assign a static IP so I can use putty to connect to it. After boot just type:

/home/base/ip.sh glusternode1 192.168.1 221 1

Reboot the machine and voila. For the gluster nodes I intend to use the following ip’s:

glusternode1       192.168.1.221
glusternode2       192.168.1.222
glusternode3       192.168.1.223
glusternode4       192.168.1.224

Ok so, time to format the disk that we’ll use for our storage and assign it.

fdisk /dev/hdb

For convenience: press o, n, p, 1, enter, enter, w
Format it using ext2:

mkfs.ext2 -b 4096 /dev/hdb1

Create the mount point:

mkdir /mnt/data

And add the disk to fstab.

nano /etc/fstab
/dev/hdb1	/mnt/data	ext2	defaults	0	0

Once the server is rebooted, the disk should just pop up.

Now that our disk is ready, we need to install GlusterFS. Before we can build GlusterFS from source, we need some dependencies:

tazpkg get-install flex 
tazpkg get-install python
tazpkg get-install readline-dev
tazpkg get-install mpc-library
tazpkg get-install elfutils
tazpkg get-install openssl-dev
tazpkg get-install slitaz-toolchain

Just press ‘y’ to all additional dependencies (there are a lot of these).

Also, I like things clean.

tazpkg clean-cache

As of writing, the latest version of GlusterFS is 3.3.0 so I will be using this version. Feel free to use a newer version and let me know if it still works.

I’ll be working in the base folder (for the convenience). Just download and extract GlusterFS.

cd /home/base
wget http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.0/glusterfs-3.3.0.tar.gz
tar xzvf /home/base/glusterfs-3.3.0.tar.gz
cd /home/base/glusterfs-3.3.0

Let’s configure our source:

./configure

After configure you should get a result like this:

GlusterFS configure summary
===========================
FUSE client        : yes
Infiniband verbs   : no
epoll IO multiplex : yes
argp-standalone    : no
fusermount         : no
readline           : yes
georeplication     : yes

Now let’s make GlusterFS and install it.

make
make install

Cleaning up:

cd /
rm -rf /home/base/glusterfs-3.3.0
rm /home/base/glusterfs-3.3.0.tar.gz

To follow the same conventions as lighttpd and samba, I created a daemon script. You can download it or create it yourself. This script can be used to start GlusterFS as a daemon.

Option 1: download

wget -O /etc/init.d/glusterd http://www.enira.net/wp-content/uploads/2012/06/glusterd.txt

Option 2: create

nano /etc/init.d/glusterd
#!/bin/sh
# /etc/init.d/glusterd: Start, stop and restart web server on SliTaz,
# at boot time or with the command line. Daemons options are configured
# with /etc/daemons.conf
#
. /etc/init.d/rc.functions
. /etc/daemons.conf
 
NAME=GlusterFS
DESC="gluster deamon"
DAEMON=/usr/local/sbin/glusterd
OPTIONS=$GLUSTERFS_OPTIONS
PIDFILE=/var/run/glusterd.pid
 
case "$1" in
  start)
    if active_pidfile $PIDFILE glusterd ; then
      echo "$NAME already running."
      exit 1
    fi
    echo -n "Starting $DESC: $NAME... "
    $DAEMON $OPTIONS
    status
    ;;
  stop)
    if ! active_pidfile $PIDFILE glusterd ; then
      echo "$NAME is not running."
      exit 1
    fi
    echo -n "Stopping $DESC: $NAME... "
    kill `cat $PIDFILE`
    rm $PIDFILE
    status
    ;;
  restart)
    if ! active_pidfile $PIDFILE glusterd ; then
      echo "$NAME is not running."
      exit 1
    fi
    echo -n "Restarting $DESC: $NAME... "
    kill `cat $PIDFILE`
    rm $PIDFILE
    sleep 2
    $DAEMON $OPTIONS
    status
    ;;
  *)
    echo ""
    echo -e "\033[1mUsage:\033[0m /etc/init.d/`basename $0` [start|stop|restart]"
    echo ""
    exit 1
    ;;
esac
 
exit 0

So add execute rights to the newly created script.

chmod +x /etc/init.d/glusterd

I noticed that it seems impossible to generate a PID file for this executable from the init script. Luckily GlusterFS can write its own with the ‘--pid-file’ option, so before it can be used as a startup daemon you need to pass this as an extra parameter. These go in daemons.conf.

nano /etc/daemons.conf

Just add these lines at the bottom:

# GlusterFS
GLUSTERFS_OPTIONS="--pid-file=/var/run/glusterd.pid"

And let’s add it as a daemon service to the file rcS.conf:

nano /etc/rcS.conf

Just search for the line ‘RUN_DAEMONS’ and add ‘glusterd’ to it.

RUN_DAEMONS="dbus hald slim firewall dropbear lighttpd glusterd"

Now reboot your server and you should see a glusterd process.

root@glusternode1:~# ps aux | grep gluster
 1441 root       0:00 /usr/local/sbin/glusterd --pid-file=/var/run/glusterd.pid
 1496 root       0:00 grep gluster

Also, this first start generates the ‘/var/lib/glusterd’ directory, which we need.

To identify each gluster node, every node has to have its own unique UUID. If we copy glusternode1 three times as-is, each server would report the same UUID, causing gluster to think all four machines are localhost!
This UUID can be found in the file ‘/var/lib/glusterd/glusterd.info’. Let’s assign a random one for ‘glusternode1’.

echo UUID=`uuidgen -r` > /var/lib/glusterd/glusterd.info

Finished with glusternode1! Phew. Three to go. Shut down glusternode1 (tip: halt) and copy it three times (glusternode2, 3, 4).

Now let’s start assigning the IPs and random UUIDs, beginning with the last node (to avoid IP conflicts).

/home/base/ip.sh glusternode4 192.168.1 224 1
echo UUID=`uuidgen -r` > /var/lib/glusterd/glusterd.info
reboot
/home/base/ip.sh glusternode3 192.168.1 223 1
echo UUID=`uuidgen -r` > /var/lib/glusterd/glusterd.info
reboot
/home/base/ip.sh glusternode2 192.168.1 222 1
echo UUID=`uuidgen -r` > /var/lib/glusterd/glusterd.info
reboot

Now we have four nodes that know nothing about each other. To connect them all I’ll be working on glusternode1.

Just probe each node and they will sync their configuration.

gluster peer probe 192.168.1.222
gluster peer probe 192.168.1.223
gluster peer probe 192.168.1.224

You can verify the connection with the ‘peer status’ command.

gluster peer status

Your output on glusternode1 should sort of look like this (uuid’s will vary):

root@glusternode1:~# gluster peer status
Number of Peers: 3
 
Hostname: 192.168.1.222
Uuid: efabc10f-5830-408d-aa07-2c05d4f074ef
State: Peer in Cluster (Connected)
 
Hostname: 192.168.1.223
Uuid: 5241b592-477f-4c5f-bb15-4b13fc48903b
State: Peer in Cluster (Connected)
 
Hostname: 192.168.1.224
Uuid: a54bfa2c-c1b7-4e9f-aaeb-18f45f7d66bc
State: Peer in Cluster (Connected)

If the output looks similar to the above, we can start creating a volume. I’ll be using the default distributed layout, which spreads whole files across the bricks (the volume info below shows Type: Distribute). There are various other volume types (replicated, striped) but I won’t handle those here.
The following command will create the storage volume:

gluster volume create slitaz-volume 192.168.1.221:/mnt/data 192.168.1.222:/mnt/data 192.168.1.223:/mnt/data 192.168.1.224:/mnt/data

Of course, one of the drawbacks of using such a small amount of RAM (64 MB) is that gluster will shit itself trying to load. By default glusterfs tries to use a 64 MB cache, which is too much for the machine to handle, so we’ll need to tweak the cache-size. I tested a few values and 16 MB seems to work best. (4 MB and 8 MB are too small.)

gluster volume set slitaz-volume cache-size 16MB

So normally everything should be ok now and a ‘volume info’ command should succeed.

gluster volume info
root@glusternode1:/mnt/data# gluster volume info
 
Volume Name: slitaz-volume
Type: Distribute
Volume ID: c3262294-869b-48b9-a9fb-1d95167fe182
Status: Created
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 192.168.1.221:/mnt/data
Brick2: 192.168.1.222:/mnt/data
Brick3: 192.168.1.223:/mnt/data
Brick4: 192.168.1.224:/mnt/data
Options Reconfigured:
performance.cache-size: 16MB

Notice the ‘Created’ status; this means our volume isn’t started yet. Starting it can be done with:

gluster volume start slitaz-volume
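
Re-running ‘volume info’ (filtered here for brevity) should now report the volume as started:

gluster volume info slitaz-volume | grep Status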

Voila, your server nodes are finished and ready to accept files.

And like the previous post, you can download the VMware images (all 4) here: glusternodes.7z (156.0 MB)


Slitaz project – Part 1 – Building a base system

One of the main advantages of virtualization is that you can copy your machines. With one base system you can generate a lot of machines quite fast without having to install each machine individually.

This is exactly what we are going to do in Part 1. In this part I am using VMWare, eventually I am planning to make this tutorial also available for VirtualBox.

Let’s start. Fire up your VMWare (or VirtualBox) and create a virtual machine with:
– 1 CPU
– 512 MB of RAM (I’ll explain this later)
– A hard drive of 512 MB – Use IDE! not SCSI. (I don’t know if this is fixed in Slitaz 4.0 but SCSI drives didn’t work for me back in version 3.0.) EDIT: in Oracle VM Virtual Box it is possible to use a SCSI drive.
– Bridged internet connection (in VMWare) or (I think) direct access in VirtualBox
– Other Linux 2.6 kernel

Once this machine has been created, download Slitaz at: http://mirror.slitaz.org/iso/4.0/slitaz-4.0.iso and use it as boot CD for your virtual machine.

Why the 512 MB RAM and why a regular Slitaz 4.0 image?
Explanation: in Slitaz 3.0 the slitaz-installer was located in the base system (~8 MB iso), but since Slitaz 4.0 I found it much easier to use the GUI installer to install the Slitaz base package, which of course needs more RAM. 512 MB is way too much, but we can still change the amount of RAM our machine uses later on.

Ok, let’s continue. Boot up the virtual machine and watch the Slitaz magic do its job. Select ‘SliTaz Live’ and continue the booting with default settings. (I changed my keymap to be-latin1 as that is the layout of my keyboard.)

First things first. We need to prepare our disk. Slitaz comes with GParted built in, which is super easy to use.

Just jump to ‘Applications > System Tools > Gparted Partition Editor’, pop in the root password (which is ‘root’) and off we go.

In Gparted we select our /dev/hda (if not already selected) and use ‘Device > Create Partition Table’. Press Apply and right-click on the gray block. Select ‘New’ and press the ‘Add’ button. (EDIT: in VirtualBox it’s /dev/sda by default.)
After this just press ‘Apply’ to apply the settings.

Voila, partition set. Quite easy.

Now to install Slitaz: go to the Slitaz Panel, which can be found under ‘System Tools > SliTaz Panel’. Use root/root as username/password and go for the ‘Install’ option in the top right corner.

Skip the partitioning and press ‘Continue Installation’ (we just did this).

Now we need the base system. From inside SliTaz download the base iso from http://mirror.slitaz.org/iso/4.0/flavors/slitaz-4.0-base.iso . (Use Midori)

Select ISO and fill in your downloaded file(mine is: /home/tux/slitaz-4.0-base.iso)
Go to ‘Install Slitaz to partition’ and select ‘/dev/hda1’.

Set the hostname to ‘base’ and the user login to ‘base’.
Also click on the box that says ‘Install GRUB bootloader’.

That’s it. Press proceed and watch Slitaz install and reboot. Now shut off the machine, cut the RAM to 64 MB and boot. One slim base install.

Look at these stats. Quite sexy, no?

root@base:~# free
total         used         free       shared      buffers
Mem:         58556        13668        44888            0         1668
-/+ buffers:              12000        46556
Swap:            0            0            0
 
root@base:~# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/root               494.8M     29.5M    439.8M   6% /
tmpfs                    28.6M         0     28.6M   0% /dev/shm

Now let’s continue to install SSH. SSH is definitely a must.

Login with root/root and install nano (I really prefer it over vi because of its simplicity).

tazpkg get-install nano

By default dropbear is already installed. We simply need to enable it in our startup file. This file is located in /etc/rcS.conf. Find the line that says ‘RUN_DAEMONS’ and add dropbear.
By default apache and mysql are also in the daemon list, but you can remove these. (Reboot the server and continue with putty.)

nano /etc/rcS.conf
RUN_DAEMONS="dbus hald slim firewall dropbear"
reboot

Now we can use putty to connect to our server. (Hint: command for finding out the server ip address is ‘ifconfig’)

Usually I like to know what’s happening on my servers. For this I’ve used phpsysinfo in the past. It offers me a quick overview through the web. However this requires a webserver and php.

So download all packages needed for lighttpd (press ‘y’ to install missing dependencies).

tazpkg get-install pcre-dev
tazpkg get-install lighttpd
tazpkg get-install php

Add lighttpd to the daemons:

nano /etc/rcS.conf
RUN_DAEMONS="dbus hald slim firewall dropbear lighttpd"

For security reasons I will be running this admin section of lighttpd on a different port and a different folder.

mkdir /var/domains/
mkdir /var/domains/admin

Let’s edit our virtual hosts file:

nano /etc/lighttpd/vhosts.conf

And add these lines:

$SERVER["socket"] == ":81" {
server.document-root = "/var/domains/admin"
server.errorlog = "/var/domains/admin-error.log"
}

Install phpsysinfo.

cd /home/base/
wget http://downloads.sourceforge.net/project/phpsysinfo/phpsysinfo/2.5.4/phpsysinfo-2.5.4.tar.gz
tar xzvf /home/base/phpsysinfo-2.5.4.tar.gz
mv /home/base/phpsysinfo/* /var/domains/admin/
rm -rf /home/base/phpsysinfo
rm /home/base/phpsysinfo-2.5.4.tar.gz
mv /var/domains/admin/config.php.new /var/domains/admin/config.php

By default phpSysInfo doesn’t recognize the Slitaz distribution, but this can be fixed. Just make the following changes:

Edit the file ‘class.Linux.inc.php’, search (Ctrl+W) for ‘who’ and change the users function. (Slitaz doesn’t support ‘who -q’.)

nano /var/domains/admin/includes/os/class.Linux.inc.php
function users () {
$strResult = 0;
$strBuf = "users=".execute_program('who', '-a | grep -i pts -c');
if( $strBuf != "ERROR" ) {
$arrWho = split( '=', $strBuf );
$strResult = $arrWho[1];
}
return $strResult;
}

Also search for the hostname function and comment out the ‘else’ rule.

function chostname () {
$result = rfts( '/proc/sys/kernel/hostname', 1 );
if ( $result == "ERROR" ) {
$result = "N.A.";
} else {
//      $result = gethostbyaddr( gethostbyname( trim( $result ) ) );
}
return $result;
}

Also an edit of ‘class.parseProgs.inc.php’ is required. (For GlusterFS, because the df output for glusterfs volumes is parsed incorrectly.)

nano /var/domains/admin/includes/os/class.parseProgs.inc.php

Move to the function ‘parse_filesystems’ and add the middle line (the str_replace) so the block looks like this:

$df = execute_program('df', '-k' . $this->df_param );
$df = str_replace("volume\n", "volume", $df );
$df = preg_split("/\n/", $df, -1, PREG_SPLIT_NO_EMPTY);

Download the Slitaz icon and add it to the images dir. (I know png isn’t the same as ico, but sue me. These are details.)

wget -O /var/domains/admin/images/slitaz.png http://www.slitaz.org/favicon.ico

Finally edit the distros.ini file and add a Slitaz rule:

nano /var/domains/admin/distros.ini
[Slitaz]
Name = "Slitaz"
Image = "slitaz.png"
Files = "/etc/slitaz-release"

Now let’s change ownership of the files (as lighttpd is running using the www user).

chown -R www /var/domains

Voila, server ready. Reboot the server or use ‘/etc/init.d/lighttpd restart’. (Lighttpd is probably already running after the install.)

Just a few final steps and we are finished:

a) cleanup

b) download/create an IP shell script

c) add a swap file (of 64 MB) just for keeping everything a little bit more healthy. (Alternative: create a swap partition, this partition should be recognized by Slitaz and automatically used.)

This will cleanup our package cache:

tazpkg clean-cache

(It only cleans around 3 MB, but hey, we are not on a server with terabyte hard drives.)

Now creating the IP script is just for convenience later on. This script can be used to change static IP’s fast.

Update 1: script can be downloaded: wget -O /home/base/ip.sh http://www.enira.net/wp-content/uploads/2012/06/ip.txt

nano /home/base/ip.sh
#!/bin/sh
 
if [ $# -eq 4 ]
then
 
echo "Backing up script..."
cp /etc/network.conf /etc/network.conf.bck
echo "Setting ip..."
 
echo INTERFACE=\"eth0\" > /etc/network.conf
echo DHCP=\"no\" >> /etc/network.conf
echo STATIC=\"yes\" >> /etc/network.conf
echo  >> /etc/network.conf
echo IP=\"$2.$3\" >> /etc/network.conf
echo NETMASK=\"255.255.255.0\" >> /etc/network.conf
echo GATEWAY=\"$2.$4\" >> /etc/network.conf
echo DNS_SERVER=\"$2.$4\" >> /etc/network.conf
echo  >> /etc/network.conf
echo  >> /etc/network.conf
echo DHCP=\"no\" >> /etc/network.conf
echo STATIC=\"yes\" >> /etc/network.conf
echo  >> /etc/network.conf
echo IP=\"$2.$3\" >> /etc/network.conf
echo NETMASK=\"255.255.255.0\" >> /etc/network.conf
echo GATEWAY=\"$2.$4\" >> /etc/network.conf
echo DNS_SERVER=\"$2.$4\" >> /etc/network.conf
echo  >> /etc/network.conf
echo  >> /etc/network.conf
echo WIFI=\"no\" >> /etc/network.conf
 
echo "Setting hostname..."
 
cp /etc/hostname /etc/hostname.bck
 
echo $1 > /etc/hostname
 
echo "Done."
else
 
echo "Usage: ip.sh    "
echo "Example: ip.sh testname 192.168.1 41 1"
fi
chmod +x /home/base/ip.sh

And create the swap file:

dd if=/dev/zero of=/swap bs=1024 count=65536
mkswap /swap
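
You can turn the swap on right away (no reboot needed) and verify it with free:

swapon /swap
free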

Add this to the local.sh script (this is the boot script of Slitaz) so it will enable the swap file at boot.

nano /etc/init.d/local.sh
swapon /swap

Finished! This will be the base system on which I will continue to build all others.

You can download the base vmware I generated here: base.7z (15.0 MB)
Or the virtual box image:base-virtualbox.7z (15.2 MB)


Slitaz project – Introduction: I love virtualization!

An introduction, ah yes…

Ever since I started in ‘the IT business’ I have always dreamt of creating huge networks. As a young student it wasn’t simple to achieve this goal. Machines cost a lot of money, which is scarce when you are a student. Luckily I found out about virtualization.

Ever since I found out about it, I wanted to do something big. Really big. Not in terms of resources but complexity.

A few years ago I found out about Slitaz, a small operating system that (according to a few tests I’ve done) is happy with only 64 MB of RAM and 512 MB of HDD space.
Perfect for virtualizing a huge network. A modest amount of 2 GB of RAM and 16 GB of disk space allows me to virtualize 32 Slitaz machines! Ouch… I just hope my processor can keep up with all this load. (My current PC specs: Intel Core i5-2500 3.3Ghz / 8 GB RAM)

The goal is to virtualize and build:
– 2 load balancers using haproxy and keepalived
– 4 front end webservers with lighttpd (with php support)
– 2 mysql machines. 1 master and 1 slave (for backup)
– a cluster filesystem (probably glusterfs or moosefs) using 4 cluster nodes and 1 controller node
Total: 13 machines using 832 MB of RAM and 6.5 GB of disk space.

– optional goal: Virtualized router (probably wolverine from coyotelinux or smoothwall)

More to come…

Update 1: The cluster FS will be GlusterFS
Update 2: Keepalived scrapped. (If somebody knows how to get it working. Comment!)

Index:

 

Also I created a little scheme to illustrate the project:
