In the past I’ve written about the instability of the ASRock C2750D4I. Guess what: the problems aren’t gone with this motherboard.
I suspect the onboard RAID controller. When the server experiences heavy load, at least two disks disconnect, bringing down the software RAID.


Let’s start by finding out the disk layout of my RAID5.

cat /proc/mdstat
md5 : inactive sde1[3](S) sdh1[1](S) sdf1[0](S) sdd1[4](S)
      11720536064 blocks super 1.2

This shows that my RAID is spread across sde1, sdh1, sdf1 and sdd1. The last error logs from dmesg showed me that sdh1 and sdf1 went down before the RAID crashed.
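That "inactive" state can also be checked mechanically. A minimal sketch, using the sample line from the output above inlined as a string (on a live system you would read /proc/mdstat instead):

```shell
# Flag an array that /proc/mdstat reports as inactive.
# Sample line copied from the mdstat output above.
mdstat='md5 : inactive sde1[3](S) sdh1[1](S) sdf1[0](S) sdd1[4](S)'
case "$mdstat" in
  *inactive*) echo "array is inactive" ;;
  *)          echo "array looks healthy" ;;
esac
```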

So let’s try to find some more information about these two crashed drives.

sudo lshw -c disk

The result will show you a little bit more information about each drive.

       description: ATA Disk
       product: SAMSUNG HD103SI
       physical id: 0.0.0
       bus info: scsi@2:0.0.0
       logical name: /dev/sda
       version: 1AG0
       serial: S20XJDWS700323
       size: 931GiB (1TB)
       capabilities: partitioned partitioned:dos
       configuration: ansiversion=5 sectorsize=512 signature=0007f8a5
       description: ATA Disk
       product: SAMSUNG HD103SI
       physical id: 0.0.0
       bus info: scsi@3:0.0.0
       logical name: /dev/sdb
       version: 1AG0
       serial: S20XJDWZ118279
       size: 931GiB (1TB)
       capabilities: partitioned partitioned:dos
       configuration: ansiversion=5 sectorsize=512 signature=00071895
       description: ATA Disk
       product: KINGSTON SVP200S
       physical id: 0.0.0
       bus info: scsi@5:0.0.0
       logical name: /dev/sdc
       version: 502A
       serial: 50026B7331033DD9
       size: 55GiB (60GB)
       capabilities: partitioned partitioned:dos
       configuration: ansiversion=5 sectorsize=512 signature=91a29a16
       description: ATA Disk
       product: ST3000DM001-1CH1
       vendor: Seagate
       physical id: 0.0.0
       bus info: scsi@6:0.0.0
       logical name: /dev/sdd
       version: CC24
       serial: Z1F27VHM
       size: 2794GiB (3TB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=0556e5e5-1e62-42f4-a89c-29813a6f4a18 sectorsize=4096
       description: ATA Disk
       product: Hitachi HDS5C303
       vendor: Hitachi
       physical id: 0.0.0
       bus info: scsi@7:0.0.0
       logical name: /dev/sde
       version: MZ6O
       serial: MCE9215Q0B5MLW
       size: 2794GiB (3TB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=ec9054e2-94c3-4d74-8fea-2d34ce0b92ac sectorsize=4096
       description: ATA Disk
       product: Hitachi HDS5C303
       vendor: Hitachi
       physical id: 0.0.0
       bus info: scsi@8:0.0.0
       logical name: /dev/sdf
       version: MZ6O
       serial: MCE9215Q0BHTDV
       size: 2794GiB (3TB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=2f6f5a9b-441e-467d-861c-852e2bdefb5e sectorsize=4096
       description: ATA Disk
       product: WDC WD40EFRX-68W
       vendor: Western Digital
       physical id: 0.0.0
       bus info: scsi@9:0.0.0
       logical name: /dev/sdg
       version: 80.0
       serial: WD-WCC4E1653628
       size: 3726GiB (4TB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=4ac4a5a9-ccd1-42c5-907a-9272c076a15c sectorsize=4096
       description: ATA Disk
       product: TOSHIBA DT01ACA3
       vendor: Toshiba
       physical id: 0.0.0
       bus info: scsi@10:0.0.0
       logical name: /dev/sdh
       version: MX6O
       serial: 63NZKNRKS
       size: 2794GiB (3TB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=24069398-46d0-4b01-9e8e-2530cb9f1cf8 sectorsize=4096

The logical name field shows that my Toshiba drive (sdh) and my Hitachi drive (sdf) were impacted by the last drive/SATA errors on the board. This information can be used to physically trace the SATA cables to the correct drives.
So now that we have the disk names, we need to find out which controller is the culprit for throwing these errors.

First let’s identify the bus addresses of all SATA controllers available on the motherboard.

sudo lshw -c storage

Each connected controller is identified by its bus info field.

       description: SATA controller
       product: 88SE9172 SATA 6Gb/s Controller
       vendor: Marvell Technology Group Ltd.
       physical id: 0
       bus info: pci@0000:04:00.0
       version: 11
       width: 32 bits
       clock: 33MHz
       capabilities: storage pm msi pciexpress ahci_1.0 bus_master cap_list rom
       configuration: driver=ahci latency=0
       resources: irq:55 ioport:c040(size=8) ioport:c030(size=4) ioport:c020(size=8) ioport:c010(size=4) ioport:c000(size=16) memory:df410000-df4101ff memory:df400000-df40ffff
       description: SATA controller
       product: 88SE9230 PCIe SATA 6Gb/s Controller
       vendor: Marvell Technology Group Ltd.
       physical id: 0
       bus info: pci@0000:09:00.0
       version: 11
       width: 32 bits
       clock: 33MHz
       capabilities: storage pm msi pciexpress ahci_1.0 bus_master cap_list rom
       configuration: driver=ahci latency=0
       resources: irq:56 ioport:d050(size=8) ioport:d040(size=4) ioport:d030(size=8) ioport:d020(size=4) ioport:d000(size=32) memory:df610000-df6107ff memory:df600000-df60ffff
       description: SATA controller
       product: Atom processor C2000 AHCI SATA2 Controller
       vendor: Intel Corporation
       physical id: 17
       bus info: pci@0000:00:17.0
       version: 02
       width: 32 bits
       clock: 66MHz
       capabilities: storage msi pm ahci_1.0 bus_master cap_list
       configuration: driver=ahci latency=0
       resources: irq:48 ioport:e0d0(size=8) ioport:e0c0(size=4) ioport:e0b0(size=8) ioport:e0a0(size=4) ioport:e040(size=32) memory:df762000-df7627ff
       description: SATA controller
       product: Atom processor C2000 AHCI SATA3 Controller
       vendor: Intel Corporation
       physical id: 18
       bus info: pci@0000:00:18.0
       version: 02
       width: 32 bits
       clock: 66MHz
       capabilities: storage msi pm ahci_1.0 bus_master cap_list
       configuration: driver=ahci latency=0
       resources: irq:54 ioport:e090(size=8) ioport:e080(size=4) ioport:e070(size=8) ioport:e060(size=4) ioport:e020(size=32) memory:df761000-df7617ff

Now, for each drive, we can look up the corresponding SATA controller address; this is one of the PCI bus values found above.

sudo udevadm info -q all -n /dev/sde | grep DEVPATH
E: DEVPATH=/devices/pci0000:00/0000:00:03.0/0000:02:00.0/0000:03:01.0/0000:04:00.0/ata8/host7/target7:0:0/7:0:0:0/block/sde
sudo udevadm info -q all -n /dev/sdd | grep DEVPATH
E: DEVPATH=/devices/pci0000:00/0000:00:03.0/0000:02:00.0/0000:03:01.0/0000:04:00.0/ata7/host6/target6:0:0/6:0:0:0/block/sdd
sudo udevadm info -q all -n /dev/sdf | grep DEVPATH
E: DEVPATH=/devices/pci0000:00/0000:00:04.0/0000:09:00.0/ata9/host8/target8:0:0/8:0:0:0/block/sdf
sudo udevadm info -q all -n /dev/sdh | grep DEVPATH
E: DEVPATH=/devices/pci0000:00/0000:00:04.0/0000:09:00.0/ata11/host10/target10:0:0/10:0:0:0/block/sdh

The last PCI address before the /ataX element is the controller the drive is connected to. So sde and sdd are connected to the device at 0000:04:00.0, which corresponds to the Marvell 88SE9172 SATA 6Gb/s Controller.
The drives sdf and sdh are connected to the device at 0000:09:00.0, which translates to the Marvell 88SE9230 PCIe SATA 6Gb/s Controller.

Which is the asshole throwing the errors.
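The controller extraction above can be scripted too. A sketch that pulls the PCI address right before the ataN element out of a udev DEVPATH (the sed pattern is my own, not from the original post; the DEVPATH value is copied from the output above):

```shell
# Extract the controller's PCI address (the path element directly
# before ataN) from a udev DEVPATH string.
devpath='/devices/pci0000:00/0000:00:04.0/0000:09:00.0/ata9/host8/target8:0:0/8:0:0:0/block/sdf'
controller=$(echo "$devpath" | sed -n 's|.*/\([0-9a-f:.]*\)/ata[0-9]*/.*|\1|p')
echo "$controller"   # prints 0000:09:00.0 -> the Marvell 88SE9230
```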

With this information we can unplug the disks from the controller throwing the errors. The physical location of the Marvell 88SE9230 ports is explained in the manual; you can verify it on the board itself, together with the disk names found previously.

So I rerouted all disks (I prefer 3Gbps SATA over dysfunctional 6Gbps any day), and since then the NAS has been stable.


This post builds on part 2: NAS – Part 2: Software and services. It adds a detection script that warns you when your RAID is failing. In the past I’ve had my fair share of failed RAID configurations.

I do know the mdadm package can send alerts itself; however, this small script can be extended to detect specific changes in the RAID/system configuration without using the built-in reporting.


First let’s start by installing mailutils. This package provides the mail command used below.

sudo apt-get install mailutils

Next up is the ‘ssmtp’ package. This package allows you to send mail through an external SMTP server.

sudo apt-get install ssmtp

Create the ssmtp directory (if it doesn’t exist).

sudo mkdir /etc/ssmtp/

And create an ssmtp.conf file.

sudo nano /etc/ssmtp/ssmtp.conf

This ssmtp.conf requires a username (AuthUser) and password (AuthPass), plus a mail hub (mailhub, e.g. your provider’s SMTP server).
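A minimal ssmtp.conf could look like the following; every value below is a placeholder of my own (not the settings from the original post), so substitute your provider’s details:

```
root=<your email>
mailhub=smtp.example.com:587
AuthUser=<your email>
AuthPass=<your password>
UseSTARTTLS=YES
FromLineOverride=YES
```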


To test your configuration you can try to send a test mail. Just change the recipient to your own email address.

echo "This is a test" | mail -s "Test" <your email>

If everything works, you are ready to create your cron job script. (I create this script in my user directory, but you can put it wherever you want.)

cd ~

An underscore in the output of ‘cat /proc/mdstat’ is how mdadm indicates a failed RAID disk. So I’ll be checking for that character.

#!/bin/bash
EMAIL="<target email>"
FROM="<from email>"
SUBJECT="RAID status warning"
EMAILMESSAGE="/tmp/cron-email"
cat /proc/mdstat > "$EMAILMESSAGE"
if grep -q "_" "$EMAILMESSAGE"; then
   mail -aFrom:"$FROM" -s "$SUBJECT" "$EMAIL" < "$EMAILMESSAGE"
fi

Let’s assign execute rights to our script.

sudo chmod +x ./

That’s it! Now assign this to a cron job; I run mine daily.
Also: happy scripting (when extending this script).
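A daily crontab entry could look like this (the script name and path are placeholders of mine, since the original filename wasn’t preserved):

```
# m h dom mon dow  command — run the RAID check every day at 08:00
0 8 * * * /home/nas/check-raid.sh
```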


The final step for my NAS is keeping my dynamic IP bound to a DNS host. I use a dynamic DNS provider to manage and handle the DNS records.

This script is adapted from and based on the script found at:


The following script will point all hosts assigned to your account to the current IP of the machine you run it from.

#!/bin/bash
# insert SHA-1 hash here (format): username|password
info_url="<info url>"
echo "Calling $info_url ..."
ip=$(dig @ | grep "" | grep "0" | awk '{ print $5 }')
echo "Current IP is: $ip"
# get the current dns settings...
for each in `curl -s "$info_url"`
do
        domain=`echo "$each" | cut -d"|" -f1`
        dns_ip=`echo "$each" | cut -d"|" -f2`
        update_url=`echo "$each" | cut -d"|" -f3`
        echo "$domain ..."
        if [ "$ip" != "$dns_ip" ]
        then
                echo "Updating $dns_ip => $ip ..."
                curl "$update_url" >> log
        fi
        echo "OK"
done
Now run this with a job in crontab to update your DNS.

crontab -e
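For a dynamic IP you want the update to run frequently, for example every 15 minutes (the script path is a placeholder of mine):

```
*/15 * * * * /home/nas/update-dyndns.sh >> /home/nas/dyndns.log 2>&1
```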

When configuring my NAS I noticed that the ASRock C2750D4I behaves rather sloppily: uptime never reached more than 24 hours.
Online I found other people experiencing the same issues with this board:

This is how I made it stable (it has been running for 7 days now without reboots).

NIC drivers

A quick glance at the Intel website shows an update for the NIC:
Let’s install it:

cd ~
tar xvf igb-5.1.2.tar.gz
cd ~/igb-5.1.2/src
sudo make install

Edit the modules file and add ‘igb’.

sudo nano /etc/modules

Let’s check if it loads.

sudo modprobe igb

Reboot the machine and verify if the new drivers are loaded.

sudo reboot
modinfo igb


filename:       /lib/modules/3.13.0-24-generic/kernel/drivers/net/igb/igb.ko
version:        5.1.2
license:        GPL
description:    Intel(R) Gigabit Ethernet Network Driver
author:         Intel Corporation, 

Last step: clean up the files.

sudo rm -rf ~/igb-5.1.2
sudo rm ~/igb-5.1.2.tar.gz

Disable Intel Speedstep

Disable Intel Speedstep and the C-Bit option in the BIOS. The manual states that Intel Speedstep could ‘make your system unstable’. On this board: yes, it does.

SATA cables + Boot disk to Intel controller

The manual recommends using the Intel RAID controller for OS disks (which I didn’t). So I swapped the SATA cable for a more expensive one (I found postings of people reporting better stability with better SATA cables) and moved the boot disk to the Intel SATA controller.

These steps solved my instability with this board. On paper this board is the most awesome buy you could make (passively cooled, 12 SATA ports, quad-core Atom, 20 W). In reality it’s as picky as a spoiled toddler. Definitely not a buy. At ~€350 it’s quite an expensive pain in the ass.

However, is there a comparable product?


Owncloud is pretty awesome: it provides me with my files anywhere in the world. However, sometimes accessing my files is not so trivial. Think of hotel lobbies and public access points; sometimes there are real restrictions on the ports that can be used. By default my ISP blocks all server traffic below port 1024, which is in my opinion rather rude. I want my files! Luckily we can use an Amazon t1.micro (free tier) instance to work around this.

Preparing the Amazon image

So launch a free-tier Amazon t1.micro; this should be free for the first year, so no worries. As for configuration: open the SSH and HTTPS ports. Once the instance is running, log in as ‘ec2-user’ with your certificate file.

Installing HAProxy

Before we can compile we need to install the build tools.

sudo yum install -y make gcc openssl-devel pcre-devel pcre-static

Now download HAProxy and build it. (The build target and flags below are typical for a 1.5-dev release on a recent Linux kernel; adjust them to your platform.)

cd ~
tar -xzf haproxy-1.5-dev24.tar.gz
cd haproxy-1.5-dev24
make clean
make TARGET=linux2628 USE_OPENSSL=1 USE_PCRE=1
sudo make install

By default HAProxy is installed under /usr/local; create a symbolic link, or set the PREFIX variable when running make.

sudo ln -s /usr/local/sbin/haproxy /usr/sbin/haproxy

Because we installed from source, there is no service script. So let’s create one.

sudo nano /etc/init.d/haproxy
#!/bin/sh
# haproxy
# chkconfig:   - 85 15
# description:  HAProxy is a free, very fast and reliable solution \
#               offering high availability, load balancing, and \
#               proxying for TCP and  HTTP-based applications
# processname: haproxy
# config:      /etc/haproxy/haproxy.cfg
# pidfile:     /var/run/haproxy.pid
# Source function library.
. /etc/rc.d/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0
exec="/usr/sbin/haproxy"
prog=$(basename $exec)
[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog
cfgfile=/etc/$prog/$prog.cfg
pidfile=/var/run/$prog.pid
lockfile=/var/lock/subsys/$prog
check() {
    $exec -c -V -f $cfgfile
}
start() {
    $exec -c -q -f $cfgfile
    if [ $? -ne 0 ]; then
        echo "Errors in configuration file, check with $prog check."
        return 1
    fi
    echo -n $"Starting $prog: "
    # start it up here, usually something like "daemon $exec"
    daemon $exec -D -f $cfgfile -p $pidfile
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}
stop() {
    echo -n $"Stopping $prog: "
    # stop it here, often "killproc $prog"
    killproc $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}
restart() {
    $exec -c -q -f $cfgfile
    if [ $? -ne 0 ]; then
        echo "Errors in configuration file, check with $prog check."
        return 1
    fi
    stop
    start
}
reload() {
    $exec -c -q -f $cfgfile
    if [ $? -ne 0 ]; then
        echo "Errors in configuration file, check with $prog check."
        return 1
    fi
    echo -n $"Reloading $prog: "
    $exec -D -f $cfgfile -p $pidfile -sf $(cat $pidfile)
    retval=$?
    echo
    return $retval
}
force_reload() {
    restart
}
fdr_status() {
    status $prog
}
case "$1" in
    start|stop|restart|reload)
        $1
        ;;
    force-reload)
        force_reload
        ;;
    check)
        check
        ;;
    status)
        fdr_status
        ;;
    condrestart|try-restart)
        [ ! -f $lockfile ] || restart
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|try-restart|reload|force-reload}"
        exit 2
esac

And assign execute rights.

sudo chmod +x /etc/init.d/haproxy

Configuration of HAProxy

Now, to configure HAProxy, create the config file.

sudo mkdir -p /etc/haproxy
sudo nano /etc/haproxy/haproxy.cfg

To forward an HTTPS port, use TCP mode. This example listens on ports 443 and 2222 on the Amazon instance and proxifies (or tunnels) them to ports 22443 and 22222 on the home server.

defaults
       maxconn 10000
       timeout connect 500s
       timeout client 5000s
       timeout server 1h
frontend https_proxy
        mode tcp
        bind *:443
        default_backend https_servers
frontend ssh_proxy
        bind *:2222
        mode tcp
        default_backend ssh_servers
backend ssh_servers
        mode tcp
        server ssh
backend https_servers
        mode tcp
        server server1

This should do it. Your SSH and HTTPS connections are now routed through Amazon.

As for Owncloud (version 6.x), you will need to add your domain to the config/config.php file:

  'trusted_domains' =>
  array (
    0 => '...........',
  ),


Owncloud is simply amazing; it’s like a Dropbox at home.
On my NAS I will run it inside a virtual machine, because I’ll be opening this machine up to the outside world. It’s also much easier to back up and dispose of.

The VMWare instance

Let’s start with configuring the VMWare instance. I’ll be using the Ubuntu LTS server edition for this instance, as it uses less system resources than a full desktop environment.

Configure the VMWare instance according to the following specifications:
– CPU: 2 virtual CPUs (1 thread each)
– RAM: 512 MB
– Disk: 6GB
– Operating System: Ubuntu 14.04 LTS

Whilst installing I enabled ‘automatic updates’, so I don’t have to manage this VMWare instance, and I also installed the OpenSSH server during the install procedure.

Installing Owncloud

Start by installing all needed packages and dependencies for Owncloud. Also enable the Apache2 headers and rewrite module.

sudo apt-get install apache2 php5 php5-gd php-xml-parser php5-intl php5-sqlite php5-mysql smbclient curl libcurl3 php5-curl php5-json php-apc
sudo a2enmod rewrite
sudo a2enmod headers
sudo service apache2 restart

Installing Owncloud is quite easy: just download the package, extract it, and fire up a web browser.

cd ~
tar -xjvf owncloud-6.0.0a.tar.bz2
sudo cp -r owncloud /var/www/
rm -rf ~/owncloud
rm -rf ~/owncloud-6.0.0a.tar.bz2

Fix all rights in the ‘/var/www’ folder:

sudo chown -R www-data:www-data /var/www/

That’s about it. Now browse to http://<server>/owncloud and configure your Owncloud. You will need a MySQL database for this application.

Optional: Moving Owncloud to RAID1 share

I prefer to move my data and Owncloud to a network share backed by a RAID1 configuration, in case one of my automatic updates shits the server.

Create a mount point for your data. I’ll be using \\\owncloud as the share, with ‘www-data’ as the username, since Apache2 uses this user to read and write.
Create the account on the host system and create the share directory.

sudo smbpasswd -a www-data
sudo mkdir -p /media/raid1/owncloud
sudo chown -R www-data:www-data /media/raid1/owncloud/

Add the share to samba.

sudo nano /etc/samba/smb.conf
[owncloud]
comment = Raid 1 secure backup storage
path = /media/raid1/owncloud
valid users = www-data
public = no
browseable = no
writable = yes

On the Owncloud instance install the ‘cifs-utils’ package.

sudo apt-get install cifs-utils

Create the folder to mount and mount the network share.

sudo mkdir -p /mnt/network/tmp
sudo mount -t cifs -o user=www-data,password=password // /mnt/network/tmp

Test your share and move all data.

sudo mv /var/www/owncloud/* /mnt/network/tmp/

Now for the fstab file, create a credentials file.

sudo nano /home/owncloud/.cloudcredentials

Add the username and password to the credentials file, in the cifs credentials format:

username=www-data
password=<password>
Restrict access to this credentials file.

sudo chmod 600 /home/owncloud/.cloudcredentials

Add the mount to the ‘/etc/fstab’ file.

sudo nano /etc/fstab
// /var/www/owncloud cifs credentials=/home/owncloud/.cloudcredentials,iocharset=utf8,sec=ntlm 0 0 

That’s it, happy file synchronizing.


In part two I discussed the basic services for my NAS. This post covers building a media center. At the time of writing there are two dominant media players: Plex and XBMC. For my NAS I’ll be using Plex. See:


First service up is Plex. Plex needs the ‘avahi-daemon’ component, which is normally already installed on your system. For those who don’t have it:

sudo apt-get install avahi-daemon

Next, install Plex. Check the Plex website for the latest version and download the Plex Debian package.

cd ~/Downloads 
wget -c
sudo dpkg -i plexmediaserver_0.

That’s it. Now log in to your Plex environment at http://<server>:32400/manage. You also need a Plex account, but once you have one you can add libraries to your Plex server. I recommend the Ouya Plex client or RasPlex as clients.


Almost finished now. For my series I like to use Sickbeard: an awesome tool that captures metadata for TV series and shows the quality and completeness of the series on your home NAS.

Before we can start with Sickbeard, install the ‘python-cheetah’ module, which Sickbeard needs.

sudo apt-get install python-cheetah

Let’s download the tarball (yet again, I don’t like Git for installations).

cd ~/Downloads
wget --no-check-certificate
tar -xzvf master

Once everything is unpacked, create a directory to run Sickbeard from and move the files into it. The number ‘f64b94f’ could be different in your installation (it depends on the Git check-ins).

mkdir ~/SickBeard
mv /home/nas/Downloads/midgetspy-Sick-Beard-f64b94f/* /home/nas/SickBeard/

Now test the install by running the Sickbeard python script.

cd /home/nas/SickBeard/
python SickBeard.py

If Sickbeard launches without a problem, then you can add it to the startup of your server.

Autostarting is fairly easy: just copy the ‘init.ubuntu’ file from the Sickbeard directory.

sudo cp ~/SickBeard/init.ubuntu /etc/init.d/sickbeard
sudo chmod +x /etc/init.d/sickbeard
sudo update-rc.d sickbeard defaults

This startup script needs to know which user it can run as and also the directory. These variables need to be added to the ‘/etc/default/sickbeard’ file.

sudo nano /etc/default/sickbeard
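A minimal /etc/default/sickbeard could look like this. SB_USER and SB_HOME are, to my knowledge, the variable names the bundled init.ubuntu script reads; the values assume the ‘nas’ user and path from earlier, so adjust to your setup:

```
SB_USER=nas
SB_HOME=/home/nas/SickBeard
```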

Now we can start the Sickbeard service.

sudo service sickbeard start

Sickbeard runs at http://<server>:8080; from there you can configure your Sickbeard installation.


Last service up is Transmission; any good home NAS must have it. It’s the most awesome remote tool for scheduling torrents.

By default it should be installed. For those who don’t have it:

sudo apt-get install transmission

To start Transmission I created a startup script that ensures the service runs only once. With the RDP environment there is a chance that Transmission gets started twice, due to the session creation in RDP; a simple hack is a script that avoids this.

sudo mkdir -p /home/nas/Scripts/start
sudo nano ~/Scripts/start/
#!/bin/bash
SERVICE=""
for var in "$@"
do
        SERVICE="$SERVICE $var"
done
RESULT=`ps aux | grep -i "${SERVICE}" | grep -v grep | grep -v /bin/sh`
echo Result: $RESULT
if [ "${RESULT:-null}" = null ]; then
        echo "not running... starting $SERVICE"
        $SERVICE &
else
        echo "running"
fi

And add ‘transmission-gtk’ to the XFCE session:

/home/nas/Scripts/start/ /usr/bin/transmission-gtk

So that’s about it for part two. Next parts will handle Owncloud and Subsonic.


Requirements: for the backup data I will be using a partimage file.

The OS of my NAS will be Xubuntu 14.04. This distro is fairly lightweight for a NAS system and gives me a sleek GUI. I could do without a GUI, but that makes some of the services quite ‘Spartan’ to handle. A NAS is not a production environment; I want to handle sudden events lightly, swiftly and simply. There’s no real point in debugging your NAS at 11:00 PM on a command line when you need to leave for work at 5:00 AM.

For detailed instructions on installing Xubuntu I’d like to refer you to Google:
The only thing that needs to be changed during installation is enabling automatic login. This is a must: if you want to configure services which run at boot time with a GUI, you’ll need an active session to start those programs.

Let’s start with the basics (in case you didn’t download the latest updates while installing):

sudo apt-get update
sudo apt-get upgrade

Installing remote access (OpenSSH, XRDP)

Start by installing OpenSSH; this will be the backbone of our communication with the NAS server.

sudo apt-get install ssh

By default SSH sessions time out. I don’t really like this, so I’ll add a ServerAliveInterval (a client-side keepalive) to the SSH config.

sudo nano /etc/ssh/ssh_config

And add following line to it:

ServerAliveInterval 60

Next: I chose Xubuntu for a reason, I want XRDP installed. Scarygliders has a neat install tool which works for all *buntu distros. I really recommend you use it. It takes quite some time and is as slow as a snail, but it works flawlessly.
Note: it should work for all Ubuntu-based distributions; however, on Lubuntu and Bodhi it doesn’t seem to work very well. Xubuntu gave me a near-perfect XRDP session.

I don’t like having git installed on this system, so I’ll just grab the zip archive and unzip it.

cd ~/Downloads

Once these files are unzipped, run the RDP-o-Matic script with the ‘--justdoit’ flag. This will install, build and configure everything. Pretty neat, no?

cd ~/Downloads/X11RDP-o-Matic-master/
sudo ./ --justdoit

Once this is done, configure your sessions. It’s fairly easy: run the following command and select your user.

sudo ./

Optional: if you, like me, use another keyboard layout than the regular US_en, follow these steps.
(WARNING: these commands can only be run in your X environment, not through ssh.)

Login to your environment and set X keyboard settings:

setxkbmap be

Now dump this keymap to a local file and copy it over the km-0409.ini file of XRDP. This file is the default for all sessions. It’s a sloppy solution, but it works.

xrdp-genkeymap ~/keymap.ini
sudo mv /etc/xrdp/km-0409.ini /etc/xrdp/km-0409.ini.bak
sudo mv ~/keymap.ini /etc/xrdp/km-0409.ini

Restart and try.

sudo /etc/init.d/xrdp restart

Migrating data from old drive

This part assumes you took a backup of the old drive with partimage. If you don’t have any data to migrate, you can skip this step.

First install partimage to be able to restore the data.

sudo apt-get install partimage

Next, determine the size in 1 KiB blocks of the partition you wish to restore. You can find it using fdisk and dividing the reported size in bytes by 1024. Add some extra blocks, as this result isn’t 100% exact. In my case the old backup disk was /dev/sdh1.

sudo fdisk -l /dev/sdh1
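The arithmetic can be sketched as follows (the byte count below is a hypothetical fdisk value of mine, not the original disk’s):

```shell
# Convert a partition size in bytes to 1 KiB blocks for dd.
bytes=32480935936                 # hypothetical size reported by fdisk
blocks=$(( bytes / 1024 ))        # 31719664 blocks
echo $(( blocks + 1024 ))         # pad with some extra blocks for safety
```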

Next create an empty image file with the disk size found in the previous step.

dd if=/dev/zero of=restore.img bs=1024 count=31719727

Associate this empty disk image with a loopback device (loop0).

sudo losetup /dev/loop0 restore.img

Now you can restore the image with partimage. In my case my backup image is called ‘image.000’ and resides on a disk mounted on: ‘/media/nas/05885c86-ae41-4839-b0dc-f1282c59dea4’

sudo partimage restore /dev/loop0 /media/nas/05885c86-ae41-4839-b0dc-f1282c59dea4/image.000

Once everything is restored you can create a mount point and mount the loop0 device. This will give you access to individual backup files.

sudo mkdir /media/nas/backup
sudo mount /dev/loop0 /media/nas/backup

When you are done, don’t forget to disconnect the loopback device and delete your .img file.

sudo losetup -d /dev/loop0
rm restore.img

Installing LAMP stack (Apache2, PHP & MySQL)

sudo apt-get install php5 libapache2-mod-php5 php5-cgi php5-cli php5-common php5-curl php5-gd php5-mysql php5-pgsql mysql-server mysql-common mysql-client

Migrating MySQL data (optional)

Sometimes it’s not possible to have a MySQL dump available. Luckily, all data can be migrated from an old installation. In this example the old disk is mounted on ‘/media/nas/backup/’. If you don’t have any old MySQL data to migrate, skip this step.

During the install you will be asked for a root password for the MySQL server.

First, stop the running MySQL server; the installer starts it by default.

sudo /etc/init.d/mysql stop

Next, remove all generated data from the fresh MySQL installation, as we will be replacing it with the data from the previous installation.

sudo rm -rf /var/lib/mysql/*

You can verify your old database at:

cd /media/nas/backup/var/lib/mysql

If you verified that the backup contains all your old data you can copy the data from the backup to your MySQL installation and reassign the right permissions.

sudo chmod 777 /media/nas/backup/var/lib/mysql
sudo cp -r /media/nas/backup/var/lib/mysql/* /var/lib/mysql
sudo chown mysql:mysql -R /var/lib/mysql
sudo chmod 700 /var/lib/mysql/

Now your database is copied; however, you still need one more file and your MySQL server config. The debian.cnf file was generated by your previous system and is needed by the MySQL daemon.

sudo cp /media/nas/backup/etc/mysql/debian.cnf /etc/mysql/debian.cnf
sudo cp /media/nas/backup/etc/mysql/my.cnf /etc/mysql/my.cnf

All should be done now, you can start the database again.

sudo /etc/init.d/mysql start

To test if everything works you can connect to the instance and check all databases:

mysql -h localhost -u root -p<previously used password>

Oh look, my old database schemas are still there.

mysql> show databases;
| Database           |
| information_schema |
| mysql              |
| owncloud           |
| performance_schema |
| system_info        |
| test               |
6 rows in set (0.03 sec)

Migrating Apache2 data

My previous system contained an Apache2 server with some files. I wish to keep these files. If you don’t have anything to migrate you can skip this step.
On Ubuntu systems < 14.04 you can use:

sudo cp -r /media/nas/backup/var/www/* /var/www

As of Xubuntu 14.04 the new Apache2 directory is located at ‘/var/www/html’.

sudo cp -r /media/nas/backup/var/www/* /var/www/html

Repair any rights that might have gotten a little bit wacky.

sudo chown www-data:www-data -R /var/www/*

That’s it, all data should be migrated. (If you’ve kept all your data in the default folders)

Software RAID

This section handles the migration of old mdadm data. For more information about creating a software RAID system see:

Start by installing the mdadm package.

sudo apt-get install mdadm

Create your mount points for each RAID. I’m using ‘disk_raid1’ and ‘disk_raid5’.

sudo mkdir /media/disk_raid1
sudo mkdir /media/disk_raid5

Now let’s copy over the old mdadm file. This contains the layout of the old RAID.

sudo cp /media/nas/backup/etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf

And reassemble the disks.

sudo mdadm --assemble --scan

This is the result of my reassembly; note that the RAID5 didn’t come back up. However, if your previous system died a clean death, this shouldn’t be a real concern.

mdadm: /dev/md/1 has been started with 2 drives.
mdadm: /dev/md/5 assembled from 1 drive - not enough to start the array.
mdadm: /dev/md/5 is already in use.
nas@nas:~$ cat /proc/mdstat
Personalities : [raid1]
md5 : inactive sdd1[3](S) sdg1[1](S) sde1[0](S)
      8790402048 blocks super 1.2
md1 : active raid1 sda1[0] sdb1[1]
      976630336 blocks super 1.2 [2/2] [UU]

Before we continue repairing the RAID5, verify the drive mappings. This should confirm that you are using the correct drives, because if you mess this up your data will be lost!

sudo lshw -short -C disk
H/W path          Device     Class      Description
/0/1/0.0.0        /dev/sda   disk       1TB SAMSUNG HD103SI
/0/2/0.0.0        /dev/sdb   disk       1TB SAMSUNG HD103SI
/0/3/0.0.0        /dev/sdc   disk       3TB ST3000DM001-1CH1
/0/4/0.0.0        /dev/sdd   disk       3TB Hitachi HDS5C303
/0/5/0.0.0        /dev/sde   disk       3TB Hitachi HDS5C303
/0/6/0.0.0        /dev/sdf   disk       1500GB SAMSUNG HD154UI
/0/7/0.0.0        /dev/sdg   disk       3TB TOSHIBA DT01ACA3
/0/8/0.0.0        /dev/sdh   disk       60GB KINGSTON SVP200S

Tip: you can also verify the super block existence on each drive by using the following command:

sudo mdadm --examine /dev/sd* | grep -E "(^\/dev|UUID)"

Verify this with the contents of your ‘mdadm.conf’ file and your previous knowledge of your array.

If the data is correct, stop the incorrect RAID5.

sudo mdadm --stop /dev/md5

Force the RAID to reassemble using the correct drives; in my case these are sdd1, sdg1 and sde1 (sd[dge]1).

sudo mdadm --assemble --force /dev/md5 /dev/sd[dge]1

The output will look like this:

mdadm: forcing event count in /dev/sde1(0) from 292 upto 302
mdadm: forcing event count in /dev/sdg1(1) from 292 upto 302
mdadm: /dev/md5 has been started with 3 drives.

As you can see above, two drives are 10 events behind; a small gap like this shouldn’t be a real problem.
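If you want to inspect the event counters yourself before forcing an assemble, ‘mdadm --examine’ reports them per member. The snippet below only demonstrates the grep filter against captured sample output (device names and counts mirror my array); on a live system pipe the real ‘mdadm --examine’ output through the same filter.

```shell
# Filter `mdadm --examine` output down to device name and event count.
# Sample output is hard-coded here; on a live system run instead:
#   sudo mdadm --examine /dev/sd[dge]1 | grep -E '^/dev|Events'
printf '%s\n' \
  '/dev/sde1:' \
  '          Magic : a92b4efc' \
  '         Events : 292' \
  '/dev/sdd1:' \
  '          Magic : a92b4efc' \
  '         Events : 302' |
grep -E '^/dev|Events'
```

Members whose event counts diverge by only a handful of events are usually safe to force back together; large gaps mean the stale member missed real writes.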

Now your RAID5 is started; a ‘cat /proc/mdstat’ should show a freshly assembled RAID5.

Personalities : [raid1] [raid6] [raid5] [raid4]
md5 : active raid5 sde1[0] sdd1[3] sdg1[1]
      5860267008 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
md1 : active raid1 sda1[0] sdb1[1]
      976630336 blocks super 1.2 [2/2] [UU]
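As a quick sanity check, the bracketed status string in /proc/mdstat (e.g. [UUU]) marks a failed or missing member with an underscore. A small awk sketch; it runs against captured sample text here (md9 is a made-up degraded array for illustration), so substitute ‘cat /proc/mdstat’ on a live system.

```shell
# Flag degraded md arrays: '_' in the [UU...] status string means a
# failed/missing member. md9 is a hypothetical degraded example.
sample='md5 : active raid5 sde1[0] sdd1[3] sdg1[1]
      5860267008 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
md1 : active raid1 sda1[0] sdb1[1]
      976630336 blocks super 1.2 [2/2] [UU]
md9 : active raid1 sdx1[0]
      976630336 blocks super 1.2 [2/1] [U_]'
echo "$sample" | awk '
  /^md/        { name = $1 }
  /\[[U_]+\]$/ { print name, ($NF ~ /_/ ? "DEGRADED" : "healthy") }'
# prints: md5 healthy / md1 healthy / md9 DEGRADED
```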

Last configuration step: grab your old mount points and add them to the fstab file again.

cat /media/nas/backup/etc/fstab | grep raid
sudo nano /etc/fstab
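For reference, the md entries carried over into my fstab look roughly like this (the ext4 filesystem type and mount options are assumptions for illustration; use whatever your old fstab actually contained):

```
/dev/md1   /media/disk_raid1   ext4   defaults   0   2
/dev/md5   /media/disk_raid5   ext4   defaults   0   2
```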

And remount.

sudo mount -a


df -h
/dev/md1        917G  7,4G  863G   1% /media/disk_raid1
/dev/md5        5,5T  2,6T  2,7T  49% /media/disk_raid5

It works! ‘df -h’ shows the drives mounted.


Install Samba.

sudo apt-get install samba

If you wish to migrate old shares copy your ‘smb.conf’ file from the backup.

sudo cp /media/nas/backup/etc/samba/smb.conf /etc/samba/smb.conf

Re-add any existing users. Check your ‘smb.conf’ file to see which users have access to each share.

sudo smbpasswd -a foo
sudo /etc/init.d/samba restart
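If you are writing ‘smb.conf’ from scratch instead of restoring a backup, a minimal share definition looks like this (the share name, path and user foo are placeholders, substitute your own values):

```
[raid5]
   path = /media/disk_raid5
   valid users = foo
   read only = no
```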


For my virtualization needs I prefer VMware Workstation, but VirtualBox or VMware Player are decent alternatives. A NAS without some virtualization is just a dumb storage brick.

Let’s start by installing VMware Workstation. The following command needs to run locally on the server (not through SSH), as it pops up the install wizard.

sudo sh /media/disk_raid1/varia/vmware-install/VMware-Workstation-Full-10.0.0-1295980.x86_64.bundle

The problem with VMware 10.0.0 and Linux kernel 3.13 is that the vmnet module simply won’t build. As Xubuntu 14.04 ships this kernel, this system suffers from the same error. A patch can be found online; below is its content (in case the page vanishes):

nano ~/vmnet313.patch
205a206,207
> #if LINUX_VERSION_CODE < KERNEL_VERSION(3, 13, 0)
> VNetFilterHookFn(unsigned int hooknum,                 // IN:
> #else
> VNetFilterHookFn(const struct nf_hook_ops *ops,        // IN:
> #endif
206d208
< VNetFilterHookFn(unsigned int hooknum,                 // IN:
255c257,261
<    transmit = (hooknum == VMW_NF_INET_POST_ROUTING);
---
>    #if LINUX_VERSION_CODE < KERNEL_VERSION(3, 13, 0)
>       transmit = (hooknum == VMW_NF_INET_POST_ROUTING);
>    #else
>       transmit = (ops->hooknum == VMW_NF_INET_POST_ROUTING);
>    #endif
# Run the following as root.
# Change directory into the vmware module source directory
cd /usr/lib/vmware/modules/source
# untar the vmnet modules
tar -xvf vmnet.tar
# run the patch you saved earlier
patch vmnet-only/filter.c < ~/vmnet313.patch
# re-tar the modules
tar -uvf vmnet.tar vmnet-only
# delete the working directory
rm -r vmnet-only
# run the vmware module build program (alternatively just run the GUI app)
/usr/lib/vmware/bin/vmware-modconfig --console --install-all

Next, install the WSX bundle. This allows access to the VMware machines through a modern browser using HTML5.

sudo sh /media/disk_raid1/varia/vmware-install/VMware-WSX-1.0-754035.x86_64.bundle

This component requires Python 2.6; other versions won’t work. So we need to install Python 2.6 alongside the newer Python versions.

sudo add-apt-repository ppa:fkrull/deadsnakes
sudo apt-get update
sudo apt-get install python2.6 python2.6-dev

Now you can start the WSX server.

sudo /etc/init.d/vmware-wsx-server start


Tip: when running VMware images on a CPU that scales its frequency, the guest clock may drift if you don’t install VMware Tools. A workaround is to resync the clock with a cron job, pointing ntpdate at an NTP server near you (in my case my Belgian ISP’s server in Brussels).

sudo crontab -e
00 1 * * * ntpdate your.ntp.server

There you go, one fresh NAS server ready to serve your content and configured to add more scalable services.


So in the past I’ve always assembled NAS devices from old PCs. My previous NAS was built around an old Intel Q6600, a quad core with a thermal design power of 105 Watt. Ouch! Time to upgrade to something much more power efficient: this little board carries an octa-core with a TDP of only 20 Watt.


– ASRock C2750D4I
– Lian Li PC-Q25B case
– 2 x 2GB Mushkin DDR3 RAM
– 4 x 3TB disk drives
– 2 x 1TB disk drives
– 1 x 60 GB SSD disk drive
– 1 x 1.5 TB disk drive


My data disks will be set up according to the following specification:
– one RAID5 (three 3TB drives) for semi-critical data;
– one RAID1 (two 1TB drives) for critical files that I never want to lose (backups);
– one 3 TB disk used for files that may be lost (virtual machines and such).
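As a sanity check on that layout, usable capacity follows the usual rules: RAID5 with n disks keeps n−1 disks’ worth of space (one disk goes to parity), while RAID1 mirrors and keeps a single disk’s worth. A quick sketch with the sizes above:

```shell
# Usable capacity for the arrays above (sizes in TB).
# RAID5: (n - 1) data disks; RAID1: one disk's worth.
n=3; size_tb=3
echo "RAID5 usable: $(( (n - 1) * size_tb )) TB"   # prints: RAID5 usable: 6 TB
echo "RAID1 usable: 1 TB"
```

6 TB decimal is roughly the 5.5T that ‘df -h’ reports for md5 in binary units.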

The OS will be installed on the SSD to provide a fast and stable system. For now I’ve also added an old 1.5TB disk drive; in the future I want to replace it with another 3TB drive when the RAID5 needs to grow.

All RAID configurations will be handled in software. There is no sane reason to spend a lot of money on a hardware RAID controller for a home NAS. My software RAID performs at 83MB/s or better (measured once the system was done, copying from an external disk to the NAS).


Building the system

I don’t really want to explain this. It’s pretty straightforward. If you can’t read a manual or build your own system, then I recommend closing this page and buying a Synology instead. But for those interested: here are some juicy pictures of me building my NAS.

Unpacking the little motherboard (yes, a passively cooled octa-core).


Clearing out my old NAS system (the Q6600 PC).


The final result before closing the case. I simply love the 5 hot swappable bays.


Booting the NAS with my little 10″ debug screen. One of the best purchases I’ve ever made.


The next part will cover all the services and software.