NAS – Part 2: Software and services

Introduction

Requirements: the backup data from the old system should be available as a partimage file.
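For reference, such a backup can be created on the old system with something along these lines (a sketch; the device name and target path are illustrative). partimage numbers the resulting volumes, which is why the image is later referred to as ‘image.000’.

sudo partimage save /dev/sdh1 /media/backup/image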

The OS of my NAS will be Xubuntu 14.04. This distro is fairly lightweight for a NAS system and gives me a sleek GUI. I could do without a GUI, but that would make some of the services quite ‘Spartan’ to handle. A NAS is not a production environment; I want to handle sudden events in a light, swift and simple way. There’s no real point in debugging your NAS at 11:00 PM from a command-line interface when you need to go to work at 5:00 AM.

For detailed instructions on how to install Xubuntu, I’d like to refer you to Google.
The only thing that needs to be changed during installation is enabling login by default (automatic login). This is a must: if you want to configure services that will run at boot time with a GUI, you’ll need an active session to start these programs.
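If you missed this during installation, automatic login can also be enabled afterwards. A minimal sketch, assuming Xubuntu 14.04’s default LightDM display manager (the username ‘nas’ is illustrative):

sudo nano /etc/lightdm/lightdm.conf

[SeatDefaults]
autologin-user=nas
autologin-user-timeout=0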

Let’s start with the basics (in case you didn’t download the latest updates while installing):

sudo apt-get update
sudo apt-get upgrade

Installing remote access (OpenSSH, XRDP)

Start by installing OpenSSH; this will be the backbone of our communication with the NAS server.

sudo apt-get install ssh

By default SSH sessions time out after a period of inactivity. I don’t really like this, so I’ll be adding a ServerAliveInterval to the SSH client configuration.

sudo nano /etc/ssh/ssh_config

And add the following line to it:

ServerAliveInterval 60
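Note that ssh_config is the client-side configuration, so this keeps sessions initiated from the NAS alive. If you also want the SSH daemon to keep incoming sessions alive, the server-side equivalent is ClientAliveInterval in sshd_config (restart SSH afterwards):

sudo nano /etc/ssh/sshd_config

ClientAliveInterval 60

sudo /etc/init.d/ssh restart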

Next, I chose Xubuntu for a reason: I want to have XRDP installed. Scarygliders has a neat install tool which works for all *buntu distros. I really recommend you use it. It will take quite some time and is as slow as a snail, but it works. It works flawlessly.
Note: it should work for all Ubuntu-based distributions; however, for Lubuntu and Bodhi it doesn’t seem to work very well. Xubuntu gave me a near-perfect XRDP session.

I don’t want git installed on this system, so I’ll just grab the master.zip and unzip it.

cd ~/Downloads
wget https://github.com/scarygliders/X11RDP-o-Matic/archive/master.zip
unzip master.zip

Once these files are unzipped, run the script with the ‘--justdoit’ option. This will install, build and configure everything. Pretty neat, no?

cd ~/Downloads/X11RDP-o-Matic-master/
sudo ./X11rdp-o-matic.sh --justdoit

Once this is done, configure your sessions. It’s fairly easy: run the following command and select your user.

sudo ./RDPsesconfig.sh

Optional: if you, like me, use another keyboard layout than the regular US_en, follow these steps.
(WARNING: these commands can only be run in your X environment, not through SSH.)

Login to your environment and set X keyboard settings:

setxkbmap be

Now dump this keymap to a local file and use it to replace XRDP’s km-0409.ini file, which is the default for all sessions. It’s a sloppy solution, but it works.

xrdp-genkeymap ~/keymap.ini
 
sudo mv /etc/xrdp/km-0409.ini /etc/xrdp/km-0409.ini.bak
sudo mv ~/keymap.ini /etc/xrdp/km-0409.ini

Restart and try.

sudo /etc/init.d/xrdp restart
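To test the session from another Linux machine, something like this should work (the IP address is illustrative; Windows users can simply use the built-in Remote Desktop client):

rdesktop -u nas 192.168.1.100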

Migrating data from old drive

This part assumes you took a backup of the old drive with partimage. If you don’t have any data to migrate, you can skip this step.

First install partimage to be able to restore the data.

sudo apt-get install partimage

Next, determine the size of the partition you wish to restore. You can find this by using fdisk and dividing the reported size in bytes by 1024. Add some extra blocks, as this result isn’t 100% exact. In my case the old backup disk was /dev/sdh1.

sudo fdisk -l /dev/sdh1
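Alternatively, here’s a small sketch that derives the block count directly from the partition’s byte size, with some slack blocks added:

# partition size in bytes
SIZE=$(sudo blockdev --getsize64 /dev/sdh1)
# convert to 1 KiB blocks and add some slack
echo $(( SIZE / 1024 + 1024 ))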

Next create an empty image file with the disk size found in the previous step.

dd if=/dev/zero of=restore.img bs=1024 count=31719727
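Tip: writing that many zeros takes a while; creating a sparse file of the same size should work just as well here:

truncate -s $(( 31719727 * 1024 )) restore.img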

Associate this empty disk image with a loopback device (loop0).

sudo losetup /dev/loop0 restore.img

Now you can restore the image with partimage. In my case my backup image is called ‘image.000’ and resides on a disk mounted on: ‘/media/nas/05885c86-ae41-4839-b0dc-f1282c59dea4’

sudo partimage restore /dev/loop0 /media/nas/05885c86-ae41-4839-b0dc-f1282c59dea4/image.000

Once everything is restored you can create a mount point and mount the loop0 device. This will give you access to individual backup files.

sudo mkdir /media/nas/backup
sudo mount /dev/loop0 /media/nas/backup

When you are done, don’t forget to disconnect the loopback device and delete your .img file.

sudo losetup -d /dev/loop0

Installing LAMP stack (Apache2, PHP & MySQL)

sudo apt-get install php5 libapache2-mod-php5 php5-cgi php5-cli php5-common php5-curl php5-gd php5-mysql php5-pgsql mysql-server mysql-common mysql-client
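To quickly verify the stack works, you could drop a test page in the web root (on 14.04 this is /var/www/html, see below) and remove it again afterwards:

echo '<?php phpinfo(); ?>' | sudo tee /var/www/html/info.php
# then browse to http://<nas-ip>/info.php and check the PHP and MySQL sections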

Migrating MySQL data (optional)

Sometimes it’s not possible to have a MySQL dump available. Luckily, all data can be migrated straight from an old installation. In this example the backup is mounted on ‘/media/nas/backup/’. If you don’t have any old MySQL data to migrate, skip this step.

During this install you will be asked for a root password for the MySQL server.

First, stop the running MySQL server; the installer starts it by default.

sudo /etc/init.d/mysql stop

Next, remove all data generated by the fresh MySQL installation, as we will be replacing it with the data from the previous installation.

sudo rm -rf /var/lib/mysql/*

You can verify your old database at:

cd /media/nas/backup/var/lib/mysql

If you have verified that the backup contains all your old data, copy it from the backup to your MySQL installation and reassign the right permissions.

sudo chmod 777 /media/nas/backup/var/lib/mysql
sudo cp -r /media/nas/backup/var/lib/mysql/* /var/lib/mysql
sudo chown mysql:mysql -R /var/lib/mysql
sudo chmod 700 /var/lib/mysql/

Now your database is copied; however, you still need one more file and your MySQL server config. The debian.cnf file was generated by your previous system and is needed by the MySQL daemon.

sudo cp /media/nas/backup/etc/mysql/debian.cnf /etc/mysql/debian.cnf
sudo cp /media/nas/backup/etc/mysql/my.cnf /etc/mysql/my.cnf

That should be everything; you can start the database again.

sudo /etc/init.d/mysql start

To test if everything works you can connect to the instance and check all databases:

mysql -h localhost -u root -p<previously used password>

Oh look, my old database schemas are still there.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| owncloud           |
| performance_schema |
| system_info        |
| test               |
+--------------------+
6 rows in set (0.03 sec)
 
mysql>exit

Migrating Apache2 data

My previous system contained an Apache2 server with some files. I wish to keep these files. If you don’t have anything to migrate you can skip this step.
On Ubuntu systems < 14.04 you can use:

sudo cp -r /media/nas/backup/var/www/* /var/www

As of Xubuntu 14.04 the new Apache2 directory is located at ‘/var/www/html’.

sudo cp -r /media/nas/backup/var/www/* /var/www/html

Repair any permissions that might have gotten a little bit wacky.

sudo chown www-data:www-data -R /var/www/*

That’s it; all data should be migrated (if you’ve kept all your data in the default folders).

Software RAID

This section handles the migration of old mdadm data. For more information about creating a software RAID system see: http://ubuntuforums.org/showthread.php?t=408461

Start by installing the mdadm package.

sudo apt-get install mdadm

Create your mount points for each RAID. I’m using ‘disk_raid1’ and ‘disk_raid5’.

sudo mkdir /media/disk_raid1
sudo mkdir /media/disk_raid5

Now let’s copy over the old mdadm config file. This contains the layout of the old RAID arrays.

sudo cp /media/nas/backup/etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf

And reassemble the disks.

sudo mdadm --assemble --scan

This is the result of my reassembly. Note that the RAID5 didn’t come back up; however, if your previous system died a clean death, this shouldn’t be a real concern.

mdadm: /dev/md/1 has been started with 2 drives.
mdadm: /dev/md/5 assembled from 1 drive - not enough to start the array.
mdadm: /dev/md/5 is already in use.
nas@nas:~$ cat /proc/mdstat
Personalities : [raid1]
md5 : inactive sdd1[3](S) sdg1[1](S) sde1[0](S)
      8790402048 blocks super 1.2
 
md1 : active raid1 sda1[0] sdb1[1]
      976630336 blocks super 1.2 [2/2] [UU]

Before we continue repairing the RAID5 system, verify the drive mappings. This should show you that you are using the correct drives, because if you mess this up your data will be lost!

sudo lshw -short -C disk
H/W path          Device     Class      Description
===================================================
/0/1/0.0.0        /dev/sda   disk       1TB SAMSUNG HD103SI
/0/2/0.0.0        /dev/sdb   disk       1TB SAMSUNG HD103SI
/0/3/0.0.0        /dev/sdc   disk       3TB ST3000DM001-1CH1
/0/4/0.0.0        /dev/sdd   disk       3TB Hitachi HDS5C303
/0/5/0.0.0        /dev/sde   disk       3TB Hitachi HDS5C303
/0/6/0.0.0        /dev/sdf   disk       1500GB SAMSUNG HD154UI
/0/7/0.0.0        /dev/sdg   disk       3TB TOSHIBA DT01ACA3
/0/8/0.0.0        /dev/sdh   disk       60GB KINGSTON SVP200S

Tip: you can also verify the super block existence on each drive by using the following command:

sudo mdadm --examine /dev/sd* | grep -E "(^\/dev|UUID)"

Verify this with the contents of your ‘mdadm.conf’ file and your previous knowledge of your array.
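For instance, the ARRAY definitions can be pulled out for a quick side-by-side comparison with the UUIDs from the previous command:

grep ^ARRAY /etc/mdadm/mdadm.conf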

If the data is correct, stop the incorrect RAID5.

sudo mdadm --stop /dev/md5

Force the array to reassemble using the correct drives; in my case these are sdd1, sdg1 and sde1 (sd[dge]1).

sudo mdadm --assemble --force /dev/md5 /dev/sd[dge]1

The output will look like this:

mdadm: forcing event count in /dev/sde1(0) from 292 upto 302
mdadm: forcing event count in /dev/sdg1(1) from 292 upto 302
mdadm: /dev/md5 has been started with 3 drives.

As you can see above, there are two drives that are 10 events behind; this shouldn’t be a real problem.

Now your RAID5 will be started; a ‘cat /proc/mdstat’ should show a freshly assembled RAID5.

Personalities : [raid1] [raid6] [raid5] [raid4]
md5 : active raid5 sde1[0] sdd1[3] sdg1[1]
      5860267008 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
 
md1 : active raid1 sda1[0] sdb1[1]
      976630336 blocks super 1.2 [2/2] [UU]

One last configuration step: grab your old mount points and add them to the fstab file again.

cat /media/nas/backup/etc/fstab | grep raid
sudo nano /etc/fstab
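For reference, my entries look roughly like this (a sketch assuming ext4 filesystems; copy the exact lines from your old fstab):

/dev/md1    /media/disk_raid1    ext4    defaults    0    2
/dev/md5    /media/disk_raid5    ext4    defaults    0    2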

And remount.

sudo mount -a

Testing:

df -h
/dev/md1        917G  7,4G  863G   1% /media/disk_raid1
/dev/md5        5,5T  2,6T  2,7T  49% /media/disk_raid5

It works! ‘df -h’ shows the drives mounted.

Samba

Install Samba.

sudo apt-get install samba

If you wish to migrate old shares copy your ‘smb.conf’ file from the backup.

sudo cp /media/nas/backup/etc/samba/smb.conf /etc/samba/smb.conf

Re-add any existing users. You can check your ‘smb.conf’ file to see which users have access to a share.

sudo smbpasswd -a foo
sudo /etc/init.d/samba restart
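If you’re starting fresh rather than migrating, a minimal share definition in ‘/etc/samba/smb.conf’ could look like this (a sketch, using the RAID5 mount point from earlier and the example user ‘foo’):

[raid5]
   path = /media/disk_raid5
   valid users = foo
   read only = no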

VMWare

For my virtualization needs I prefer VMware; you could also use VirtualBox or VMware Player. Both serve as a decent solution. A NAS without some virtualization is just a dumb storage brick.

Let’s start by installing VMware Workstation. The following command needs to run on the server itself (not through SSH) and will pop up the install wizard.

sudo sh /media/disk_raid1/varia/vmware-install/VMware-Workstation-Full-10.0.0-1295980.x86_64.bundle

The problem with VMware 10.0.0 and Linux kernel 3.13 is that it just won’t work, and as Xubuntu 14.04 uses this kernel, this system suffers from the same error. A patch can be found online; below is the content of the page (in case it vanishes).

Create the patch file:

nano ~/vmnet313.patch

And paste in the following:

205a206
> #if LINUX_VERSION_CODE < KERNEL_VERSION(3, 13, 0)
206a208,210
> #else
> VNetFilterHookFn(const struct nf_hook_ops *ops,        // IN:
> #endif
255c259,263
<    transmit = (hooknum == VMW_NF_INET_POST_ROUTING);
---
>    #if LINUX_VERSION_CODE < KERNEL_VERSION(3, 13, 0)
>       transmit = (hooknum == VMW_NF_INET_POST_ROUTING);
>    #else
>       transmit = (ops->hooknum == VMW_NF_INET_POST_ROUTING);
>    #endif

Then patch and rebuild the vmnet module:

# Change directory into the VMware module source directory
cd /usr/lib/vmware/modules/source
# Untar the vmnet modules
sudo tar -xvf vmnet.tar
# Run the patch you saved earlier
sudo patch vmnet-only/filter.c < ~/vmnet313.patch
# Re-tar the modules
sudo tar -uvf vmnet.tar vmnet-only
# Delete the working directory
sudo rm -r vmnet-only
# Run the VMware module build program (alternatively, just run the GUI app)
sudo /usr/lib/vmware/bin/vmware-modconfig --console --install-all

Next, install the WSX bundle. This will allow access to the VMware machines through a modern browser using HTML5.

sudo sh /media/disk_raid1/varia/vmware-install/VMware-WSX-1.0-754035.x86_64.bundle

This component uses Python 2.6; any other version won’t work, so we need to add Python 2.6 alongside the newer Python versions.

sudo add-apt-repository ppa:fkrull/deadsnakes
sudo apt-get update
sudo apt-get install python2.6 python2.6-dev

Now you can start the WSX server.

sudo /etc/init.d/vmware-wsx-server start

 

Tip: when running VMware images on a CPU that scales its frequency, your clock might drift a bit if you don’t install the VMware Tools.
A little workaround is to add an ntpdate job to your cron jobs. (My example uses Telenet’s Belgian NTP server, ntp.telenet.be.)

sudo crontab -e
00 1 * * * ntpdate ntp.telenet.be

There you go: one fresh NAS server, ready to serve your content and configured so more services can be added.


NAS – Part 1: The hardware

Introduction

In the past I’ve always assembled NAS devices from old PCs. My previous NAS was built around an old Intel Q6600 processor, which gave me a thermal design power of 105 W. Ouch! Time to upgrade to something much more power efficient: http://ark.intel.com/products/77987. This little chip is an octa-core with a TDP of only 20 W.

Hardware

– ASRock C2750D4I
– Lian Li PC-Q25B case
– 2 x 2GB Mushkin DDR3 RAM
– 4 x 3TB disk drives
– 2 x 1TB disk drives
– 1 x 60 GB SSD disk drive
– 1 x 1.5 TB disk drive

Setup

My data disks will be set up according to the following specification (a creation sketch follows the list):
– one RAID5 (three 3TB drives) for semi-critical data;
– one RAID1 (two 1TB drives) for critical files that I never want to lose (backups);
– one 3TB disk used for files that may be lost (virtual machines and such).
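For those creating these arrays from scratch, the mdadm commands look roughly like this (a sketch; the device names are illustrative, so double-check yours with ‘lshw -short -C disk’ first):

sudo mdadm --create /dev/md5 --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 /dev/sde1
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1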

The OS will be installed on the SSD to provide a fast and stable system. Right now I’ve also included an old 1.5TB disk drive in the system; in the future I wish to replace it with another 3TB drive when the need for extra RAID5 capacity arises.

All RAID configurations will be handled by software RAID. There is no sane reason to spend a lot of money on hardware RAID for a home NAS system. My system, using software RAID, performs at a minimum of 83MB/s (picture taken when the system was done, copying an external disk to the NAS).

DSC_0166

Building the system

I don’t really want to explain this part; it’s pretty straightforward. If you can’t read a manual or build your own system, I recommend closing this page and buying a Synology instead. But for those interested: here are some juicy pictures of me building my NAS.

Unpacking the little motherboard (yes, a passively cooled octa-core).

DSC_0149 DSC_0150

Clearing out my old NAS system (the Q6600 PC).

DSC_0152 DSC_0151

The final result before closing the case. I simply love the five hot-swappable bays.

DSC_0162 DSC_0160

Booting the NAS with my little 10″ debug screen. One of the best purchases I’ve ever made.

DSC_0158

The next part handles all the services and software.

 
